Meh. It's Just Data.
All the applause over Google’s Data Liberation Front announcement and blogs is making my head hurt

All the applause over Google’s Data Liberation Front announcement and blogs is making my head hurt. Or maybe that’s the lack of sleep. Either way, it’s disconcerting to me that so many bright people are choosing to make much of what is just a baby step – if that – toward a much larger, much more difficult goal. After all, data without an application to interpret and make use of it is about as useful as a Netbook without a network connection.

There suddenly seems to be a lot of focus on “data” and the ability for consumers to pack up their data and take it wherever they want. Except for people attached to their i-Thing. I think users of i-Things were approached about the concept but were unable to get past the revelation that there are other “i-Things” out there from other vendors in the first place. Regardless, the core concept appears a laudable goal and a rational desire. After all, the data was probably created by the consumer and thus, by most people’s definitions, they own the data. It’s theirs, so they should be able to move it hither and fro at will. But what is “data”?

This definition seems as good as all the others, summing up what most people – at least those able to articulate in technical terms – consider data:

DEFINITION - (1) In computing, data is information that has been translated into a form that is more convenient to move or process. Relative to today's computers and transmission media, data is information converted into binary digital form.

Data is bits and bytes; it’s a digital representation of something else – a photo, a document, a presentation, audio, video, etc… On its own data is meaningless. Data is nothing more than a collection of 1s and 0s used for processing and storage. It isn’t anything useful until (wait for it, wait for it) an application interprets that data and does something interesting with it. It’s a representation, and nothing more.
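To make the point concrete, here is a minimal sketch (a generic illustration, not tied to any particular product) showing that the very same four bytes are text, a number, or an image header depending entirely on which application is doing the interpreting:

```python
import struct

# The same four bytes mean nothing on their own.
raw = bytes([0x47, 0x49, 0x46, 0x38])

# Interpreted as ASCII text, they are the string "GIF8"...
as_text = raw.decode("ascii")

# ...interpreted as a little-endian 32-bit integer, they are
# 944130375 (0x38464947)...
as_int = struct.unpack("<I", raw)[0]

# ...and a file that begins with these bytes is a GIF image.
is_gif_magic = raw.startswith(b"GIF8")

print(as_text, as_int, is_gif_magic)
```

Same bits, three different meanings: the interpretation lives in the application, not in the data.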

That said, Google’s Data Liberation Front is going to free data from its products and give consumers a way to move it, store it, manipulate it – whatever they want – because, as we all agree, it’s their data. The brouhaha over data actually began much earlier than Microsoft’s already infamous Sidekick data loss, but that event is what really turned the spotlight on data liberation. Reuven Cohen of Enomaly sums it up nicely in his blog post “Google’s Data Liberation Front”. But even Reuven falls short in his analysis.

The site also points out that there shouldn't be an additional charge to export your data. Beyond that, if it takes you many hours to get your data out, it's almost as bad as not being able to get your data out at all. I would also add if your data isn't usable. For example a 1tb text file is (almost) just as bad as not getting your data at all.

The FAQ answers some interesting questions including that of Data Standards saying "We're working to use existing open standards formats wherever possible, and to document how we use those formats in a clear simple manner."

Personally, I applaud this move by Google, lets hope others in the cloud space follow Google's lead.

Reuven gets to the heart of the problem, “a 1tb text file is (almost) just as bad as not getting your data at all”, but then seems to accept that “open standards formats” is a good enough answer to the problem. In some cases – a very few limited cases, like those where the data represents documents, photos, and e-mail – those standard formats are enough. Even I am satisfied with that. But for data for which there is no “standard format” this isn’t enough, and it’s the latter half of the statement from the FAQ that becomes relevant: “to document how we use those formats in a clear simple manner.” This is where the phrase “open standards formats” really ends up meaning open meta-data standard formats like XML.


IT’S NOT ABOUT FORMAT, IT’S ABOUT CONNECTIONS


In other words, your “data” is going to be portable – which I interpret to mean usable – only if there already exists an established standard – whether by committee, by agency, or de facto – in which it can be stored, transferred, and interpreted. If I export a presentation from Google Docs I expect that it will be readable by Microsoft Office and OpenOffice and any number of other applications capable of importing that format. But if it isn’t in something at least akin to a standardized format, it’s going to take a lot of translation work on the part of cloud application providers to import that data and make it usable for the user, if it even can be done. There’s bound to be data loss, as not every piece of data will translate exactly to another piece of data in a similar but internally very different application.

Take Twitter and Plurk. You remember Plurk, right? Okay, never mind. Take Facebook and MySpace. Two completely different systems accomplishing essentially the same thing. Both have comparable functionality and features, but they are almost certainly designed and implemented using completely different methodologies and use different internal representations to link together all the posts, photos, friends, etc.

Taking all that data – the representation of your friends, followers, posts, messages, etc… – out of one of those applications may be simple, but the way in which MySpace developers will describe it using “open standards formats” is likely to be completely different than the way in which Facebook developers will describe it using those same “open standards formats”. That means Facebook has to develop a way to import and translate data from MySpace, and vice-versa. And it won’t be a simple process because some of the internal linkage between you and your “friends” is going to necessarily be lost.
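A hypothetical sketch makes the problem visible. Neither format below is real MySpace or Facebook output – both are invented for illustration – but each is a perfectly valid, documentable “open standards format” (JSON here), and each describes the same friendship completely differently:

```python
import json

# Invented MySpace-style export: friendship carries a "top8" ranking
# and a "friends since" date.
myspace_export = json.dumps({
    "friend": {"displayName": "Lori MacVittie",
               "relation": "top8",
               "since": "2008-03-01"}
})

# Invented Facebook-style export: same friendship, entirely different
# shape, with its own internal identifiers and concepts.
facebook_export = json.dumps({
    "connections": [{"name": "Lori MacVittie",
                     "uid": 100004217,
                     "tagged_photos": [812, 977]}]
})

def import_from_myspace(doc):
    """Translate a MySpace-style record into a Facebook-style one.
    Concepts with no equivalent ("top8", "since") are simply dropped."""
    friend = json.loads(doc)["friend"]
    return {"connections": [{"name": friend["displayName"]}]}

print(import_from_myspace(myspace_export))
```

The friendship itself survives the move; the “top8” ranking and the “friends since” date do not. That is the data loss hiding behind two perfectly “open” formats.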

Look at it this way: there is only one “Lori MacVittie” on Facebook. Really. So if “Lori MacVittie” is your friend on MySpace and on Facebook, the mapping between the two might be pretty easy. But there are 57 “John Smiths” on Facebook. So which John Smith is the same on MySpace and Facebook? Applications internally use unique identifiers to distinguish between users because of this problem, and often use e-mail addresses to further delineate when it’s not obvious. But what if I signed up for MySpace using a different e-mail account than I did for Facebook?
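A toy illustration of that identity-mapping problem (all names, IDs, and addresses invented): match users across two services by e-mail first, then fall back to the display name – and watch the fallback fail exactly where the text above says it will.

```python
# Invented user records from two hypothetical services.
myspace_users = [
    {"id": "ms-1", "name": "Lori MacVittie", "email": "lori@example.com"},
    {"id": "ms-2", "name": "John Smith", "email": "jsmith@work.example"},
]
facebook_users = [
    {"id": "fb-9", "name": "Lori MacVittie", "email": "lori@example.com"},
    {"id": "fb-3", "name": "John Smith", "email": "john@home.example"},
    {"id": "fb-4", "name": "John Smith", "email": "smithy@other.example"},
]

def match(user, candidates):
    # An exact e-mail match is unambiguous.
    by_email = [c for c in candidates if c["email"] == user["email"]]
    if by_email:
        return by_email[0]
    # Falling back to the display name may yield several candidates --
    # which "John Smith" is it? There is no safe answer, so report
    # ambiguity instead of guessing.
    by_name = [c for c in candidates if c["name"] == user["name"]]
    return by_name[0] if len(by_name) == 1 else None

print(match(myspace_users[0], facebook_users))  # matched by e-mail
print(match(myspace_users[1], facebook_users))  # ambiguous -> None
```

Lori maps cleanly because her e-mail matches; John Smith, who registered with different addresses, is unresolvable by name alone.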

And that’s just relationships; we haven’t even dug into how to translate posts/notes/messages/tags/descriptions from one application to another and maintain integrity. Until we have that much, we can’t even begin to discuss how to translate privacy policies and settings so that my settings in Facebook stay the same when I move to “insert new social networking fad here”.


WHAT ABOUT THE OTHER APPS?


It’s easy to export documents and photos and even e-mail, but what about “my maps”? I spent several long years working in GIS and manipulating map data and let me tell you there is very little “standard” about it. Very little. What about my Waves? What about the applications I custom developed with my own special data? And that’s just Google. What about the plethora of other applications consumers are using to store data that don’t have a recognized “standard” schema? What about all the applications yet to be developed that are new and exciting and certainly don’t have anything resembling a standardized data schema?

That’s really what’s eating at me: exporting data in an “open standards format” is all well and good and it’s certainly better than not allowing me to retrieve it, but unless there’s a way to import that data into another application – and maintain its internal and external integrity – then simply freeing the data is not really worthy of all this applause.

It’s like being the only vendor to implement a new networking standard. Until interoperability with another product implementing the same standards is proven, no one really gets all that excited about it.


About Lori MacVittie
Lori MacVittie is responsible for education and evangelism of application services available across F5’s entire product suite. Her role includes authorship of technical materials and participation in a number of community-based forums and industry standards organizations, among other efforts. MacVittie has extensive programming experience as an application architect, as well as network and systems development and administration expertise. Prior to joining F5, MacVittie was an award-winning Senior Technology Editor at Network Computing Magazine, where she conducted product research and evaluation focused on integration with application and network architectures, and authored articles on a variety of topics aimed at IT professionals. Her most recent area of focus included SOA-related products and architectures. She holds a B.S. in Information and Computing Science from the University of Wisconsin at Green Bay, and an M.S. in Computer Science from Nova Southeastern University.
