
RE: What makes a data component good for standardizing?

Roger, your example is weak.  If the name is not useful outside its
context, then neither is the address.  Is it a shipping address? Home
address? Birth place? Current residence? Future residence? For sale? For
rent? Target? Party house?  Yes, you can put it in your contact list,
but you can do that with the name as well, and with just about as much
value.

If I understand him correctly, I agree with Len: what is or isn't worth
standardizing is negotiated with at least a surrogate for the
anticipated but as yet unknown user.  Who buys after-market motorcycle
parts?  You might not yet know the person, but you know the role they
play in the transaction you build into your system, based on the
established market dynamics that you've studied carefully and documented
fully for your SBA loan approval.  That's the context you use to decide
what to standardize.  Without that, you don't even have a market, and
therefore no competitors or related services, so why bother
standardizing anything when there aren't any other business services to
interoperate with?

Some data clumps have a context larger than a specific market segment,
such as name and address and credit card info.  They represent the
social infrastructure that makes interoperability in a specific market
segment possible.  A particular user (role player) might be
unanticipated, but the role they play is not.

The value in standardization is in reducing friction in a transaction,
which in a market is measured by changes in profit.  In an academic
context, standardization might mean reducing the effort of getting
published, or the cost of publication, all tending to improve one's
standing in the community.  In every case I can think of, the decision
to standardize or not comes down to weighing cost against benefit for
the effort involved.  If the data already has a standard representation,
someone has answered that question in the affirmative.  They've done so
on the basis of some abstract model of the transaction, probably derived
from historical records, regulations, custom, or other past experience.
Surely, some such decisions are just wrong, due to poor analysis or to
poor, tacit, or seat-of-the-pants models.

What to standardize for the sake of interoperability is based on the
model of the operation, which has to include all the roles for potential
users, even in the hypothetical nirvana of the semantic web.  The
semantic web just pushes the model to a higher level of abstraction,
making it all the more difficult to achieve, since there will be less
chance of agreement on the model and (I'm going to really stick my neck
out here) absolutely no chance of basing the model on any actual human
behavior.  How can we expect people to behave in accord with an abstract
model that is completely foreign to them?  Does this mean that the
semantic web actually makes interoperability more difficult rather than
less?  Hmmm.  But that's completely off topic.  Sorry.

Enough rambling.

Bruce B Cox
Manager, Standards Development Division
OCIO/SDMG
571-272-9004


-----Original Message-----
From: Len [mailto:[email protected]] 
Sent: Saturday, January 10, 2009 11:49 AM
To: 'Costello, Roger L.'; xml-dev@l...
Subject: RE: What makes a data component good for standardizing?

The user anticipates you.

It is a negotiation.

len

-----Original Message-----
From: Costello, Roger L. [mailto:[email protected]] 
Sent: Saturday, January 10, 2009 8:34 AM
To: '[email protected]'
Subject:  What makes a data component good for standardizing?


Hi Folks,

Suppose you set out to create some standard data components, with the
goal of improving interoperability. In particular, you want these
standardized data components to improve interoperability between systems
that weren't a priori coded to understand each other's data exchange
format (i.e., you want to improve interoperability with the
"unanticipated user").

What makes one data component good, and another bad? 

(By "good" I mean the data component would in fact help improve
interoperability with the unanticipated user. By "bad" I mean the data
component would do little, if anything, to improving interoperability
with
the unanticipated user.)

I'll share my initial thoughts below. I'd like your feedback on them,
and I'd also like to hear your own thoughts.

Note: by "data component" I mean a chunk of markup that can be reused in
multiple XML vocabularies.
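
For example, a reusable postal-address component might look something
like the sketch below. The namespace and element names here are purely
hypothetical, just to make the idea concrete:

    <!-- Hypothetical standardized address component -->
    <addr:PostalAddress xmlns:addr="urn:example:postal-address">
        <addr:Street>123 Main Street</addr:Street>
        <addr:City>Springfield</addr:City>
        <addr:Region>VA</addr:Region>
        <addr:PostalCode>12345</addr:PostalCode>
        <addr:Country>US</addr:Country>
    </addr:PostalAddress>

Any vocabulary that needs an address could embed this same chunk rather
than inventing its own.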


MY INITIAL THOUGHTS

I think that some data components would be good to standardize, while
others would not be useful.

I'll start with two examples of data components that would be good to
standardize.

Think about a postal address. It would be a good data component to
standardize. It's a useful data component even if I don't understand the
context in which it's being used. 
 
      For example, suppose some nuclear physicist unexpectedly sends
      me a document full of data that means nothing to me, but
      embedded in it is a postal address. I may not be able to
      process all that data about subatomic particles (quarks,
      neutrinos, etc.), but I can pluck out the postal address and
      store it in my address book.
 
That's interoperability between unanticipated users, albeit limited.
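
Here is a minimal sketch of that "plucking out" as an XSLT 1.0
stylesheet, assuming the hypothetical addr namespace from the sketch
above (any XML-aware tool could do the same):

    <xsl:stylesheet version="1.0"
        xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
        xmlns:addr="urn:example:postal-address">
      <!-- Start at the root of the physicist's (unknown) document
           and copy out every addr:PostalAddress, wherever it
           occurs; everything else is simply never visited. -->
      <xsl:template match="/">
        <addresses>
          <xsl:copy-of select="//addr:PostalAddress"/>
        </addresses>
      </xsl:template>
    </xsl:stylesheet>

The extraction depends only on the standardized component, never on
the physicist's vocabulary, which is the whole point.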
 
Another example of a useful data component is a business card (vCard).
Again, that's a useful data component that I can immediately use, even
if I have no clue what the rest of the document is talking about.
 
These data components are useful independent of their context. I can use
the data components even if I can't use all the stuff that they reside
in.
 
Now I'll give an example of a data component which I think would not be
useful to standardize.

Both postal address and vCard give a person's name (along with other
data). Suppose I decide that I want data components with finer
granularity than postal address or vCard. Would "person name" make a
good component for standardizing?

I think not. A person's name would not be useful independent of context.

 
      For example, the same nuclear physicist above sends me the
      same document, but this time containing the standardized
      PersonName data component, about a person named "Jim Brown."
      I am PersonName-aware, so I am able to pluck out that
      Jim Brown information, even though I have no clue what the
      rest of the document says. Have I gained anything? No. It
      could be Jim Brown the ex-football player or some other
      person by that name. To make sense of the data component I
      need to understand its context.
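
To make that concrete, a bare standardized name (again with a purely
hypothetical namespace) parses cleanly but tells the unanticipated
reader nothing actionable:

      <!-- Well-formed and extractable, yet: which Jim Brown? -->
      <pn:PersonName xmlns:pn="urn:example:person-name">
          <pn:Given>Jim</pn:Given>
          <pn:Family>Brown</pn:Family>
      </pn:PersonName>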
 
I propose these two metrics for evaluating the usefulness of a data
component:
 
    1. The data component must be standardized
       and broadly adopted (see below).
    2. If I can meaningfully use the data component
       without understanding any of the context in
       which it resides, then it is a good data
       component. If I must understand its context,
       then it is a bad data component.
  
Standardizing is good. It enables two parties to understand each other,
i.e., interoperate. 
 
But standardization is not enough. I want more than interoperability
between two parties that have a priori agreed to a data interchange
format. I want interoperability between two parties that haven't a
priori agreed to a data interchange format. I want interoperability
between unanticipated parties.
 
So the key is to not only standardize, but standardize the right things.
 

SUMMARY 

We would go a long way toward advancing interoperability of
unanticipated systems if we focused on creating standardized components
that are useful independent of context.

What do you think?
 
/Roger  
_______________________________________________________________________

XML-DEV is a publicly archived, unmoderated list hosted by OASIS
to support XML implementation and development. To minimize
spam in the archives, you must subscribe before posting.

[Un]Subscribe/change address: http://www.oasis-open.org/mlmanage/
Or unsubscribe: [email protected]
subscribe: [email protected]
List archive: http://lists.xml.org/archives/xml-dev/
List Guidelines: http://www.oasis-open.org/maillists/guidelines.php


