# Future of postal interoperability
worldpostday · 6 months ago
Shaping the future of postal interoperability.
Ready to Market Interoperability Group (RMIG): shaping the future of postal interoperability
The Universal Postal Union (UPU) has launched a groundbreaking initiative called the Ready to Market Interoperability Group (RMIG). This initiative strengthens cooperation between Designated Operators (DOs) and wider postal sector players (WPSPs) to support growth, innovation and sustainability in the postal ecosystem while directly addressing the needs of customers.
Read the full story.
xsoldier · 1 year ago
It's like he's TRYING to broadcast to the entire world how little he knows about any of the things he's doing and how he's eliminated all of the competent staff who would normally prevent this kind of thing.
Like "Twitter" is one of the most globally recognizable brands in the WORLD, and trying to achieve that type of marketing awareness is INSANELY difficult. Not to mention that effective SEO for a single letter is basically impossible.
For example: think about searching for something like "Jack Black Twitter", and now think about searching for "Jack Black X". How likely are you to land on the result you're trying to find? There's a REASON companies choose names that are natural-sounding but slightly obscure. Google, Bing, Yahoo, MySpace, Facebook, Twitter, Tumblr, TikTok, etc. are names designed to make it easy for search engines to locate and elevate the right results when people are looking for them. Now think about how many other pages have "X" in them, and how a search engine can effectively surface that.
The worst part of this is that this is what the ultra rich are capable of doing to an entire communications system at their whim. Imagine if this were a private postal service that suddenly shut down, and every letter or piece of mail that you'd ever sent with it vanished too. Like, there will be Internet Archive backups, but that's mostly it.
Back in the early 2000s, if your admin went fucking nuts & privated the whole community, if they became mad with power and drove everything into the ground, if someone just didn't renew the domain name or the server owner didn't allow you to keep being hosted — the community could still find backups of the information and restart it somewhere else. I know this because we did exactly that in the early 2000s with the forums I've been an admin of now for almost 15 years.
Twitter can't really do that, and Threads or other pop-up replacements are just other megacorporations trying to monetize an unstable market, reinforce their own existing ecosystems, and gain control over more of those things in a single location that's incentivized AGAINST allowing multi-party interoperability, like Reddit cutting off API access to third parties after Twitter did the same. Yes, Tumblr is a refuge, but for how long?
This is also why online communities moving off web forums and onto Discord gets worrying, because when THOSE communities go dark, the totality of that information up and vanishes. No Web Archive backups. Nothing. There is a worrying volatility to historic information these days, and while I know online platforms don't seem all that important sometimes, it's important to remember that random documents and manuscripts are historically significant.
The conversation about preserving digital video games is getting more prominent as companies stop supporting the distribution platforms and the games just up and COMPLETELY vanish at the whims of some random rich idiot who doesn't know what the fuck they're doing, or who does know and doesn't care.
Not just that, but a lot of modern social media platforms are usually the only methods of correspondence we have with at least a few people, and there's a wealthy third party who essentially owns your ability to stay connected to them. It's kind of terrifying to watch how easy it is for someone to sabotage that out of idiocy and ignorance, but it should be more worrying to consider how much easier that would be for someone with the full intent to do so.
Capitalists & oligarchs don't care about the things they preside over, and while it's fun to point and laugh at their expense when the extension of the thing they represent is catching fire, it's important to also consider exactly what the big picture of that means for the future.
I slept in and just woke up, so here's what I've been able to figure out while sipping coffee:
Twitter has officially rebranded to X just a day or two after the move was announced.
The official branding is that a tweet is now called "an X", for which there are too many jokes to make.
The official account is still @twitter because someone else owns @X and they didn't reclaim the username first.
The logo is 𝕏, the Unicode character U+1D54F, so the logo cannot be copyrighted and it is highly likely that it cannot be protected as a trademark. (You can verify the codepoint with the snippet after this list.)
Outside the visual logo, the trademark for the use of the name "X" in social media is held by Meta/Facebook, while the trademark for "X" in finance/commerce is owned by Microsoft.
The rebranding has been stopped in Japan as the term "X Japan" is trademarked by the band X JAPAN.
Elon had workers taking down the "Twitter" name from the side of the building. He did not have any permits to do this. The building owner called the cops who stopped the crew midway through so the sign just says "er".
He still plans to call the streaming and media hosting branch of the company "Xvideo". Nobody tell him.
This man wants you to give him control over all of your financial information.
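If you want to check the codepoint claim from the list above yourself, a few lines of Python (standard library only) will do it:

```python
import unicodedata

glyph = chr(0x1D54F)              # build the character from its codepoint
print(glyph)                      # 𝕏
print(unicodedata.name(glyph))    # MATHEMATICAL DOUBLE-STRUCK CAPITAL X
print(f"U+{ord(glyph):04X}")      # U+1D54F
```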
karza-technologies · 2 years ago
What will the fintech landscape look like in 2023? Take a look at the upcoming trends!
Overview
Introduction to the fintech ecosystem in India
Rise of Digital Banking and Cashless Transactions
Explosion of Embedded Finance Solutions
Era of Decentralisation
Open Banking
There is a growing trend toward people managing their money and business online rather than putting up with the glacial pace, red tape, and bureaucracy of traditional financial institutions.
The fintech sector is booming, with traditional financial institutions ramping up their investments in financial technologies to outpace startups and deliver financial services faster and more efficiently.
"The global Fintech market is expected to grow exponentially and reach a colossal market value of approximately $324 billion by 2026. Here is a comprehensive list of significant trends that will shape and define the industry’s future"
Here are the top trends in fintech to watch out for:
Rise of Digital Banking and Cashless Transactions
In the field of personal finance, consumers are increasingly seeking seamless, unrestricted and simple access to their bank accounts on their smartphones. There is fierce competition among traditional banks and financial institutions to integrate their products and services into mobile banking and provide seamless user experiences.
With the unprecedented rise of digital-first banks and neo-banks, traditional banks and financial institutions are racing to develop mobile banking applications and ship periodic updates, working to increase efficiency and effectiveness as they fend off these new adversaries.
With low fees, convenient mobile banking and an enhanced customer experience, neo-banks are attracting customers in droves and pulling ahead of traditional banks.
The pandemic has given digital transformation in banking a big boost
In this pandemic-stricken economy, the desire for cashless transactions is slated to rise, opening a world of opportunities for digital wallet providers. ACI Worldwide reports that more than $70.3 billion in real-time payments transactions were processed globally in 2020, up 41% over the previous year.
Digitally linked transactions not only increase transaction volume, but also serve as a repository for business data. Using the information gathered and compiled, new services can be developed. Consequently, new revenue streams are generated, new data-driven offerings emerge, and cost-effective business setups are established.
Post offices will be an integral part of the core banking ecosystem under Budget 2022. This will propel the government's long-held objective of financial inclusion and account accessibility via net banking, mobile banking, ATMs, and online funds transfers between post office accounts and bank accounts.
This will be especially beneficial to farmers and senior citizens in rural areas, as India Post is the most widely distributed postal system in the world, with offices in every corner of the country, thereby enabling interoperability and financial inclusion.
The move would allow unbanked customers to receive direct transfers of government subsidies thereby bringing them into the formal banking ecosystem.
Explosion of Embedded Finance Solutions
The concept of embedded finance or embedded banking refers to the seamless integration of banking services and financial solutions into a traditionally non-financial platform.
Embedded finance has been a growing trend in recent years, as numerous banks look to become service providers to non-banks and non-financial institutions, delivering a customer experience and service proposition in which financial services are a significant component of a larger offering. Financial services are increasingly being integrated into the portfolios of non-banks and non-financial institutions.
For customers, this means getting access to financial services offered by banks from the platform itself. It may also mean making payments at the touch of a button, leading to a faster checkout and settlement process and an unmatched payment experience.
The increasing demand for embedded finance has led financial institutions to offer banking as a service, often white-labelled or co-branded within a larger offering. Customers can make payments, secure a loan to purchase something on an eCommerce website, or take out insurance to protect their products against unforeseen risks: anything that non-banks or non-financial institutions can offer their customers to meet financial, lending, and payment needs as an integral part of a larger customer experience.
Last year saw a boom in Buy-Now-Pay-Later offerings from companies like Simpl and Postpe, which offer easy instalments for buying particular merchandise while shopping online.
This trend is well-positioned to grow exponentially as more and more banks and financial institutions explore opportunities to become service providers to the non-banks and non-financial companies.
Era of Decentralisation
Most of the functions of banking, lending, and trading are managed and governed by centralised systems operated by governing bodies and gatekeepers today. Historically, customers have been forced to deal with a variety of financial middlemen to gain access to banking products and services such as mortgages, auto loans, equities trading, etc.
Due to this, customers have fewer avenues to access banking and financial services directly. They cannot bypass middlemen like banks, exchanges and lenders which earn a small commission on every financial and banking transaction as profit. It is a pay-to-play game for all of us.
Decentralised finance (DeFi) is an emerging approach that uses blockchain technologies, typically Ethereum and smart contracts, to recreate financial services. With DeFi, users are able to carry out a variety of financial transactions, such as transfers, lending, investing, trading or saving, without the permission of companies whose interests may not align with theirs.
DeFi threatens centralised financial systems by eliminating the need for intermediaries and middlemen to facilitate banking and financial transactions. Instead, people can transact directly, bypassing the middlemen via peer-to-peer exchanges.
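To make the disintermediation idea concrete, here is a deliberately toy Python ledger in which peers transact directly and no intermediary takes a commission. Real DeFi runs as smart contracts (typically Solidity on Ethereum); the class and balances below are invented purely for illustration of the principle.

```python
class ToyLedger:
    """A toy shared ledger: transfers settle peer-to-peer,
    with no bank or exchange taking a cut in the middle."""

    def __init__(self, balances):
        self.balances = dict(balances)

    def transfer(self, sender, receiver, amount):
        # The shared rules play the "contract" role; no intermediary approves the trade.
        if self.balances.get(sender, 0) < amount:
            raise ValueError("insufficient funds")
        self.balances[sender] -= amount
        self.balances[receiver] = self.balances.get(receiver, 0) + amount

ledger = ToyLedger({"alice": 100, "bob": 25})
ledger.transfer("alice", "bob", 40)   # direct, peer-to-peer, no commission
print(ledger.balances)                # {'alice': 60, 'bob': 65}
```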
Increasing adoption of cryptocurrency, tokenization, and non-fungible tokens has boosted the market. Blockchain technology and digital currencies gained massive traction last year as banks explored their potential.
Due to the growing popularity of cryptocurrencies, central banks are exploring the possibility of launching their own central bank digital currencies. Financial institutions are expected to build more solutions around blockchain in 2022 as mainstream adoption continues.
Open Banking
A bank has always been a repository for customer information and has held an effective monopoly over it. Banks not only gathered financial information by reviewing customers' transaction histories, but also held intimate personal information such as marital status.
Unauthorised access to this information would have had tangible, adverse financial consequences and would have damaged the bank's reputation and goodwill. To carry such a huge responsibility and secure the trust their customers placed in them, banks chose the most effective method available at the time.
They isolated and insulated themselves from others and did not share their information, protecting themselves and their customers.
Now that most transactions happen online, there is a tremendous amount of data that banks or merchants can use to create new products and services or market existing ones. Open banking has finally arrived.
Open Banking is a term used for an online banking system that gives third-party providers access to customers' financial information or bank accounts upon the customer's explicit, expressed permission.
In addition to allowing third parties to develop better personal finance management applications, open banking encourages incumbent banks to enhance and improve their offerings.
As a result of open banking, competition is fostered in the marketplace, as the customer's data can be viewed by any bank with the customer's consent enabling banks to provide hyper-personalised products and services based on the customer's data.
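As a rough sketch of what consent-scoped access looks like to a developer, here is a hypothetical third-party call to an open banking API in Python. The endpoint, token flow and function name are invented for illustration; real APIs vary by bank and regulator.

```python
import requests  # third-party HTTP library (pip install requests)

BASE = "https://api.examplebank.test/open-banking/v1"  # hypothetical endpoint

def fetch_accounts(consent_token: str) -> dict:
    """Read account data that the customer explicitly consented to share.
    Without a valid consent token, the bank must refuse the request."""
    resp = requests.get(
        f"{BASE}/accounts",
        headers={"Authorization": f"Bearer {consent_token}"},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()

# accounts = fetch_accounts("token-issued-after-customer-consent")
```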
The Indian government and private companies have joined forces to create a hybrid open banking system. A group of companies known as "Account Aggregators" has been granted regulatory approval by the Reserve Bank of India. As intermediaries, these companies facilitate access to customer data with the customer's consent.
A large number of private banks are partnering with fintech companies to form alliances and provide better-quality services to their customers as part of this ecosystem. This is driving the growth of fintech in India.
Conclusion
As traditional financial institutions are challenged and threatened by digital-only banks and neo-banks, the fintech market is becoming increasingly competitive.
According to the new research, Digital Banking: Banking-as-a-Service, Open Banking & Digital Transformation 2020-2024, digital-only banks have captured a substantial market share from traditional banks by providing superior user experiences and tightly focused USPs.
By 2024, digital banking users are expected to reach 3.6 billion, up from 2.4 billion in 2020, a 50% increase. Established banks must personalise the app experience, using AI-powered personal financial management tools to hold the fort against digital-only bank innovation.
The conclusion we can draw is that traditional banks and digital-only banks alike must invest heavily in their digital infrastructure and put their data to work in order to provide cutting-edge products and services, encapsulated in and accessible via mobile apps.
About Karza:
Karza Technologies is the largest data, analytics, automation, and decisioning solution provider to FIs, catering to the entire lending lifecycle from onboarding to diligence & monitoring to collections. Karza Technologies solutions enable systemic fraud prevention, risk management, compliance & automation through superior data engineering and deep tech applications.
In a nutshell, Karza stands on the trifecta of digitization: automation, enhanced diligence, and robust decisioning for straight-through processing; thus, creating a state-of-the-art digitization process without compromising on security and quality. Karza Technologies is a pioneer in the services it offers and has successfully acquired a very diverse portfolio of 300+ live clients, spanning the largest gamut of use cases in the industry.
Call us at 9890157934 to connect with the sales team at Karza.
seedfinance · 3 years ago
US digital dollar: Will fiat currency ride the crypto wave? | Business and Economy News
The US dollar has been the leading global reserve currency for many decades, backed by the full confidence and creditworthiness of the US government and the value of its powerful economy.
But in recent years, the world’s dominant currency unit has faced an increasing number of threats, from its own inefficient financial infrastructure to the introduction of the Chinese e-yuan to the specter of Facebook’s Diem and cryptocurrencies like Bitcoin.
As countries like the Bahamas, China and Sweden test the viability of the central bank digital currency (CBDC), US policymakers are taking stock.
Two US-based efforts are exploring the concept of a digital fiat dollar at a time when cash is no longer king.
The Digital Currency Initiative at the Massachusetts Institute of Technology (MIT) is working with the Federal Reserve Bank of Boston on research to accelerate the hypothetical currency shift in an experiment called Project Hamilton. The President of the Boston Fed has said that a future “Fedcoin” would mix the functions of Venmo and Apple Pay.
And the Digital Dollar Project, a collaboration between the non-profit Digital Dollar Foundation and consulting firm Accenture Plc, is launching five pilot projects next year. The project hopes to stimulate public discussion and take practical steps towards a CBDC.
“There are a number of reasons why central banks around the world are seriously considering CBDC, including data collection and economic data protection, financial system modernization, financial inclusion, precision in government execution and monetary policy,” said Christopher Giancarlo, co-founder of the Digital Dollar Foundation and a former US government regulator.
Giancarlo told Al Jazeera that “geopolitical concerns, competition from stablecoins [like Diem] with bank payment systems, and the leadership role in setting standards for the global interoperability of digital currencies” are also motivating the US.
“The private sector has been exploring the possibilities of digital assets like bitcoin for over a decade, and now the public sector is trying to catch up,” he said.
“Doesn’t need mining blocks”
There is no guarantee the US will successfully adopt a digital dollar, Chris Ostrowski, executive director of the UK-based Digital Monetary Institute, told Al Jazeera.
Progressive, libertarian, and business-minded technologists all have different ideas about what a CBDC should look like without agreeing on goals, design, or functionality.
Digital dollar advocates like Rohan Gray, president of the Modern Money Network and professor at Willamette University College of Law, argue that a nationwide approach can advance a coordinated, multi-path framework.
A number of U.S. lawmakers have proposed digital wallets to help Americans with and without bank accounts get benefits and make payments [File: J Scott Applewhite/AP Photo]

Gray believes US President Joe Biden should work with Congress to find a dollar digitization path, just as government agencies – at least in theory – join forces to address complex issues like the coronavirus pandemic, economic recovery, and climate change.
“We’re not talking about an instrument or platform or technology, we’re talking about a wide range of legislative changes,” Gray told Al Jazeera.
One of the controversial solutions being considered by progressives is a proposal by Senator Sherrod Brown, an Ohio Democrat, to create a FedAccount digital dollar wallet so that every American can receive benefits and make payments.
The system would be easily accessible at local banks and would have no fees. It ties in with a bill co-sponsored by Senators Bernie Sanders and Kirsten Gillibrand for the US Postal Service to provide retail banking services.
A related idea is the Public Banking Act, introduced by Representatives Rashida Tlaib and Alexandria Ocasio-Cortez, which aims to “introduce banking as a public utility, a globally proven model for keeping money local and reducing costs by cutting out Wall Street middlemen, shareholders and high-paid executives”.
Some digital tokens like Bitcoin rely on energy-sucking Proof of Work (PoW) consensus mechanisms to validate transactions and mint new coins, which involves running thousands of computers working in unison [File: Bloomberg]

Gray sees future “eCash” solutions as a populist tool to combat inequality and make money more democratic by offering token-based digital currencies on prepaid cards, alongside account-based ledger technology in which people hold assets directly at the central bank.
Like many left-wing advocates of the digital dollar, Gray says that blockchain – the technology behind cryptocurrencies – is not needed where there is enough centralized trust.
“Blockchain is supposed to be a consensus among a number of peers on a common state of affairs, but that’s not the question you’re trying to resolve here,” he said, referring to the way crypto networks verify transactions. “No mining blocks or proof of work are required for this.”
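To see concretely what “mining blocks” involves, and why a centrally trusted ledger can skip it, here is a minimal Python sketch contrasting a brute-force proof-of-work search with a single keyed signature. The difficulty and key are illustrative only.

```python
import hashlib
import hmac

def proof_of_work(data: bytes, difficulty: int = 4) -> int:
    """Brute-force a nonce until the hash starts with `difficulty` zero hex digits.
    This search is the energy-hungry step a trusted central ledger can skip."""
    nonce = 0
    target = "0" * difficulty
    while True:
        digest = hashlib.sha256(data + str(nonce).encode()).hexdigest()
        if digest.startswith(target):
            return nonce
        nonce += 1

# The centrally trusted alternative: one keyed signature, no search at all.
CENTRAL_BANK_KEY = b"demo-key"  # illustrative only

def central_sign(entry: bytes) -> str:
    return hmac.new(CENTRAL_BANK_KEY, entry, hashlib.sha256).hexdigest()

print(proof_of_work(b"tx: alice->bob 10"))   # tens of thousands of hash attempts
print(central_sign(b"tx: alice->bob 10"))    # a single operation
```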
Decentralized crypto and CBDC could one day coexist. But either way, privately created Bitcoin has accelerated the discussion of a CBDC partially replacing paper bills and metal coins.
“Advantages and Risks”
Regardless of whether a digital dollar is ultimately based on blockchain or is only influenced by the principles of cryptocurrency, US politicians of all stripes seem to largely agree on the desire not to fall behind in a global competition to digitize the greenback.
Many see a U.S. economy that has always pioneered the internet and fintech sectors and fear that at some point Beijing will shift from limited national implementation of its CBDC to a replacement for the popular Alipay and WeChat payment systems that already dominate much of Asia.
On the whole, despite concerns about illegal funding and money laundering, US officials have criticized the impact of the Chinese model on surveillance and consumer data collection.
Federal Reserve Chairman Jerome Powell and Treasury Secretary Janet Yellen have both signaled growing support for a digital dollar, though their comments about the lack of complete user anonymity have cast doubt – especially among conservative lawmakers.
Some libertarian members of the US House of Representatives are optimistic about cryptocurrency but pessimistic about big government. At a June 15 Task Force on Financial Technology hearing, senior Republican member Warren Davidson, an Ohio congressman, suggested that officials are still “learning” about a digital dollar.
Describing the current financial infrastructure as “already secure, effective, dynamic and efficient,” he said the US should push digitization “for the right reasons, not just to put pressure on ourselves.”
Federal Reserve Chairman Jerome Powell has signaled increasing support for a digital dollar [File: Graeme Jennings/Pool via Reuters]

Davidson’s main criticism concerns “sound money”: he worries that digital dollars could undermine stability and prosperity. However, he also warns against moving away from the “permissionless” aspects of cash that allow privacy in peer-to-peer transactions.
He told Al Jazeera that the digital dollar should be created “not as a control tool, but as a store of value and a medium of exchange.”
Patrick McHenry, senior Republican on the House of Representatives Financial Services Committee, called for a thorough study of both the “benefits and risks” and echoed the Fed’s commitment to “get it right” rather than be first.
Other Republicans focus on the inflationary potential of “printing” too much money, or the pitfalls of the public sector trying to mimic what commercial banks already do. From a cybersecurity perspective, too, keeping intermediary banks in place could insulate the Fed.
“Convenient addition to cash”
Around the 21st Century Dollar Act, a bipartisan bill requiring the US Treasury Secretary to post status updates on the dominance of the dollar and the progress of CBDC development, a consensus with Wall Street and financial technology companies could emerge.
About 80 percent of central banks are actively studying the CBDC concept, including the European Central Bank and the Bank of England, which recently published papers on the subject.
Last October, the Bahamas rolled out the “sand dollar,” the first national introduction of such a technology by a central bank. In April, the Central Bank of the East Caribbean presented its DCash. And the Bank of Jamaica plans to launch a digital currency next year.
The Jamaican program uses Ireland-based eCurrency Mint Inc as its technology provider. That company’s chief markets officer, Miles Au Yeung, suggests the US could do the same.
He told Al Jazeera that only the Treasury Department and the Federal Reserve should have the authority to create, issue and distribute this new form of legal tender.
“Each CBDC must be able to function within the financial system’s existing payment channels, including bank accounts, apps and payment cards, while expanding to smartphones, QR codes and other innovative ways to store digital assets,” said Yeung.
“The digital currency needs to achieve instant and final settlement,” he added, saying it should be able to scale “massively with minimal energy consumption.”
In the Bahamas, local company NZIA Ltd implemented the new digital currency, which General Counsel John Kim described to Al Jazeera as “the most mature, advanced system of its kind”.
While Jamaica’s CBDC is not based on blockchain, the Bahamas model is a “best of both worlds” hybrid that combines blockchain and centralized systems, says Kim, who adds that he is taking a “wait and see” approach to the digital US dollar.
“When redesigning a mission-critical national infrastructure,” he said, “readiness is just as important as speed.”
source https://seedfinance.net/2021/07/12/us-digital-dollar-will-fiat-currency-ride-the-crypto-wave-business-and-economy-news/
cool-cillian-murphy · 4 years ago
Truck Platooning Systems Market Is Fast Approaching, Says Research
Brief Summary of Truck Platooning Systems
Trucks are the major type of vehicle adopting the platooning system. Platooning is a method in which a group of two or more vehicles travels closely behind one another, safely, to reduce fuel consumption and air drag and make optimal use of road space, thereby avoiding traffic congestion. The lead truck has shown 4-5% fuel savings using this method. Truck platooning is recognized as the future of the transportation industry. A platoon is similar to a train's arrangement of compartments, but without physical connections. If truck platoons are permitted to operate on truck-only highways, this is expected to lead to an integrated business model between infrastructure service providers and transportation service providers, and may also result in several public-private partnerships for financing the concept of truck platooning.
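At the control level, platooning reduces to keeping a tight, safe gap to the vehicle ahead. The Python sketch below shows the idea with a simple proportional controller; the gains and target gap are illustrative values, not figures from any production system.

```python
def follower_accel(gap_m: float, rel_speed_mps: float,
                   desired_gap_m: float = 12.0,
                   k_gap: float = 0.3, k_speed: float = 0.8) -> float:
    """Toy proportional gap-keeping controller for a following truck.
    rel_speed_mps = leader speed minus follower speed.
    Positive return means accelerate; negative means brake. Gains are illustrative."""
    return k_gap * (gap_m - desired_gap_m) + k_speed * rel_speed_mps

# Gap is 20 m (too wide) and the leader is pulling away at 1 m/s:
print(follower_accel(20.0, 1.0))    # 3.2 -> speed up to close the gap
# Gap is 8 m (too tight) and the follower is closing at 2 m/s:
print(follower_accel(8.0, -2.0))    # -2.8 -> brake
```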
Free Sample Report + All Related Graphs & Charts @ :
https://www.advancemarketanalytics.com/sample-report/6117-global-truck-platooning-systems-market
Latest Research Study on the Global Truck Platooning Systems Market, published by AMA, offers a detailed overview of the factors influencing the global business scope. The Truck Platooning Systems Market research report shows the latest market insights with upcoming trends and a breakdown of products and services.
The report provides key statistics on the market status, size, share, growth factors, challenges and current scenario of the Truck Platooning Systems market. It also covers emerging players' data, including the competitive situation, sales, revenue and global market share of top manufacturers: Peloton Technology (United States), Volvo Group (Sweden), Scania (Sweden), Daimler (Germany), Navistar (United States), Toyota (Japan), Uber (United States), Bendix Commercial Vehicle Systems (United States), Continental AG (Germany), IVECO (Italy).
The Truck Platooning Systems Market Report offers a detailed overview of this market and discusses the dominant factors affecting its growth. The impact of Porter's Five Forces on the market over the next few years is discussed at length in this study. The report also forecasts global market size and market outlook over the next few years.
Product types, applications and geographical scope are taken as the main parameters for the Truck Platooning Systems market analysis. This research report assesses the industry chain supporting this market. It also provides accurate information on various aspects of this market, such as production capacity, capacity utilization, industrial policies affecting the manufacturing chain and market growth.
The Global Truck Platooning Systems Market segments and Market Data Break Down are illuminated below:
by Type (Autonomous, DATP (Driver-Assistive Truck Platooning)), Application (Heavy Trucks, Light Trucks, Other), Services (Telematics, ACE, Tracking, Diagnostics), Systems (Adaptive Cruise Control (ACC), Automated Emergency Braking (AEB), Blind Spot Warning (BSW), Forward Collision Warning (FCW), Global Positioning System (GPS), Human Machine Interface (HMI), Lane Keep Assist (LKA), Others), Sensor (Lidar, Radar, Image)
What's Trending in the Market:
Anticipated changes in the truck platoon business models.
The emergence of new businesses due to this market.
Challenges:
Undefined safety parameters.
Increasing user and public acceptance, and concerns over data privacy and cybersecurity regulations.
Restraints:
High cost of hardware technologies such as adaptive cruise control (ACC), blind-spot warning (BSW) and forward collision warning (FCW), which increases the overall cost of vehicles.
Lack of Infrastructure in the transport system in many countries.
Market Growth Drivers:
Interoperability of truck platooning method and platforms integrating connected technologies.
Rising adoption of advanced driver assistance system (ADAS) features and significant platooning system.
Advancement in technology and stringent vehicle norms in emerging economies.
Reducing CO2 emission and reduction in fuel consumption.
Region Included are:
North America, Europe, Asia Pacific, Oceania, South America, Middle East & Africa
Country Level Break-Up:
United States, Canada, Mexico, Brazil, Argentina, Colombia, Chile, South Africa, Nigeria, Tunisia, Morocco, Germany, United Kingdom (UK), the Netherlands, Spain, Italy, Belgium, Austria, Turkey, Russia, France, Poland, Israel, United Arab Emirates, Qatar, Saudi Arabia, China, Japan, Taiwan, South Korea, Singapore, India, Australia and New Zealand etc.
Enquire for customization in Report @:
https://www.advancemarketanalytics.com/enquiry-before-buy/6117-global-truck-platooning-systems-market
Strategic Points Covered in Table of Content of Global Truck Platooning Systems Market:
Chapter 1: Introduction, market driving force, product objective of study and research scope of the Truck Platooning Systems market
Chapter 2: Exclusive Summary – the basic information of the Truck Platooning Systems Market
Chapter 3: Displaying the Market Dynamics – Drivers, Trends and Challenges & Opportunities of the Truck Platooning Systems
Chapter 4: Presenting the Truck Platooning Systems Market Factor Analysis, Post COVID Impact Analysis, Porter's Five Forces, Supply/Value Chain, PESTEL analysis, Market Entropy, Patent/Trademark Analysis
Chapter 5: Displaying the market by Type, End User and Region/Country, 2015-2020
Chapter 6: Evaluating the leading manufacturers of the Truck Platooning Systems market, covering the Competitive Landscape, Peer Group Analysis, BCG Matrix & Company Profile
Chapter 7: Evaluating the market by segments, by countries and by Manufacturers/Company, with revenue share and sales by key countries in these various regions (2021-2026)
Chapter 8 & 9: Displaying the Appendix, Methodology and Data Source
Finally, the Truck Platooning Systems Market report is a valuable source of guidance for individuals and companies in their decision-making.
Data Sources & Methodology
The primary sources involve industry experts from the Global Truck Platooning Systems Market, including management organizations, processing organizations and analytics service providers across the industry's value chain. All primary sources were interviewed to gather and authenticate qualitative & quantitative information and determine future prospects.
In the extensive primary research process undertaken for this study, the primary sources (postal surveys, telephone, online & face-to-face surveys) were used to obtain and verify both qualitative and quantitative aspects of the study. For secondary sources, companies' annual reports, press releases, websites, investor presentations, conference call transcripts, webinars, journals, regulators, national customs and industry associations were given primary weightage.
Get More Information:
https://www.advancemarketanalytics.com/reports/6117-global-truck-platooning-systems-market
What benefits do AMA research studies provide?
Supporting company financial and cash flow planning
Latest industry influencing trends and development scenario
Open up New Markets
To Seize powerful market opportunities
Key decision in planning and to further expand market share
Identify Key Business Segments, Market proposition & Gap Analysis
Assisting in allocating marketing investments
In short, this report will give you an unmistakable perspective on every facet of the market without the need to refer to any other research report or data source. Our report will give you all the facts about the past, present, and future of the concerned market.
Thanks for reading this article; you can also get individual chapter-wise sections or region-wise report versions like North America, Europe or Asia.
About Author:
Advance Market Analytics is a global leader in the market research industry, providing quantified B2B research to Fortune 500 companies on high-growth emerging opportunities that will impact more than 80% of worldwide companies' revenues.
Our analysts track high-growth studies with detailed statistical and in-depth analysis of market trends & dynamics that provide a complete overview of the industry. We follow an extensive research methodology coupled with critical insights into industry factors and market forces to generate the best value for our clients. We provide reliable primary and secondary data sources, and our analysts and consultants derive informative and usable data suited to our clients' business needs. The research study enables clients to meet varied market objectives, from global footprint expansion to supply chain optimization and from competitor profiling to M&As.
Contact Us:
Craig Francis (PR & Marketing Manager) AMA Research & Media LLP Unit No. 429, Parsonage Road Edison, NJ New Jersey USA – 08837 Phone: +1 (206) 317 1218
Connect with us at
https://www.linkedin.com/company/advance-market-analytics
https://www.facebook.com/AMA-Research-Media-LLP-344722399585916
https://twitter.com/amareport
ukbitsolutions · 5 years ago
IoT Devices List – D
IoT Devices List – D Glossary Terms
Introduction
IoT has taken over the world and all our lives. If you are here, you already know about it.
Whether you are interested in smart homes, IIoT, smart cities, or cutting-edge computing, we've compiled this list of IoT devices, IoT protocols, & Internet of Things-related phrases that you should be aware of while you dive into the connected future. Let us know in the comments if we missed something & send us any new terminology with a good definition. This IoT dictionary is yours.
IoT is part of a new era with an endless list of devices, so here we index only terms and devices starting with the letter "D". Keep following our website for an updated IoT list.
The Glossary of Terms – D
Data Filtration
It's a part of the edge layer that reduces the amount of transmitted information while retaining its meaning.
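For example, a simple deadband filter is one common filtration tactic: a reading is transmitted only when it differs meaningfully from the last one sent. A minimal Python sketch (threshold chosen for illustration):

```python
def deadband_filter(readings, threshold=0.5):
    """Transmit a reading only if it differs from the last sent value
    by more than `threshold` (less traffic, same overall meaning)."""
    sent = []
    last = None
    for value in readings:
        if last is None or abs(value - last) > threshold:
            sent.append(value)
            last = value
    return sent

print(deadband_filter([20.0, 20.1, 20.2, 21.0, 21.1, 25.0]))
# [20.0, 21.0, 25.0]  (6 samples reduced to 3)
```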
  Dashboard
A user interface that showcases key information about a particular area in summarized form, often as graphs or other related widgets. The term is derived from the automobile dashboard; the design of the interface depends on what IoT-related information needs to be monitored or measured.
  Data Center
A collective term for the physical site, systems, network elements, etc., that support computing & network services.
  Data-Driven Decision Management (DDDM)
An approach to business governance that values decisions that can be backed up by verifiable data.
  Data Janitor
A subtask of data science in which dirty or duplicative data is cleaned up. In general, the janitor must sort data into the correct columns.
  Datakinesis
A term coined by Marc Blackmer: datakinesis occurs when an action taken in cyberspace has a result in the physical world. For example, industrial control systems are vulnerable to datakinetic attacks, in which physical equipment such as valves & sensors is compromised & damaged by hackers. Stuxnet is one such example.
  Data Lake
Coined by Pentaho CTO James Dixon, a data lake is a massive data repository designed to hold raw data until it's needed & to retain data attributes so as not to preclude any future uses or analysis. The data lake is generally stored on inexpensive hardware, & Hadoop can be used to manage the data, replacing OLAP as a means to answer specific questions. Sometimes also referred to as an "enterprise data hub," the data lake & its retention of native formats sit in contrast to the traditional data warehouse concept.
  Data Scientist
A job that combines statistics & programming, using languages such as R, to make sense of massive data sets. IoT devices and sensors, for example, create huge amounts of data, & the data scientist's role is to extract valuable information from it & detect any anomalies.
  DDS
Acronym for Digital Data Storage. This format was used to store computer data on magnetic audio tape. The technology was developed by HP & Sony in 1989 & is based on the digital audio tape (DAT) format. It was widely used in the 1990s.
  De-identification
The stripping away of any personally identifiable information from data before its use. The process must include both the removal of direct identifiers (name, email address, etc.) & the proper handling of quasi-identifiers (sex, marital status, profession, postal code, etc.).
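A minimal Python sketch of both steps on a single record; the field names and generalization rules are illustrative, and real pipelines must also reason about re-identification risk across combinations of quasi-identifiers.

```python
DIRECT_IDENTIFIERS = {"name", "email", "phone"}  # illustrative list

def deidentify(record: dict) -> dict:
    """Remove direct identifiers outright and coarsen quasi-identifiers."""
    out = {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}
    if "postal_code" in out:                 # quasi-identifier: generalize
        out["postal_code"] = out["postal_code"][:3] + "XX"
    if "age" in out:                         # quasi-identifier: bucket
        out["age"] = f"{(out['age'] // 10) * 10}s"
    return out

print(deidentify({"name": "A. Patel", "email": "a@x.test",
                  "age": 34, "postal_code": "90210"}))
# {'age': '30s', 'postal_code': '902XX'}
```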
  Device-Agnostic Control
Part of the edge layer that provides site abstraction, allowing server and/or cloud applications to be agnostic to the device implementations they control.
  Degrees of Freedom (DoF)
An engineering concept used in MEMS. It describes the directions in which an object can move &, generally, the number of independent variables in a dynamic system.
  Demand Response (DR)
The voluntary reduction of electricity usage by end users in response to high-demand pricing. Demand response can reduce electrical price volatility during peak demand periods & help avoid system emergencies. An example of DR would be a utility paying Nest to have thermostats automatically turn down air conditioners in empty homes on a hot day.
  Device Attack
An exploit that takes advantage of a vulnerable device to gain access to a network.
  DIN Rail
A metal rail used for mounting electrical equipment & racks.
  Direct Messaging
A messaging mechanism in which sender & receiver are connected directly or exchange messages via one or more intermediate hops. No hop takes ownership of a message; it simply forwards it (routing).
  Distributed Generation (DG)
Decentralized, modular, & flexible power generation located close to serviced loads. Distributed microgrids can control smaller areas of demand with distributed generation & storage capacity.
  DIY
Acronym for Do It Yourself. Enthusiasts generally tinker with gadgets or software to improve functionality or to do custom-install projects in their homes.
  DNP3 Protocol
An open, standards-based protocol for the electric utility industry, providing interoperability between substation computers, intelligent electronic devices (IEDs), remote terminal units & master stations.
  DoF
Degrees of Freedom
  DR
Demand Response.
  Domain Model
A model that contains all areas & terms related to a certain field of interest. It includes attributes, constraints, relations, facts, etc., which are relevant to a particular task.
  Domotics
A combination of "domestic" & "robotics," and also a composite of the Latin domus & informatics. It includes home automation systems, home robots, whole-house audio/visual systems & security systems. Domotic devices can communicate with each other.
  Downlink
Abbreviated as DL or D/L. The process of downloading data onto an end node from a server/target address. In a cellular network, this is data sent from a cellular base station to a mobile handset.
  DWG
A format for computer-aided design programs, most notably AutoCAD. It is used to store two- & three-dimensional design data & metadata.
suntecbusinesssolution · 3 years ago
Impact of e-Invoicing on KSA’s Banking System
The General Authority of Zakat and Tax (GAZT) of the Kingdom of Saudi Arabia has proposed making e-invoicing mandatory from December 4, 2021. This is in line with international best practices and is intended to reduce the shadow economy, increase tax compliance, and promote fair business practices. GAZT has suggested implementing e-invoicing in two phases. First, businesses should be able to generate and store tax invoices and notes in a structured electronic format, with no direct interaction with the tax authority. Second, the taxpayer's e-invoicing software should be able to integrate with GAZT systems and move to a clearance-based compliance model that can share real-time data with GAZT systems. This regulation applies to all persons subject to tax in KSA, as well as third parties issuing tax invoices on behalf of taxable residents. Based on the inputs received so far, businesses, including banks and financial institutions, must be prepared for a whole range of scenarios, including the possibility that approval may be required in real time for individual invoices and transactions.
e-Invoices, or digital tax invoices, do not include scanned copies or photocopies. They can be shared online and help eliminate the paper-based invoicing process. e-Invoices must be securely transmitted and stored without compromising the authenticity or integrity of the electronic data. e-Invoicing can make transactions more efficient, seamless, cost-effective and clear. In a system where e-invoicing is the norm, the government will have better and quicker insight into market conditions, be able to improve tax compliance and ensure greater transparency in commercial transactions. From a regulatory perspective, e-invoicing can help detect and reduce the shadow economy and enable real-time monitoring of the movement of goods, services and money. It can drive cost rationalization and reduction across the entire banking supply chain, including printing, postal, storage, and processing costs. It can help improve banking transparency and financial reporting to clients, and give corporate treasurers an early overview of working capital requirements. It also helps in data-based decision making related to corporate supply chain finance, while providing a tool to improve cash flow and reduce the order-to-cash cycle. The flexibility of e-invoicing models can help automate data feeds into the treasury system, which will in turn simplify and accelerate the reconciliation of account information. Once e-invoicing becomes the norm, there will be fewer transaction errors, as it ensures faster integrity checks.
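As a rough illustration of the integrity requirement, a structured invoice can be serialized deterministically and hashed so that any later tampering is detectable. The fields and hash choice below are illustrative only and are not the GAZT specification.

```python
import hashlib
import json

invoice = {
    "invoice_no": "INV-2021-0042",     # illustrative fields, not GAZT's schema
    "seller_vat": "300000000000003",
    "issued_at": "2021-12-04T10:00:00Z",
    "total_sar": "1150.00",
}

def invoice_digest(doc: dict) -> str:
    """Deterministic serialization + SHA-256: any later edit changes the digest."""
    canonical = json.dumps(doc, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

digest = invoice_digest(invoice)
print(digest)                               # store/transmit alongside the invoice
assert invoice_digest(invoice) == digest    # unchanged data -> same digest
```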
Of course, there are concerns about the lack of standardization in invoicing formats as well as security and privacy concerns. But the good news is that e-invoicing service providers are now investing in a common framework for all invoicing solutions to ensure seamless interoperability, format compatibility and maintain data integrity.
As e-invoicing becomes the norm across the world, the banking sector has a distinct edge over newer entrants in the field. A challenging business environment is forcing banking clients to pursue measures that can ensure an efficient working capital ecosystem to support their financial needs, and banks are forced to consider new transaction models to optimize costs. Clients working under these regulations will require a seamless, fast flow of billing information between various entities. By leveraging e-invoicing regulations, banks can streamline working capital management in the financial supply chain. This will help them optimize costs for both corporate and commercial clients.
Banks can develop e-invoicing systems in-house. But a better option is to partner with a third-party solution provider who can develop, manage and run a comprehensive and future forward e-invoicing solution. Right now, it makes business sense to work with an independent service provider on a revenue sharing model rather than invest in a proprietary platform. Given the short timelines for implementation in KSA, banks need to quickly identify a trusted and capable partner to manage this new system. Banks in KSA must consider partners with experience of working in the banking sector, such as VAT solution providers, who also have an e-invoicing solution that can be integrated seamlessly into their systems. They must also focus on data security, privacy, web access and data encryption to comply with KSA regulations. Banks must evaluate the tax environment and requirements and invest in a solution that can manage tax information and be able to interact seamlessly with regulator systems as well. Any solution deployed by a bank must help them provide open APIs related to authorization, accounts, and transaction data like payments.
Digital transformation of business operations is a reality. e-Invoicing is not just a regulatory requirement but also an effective way to improve the financial supply chain. The modern technology landscape offers banks a significant opportunity to serve customers and markets better. They need to analyze their systems to identify gaps and opportunities, and then work with the right solution partners who have the expertise and experience to help them leverage the e-invoicing opportunity. With the right solution and alliances, banks can offer collaborative and efficient services that will improve their performance and strengthen their competitive positioning.
worldpostday · 4 months ago
Postal financial services.
One area set to benefit from deeper engagement with WPSPs is postal financial services. In Riyadh, several proposals were adopted in this area, with a focus on ambitious modernization and interconnection with wider postal financial service players (WPFSPs).

Proposals passed will bring about changes to postal financial services on two fronts. First, they will usher in improvements to the UPU’s legal framework for postal payments, known as the Postal Payments Services Agreement (PPSA). These changes are directed at increasing the interoperability of the postal payments network, including the establishment of conditions for interconnection with WPSPs; prevention of money laundering, terrorist financing and financial crime; PPS*Clearing; remuneration; and the global adoption of a trusted trademark, PosTransfer.

The resolution also lays the groundwork for further, broader changes to the UPU’s legal frameworks associated with postal payments and other postal financial services, to identify opportunities related to diversification, the removal of outdated elements and the adoption of a more flexible approach to defining products and services, technologies, channels, and interoperability rules. This again includes rules for further engagement with WPFSPs.

To promote the continued evolution of postal financial services worldwide, member countries have also agreed to start work towards establishing an advisory knowledge centre, subject to extrabudgetary funding.

Prannoy Sharma, Deputy Director General (International Relations and Global Business), Department of Posts, Ministry of Communications, India, and co-chair of the UPU Congress topic on postal financial services, believes that a new knowledge centre for postal financial services will be especially beneficial for developing countries. “The knowledge centre will provide member countries with best practices, knowledge sharing, technical assistance, help with financial literacy, information on regulatory frameworks, and promote financial inclusion and gender equality,” he said. “Once we secure extra funding and develop the centre, it will become a vital resource for postal financial services.”

Speaking about the postal financial services proposals adopted in Riyadh in general, Sharma’s co-chair, M’hamed EL Moussaoui, Managing Director and member of the Executive Board of Al Barid Bank, Poste Maroc, added: “Regarding the future of financial services, we have taken a significant step forward during this Congress with the adoption of these reforms.”
A second part of the postal financial services reforms will be presented at the Dubai Universal Postal Congress in 2025.
lauramalchowblog · 6 years ago
Patient-Directed Access for Competition to Bend the Cost Curve
By ADRIAN GROPPER, MD
Many of you have received the email: Microsoft HealthVault is shutting down. By some accounts, Microsoft has spent over $1 Billion on a valiant attempt to create a patient-centered health information system. They were not greedy. They adopted standards that I worked on for about a decade. They generously funded non-profit Patient Privacy Rights to create an innovative privacy policy in a green field situation. They invited trusted patient surrogates like the American Heart Association to participate in the launch. They stuck with it for almost a dozen years. They failed. The broken market and promise of HITECH is to blame and now a new administration has the opportunity and the tools to avoid the rent-seekers’ trap.
The 2016 21st Century CURES Act is the law. It is built around two phrases: “information blocking” and “without special effort” that give the administration tremendous power to regulate anti-competitive behavior in the health information sector. The resulting draft regulation, February’s Notice of Proposed Rulemaking (NPRM) is a breakthrough attempt to bend the healthcare cost curve through patient empowerment and competition. It could be the last best chance to avoid a $6 Trillion, 20% of GDP future without introducing strict price controls.
This post highlights patient-directed access as the essential pro-competition aspect of the NPRM which allows the patient’s data to follow the patient to any service, any physician, any caregiver, anywhere in the country or in the world.
The NPRM is powerful regulation in the hands of an administration caught between anti-regulatory principles and an entrenched cabal of middlemen eager to keep their toll booth on the information highway. Readers interested in or frustrated by the evolution of patient-directed interoperability can review my posts on this over the HITECH years: 2012; 2013; 2013; 2014; 2015.
The struggle throughout has been a reluctance to allow digital patient-directed information exchange to bypass middlemen in the same way that fax or postal service information exchange does not introduce a rent-seeking intermediary capable of censorship over the connection.
Middlemen
Who are the middlemen? Simply put, they are everyone except the patient or the physician. Middlemen includes hospitals, health IT vendors, health information exchanges, certifiers like DirectTrust and CARIN Alliance, and a vast number of hidden data brokers like Surescripts, Optum, Lexis-Nexis, Equifax, and insurance rating services. The business model of the middlemen depends on keeping patients and physicians from bypassing their toll booth. In so doing, they are making it hard for new ventures to compete without paying the overhead imposed by the hospital or the fees imposed by the EHR vendors.
But what about data cleansing, search and discovery, outsourced security, and other value-added services these middlemen provide? A value-added service provider shouldn't need to put up barriers to bypass in order to stay in business. The doctor or patient should be able to choose which value-added services they want and pay for them in a competitive market. Information blocking and the requirement for special effort on the part of the patient or the physician would be illogical for any real value-added service provider.
In summary, patient-directed access is simply the ability for a patient to direct and control the access of information from one hospital system to another “without special effort”. Most of us know what that looks like because most of us already direct transfer of funds from one bank to another. We know how much effort is involved. We know that we need to sign-in to the sending bank portal in order to provide the destination address and to restrict how much money moves and whether it moves once or every month until further notice. We know that we can send this money not just to businesses but to anyone, including friends and family without censorship or restriction. In most cases today, these transfers don’t cost anything at all. Let’s call this kind of money interoperability “without special effort”.
Could interoperating money be even less effort than that? Yes. For instance, it's obnoxious that each bank and each payee forces us to use a different user interface. Why can't I just tell all of my banks and payees: use that managing agent or trustee that I choose? Why can't we get rid of all of the different emails and passwords for each of the 50+ portals in our lives and replace them with a secure digital wallet on our phone with fingerprint or face recognition protection? This would further reduce the special effort, but it does require more advanced standards. But, at least in payment, we can see it coming. Apple, for instance, gives me a biometric wallet for my credit cards and person-to-person payments. ApplePay also protects my privacy by not sharing my credit card info with the merchants. Beyond today's walled garden solutions, self-sovereign identity standards groups are adding the next layer of privacy and security to password-less sign-in and control over credentials.
Rent Seekers
But healthcare isn’t banking because HITECH fertilized layers upon layers of middlemen that we, as patients and doctors, do not control and sometimes, as with Surescripts, don’t even know exist. You might say that Visa or American Express are middlemen but they are middlemen that compete fiercely for our consumer business. As patients we have zero market power over the EHR vendors, the health information exchanges, and even the hospitals that employ our doctors. Our doctors are in the same boat. The EHR they use is forced on them by the hospital and many doctors are unhappy about that but subject to gag orders unprecedented in medicine until recently.
This is what “information blocking” means for patients and doctors. This is what the draft NPRM is trying to fix by mandating “without special effort”. This is what the hospitals, EHR vendors, and health information exchanges are going to try to squash before the NPRM becomes final. After the NPRM becomes a final regulation, presumably later in 2019, the hospitals and middlemen will have two years to fix information blocking. That brings us to 2022. Past experience with HITECH and Washington politics assures us of many years of further foot dragging and delay. We’ve seen this before with HIPAA, misinterpreted by hospitals in ways that frustrate patients, families, and physicians for over a decade.
Large hospital systems have too much political power at the state and local level to be driven by mere technology regulations. They routinely ignore the regulations that are bad for business like the patient-access features of HIPAA and the Accounting for Disclosures rules. Patients have no private right of action in HIPAA and the federal government has not enforced provisions like health records access abuses or refusal to account for disclosures. Patients and physicians are not organized to counter regulatory capture by the hospitals and health IT vendors.
The one thing hospitals do care about is Medicare payments. Some of the information blocking provisions of the draft NPRM are linked to Medicare participation. Let’s hope these are kept and enforced after the final regulations.
Competition to Bend the Cost Curve
Government has two paths to bending the cost curve: setting prices or meaningful competition. The ACA and HITECH have done neither. In theory, the government could do some of both, but let’s ignore the role of price controls because they can always be added later if competition proves inadequate. In any case, we’re in an administration that wants to go the pro-competition path, and it needs visible progress for patients and doctors before the next election. Just blaming pharma for high costs is probably not enough.
Meaningful competition requires multiple easy choices for both the patients and the prescribers as well as transparency of quality and cost. This will require a reversal of the HITECH strategy that allows large hospitals and their large EHRs to restrict the choices offered and to obscure the quality and cost behind the choices that are offered. We need health records systems that make the choice of imaging center, lab, hospital, medical group practice, direct primary care practice, urgent care center, specialist, and even telemedicine equally easy. “Without special effort”.
The NPRM has the makings of a pro-competitive shift away from large hospitals and other rent-seeking intermediaries, but the elements are buried in over a thousand pages of ONC and CMS jargon. This confuses implementers, physicians, and advocates, and should be fixed before the regulations are finalized. The fix requires a clear statement that middlemen are optional, and that the interoperability path that bypasses them, where “data follows the patient”, is the default “without special effort”. What follows are the essential clarifications I recommend for the final information blocking regulations (the Regulation, below).
Covered Entity – A hospital or technology provider subject to the Regulation and/or to Medicare conditions of participation.
Patient-directed vs. HIPAA TPO – Information is shared by a covered entity either as directed by the patient or, without patient consent, under the HIPAA Treatment, Payment, or Operations (TPO) provisions.
FHIR – The standard for information to follow the patient is FHIR. The FHIR standard will evolve under industry direction, primarily to meet the needs of large hospitals and large EHR vendors. The FHIR standard serves both patient-directed and HIPAA TPO sharing.
FHIR API – FHIR is necessary but not synonymous with a standard Application Programming Interface (API). The FHIR API can be used for both patient-directed and TPO sharing. Under the Regulation, all patient information available for sharing under TPO will also be available for sharing under patient direction. Information sharing that does not use the FHIR API, such as bulk transfers or private interfaces with business partners, will be regulated according to the information blocking provisions of the Regulation.
Server FHIR API – The FHIR API operated by a Covered Entity.
Client FHIR API – The FHIR API operated by a patient-designee. The patient designee can be anyone (doctor, family, service provider, research institution) anywhere in the world.
Patient-designee – A patient can direct a Covered Entity to connect to any Client FHIR API by specifying either the responsible user of a Client FHIR API or the responsible institution operating a Client FHIR API. Under no circumstances does the Regulation require the patient to use an intermediary such as a personal health record or data bank in order to designate a Client FHIR API connection. Patient-controlled intermediaries such as personal health records or data banks are just another Client FHIR API that happen to be owned, operated, or controlled by the patient themselves.
Dynamic Client Registration – The Server FHIR API will register the Client FHIR API without special effort as long as the patient clearly designates the operator of the Client. Examples of a clear designation would include: (a) a National Provider Identifier (NPI) as published in the NPPES https://npiregistry.cms.hhs.gov; (b) an email address; (c) an https://… FHIR API endpoint; (d) any other standardized identifier that is provided by the patient as part of a declaration digitally signed by the patient.
Digital Signature – The Client FHIR API must present a valid signed authorization token to the Server FHIR API. The authorization token may be digitally signed by the patient. The patient can sign such a token using: (a) a patient portal operated by the Server FHIR API; (b) a standard Authorization Server designated by the patient using the patient portal of the server operator (e.g. the UMA standard referenced in the Interoperability Standards Advisory); (c) a software statement from the Client FHIR API that is digitally signed by the Patient-designee. (A minimal sketch of such a token appears after these definitions.)
Refresh Tokens – Once the patient provides a digital signature that enables a FHIR API connection, that signed authorization should suffice for multiple future connections by that same Client FHIR API, typically for a year, or until revoked by the patient. The duration of the authorization can be set by the patient and revoked by the patient using the patient portal of the Server FHIR API.
Patient-designated Authorization Servers – The draft NPRM correctly recognizes the problem of patients having to visit multiple patient portals in order to review which Clients are authorized to receive what data and to revoke access authorization. A patient may not even know how many patient portals they have enabled, or how to reach them to check for sharing authorizations. By allowing the patient to designate the FHIR Authorization Server, a Server FHIR API operator would enable the patient to choose one service provider that manages authorizations in one place. This would also benefit the operator of the Server FHIR API by reducing the cost and risk of operating an authorization server. UMA, as referenced in the Interoperability Standards Advisory, is one candidate standard for enhancing FHIR APIs to enable a patient-designated authorization server.
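To make these definitions concrete, here is a minimal sketch, not the NPRM’s or any vendor’s actual API, of how a patient-signed authorization token could be created and then verified by a Server FHIR API. It assumes Python with the PyJWT and cryptography packages; the claim names (client_fhir_api, scope) and the key handling are illustrative assumptions, not part of the FHIR or UMA specifications.

```python
# A hypothetical patient-signed authorization token, assuming PyJWT
# (pip install pyjwt cryptography). Claim names below are illustrative only.
import datetime

import jwt  # PyJWT
from cryptography.hazmat.primitives.asymmetric import rsa

# Stand-in key pair; in practice the patient's wallet or portal would hold
# the private key and the Server FHIR API would know the public key.
patient_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

claims = {
    "iss": "patient:example-id",             # the patient, who signs the token
    "aud": "https://hospital.example/fhir",  # the Server FHIR API
    "client_fhir_api": "npi:1234567893",     # patient-designated client (assumed claim)
    "scope": "patient/*.read",               # what may be shared
    # Long-lived so one signature covers repeated connections ("refresh"
    # behavior); revocation would happen via the patient portal.
    "exp": datetime.datetime.now(datetime.timezone.utc) + datetime.timedelta(days=365),
}
token = jwt.encode(claims, patient_key, algorithm="RS256")

# The Server FHIR API verifies the signature and audience before registering
# the Client FHIR API and honoring the connection.
decoded = jwt.decode(
    token,
    patient_key.public_key(),
    algorithms=["RS256"],
    audience="https://hospital.example/fhir",
)
print(decoded["client_fhir_api"])
```

The point is only that a single patient signature can stand behind many later connections; the real standards (OAuth 2.0 software statements, UMA) define the precise token formats.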
Big Win for Patients and Physicians
As I read it, the 11 definitions above are consistent with the draft NPRM. Entrepreneurs, private investors, educators, and licensing boards stand ready to offer patients and physicians innovative services that compete with each other and with the incumbents that were so heavily subsidized by HITECH. To encourage this private-sector investment and provide a visible win to their constituents, Federal health architecture regulators and managers, including ONC, CMS, VA, and DoD, would do well to reorganize the Regulations in a way that makes the opportunity to compete on the basis of patient-directed exchange as clear as possible. As an alternative to reorganizing the Regulations, guidance could be provided that makes the 11 definitions above clear. Furthermore, although it could take years for private-sector covered entities to fully deploy patient-directed sharing, deployments directly controlled by the Federal government, such as access to the Medicare database and VA-DoD information sharing, could begin to implement patient-directed information sharing “without special effort” immediately. Give patients and doctors the power of modern technology.
Adrian Gropper, MD, is the CTO of Patient Privacy Rights, a national organization representing 10.3 million patients and among the foremost open data advocates in the country. 
hodldrgn-blog · 6 years ago
7 Myths of Self-Sovereign Identity
Dispelling misunderstandings around SSI (Part 1 of 2)
Image by David Travis on Unsplash
Here are seven myths of SSI that I repeatedly hear and will address across two posts. Myths 1–3 are discussed below; myths 4–7 in part 2.
1. Self-sovereign means self-attested.
2. SSI attempts to reduce government’s power over an identity owner.
3. SSI creates a national or “universal ID” credential.
4. SSI gives absolute control over identity.
5. There’s a “main” issuer of credentials.
6. There’s a built-in method of authenticating.
7. User-centric identity is the same as SSI.
Note: readers should have a basic understanding of how SSI works before reading this. For a primer, review the third and final section of The Three Models of Digital Identity Relationships.
The self-sovereign identity model.
Background
I recently attended the ID2020 event in New York, where some of the biggest players in identity were on hand, working toward fulfilling the United Nations’ Sustainable Development Goal 16.9: Identity for all by 2030. It was an excellent event, lots of energy, very professional, and serious about moving the needle on this BHAG (big, hairy, audacious goal).
We heard first-hand examples of the pains caused by broken identity systems around the world, some of which were truly heartbreaking. Most of us take for granted that we can prove things about ourselves, unaware that over a billion people cannot, leaving them unable to obtain desirable work or advanced education, open a bank account, hold title to property, or even travel. As noted by the World Bank’s ID4D, identity is a prerequisite to financial inclusion, and financial inclusion is a big part of solving poverty.
That means improving identity will reduce poverty, not to mention what it could do for human trafficking. Refugees bring another troubling identity dilemma where the need is critical, and where we are commencing efforts through our partnership with iRespond.
The Culprit
Several times throughout the event, SSI was discussed as a new and potentially big part of the solution. While there was clearly hope, there was also skepticism that, in my opinion, stems from misperceptions about what SSI really is and is not.
If SSI really was what these skeptics thought, I wouldn’t favor it either. And if they knew what SSI really is, I think they’d embrace it wholeheartedly.
The perception problem begins with the very term, “self-sovereign.”
At one point on the main stage, the venerable Kim Cameron, Microsoft’s Principal Identity Architect and author of the seminal 7 Laws of Identity, quipped:
“The term ‘self-sovereign’ identity makes me think of hillbillies on a survivalist kick.”
Kim went on to clarify that he is strongly in favor of SSI; he just dislikes the term and the negative perceptions it conjures up.
Me, too.
Self-sovereign identity is not a great term, for lots of reasons, but until we have a better one (“decentralized identity” is a serious candidate), let’s clarify the one we’ve got.
Myth 1: Self-sovereign means self-attested.
Third-Party Credentials
In meatspace (real life, compared with cyberspace), to prove something about yourself you must present what others say about you in the form of credentials or other evidence; without this, what you claim about yourself isn’t strongly reliable.
I can claim I went to Harvard, but when a prospective employer needs to know for sure, my claim is no longer sufficient. Saying my credit is great won’t get me a loan, and claiming I’m a pilot won’t get me into the cockpit. I need proof, and it must come from a source that the relying party will trust.
SSI is no different. You can make all the claims you want about yourself, but when a relying party needs to know for sure, you need to show them credentials provably issued by a source the relying party trusts.
Self-Attested Credentials
Self-attested verifiable credentials — what you say about yourself — still have their place: they are how you provide your opinion, preference, and most important, consent¹. Opinion, preference, and consent can only reliably come from the identity owner and not from third parties, whereas proof of identity or other attributes are exactly the opposite: they must come from third parties and not the identity owner.
So, to prove Timothy Ruff has given his consent — which only Timothy can give — you must be confident that you’re dealing with the real Timothy Ruff, which is only provable with third-party attestations.
This means that self-attested credentials, including consent, still rely indirectly on third-party credentials. (Unless it’s something like pizza preferences, where who you are doesn’t matter much.)
Bottom line: the foundation of SSI, as with any strong identity system, is third-party issued credentials, not self-attested credentials. SSI supports both, and each type can add value to the other.
Myth 2: SSI attempts to reduce government’s power over an identity owner.
This myth hearkens back to Kim’s comment, where the term “self-sovereign” could literally be interpreted to mean an individual might somehow become less subject to government. In reality, nothing could be further from the truth. In fact, SSI can actually build a stronger and richer relationship between governments and citizens.
SSI makes possible a private, encrypted, peer-to-peer connection between government and citizens that can, with mutual consent, be used for powerful mutual authentication (preventing phishing), communication, data sharing, and more. This connection wouldn’t be affected by changes in email address, postal address, phone numbers, and so on. And since both sides of the link would be self-sovereign, either side could terminate it, too.
From the perspective of government, the initial function of SSI is straightforward: take existing credentials, whether physical or digital, and begin issuing them cryptographically secure in the form of digital, verifiable credentials. These credentials then can be held independently by the individual, and verified instantly by anyone, anywhere, including government, when presented.
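To illustrate what issuing a credential “cryptographically secure” can mean in practice, here is a minimal sketch using an Ed25519 signature via Python’s cryptography package. The credential fields and the bare signed-JSON format are simplifying assumptions; real verifiable-credential formats such as W3C Verifiable Credentials carry considerably more structure.

```python
# Minimal sketch: an issuer signs a credential, anyone verifies it.
# Assumes the `cryptography` package; fields below are illustrative only.
import json

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

issuer_key = Ed25519PrivateKey.generate()  # issuer's signing key
issuer_pub = issuer_key.public_key()       # published for verifiers

credential = {
    "type": "ProofOfResidence",            # hypothetical credential type
    "subject": "did:example:alice",        # hypothetical subject identifier
    "issued": "2019-05-01",
}
payload = json.dumps(credential, sort_keys=True).encode()
signature = issuer_key.sign(payload)

# Any verifier holding the issuer's public key can check the credential
# offline, without calling back to the issuer.
try:
    issuer_pub.verify(signature, payload)
    print("credential verified")
except InvalidSignature:
    print("credential rejected")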
The secondary function of SSI is even more interesting: use the encrypted connection that was created during credential issuance for direct, private, ongoing interaction with the constituent.
From the perspective of the individual, we’ve actually had some central features of SSI for hundreds of years, using the global standard known as paper. Today, government gives you a passport which you carry and present anywhere you wish, with broad acceptance. SSI simply makes the same thing possible digitally, and with significant advantages (zero-knowledge proofs/selective disclosure, revocation, mutual authentication, etc.).
This digital transformation of credentials simply hasn’t been possible until now, at least interoperably and on a global scale.
Myth 3: SSI creates a national or “universal ID” credential.
There exists no intention (or delusion) that I am aware of that somehow SSI can, once it is broadly adopted, supplant a national ID system. On the contrary, as mentioned above, government should get excited about how SSI can complement and improve existing identity systems, whether national, regional, or otherwise.
SSI actually does not replace the trust of government or any other organization; it is simply a means for connecting and exchanging instantly authenticatable data. SSI is a set of protocols, not an actor, and it has no inherent basis for trust other than the cryptographic properties that ensure the privacy and integrity of the data exchanged and of the connection used to exchange it. What parties exchange over that connection, and whether to trust what was exchanged, is up to them.
Some governments already understand SSI and are leading out on its implementation. My prediction: all governments will eventually use SSI to issue credentials digitally, to better communicate with and interact with constituents, to streamline internal processes where slow verification bogs things down, to more strongly authenticate the people, organizations, and things they deal with, and to reduce the printing of paper and plastic.
SSI in the Developing World
Now that’s all fine and dandy for the developed world… but what about the billion-plus “invisibles” living without credentials, often in situations where a government is somehow struggling to issue them… can SSI help?
Quite possibly.
In some parts of the world, trust within a community is established by obtaining from a trusted individual a signed attestation that you’re worthy of obtaining a loan, for example. With SSI this could be done digitally rather than on paper, it could involve biometrics that strongly attach the attestation to the attestee and attestor, and it could include attestations and other potential credit scoring data from multiple sources.
I can imagine a baby born in a remote village and receiving her first “credentials” from her family and friends, who each give her attestations about her birth and their recollections of it. Pictures, videos, songs, and other precious memories could be added to her brand new digital wallet — which is now so much more than a wallet — and with guardianship of it tied to her parents. Who knows how such a set of credentials issued by loved ones might later be used, but my sense is that it could be vitally important some day.
I love the fact that SSI is powerful for both developed and developing worlds. I can’t wait to explore this topic more in the future.
Part 2, Myths 4–7, can be read here.
Footnotes:
¹ Consent is a rich topic that will be covered in greater detail in the future. See here for an eye-opening perspective about how elusive, and practically impossible in many cases, consent can be.
Founded in 2013, Evernym helps organizations implement self-sovereign identity, and individuals to manage and utilize their self-sovereign identity. Learn more at evernym.com.
eunicecom125-blog · 7 years ago
Week 09 - IoT (Internet of Things)
In my first post, I briefly mentioned the IoT and how the world is already starting to experience the impact of the Internet of Things. However, I didn’t exactly give the definition of IoT. Well, IoT is defined as a “network of physical devices, vehicles, home appliances, and other items embedded with electronics, software, sensors, actuators, and network connectivity which enable these objects to connect and exchange data.”
Each of these “things” is uniquely identifiable through its embedded computing system, yet able to interoperate within the existing Internet infrastructure.
Hahaha, that’s basically me trying to absorb all these technical terms. Today, in this post, I’ll be discussing IoT in Singapore!
So, as some of you may know, early last month the old SingPost Centre was finally relaunched after 2 years of redevelopment. The new shopping mall has been called a “Smart Mall” because it highlights how technology is changing the retail landscape, and it aims to be in line with the Singapore government’s Smart Nation initiative. Here’re some of the technological advances put into place:
1. SMART POST OFFICE  
The General Post Office is the largest post office in Singapore and it is Singapore’s very first “smart post office”. Its facilities are described as the future of the post office, offering automated systems that improve operational efficiency and provide round-the-clock access to postal and other essential services. Some of the features include:
The largest POPStation, with 143 lockers, giving customers the flexibility to collect and drop off parcels at any time of the day.
New SAM kiosks augmented with a self-service posting box for registered articles (the first in Singapore).
A new dropbox for registered articles, accessible 24/7. This new box allows customers to skip the queues by weighing their parcels and printing the labels at the SAM machines before depositing them.
2. ALL-LASER PROJECTION CINEMA  
Taking high definition to the next level, the cinema at SingPost Centre is also Singapore’s very first all-laser cinema, where award-winning smart laser projectors have been installed for enhanced image quality! Think clear, crisp, sharp images the next time you catch your favourite blockbuster movie!
3. SUPER HIGH TECH SUPERMARKET 
In my opinion, this is the true star of the “smart mall”! NTUC FairPrice and SingPost Centre definitely upped their game and introduced several new innovations:
a. Fairprice@Singpost Mobile Application 
Contains in-store navigation that allows customers to locate products and check availability; it even gathers data about users’ shopping habits to deliver user-specific promotions!
b. Scan2Go System
This handheld scanner = super efficient grocery shopping! Especially useful for customers who don’t want to spend a lot of time in the supermarket. 
Allows shoppers to simply scan their items as they shop and pay at the self-checkout counters. 
The handheld scanners keep track of a running total of shoppers’ purchases, the total amount spent, product descriptions, and promotions!
At the end of it, shoppers just scan a QR code at the self-checkout kiosk and pay the final amount! Saves the trouble of standing at the machine and slowly scanning one item at a time. (A rough sketch of this flow follows below.)
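Under the hood, a scan-as-you-shop flow like Scan2Go is essentially a running-total basket keyed by barcode. Here is a toy sketch in Python of what the scanner’s state could look like; the catalogue, prices, and checkout payload are invented for illustration and are not NTUC’s actual system.

```python
# Toy sketch of a scan-as-you-shop basket; product data is invented.
class ScanToGoBasket:
    CATALOGUE = {  # barcode -> (description, unit price in SGD)
        "888001": ("Jasmine rice 5kg", 12.50),
        "888002": ("Fresh milk 1L", 3.20),
    }

    def __init__(self):
        self.items = []  # list of (barcode, description, price)

    def scan(self, barcode):
        desc, price = self.CATALOGUE[barcode]
        self.items.append((barcode, desc, price))
        print(f"{desc}: ${price:.2f}  (running total ${self.total():.2f})")

    def total(self):
        return sum(price for _, _, price in self.items)

    def checkout_payload(self):
        # In the real system a QR code is scanned at the kiosk; here we just
        # serialize the basket so a kiosk could take payment for it.
        return {"items": [b for b, _, _ in self.items], "total": self.total()}

basket = ScanToGoBasket()
basket.scan("888001")
basket.scan("888002")
print(basket.checkout_payload())
```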
c. Click&collect lockers & experiential corners 
Lockers for self-collection of online purchases. These lockers even come with refrigerated storage to keep chilled products fresh and cold!
Experiential corners engage customers using augmented reality, allowing them to immerse themselves in a virtual setting and try different products.
The Internet of Things (IoT) is rapidly penetrating our society, and Singapore is definitely incorporating such web-connected objects as it advances toward becoming a Smart Nation. As you can see, the Internet of Things can change our lives with the technologies it offers. Experts estimate that the IoT will consist of about 30 billion objects by 2020. I definitely look forward to seeing what the future of IoT will bring.
And that wraps up this post on the new “smart mall”. Enjoy :)
stackinqueue · 8 years ago
IPv6 Tunneling over IPv4 Infrastructure
Part 1: Introduction

Although the Internet Protocol version 4 (IPv4) has provided efficient service for more than 20 years, the new Internet Protocol version 6 (IPv6) offers greater capability: a sufficient supply of IP addresses, stronger security, and mobility. It is therefore worth evaluating the benefits we can get from the IPv6 protocol compared to IPv4. Using transition mechanisms, we can upgrade the present IPv4 infrastructure to the next-generation Internet Protocol (IPv6) and gain its advantages.

When IPv4 was designed, most networks had just a few nodes, low bandwidth, high latency, and high error rates. The most common applications at the time were FTP, e-mail, and the like. In the early 1990s the computer industry expanded as personal computers (PCs) came to market, the Internet also developed, and electronic business, or e-commerce, began; market demand was the biggest factor in the Internet’s revolution. As the fast growth of the Internet became apparent in the early 1990s, it was clear that the IPv4 address space would be exhausted by the end of the century. Mechanisms such as Network Address Translation (NAT) have extended the lifetime of IPv4, but they are not a lasting solution.

Today, the market looks completely different than it did in the 1980s. Although FTP and e-mail are still very popular, new applications such as video conferencing, Voice-over-IP, e-commerce, and mobile devices have led the Internet Engineering Task Force (IETF) to seek a new Internet Protocol, which we call IPv6. IPv4 and IPv6 are incompatible protocols. As a result, transition to the new protocol cannot be expected to be painless, and it will involve significant costs for service providers and customers alike. Comparing the costs of transition against staying on IPv4 while trying to support new services can help us identify the best time to start the transition process. Whenever transition begins, there will be no single “flag day” on which the all-IPv4 network becomes an IPv6 network. At the Internet level, transition will be a lengthy process, with the two protocols existing side by side for many years to come.

To facilitate transition, the IETF (Internet Engineering Task Force) has set up a working group called ngtrans (Next Generation TRANSition) which specifies mechanisms for supporting interoperability between IPv4 and IPv6. In particular, the group has focused on two major problems:

• How to make IPv6 terminals communicate with IPv4 terminals.
• How to transport IPv6 over an IPv4 network so that IPv6 “islands” interconnected via the IPv4-based Internet can communicate.

This second problem, which is extremely important in the initial stage of IPv6 deployment, will be joined in the future by the reciprocal problem: how to transport IPv4 over IPv6. However, discussion of this issue has been postponed until the presence of IPv6 on the Internet reaches a significant level. Work on these problems has led to the development of a set of transition mechanisms, each targeted at a specific range of uses and applications.

Part 2: IP Overview

The Internet Protocol is the set of techniques used by many hosts for transmitting data over the Internet. The current version of the Internet Protocol is IPv4, which provides a 32-bit address system.
The Internet Protocol is a “best effort” system, meaning that no packet sent over it is guaranteed to reach its destination in the same condition it was sent. Often, other protocols are used in tandem with the Internet Protocol for data that, for one reason or another, must have extremely high fidelity. Every device connected to a network, be it a local area network (LAN) or the Internet, is given an Internet Protocol number. This address is used to identify the device uniquely among all other devices connected to the extended network.

2.1: Features of IP

IP is a connectionless protocol. This means it has no concept of a job or a session; each packet is treated as an entity in itself. IP is rather like a postal worker sorting letters: he is not concerned with whether a packet is one of a batch. He simply routes packets, one at a time, to the next location on the delivery route. IP is also unconcerned with whether a packet reaches its eventual destination, or whether packets arrive in the original order. There is no information in a packet to identify it as part of a sequence or as belonging to a particular job. Consequently, IP cannot tell whether packets have been lost or received out of order. IP is an unreliable protocol; any mechanisms for ensuring that the data sent arrives correct and intact are provided by the higher-level protocols in the suite.

2.2: IP Routing

So how does an IP packet addressed to a computer on the other side of the world find its way to its destination? The basic mechanism is very simple. On a LAN, every host sees every packet that is sent by every other host on that LAN. Normally, it will only do something with that packet if it is addressed to itself, or if the destination is a broadcast address. A router is different. A router examines every packet and compares the destination address with a table of addresses that it holds in memory. If it finds an exact match, it forwards the packet to an address associated with that entry in the table. This associated address may be the address of another network in a point-to-point link, or it may be the address of the next-hop router. If the router doesn’t find a match, it runs through the table again, this time looking for a match on just the network ID part of the address. Again, if a match is found, the packet is sent on to the address associated with that entry. If a match still isn’t found, the router looks to see whether a default next-hop address is present. If so, the packet is sent there. If no default address is present, the router sends an ICMP “host unreachable” or “network unreachable” message back to the sender. When you see this message, it usually indicates a router failure at some point in the network. The difficult part of a router’s job is not how it routes packets, but how it builds up its table. In the simplest case, the router table is static: it is read in from a file at start-up. That is sufficient for simple networks; you don’t even need a dedicated piece of equipment, because routing functionality is built into IP. Dynamic routing is more complicated. (A toy sketch of the lookup sequence just described follows below.)
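As promised, here is a toy Python version of that lookup sequence: exact host match, then network match, then default route, otherwise an ICMP error. The table entries are invented, and real routers use longest-prefix matching rather than this simplified two-pass scheme.

```python
# Toy route lookup, assuming Python's stdlib ipaddress module.
# Real routers use longest-prefix matching; this mirrors the simplified
# host-match / network-match / default sequence described above.
import ipaddress

HOST_ROUTES = {"192.0.2.7": "next-hop-A"}                  # exact matches
NET_ROUTES = {ipaddress.ip_network("198.51.100.0/24"): "next-hop-B"}
DEFAULT_ROUTE = "next-hop-C"                               # or None

def route(dst: str) -> str:
    if dst in HOST_ROUTES:                                 # 1. exact match
        return HOST_ROUTES[dst]
    addr = ipaddress.ip_address(dst)
    for net, hop in NET_ROUTES.items():                    # 2. network-ID match
        if addr in net:
            return hop
    if DEFAULT_ROUTE is not None:                          # 3. default next hop
        return DEFAULT_ROUTE
    return "ICMP: host/network unreachable"                # 4. give up

print(route("192.0.2.7"))      # next-hop-A
print(route("198.51.100.42"))  # next-hop-B
print(route("203.0.113.9"))    # next-hop-C
```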
The difficult part of a router's job is not how it routes packets, but how it builds up its table. In the simplest case, the router table is static: it is read in from a file at start-up. That is sufficient for simple networks; you do not even need a dedicated piece of equipment, because routing functionality is built into IP. Dynamic routing is more complicated. A router builds up its table by broadcasting ICMP router solicitation messages, to which other routers respond. Routing protocols are used to discover the shortest path to a location, and routes are updated periodically in response to traffic conditions and the availability of a route. However, the details of how this all works are beyond the scope of this report.

2.3: Future of the Internet

As we can see, the Internet will face a serious problem in a few years. Due to its amazing growth and the limitations in its design and facilities, there will come a point when no more free addresses are available for connecting new hosts or assigning to new devices. At that point, no more new web servers can be set up, no more users can open accounts at ISPs, and no more new machines can be set up to access the web or take part in online games. Several solutions have been proposed. A very popular approach is not to assign a globally unique address to every user's machine, but rather to assign them "private" addresses and hide several machines behind one official, globally unique address. This approach is called "Network Address Translation" (NAT). It has problems: the machines hidden behind the global address cannot be addressed directly, and as a result, opening connections to them — as used in online gaming, peer-to-peer networking, and so on — is not possible. A different approach to the problem of Internet addresses becoming scarce is to discard the old Internet Protocol with its limited addressing capabilities and use a new protocol that does not have these limitations. The protocol — actually, a suite of protocols — used by machines connected to form today's Internet is known as TCP/IP (Transmission Control Protocol, Internet Protocol), and version 4, currently in use, has all the problems described above. Switching to a different protocol version that does not have these problems of course requires that a new version be available. And there is indeed a better version: version 6 of the Internet Protocol (IPv6) answers future demands on address space, and also addresses other features such as privacy, encryption, and better support for mobile computing. Assuming a basic understanding of how today's IPv4 works, this report is intended as an introduction to the IPv6 protocol. The changes in address formats and name resolution are covered. After that, it is shown how to use IPv6 via a simple yet efficient transition mechanism called 6to4.

Section 3: IPv6 vs IPv4

When telling people to migrate from IPv4 to IPv6, the question you usually hear is "Why?". There are actually a few good reasons to move to the new version:
• Bigger address space
• Support for mobile devices
• Built-in security

3.1: Bigger address space

The bigger address space IPv6 offers is the most obvious enhancement over IPv4. While today's Internet architecture is based on 32-bit-wide addresses, the new version has 128-bit technology available for addressing. Thanks to the enlarged address space, workarounds like NAT no longer have to be used. This allows full, unconstrained IP connectivity for today's IP-based machines as well as upcoming mobile devices like PDAs and cell phones — all will benefit from full IP access through GPRS and UMTS.
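To put the two address spaces in perspective, a quick back-of-the-envelope comparison (simple arithmetic, nothing more):

```python
# 32-bit vs 128-bit address space, as discussed in Section 3.1.
ipv4_space = 2 ** 32
ipv6_space = 2 ** 128
print(f"IPv4: {ipv4_space:,} addresses")              # 4,294,967,296
print(f"IPv6: {ipv6_space:,} addresses")              # about 3.4 * 10**38
print(f"IPv6 is {ipv6_space // ipv4_space:,}x larger")  # 2**96
```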
3.2: Mobility

When mentioning mobile devices and IP, it is important to note that a special protocol is required to support mobility, and implementing this protocol, called "Mobile IP", is one of the requirements for every IPv6 stack. Thus, with IPv6 we have support for roaming between different networks, with global notification when we leave one network and enter another. Support for roaming is possible with IPv4 too, but there are a number of hoops that must be jumped through in order to get things working. With IPv6 there is no need for this, as support for mobility was one of the design requirements for the protocol.

3.3: Security

Besides support for mobility, security was another requirement for the successor to today's Internet Protocol. As a result, IPv6 protocol stacks are required to include IPsec. IPsec allows authentication, encryption, and compression of IP traffic. Unlike application-level protocols such as SSL or SSH, all IP traffic between two nodes can be handled without adjusting any applications. The benefit of this is that all applications on a machine can profit from encryption and authentication, and that policies can be set on a per-host (or even per-network) basis, not per application/service.

Section 4: IPv6 Addressing

The properties of IPv6 addressing are presented in this section.

4.1: Multiple addresses

In IPv4, a host usually has one IP number per network interface, or even per machine if the IP stack supports it. Only very rare applications, such as web servers, result in machines having more than one IP number. In IPv6, this is different. For each interface, there is not only a globally unique IP address; two other addresses are also of interest: the link-local address and the site-local address. The link-local address has the prefix fe80::/64, and the host bits are built from the interface's EUI-64 address. The link-local address is used for contacting hosts and routers on the same network only; these addresses are not visible or reachable from other subnets. If desired, there is the choice of either using global addresses as assigned by a provider, or using site-local addresses.[16] Site-local addresses are assigned the network prefix fec0::/10, and subnets and hosts can be addressed just as for provider-assigned networks. The only difference is that the addresses will not be visible to external machines, as those are on a different network and their site-local addresses belong to a different physical net. As with the 10/8 network in IPv4, site-local addresses can be used, but do not have to be. For IPv6 it is most common for hosts to be assigned both a link-local and a global IP address. Site-local addresses are quite uncommon today, and they are no substitute for globally unique addresses if global connectivity is required.
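As a concrete illustration of the link-local construction mentioned above, here is a minimal sketch, assuming the modified EUI-64 scheme (flip the universal/local bit of the MAC address and insert ff:fe in the middle); the MAC address is made up:

```python
# Derive a link-local IPv6 address (fe80::/64) from a 48-bit MAC address
# using modified EUI-64, as described in Section 4.1.
import ipaddress

def link_local_from_mac(mac: str) -> ipaddress.IPv6Address:
    octets = bytearray(int(part, 16) for part in mac.split(":"))
    assert len(octets) == 6, "expected a 48-bit MAC address"
    octets[0] ^= 0x02  # flip the universal/local bit of the first octet
    # Insert ff:fe between the two halves of the MAC to form the EUI-64.
    eui64 = bytes(octets[:3]) + b"\xff\xfe" + bytes(octets[3:])
    interface_id = int.from_bytes(eui64, "big")
    return ipaddress.IPv6Address((0xFE80 << 112) | interface_id)

print(link_local_from_mac("00:1a:2b:3c:4d:5e"))  # fe80::21a:2bff:fe3c:4d5e
```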
4.2: Multicasting

In IP land, there are three ways to talk to a host: unicast, broadcast, and multicast. The most common way is to talk to it directly using its unicast address. In IPv4, the unicast address is the "normal" IP address assigned to a single host, with all address bits assigned. The broadcast address used to address all hosts in the same IP subnet has the network bits set to the network address and all host bits set to "1", which can easily be computed using the netmask and a few bit operations. Multicast addresses are used to reach a number of hosts in the same multicast group, which may be machines spread across the Internet. Machines must join multicast groups explicitly to participate, and specific IPv4 ranges are used for multicast addresses, allocated from the 224/8 subnet. Multicast is not used very much in IPv4, and only a few applications use it. In IPv6, unicast addresses are used the same way as in IPv4 — no surprise there; all the network and host bits are assigned to identify the target network and machine. Broadcasts are no longer available in IPv6 in the way they were in IPv4; this is where multicasting comes into play. Addresses in the ff00::/8 network are reserved for multicast applications, and there are two special multicast addresses that supersede the broadcast addresses from IPv4: one is the "all routers" multicast address, the other is "all hosts". The details of IPv6 are essentially as proposed in the RFCs by the IETF; however, we chose Microsoft Windows 2003 as the platform on which to implement the tests. Due to its early stage of development, the IPv6 protocol stack in Windows 2003 still has many problems, such as fragmentation issues and no support for IPsec, a native security feature. Microsoft has two different implementations of an IPv6 stack, for Windows NT 5.0 and Windows 2003. The older stack, known as the "Microsoft Research IPv6 Release 1.4", works under both NT 4.0 and Win2K; the newer stack, known as the "Microsoft IPv6 Technology Preview for Windows 2003", works under Windows 2003. Both stacks require an existing IPv4 stack to be previously installed. Once installed, besides giving the Windows environment support for IPv6, the stack provides a whole new set of utilities, such as "ping6" and "tracert6", which are similar in function to "ping" and "tracert" but work with the new IPv6 stack. The good part of Microsoft's IPv6 implementation is that IPv6 socket creation is embedded in the Winsock2 API. That means a few more functions were added for creating sockets, but the fundamentals remained the same, so a programmer who can write an IPv4 application can quickly learn to write a simple IPv6 application as well. Internet Protocol version 6 is designed as an evolutionary upgrade to the Internet Protocol (IPv4) and will, in fact, coexist with the older IPv4 for some time. IPv6 is designed to let the Internet grow steadily, both in the number of hosts connected and in the total amount of data traffic transmitted. It has 128-bit addresses of the form FFFF:FFFF:FFFF:FFFF:FFFF:FFFF:FFFF:FFFF and supports up to 340,282,366,920,938,463,463,374,607,431,768,211,456 unique addresses. Table 1 shows the advantages of IPv6 versus IPv4. The IPv6 header is always present and has a fixed size of 40 bytes. The fields in the IPv6 header are described briefly below.

Version – 4 bits are used to indicate the version of IP; the field is set to 6.
Traffic Class – Indicates the class or priority of the IPv6 packet. The size of this field is 8 bits. The Traffic Class field provides functionality similar to the IPv4 Type of Service field.

Flow Label – Indicates that this packet belongs to a specific sequence of packets between a source and destination, requiring special handling by intermediate IPv6 routers. The size of this field is 20 bits. The Flow Label is used for non-default quality-of-service connections, such as those needed by real-time data (voice and video). For default router handling, the Flow Label is set to 0. There can be multiple flows between a source and destination, distinguished by separate non-zero Flow Labels.

Payload Length – Indicates the length of the IP payload. The size of this field is 16 bits. The Payload Length field includes the extension headers and the upper-layer PDU. With 16 bits, an IPv6 payload of up to 65,535 bytes can be indicated. For payload lengths greater than 65,535 bytes, the Payload Length field is set to 0 and the Jumbo Payload option is used in the Hop-by-Hop Options extension header.

Next Header – Indicates either the first extension header (if present) or the protocol in the upper-layer PDU (such as TCP, UDP, or ICMPv6). The size of this field is 8 bits. When indicating an upper-layer protocol above the Internet layer, the same values used in the IPv4 Protocol field are used here.

Extension Header – Zero or more extension headers can be present, and they are of various lengths. The Next Header field in the IPv6 header indicates the first extension header. Within each extension header is another Next Header field that indicates the next extension header. The last extension header indicates the upper-layer protocol (such as TCP, UDP, or ICMPv6) contained within the upper-layer protocol data unit. The IPv6 header and extension headers replace the existing IPv4 header with options. The new extension header format allows IPv6 to be augmented to support future needs and capabilities. Unlike options in the IPv4 header, IPv6 extension headers have no maximum size and can expand to accommodate all the extension data needed for IPv6 communication.

Hop Limit – Indicates the maximum number of links over which the IPv6 packet can travel before being discarded. The size of this field is 8 bits. The Hop Limit is analogous to the IPv4 TTL field, except that there is no historical relation to the amount of time (in seconds) that the packet is queued at the router. When the Hop Limit reaches 0, the packet is discarded and an ICMP Time Expired message is sent to the source address.

Source Address – Stores the IPv6 address of the originating host. The size is 128 bits.

Destination Address – Stores the IPv6 address of the current destination host. The size of this field is 128 bits. In most cases the Destination Address is set to the final destination address. However, if a Routing extension header is present, the Destination Address may be set to the next router interface in the source route list.
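The fixed 40-byte layout described above can be packed and unpacked with ordinary byte operations. A minimal sketch (not from the report; the addresses and field values are arbitrary):

```python
# Pack and unpack the fixed 40-byte IPv6 header: version (4 bits),
# traffic class (8), flow label (20), payload length (16), next header (8),
# hop limit (8), source and destination addresses (128 each).
import struct
import ipaddress

def pack_ipv6_header(payload_len, next_header, hop_limit, src, dst,
                     traffic_class=0, flow_label=0):
    # First 32-bit word: version | traffic class | flow label.
    first_word = (6 << 28) | (traffic_class << 20) | flow_label
    return struct.pack("!IHBB16s16s", first_word, payload_len,
                       next_header, hop_limit,
                       ipaddress.IPv6Address(src).packed,
                       ipaddress.IPv6Address(dst).packed)

# e.g. a 20-byte TCP segment follows (Next Header 6 = TCP).
hdr = pack_ipv6_header(20, 6, 64, "2001:db8::1", "2001:db8::2")
assert len(hdr) == 40

first_word, plen, nxt, hops, src, dst = struct.unpack("!IHBB16s16s", hdr)
print("version:", first_word >> 28)                  # 6
print("payload length:", plen, "next header:", nxt)  # 20 6
print("src:", ipaddress.IPv6Address(src))            # 2001:db8::1
```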
One thought could be to show off your entire Web at 12 pm, improve the community infrastructure embody routers, protocol stacks, …and switch the Web again on at 6 am and hope the whole lot works high quality and proper. That is unrealistic as a result of the truth that it might value extra money than it's possible, the time could be manner too quick, and nothing ever works pretty much as good as it's in concept. Extra gradual transition strategies have developed, ones that are more likely to occur over the course of 10 years or so. A number of the transition mechanisms are: Twin Stack SIIT – Stateless IP/ ICMP Translator AIIH – Project of IPv4 International Addresses to IPv6 Hosts NAT – Protocol Translator – has scaling and DNS points, and has single level of failure drawback Tunnel Dealer – dynamically acquire entry to tunnel servers, however has authentication and scaling points; 6-to-Four Mechanism – dynamic stateless tunnels over IPv4 infrastructure to attach 6-to-Four domains IPv6 in IPv4 tunneling – Permits present infrastructure to be utilized through manually configured tunnels o Host-Host Tunneling o Router-Router Tunneling o Host-Router and vice versa Tunneling 5.1 : Twin Stack: The essential method for allowing all communications is the so-called twin stack IP, the place every new host, server, router or different merchandise of kit coping with the IP degree can assist each protocols. On this manner, communication between IPv6 terminals takes place immediately, whereas an IPv4/IPv6 terminal which should talk with an IPv4-only terminal can accomplish that in IPv4. This method isn't notably burdensome for hosts and servers, as it's a software program improve which has no vital impression on the system. However, the primary disadvantage of this method is the necessity to preserve a multi-protocol community with a double routing infrastructure, which will increase directors' work load. As well as, generalized use of the twin stack IP mannequin won't be doable when deal with area exhaustion reaches the purpose that new IPv4 addresses can now not be assigned. To beat these issues, a number of options for interoperation between IPv6-only networks and IPv4-only networks have been specified which allow end-to-end communication between heterogeneous terminals: •Twin stack IP ALG gadgets which make it doable to carry out protocol translation on the borders between non-homogeneous networks by means of the usage of software proxies carried out on twin stack servers. •NAT-PT (Community Handle Translator - Protocol Translator) gadgets, which make it doable to carry out deal with and protocol translation on the borders between non-homogeneous networks at IP degree. •The Twin Stack Transition Mechanism, or DSTM, which proposes to make use of the twin stack IP method on the idea of IPv4 addresses assigned dynamically solely when wanted, and the usage of IPv4 over IPv6 tunneling to be able to cross the native IPv6 community earlier than accessing the outer IPv4 community. Although these transition mechanisms have the identical shortcomings as the same mechanisms proposed for interconnecting separate IPv4 networks, they supply a big benefit for the long run. Thus, whereas the mechanisms for IPv4 are last,and might now not be finished with out, these for the transition in direction of IPv6 are instrumental in making certain coexistence between IPv4 and IPv6, which ought to come to an finish as soon as the Web operates completely beneath IPv6. 
IPv6 was delivered with migration techniques to cover every conceivable IPv4 upgrade case, but many were ultimately rejected by the technology community, and today we are left with a small set of practical approaches. Dual stack involves running IPv4 and IPv6 at the same time: end nodes and routers/switches run both protocols, and if IPv6 communication is possible, it is the preferred protocol. A common dual-stack migration strategy is to make the transition from the core to the edge. This involves enabling two TCP/IP protocol stacks on the WAN core routers, then perimeter routers and firewalls, then the server-farm routers, and finally the desktop access routers. Once the network supports both IPv6 and IPv4, the process continues by enabling dual protocol stacks on the servers and then the edge computer systems. Another approach is to use tunnels to carry one protocol inside another. These tunnels take IPv6 packets and encapsulate them in IPv4 packets to be sent across portions of the network that haven't yet been upgraded to IPv6. Other techniques, such as Network Address Translation–Protocol Translation (NAT-PT), simply translate IPv6 packets into IPv4 packets. These translation techniques are more complicated than IPv4 NAT because the protocols have different header formats. Translation techniques were intended as a last resort: using dual-stack and tunneling techniques is preferable to using NAT-PT. It will be easier to run everything in dual-stack mode first and then remove the IPv4 protocol over time. Currently not many systems are being developed for IPv6-only communications, but there are many systems that work in dual-stack mode. Microsoft's new operating systems, for example, have a dual-layer architecture that allows seamless operation of either protocol. Therefore, migration plans should maximize the use of dual stack and minimize the amount of tunneling. It should also be mentioned that running dual stack is not the final state; we cannot forget that full migration to IPv6 is the final destination. In the 1990s the network industry used the phrase "Switch where you can, route where you must", though over time the performance gap between routing and switching closed. For IPv6 transitions the new motto will be "Dual stack where you can, tunnel where you must."

5.2: IPv6 in IPv4 tunneling

IPv6-in-IPv4 tunneling is one of the simplest transition mechanisms, by which two IPv6 hosts or networks can be connected with each other while running on existing IPv4 networks, through special routes called tunnels. In this approach, IPv6 packets are encapsulated in IPv4 packets and then sent over IPv4 networks like ordinary IPv4 packets through the tunnels. At the end of the tunnel these packets are decapsulated back into the original IPv6 packets. The following are some important characteristics of the tunneling mechanism: when encapsulating a datagram, the TTL in the inner IP header is decremented by one only if the tunnel is being used as part of forwarding the datagram; otherwise the inner header TTL is not modified during encapsulation. If the resulting TTL in the inner IP header is 0, the datagram is discarded and an ICMP Time Exceeded message is returned to the sender.
Consequently, an encapsulator will not encapsulate a datagram with TTL=0. Encapsulation of IPv6 in IPv4:
o Uses IPv4 routing and properties.
o Loses specific IPv6 features.
o Requires a hole in the firewall to allow through protocol 41 (IP in IP).
There are two kinds of tunnels: manual and dynamic. Manually configured IPv6 tunneling requires configuration at both ends of the tunnel, while dynamic tunnels are created automatically based on the packet destination address and routing. Dynamic tunneling techniques simplify maintenance compared with statically configured tunnels, but static tunnels make traffic information available for each endpoint, providing extra protection against injected traffic. There are, in fact, concerns over the security of tunneling techniques. For example, with dynamic tunnels it is not easy to track who is communicating over the transient tunnels, and you do not know the tunnel destination endpoint. It is a scary proposition when your routers communicate with other non-authenticated routers. It is also possible to send forged traffic toward a tunnel endpoint and get traffic spuriously inserted into the tunnel. Tunneling creates situations in which traffic is encapsulated, and many firewalls will not inspect the traffic if it is in a tunnel. Allowing IP protocol 41 (IPv6 encapsulated in IPv4) through an IPv4 firewall is not a best practice — it is like creating an "IPv6 permit any any" rule through the firewall. Tunnels will constantly have to be changed and monitored as the transition progresses, and they will also have to be removed when the IPv6 ocean grows larger and we migrate to full IPv6. Tunnels are therefore only a transitional approach, and troubleshooting in an environment full of tunnels will be difficult. Dynamic tunnel techniques do not create tunnel interfaces that can be monitored with SNMP. Dynamic tunnel techniques such as 6to4 use 2002::/16 addresses, which means the network will have to be re-addressed twice as part of the transition to IPv6. Many of the dynamic tunneling techniques are also unable to forward multicast traffic and cannot traverse an IPv4 NAT in the middle of the network. If a tunnel falls entirely within a routing domain, it is treated as a plain serial link by an interior routing protocol such as RIP or OSPF; if it lies between two routing domains, it needs an exterior protocol such as BGP. In case of congestion in the tunnel, an ICMP Source Quench message is issued to inform the previous node of the congestion. In different types of tunneling, only the de/encapsulation points vary, depending on where tunnels begin and end; the basic idea remains the same. IPv6 tunneling enables the iSeries server to connect to IPv6 nodes (hosts and routers) across IPv4 domains. Tunneling allows isolated IPv6 nodes or networks to communicate without changing the underlying IPv4 infrastructure; it lets the IPv4 and IPv6 protocols cooperate, and thereby provides a transitional method of implementing IPv6 while retaining IPv4 connectivity. A tunnel consists of two dual-stack (IPv4 and IPv6) nodes on an IPv4 network. These dual-stack nodes are capable of processing both IPv4 and IPv6 communications.
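The encapsulation step these dual-stack nodes perform — described in narrative form in the next paragraphs — amounts to prepending a 20-byte IPv4 header whose protocol field is 41. A minimal sketch, with no IPv4 options and no fragmentation handling (the endpoint addresses are made up):

```python
# 6in4 encapsulation: wrap an IPv6 packet in an IPv4 header, protocol 41.
import struct
import ipaddress

def ipv4_checksum(header: bytes) -> int:
    # One's-complement sum of the header's ten 16-bit words.
    total = sum(struct.unpack("!10H", header))
    while total >> 16:
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

def encapsulate_6in4(ipv6_packet: bytes, src4: str, dst4: str, ttl=64) -> bytes:
    total_len = 20 + len(ipv6_packet)
    hdr = struct.pack("!BBHHHBBH4s4s",
                      0x45, 0, total_len,  # version 4 / IHL 5, TOS, total length
                      0, 0,                # identification, flags/fragment offset
                      ttl, 41, 0,          # TTL, protocol 41 (IPv6), checksum=0
                      ipaddress.IPv4Address(src4).packed,
                      ipaddress.IPv4Address(dst4).packed)
    checksum = ipv4_checksum(hdr)
    return hdr[:10] + struct.pack("!H", checksum) + hdr[12:] + ipv6_packet

# Dummy 40-byte IPv6 header as the payload, just to exercise the function.
packet = encapsulate_6in4(b"\x60" + b"\x00" * 39, "192.0.2.1", "198.51.100.7")
assert len(packet) == 60
```

Decapsulation at the far end of the tunnel is simply the reverse: strip the first 20 bytes and route what remains as a native IPv6 packet.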
One of the dual-stack nodes on the edge of the IPv6 infrastructure inserts an IPv4 header in front of (encapsulates) each IPv6 packet that arrives and sends it, as if it were normal IPv4 traffic, through existing links. IPv4 routers continue to forward this traffic. On the other side of the tunnel, another dual-stack node removes the extra IP header from the IPv6 packet (decapsulates) and routes it to the ultimate destination using standard IPv6. IPv6 tunneling runs over configured tunnel lines, which are virtual lines. Configured tunnel lines provide IPv6 communications to any node with a routable IPv4 address that supports IPv6 tunnels. These nodes may exist anywhere — within the local IPv4 domain or within a remote domain. Configured tunnel connections are point-to-point. To configure this kind of tunnel line, you must specify the local tunnel endpoint (an IPv4 address), such as 124.10.10.150, and the local IPv6 address, such as 1080:0:0:0:8:800:200c:417a. We must also create an IPv6 route to allow traffic to travel through the tunnel. As we create the route, we define one of the tunnel's remote endpoints (an IPv4 address) as the route's next hop. We may configure an unlimited number of endpoints for an unlimited number of tunnels.

5.2.1: Host-to-Host Tunneling

In the host-to-host tunneling method, encapsulation is done at the source host and decapsulation at the destination host, so the tunnel is created between two hosts supporting both IPv4 and IPv6 stacks. The encapsulated datagrams are sent through the tunnel over the IPv4 network. Both dual-stack hosts encapsulate IPv6 packets in IPv4 packets and transmit them over the network as IPv4 packets, using all the characteristics and routing mechanisms of IPv4. With this transition mechanism, it is possible to support IPv6 simply by upgrading the end hosts' protocol stacks to IPv6 while leaving the IPv4 infrastructure unchanged.

5.2.2: Router-to-Router Tunneling

In the router-to-router tunneling mechanism, encapsulation is done at the edge router of the originating host, and decapsulation is done in the same way at the edge router of the destination host. The tunnel is created between two edge routers supporting both IPv4 and IPv6 stacks. Therefore, the end hosts can support a native IPv6 protocol stack while the edge routers create the tunnels and handle the encapsulation and decapsulation needed to transmit the packets over the existing IPv4 infrastructure. The IPv6 datagrams are forwarded from hosts to edge routers, with encapsulation taking place at the router level; at the other end, the reverse process takes place. In this method, both edge routers must support dual stacks and establish a tunnel prior to transmission.

5.2.3: Host-to-Router Tunneling

In the host-to-router tunneling mechanism, encapsulation is done at the originating host and decapsulation is done in the same way at the edge router of the destination host, and vice versa. The tunnel is created between one host and one edge router, both supporting IPv4 and IPv6 stacks. The encapsulated datagrams are sent through the tunnel over the existing IPv4 network. The same process can happen the other way around, from an edge router to an end host; the tunnel is thus established between the host and the router.
In this method, one dual-stack router and one dual-stack host are required.

5.3: Overlay Tunnels for IPv6

Overlay tunneling encapsulates IPv6 packets in IPv4 packets for delivery across an IPv4 infrastructure (a core network or the Internet). Using overlay tunnels, we can communicate with isolated IPv6 networks without upgrading the IPv4 infrastructure between them. Overlay tunnels can be configured between border routers or between a border router and a host; however, both tunnel endpoints must support both the IPv4 and IPv6 protocol stacks, as shown in Figure 4. Cisco IOS IPv6 supports the following types of overlay tunneling mechanisms:
• Manual
• Generic routing encapsulation (GRE)
• IPv4-compatible
• 6to4
• Intra-Site Automatic Tunnel Addressing Protocol (ISATAP)
Note: overlay tunnels reduce the maximum transmission unit (MTU) of an interface by 20 octets (assuming the basic IPv4 packet header does not contain optional fields). A network using overlay tunnels is difficult to troubleshoot. Therefore, overlay tunnels connecting isolated IPv6 networks should not be considered a final IPv6 network architecture; their use should be considered a transition technique toward a network that supports both the IPv4 and IPv6 protocol stacks, or just the IPv6 protocol stack.

5.5: GRE/IPv4 Tunnel Support for IPv6 Traffic

IPv6 traffic can be carried over IPv4 GRE tunnels using the standard GRE tunneling technique, which is designed to provide the services necessary to implement any standard point-to-point encapsulation scheme. As with IPv6 manually configured tunnels, GRE tunnels are links between two points, with a separate tunnel for each link. The tunnels are not tied to a specific passenger or transport protocol, but in this case carry IPv6 as the passenger protocol, with GRE as the carrier protocol and IPv4 or IPv6 as the transport protocol. The primary use of GRE tunnels is for stable connections that require regular secure communication between two edge routers or between an edge router and an end system. The edge routers and the end systems must be dual-stack implementations. GRE has a protocol field that identifies the passenger protocol. GRE tunnels allow Intermediate System-to-Intermediate System (IS-IS) or IPv6 to be specified as a passenger protocol, which allows both IS-IS and IPv6 traffic to run over the same tunnel. If GRE did not have a protocol field, it would be impossible to distinguish whether the tunnel was carrying IS-IS or IPv6 packets. The GRE protocol field is why it is desirable to tunnel IS-IS and IPv6 inside GRE.

5.6: GRE/CLNS Tunnel Support for IPv4 and IPv6 Packets

GRE tunneling of IPv4 and IPv6 packets through CLNS networks enables Cisco CLNS Tunnels (CTunnels) to interoperate with networking equipment from other vendors. The optional GRE services defined in header fields, such as checksums, keys, and sequencing, are not supported; any packet received requesting such services will be dropped.

5.7: Automatic 6to4 Tunnels

An automatic 6to4 tunnel allows isolated IPv6 domains to be connected over an IPv4 network to remote IPv6 networks. The key difference between automatic 6to4 tunnels and manually configured tunnels is that the tunnel is not point-to-point; it is point-to-multipoint.
In automatic 6to4 tunnels, routers are not configured in pairs because they treat the IPv4 infrastructure as a virtual nonbroadcast multiaccess (NBMA) link. The IPv4 address embedded in the IPv6 address is used to find the other end of the automatic tunnel. An automatic 6to4 tunnel may be configured on a border router in an isolated IPv6 network, which creates a tunnel on a per-packet basis to a border router in another IPv6 network over an IPv4 infrastructure. The tunnel destination is determined by the IPv4 address of the border router, extracted from the IPv6 address that begins with the prefix 2002::/16, where the format is 2002:border-router-IPv4-address::/48. Following the embedded IPv4 address are 16 bits that can be used to number networks within the site. The border router at each end of a 6to4 tunnel must support both the IPv4 and IPv6 protocol stacks. 6to4 tunnels are configured between border routers or between a border router and a host. The simplest deployment scenario for 6to4 tunnels is to interconnect multiple IPv6 sites, each of which has at least one connection to a shared IPv4 network. This IPv4 network could be the global Internet or a corporate backbone. The key requirement is that each site have a globally unique IPv4 address; the Cisco IOS software uses this address to construct a globally unique 6to4/48 IPv6 prefix. As with other tunnel mechanisms, appropriate entries in a Domain Name System (DNS) that map between hostnames and IP addresses for both IPv4 and IPv6 allow applications to choose the required address.

5.8: Automatic IPv4-Compatible IPv6 Tunnels

Automatic IPv4-compatible tunnels use IPv4-compatible IPv6 addresses: IPv6 unicast addresses that have zeros in the high-order 96 bits and an IPv4 address in the low-order 32 bits. They can be written as 0:0:0:0:0:0:A.B.C.D or ::A.B.C.D, where "A.B.C.D" represents the embedded IPv4 address. The tunnel destination is automatically determined by the IPv4 address in the low-order 32 bits of the IPv4-compatible IPv6 address. The host or router at each end of an IPv4-compatible tunnel must support both the IPv4 and IPv6 protocol stacks. IPv4-compatible tunnels can be configured between border routers or between a border router and a host. Using IPv4-compatible tunnels is an easy method to create tunnels for IPv6 over IPv4, but the technique does not scale for large networks. IPv4-compatible tunnels were initially supported for IPv6 but are being deprecated; Cisco recommends using the IPv6 ISATAP tunneling technique instead.
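The two address formats just described are easy to derive mechanically. A minimal sketch (the example IPv4 address is arbitrary): a 6to4 /48 prefix embeds the site's IPv4 address immediately after 2002::/16, while an IPv4-compatible address places it in the low-order 32 bits.

```python
# Derive a 6to4 prefix (Section 5.7) and an IPv4-compatible IPv6 address
# (Section 5.8) from an IPv4 address.
import ipaddress

def sixto4_prefix(ipv4: str) -> ipaddress.IPv6Network:
    v4 = int(ipaddress.IPv4Address(ipv4))
    # 2002 in the top 16 bits, the IPv4 address in the next 32 bits.
    return ipaddress.IPv6Network(((0x2002 << 112) | (v4 << 80), 48))

def ipv4_compatible(ipv4: str) -> ipaddress.IPv6Address:
    # Zeros in the high-order 96 bits, the IPv4 address in the low 32.
    return ipaddress.IPv6Address(int(ipaddress.IPv4Address(ipv4)))

print(sixto4_prefix("192.0.2.1"))    # 2002:c000:201::/48
print(ipv4_compatible("192.0.2.1"))  # ::c000:201, i.e. ::192.0.2.1
```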
Section 6: Potential Problems in IPv6 Networks

6.1: Poor IPv6 Network Performance

Most applications on dual-stack nodes will try IPv6 destinations first by default because of the Default Address Selection mechanism. If the IPv6 connectivity to those destinations is poor while the IPv4 connectivity is better — that is, the IPv6 traffic experiences higher latency, lower throughput, or more lost packets than IPv4 traffic — applications will still communicate over IPv6 at the expense of network performance. There is no information available to applications in this case to advise them to try another destination address. An example of such a situation is a node that obtains IPv4 connectivity natively through an ISP, but whose IPv6 connectivity is obtained through a configured tunnel whose other endpoint is placed topologically such that most IPv6 communication travels triangular IPv4 paths. Operational experience on the 6bone shows that IPv6 RTTs are poor in such situations. Another example is an enterprise network that has both IPv4 and IPv6 routing within the enterprise but a firewall configured to allow some IPv4 communication and no IPv6 communication.

6.2: Security Considerations for IPv6 over IPv4

Enabling IPv6 on a host means that the services on the host may be open to IPv6 communication. If a service itself is insecure and depends on a security policy enforced somewhere else on the network (such as in a firewall), then there is potential for new attacks against the service. A firewall may not be enforcing the same policy for IPv6 as for IPv4 traffic, possibly because of misconfiguration. One possibility is that the firewall has a more relaxed policy for IPv6, perhaps letting all IPv6 packets pass through, or letting all IPv4-encapsulated protocol packets pass through. In this scenario, the dual-stack hosts within the protected network would be subject to different attacks than those possible over IPv4. Even if a firewall has a stricter or identical policy for IPv6 traffic (the extreme case being that it drops all IPv6 traffic), IPv6 packets may pass through the network untouched if tunneled over a transport layer, which could open hosts to direct IPv6 attacks. It should be noted that IPv4 packets can also be tunneled, so this is not a security concern unique to IPv6; firewalls must be deliberately and properly configured. A similar problem may exist for virtual private network (VPN) software. A VPN may protect all IPv4 packets but transmit all others onto the local subnet unprotected — at least one widely used VPN behaves this way. This is problematic on a dual-stack host that has IPv6 enabled on its local network. It establishes its VPN link and attempts to communicate with destinations that resolve to both IPv4 and IPv6 addresses. The destination address selection mechanism prefers the IPv6 destination, so the application sends packets to an IPv6 address. The VPN doesn't know about IPv6, so instead of protecting the packets and sending them to the remote end of the VPN, it passes them in the clear to the local network. This is problematic for a number of reasons. The first is that if the node has a default IPv6 route, the packets will be forwarded off-link to an unknown destination. Another is that if no legitimate router is on-link and the node makes the on-link assumption, the packets will simply be sent onto the local link, where they may be viewed by a node spoofing the destination. A third is that a rogue IPv6 router may exist on-link, in which case the malicious node will simply be sent all the IPv6 packets in the clear.
6.3: Finding problems in TCP/IP using IPv6

In this part I describe the techniques and tools that can help identify a problem at successive layers of the Transmission Control Protocol/Internet Protocol (TCP/IP) stack when it is using an Internet Protocol version 6 (IPv6) Internet layer, in Microsoft Windows XP, Windows Server 2003, or Windows Vista. Depending on the type of problem, we might do one of the following:
- Start from the bottom of the stack and move up.
- Start from the top of the stack and move down.
The following sections are organized from the top of the stack and describe how to:
- Verify IPv6 connectivity
- Verify Domain Name System (DNS) name resolution for IPv6 addresses
- Verify IPv6-based TCP connections
Although not covered in the following sections, we can also use Network Monitor to capture IPv6 traffic when troubleshooting many problems with IPv6-based communications. Network Monitor is supplied with Microsoft Systems Management Server and as an optional network component with Windows Server 2003. However, to correctly interpret the display of IPv6 packets in Network Monitor, we must have detailed knowledge of the protocols included in each packet.

6.3.1: Address Configuration

To manually configure IPv6 addresses, use the netsh interface ipv6 set address command. In Windows Vista, we can manually configure IPv6 addresses from the properties of the Internet Protocol Version 6 (TCP/IPv6) component, available from the Network Connections folder. In most cases we do not need to manually configure IPv6 addresses, because they are automatically assigned to hosts through IPv6 address auto-configuration. To make changes to the configuration of IPv6 interfaces, we use the netsh interface ipv6 set interface command. To add the IPv6 addresses of DNS servers, use the netsh interface ipv6 add dnsserver command.

6.3.2: Verify Reachability

To verify reachability with a local or remote destination, try the following: check and flush the neighbor cache. Similar to the Address Resolution Protocol (ARP) cache, the neighbor cache stores recently resolved link-layer addresses. To display the current contents of the neighbor cache, use the netsh interface ipv6 show neighbors command.

Section 7: Conclusion

There are a number of mechanisms network administrators can use to transition their networks from IPv4 to IPv6. The transition technologies I have presented are robust for slowly and incrementally transitioning groups of networks, as well as for mixed protocol support among hosts within individual networks. My recommendation is to use tunneling of IPv6 over IPv4 as much as possible to simplify communication between IPv6 hosts: first use tunneling to support both IPv4 and IPv6 applications, then slowly transition to a pure IPv6 infrastructure. I believe this gradual process will support legacy systems until they are completely replaced, and will ready the intranet for an IPv6 Internet by the time of IPv4 address exhaustion. Microsoft still has software with no IPv6 support, but alternatives are available and everything still works on IPv4. It will take some time until everything has IPv6 support; until then, IPv6 and IPv4 can coexist without any problems.
It is therefore advisable to implement IPv6 as much as possible, because ultimately the migration from IPv4 to IPv6 must be made. When deciding to implement IPv6, it is important to plan everything very carefully. Especially when it comes to services, it is important to know whether the services installed and configured in your situation are capable of handling IPv6. Internet service providers may wait until there are enough IPv6 applications before deploying IPv6 networks, and application developers may wait for the IPv6 network to be deployed first. It is up to server and application developers to take IPv6 more and more into account, and for all business sectors to consider migrating to IPv6 rather than waiting for others to go first. Of course, if everyone waits until the last minute, it may end up costing much more — not just to engineer the transition, but in the cost of the disruption to what has become an essential part of our economic and social infrastructure. As I wrote above, a common dual-stack migration strategy is to make the transition from the core to the edge: enable two TCP/IP protocol stacks on the WAN core routers, then perimeter routers and firewalls, then the server-farm routers, and finally the desktop access routers. Once the network supports both protocols, the process continues by enabling dual protocol stacks on the servers and then the edge computer systems. In my opinion it is not difficult to implement IPv6 in an IPv4 environment, and if there are any hesitations left, this report shows that migration can go ahead without difficulties. The transition from IPv4 to IPv6 will be a large task for the industry: it will affect nearly all networked applications, end systems, infrastructure systems, and network architectures. The conversion to IPv6 has no specific timeline; however, as noted above, the pool of available IPv4 addresses is rapidly shrinking.

Section 9: References

[1] Borella, M.; Grabelsky, D.; Lo, J.; Taniguchi, K. "Realm Specific IP: Protocol Specification." IJCSNS International Journal of Computer Science and Network Security. http://tools.ietf.org/html/rfc3103, March 2007.
[2] Sawant, A. "IPv6 Features and Migration from IPv4." Bechtel Telecommunications Technical Journal, January 2004. www.bechteltelecoms.com/docs/bttj_v2/Article8.pdf
[3] Chown, T. "Considerations for IPv6 Tunneling Solutions." International Journal of Foundations of Computer Science (IJFCS), April 2004. University of Southampton.
[4] China Internet Network Information Center. "Statistical Survey Report on the Internet Development in China." http://www.cnnic.net.cn/uploadfiles/pdf/2007/2/14/200607.pdf, January 2007.
[5] Park, S. Daniel. "IPv6 Tunnel End-point Automatic Discovery Mechanism." IJCSNS International Journal, September 2004.
[6] Brownlee, Nevil; NeTraMet. "Observations of IPv6 Traffic on a 6to4 Relay." IJCSA, International Journal of Computer Science and Application. http://portal.acm.org/citation.cfm?id=1052821, January 2005.
[7] Muscetta, Daniele. "Connecting to an IPv6 Tunnel Broker." IJCSNS International Journal, 2005.
[8] Wright, A. "Internet Adoption Slowing But Dependence on It Continues to Grow." http://www.ipsosna.com/news/pressrelease.cfm?id=3030, March 29, 2006.
[9] Barlow, J. "IPv6 Hands-On." IJCSA, International Journal of Computer Science and Application, December 2006.
[10] Tsirtsis, G.; Srisuresh, P. "Network Address Translation – Protocol Translation (NAT-PT)."
Internet-Draft. Retrieved December 2006 from http://tools.ietf.org/html/rfc2766
[11] Borella, M.; Montenegro, G. "Address Sharing with End-to-End Security." In Proceedings of the Special Workshop on Intelligence at the Network Edge, December 2006. https://www.usenix.org/publications/library/proceedings/ine2000/full_papers/borella/borella_html/rsipusenix.html
[12] Borman, D.; Deering, S.; Hinden, R. "IPv6 Jumbograms." IJCSNS International Journal, December 2006. http://tools.ietf.org/html/rfc2675
[13] Carpenter, B.; Moore, K. "Connection of IPv6 Domains via IPv4 Clouds." International Journal of Foundations of Computer Science (IJFCS), December 2006.
[14] Hupprich, L.; Bumatay, M. "Global Internet Population Grows an Average of Four Percent Year-Over-Year." Nielsen//NetRatings, March 2007. http://phx.corporateir.net/phoenix.zhtml?c=82037&p=irolnewsArticle&ID=538993&highlight=
[15] [RFC4607] Holbrook, H.; Cain, B. "Source-Specific Multicast for IP." Cisco, RFC 4607, August 2006.
[16] IPv6 Task Force, U.S. Department of Commerce. "Technical and Economic Assessment of Internet Protocol Version 6 (IPv6)." January 2006. http://www.ntia.doc.gov/ntiahome/ntiageneral/ipv6/final/ipv6final.pdf
[17] Metz, C.; Hagino, J. "IPv4-Mapped Addresses on the Wire Considered Harmful." International Journal of Foundations of Computer Science (IJFCS), December 2006.
[18] Kirstein, Peter; Chown, Tim. "Why a new Internet Protocol?" UK IPv6 Task Force Journal, 2006.
[19] Savola, Pekka. CSC/FUNET, Finland. "Observations of IPv6 Traffic on a 6to4 Relay." IJCSA, International Journal of Computer Science and Application, September 2007.
[20] Microsoft. "Microsoft's Objectives for IPv6 Tunneling." http://technet.microsoft.com/en-us/library/bb726951.aspx, 2007.
[21] [RFC4795] Aboba, B.; Thaler, D.; Esibov, L. "Link-local Multicast Name Resolution (LLMNR)." Hong Kong Computer Society Journal, January 2007.
[22] Plzak, Raymond A. "ARIN Board Advises Internet Community on Migration to IPv6." International Journal of Foundations of Computer Science (IJFCS), May 2007.
[23] van Nieuwenhuizen, Jeroen (2007). Setting up IPv6. Project Phoenix The Legend; Rahman, M., Ph.D.; Schaumberg, Andrew (2007). Transitioning Networks from IPv4 to IPv6. University Plaza, Platteville, USA.
[24] IANA. "IPv4 Address Report." International Journal of Foundations of Computer Science (IJFCS), March 2007. http://www.potaroo.net/tools/ipv4/index.html
0 notes
kristinsimmons · 6 years ago
Text
Patient-Directed Access for Competition to Bend the Cost Curve
By ADRIAN GROPPER, MD
Many of you have received the email: Microsoft HealthVault is shutting down. By some accounts, Microsoft has spent over $1 Billion on a valiant attempt to create a patient-centered health information system. They were not greedy. They adopted standards that I worked on for about a decade. They generously funded non-profit Patient Privacy Rights to create an innovative privacy policy in a green field situation. They invited trusted patient surrogates like the American Heart Association to participate in the launch. They stuck with it for almost a dozen years. They failed. The broken market and promise of HITECH is to blame and now a new administration has the opportunity and the tools to avoid the rent-seekers’ trap.
The 2016 21st Century CURES Act is the law. It is built around two phrases: “information blocking” and “without special effort” that give the administration tremendous power to regulate anti-competitive behavior in the health information sector. The resulting draft regulation, February’s Notice of Proposed Rulemaking (NPRM) is a breakthrough attempt to bend the healthcare cost curve through patient empowerment and competition. It could be the last best chance to avoid a $6 Trillion, 20% of GDP future without introducing strict price controls.
This post highlights patient-directed access as the essential pro-competition aspect of the NPRM which allows the patient’s data to follow the patient to any service, any physician, any caregiver, anywhere in the country or in the world.
The NPRM is powerful regulation in the hands of an administration caught between anti-regulatory principles and an entrenched cabal of middlemen eager to keep their toll booth on the information highway. Readers interested in or frustrated by the evolution of patient-directed interoperability can review my posts on this over the HITECH years: 2012; 2013; 2013; 2014; 2015.
The struggle throughout has been a reluctance to allow digital patient-directed information exchange to bypass middlemen in the same way that fax or postal service information exchange does not introduce a rent-seeking intermediary capable of censorship over the connection.
Middlemen
Who are the middlemen? Simply put, they are everyone except the patient or the physician. Middlemen includes hospitals, health IT vendors, health information exchanges, certifiers like DirectTrust and CARIN Alliance, and a vast number of hidden data brokers like Surescripts, Optum, Lexis-Nexis, Equifax, and insurance rating services. The business model of the middlemen depends on keeping patients and physicians from bypassing their toll booth. In so doing, they are making it hard for new ventures to compete without paying the overhead imposed by the hospital or the fees imposed by the EHR vendors.
But what about data cleansing, search and discovery, outsourced security, and other value-added services these middlemen provide? A value-added service provider shouldn’t need to put barriers to bypass to stay in business. The doctor or patient should be able to choose which value-added services they want and pay for them in a competitive market. Information blocking and the requirement for special effort on the part of the patient or the physician would be illogical for any real value-added service provider.
In summary, patient-directed access is simply the ability for a patient to direct and control the access of information from one hospital system to another “without special effort”. Most of us know what that looks like because most of us already direct transfer of funds from one bank to another. We know how much effort is involved. We know that we need to sign-in to the sending bank portal in order to provide the destination address and to restrict how much money moves and whether it moves once or every month until further notice. We know that we can send this money not just to businesses but to anyone, including friends and family without censorship or restriction. In most cases today, these transfers don’t cost anything at all. Let’s call this kind of money interoperability “without special effort”.
Could interoperating money be even less effort than that? Yes. For instance, it’s obnoxious that each bank and each payee forces us to use a different user interface. Why can’t I just tell all of my banks and payees: use that managing agent or trustee that I choose? Why can’t we get rid of all of the different emails and passwords for each of the 50+ portals in our lives and replace them with a secure digital wallet on our phone with fingerprint or face recognition protection? This would further reduce the special effort but it does require more advanced standards. But, at least in payment, we can see it coming. Apple, for instance, gives me a biometric wallet for my credit cards and person-to-person payments. ApplePay also protects my privacy by not sharing my credit card info with the merchants. Beyond today’s walled garden solutions, self-sovereign identity standards groups are adding the next layer of privacy and security to password-less sign-in and control over credentials.
Rent Seekers
But healthcare isn’t banking because HITECH fertilized layers upon layers of middlemen that we, as patients and doctors, do not control and sometimes, as with Surescripts, don’t even know exist. You might say that Visa or American Express are middlemen but they are middlemen that compete fiercely for our consumer business. As patients we have zero market power over the EHR vendors, the health information exchanges, and even the hospitals that employ our doctors. Our doctors are in the same boat. The EHR they use is forced on them by the hospital and many doctors are unhappy about that but subject to gag orders unprecedented in medicine until recently.
This is what “information blocking” means for patients and doctors. This is what the draft NPRM is trying to fix by mandating “without special effort”. This is what the hospitals, EHR vendors, and health information exchanges are going to try to squash before the NPRM becomes final. After the NPRM becomes a final regulation, presumably later in 2019, the hospitals and middlemen will have two years to fix information blocking. That brings us to 2022. Past experience with HITECH and Washington politics assures us of many years of further foot dragging and delay. We’ve seen this before with HIPAA, misinterpreted by hospitals in ways that frustrate patients, families, and physicians for over a decade.
Large hospital systems have too much political power at the state and local level to be driven by mere technology regulations. They routinely ignore the regulations that are bad for business like the patient-access features of HIPAA and the Accounting for Disclosures rules. Patients have no private right of action in HIPAA and the federal government has not enforced provisions like health records access abuses or refusal to account for disclosures. Patients and physicians are not organized to counter regulatory capture by the hospitals and health IT vendors.
The one thing hospitals do care about is Medicare payments. Some of the information blocking provisions of the draft NPRM are linked to Medicare participation. Let’s hope these are kept and enforced after the final regulations.
Competition to Bend the Cost Curve
Government has two paths to bending the cost curve: setting prices or meaningful competition. The ACA and HITECH have done neither. In theory, the government could do some of both but let’s ignore the role of price controls because it can always be added on if competition proves inadequate. Anyway, we’re in an administration that wants to go the pro-competition path and they need visible progress for patients and doctors before the next election. Just blaming pharma for high costs is probably not enough.
Meaningful competition requires multiple easy choices for both the patients and the prescribers as well as transparency of quality and cost. This will require a reversal of the HITECH strategy that allows large hospitals and their large EHRs to restrict the choices offered and to obscure the quality and cost behind the choices that are offered. We need health records systems that make the choice of imaging center, lab, hospital, medical group practice, direct primary care practice, urgent care center, specialist, and even telemedicine equally easy. “Without special effort”.
The NPRM has the makings of a pro-competitive shift away from large hospitals and other rent-seeking intermediaries but the elements are buried in over a thousand pages of ONC and CMS jargon. This confuses implementers, physicians and advocates and should be fixed before the regulations are finalized. The fix requires a clear statement that middlemen are optional and the interoperability path that bypasses the middlemen as “data follows the patient” is the default and “without special effort”. What follows are the essential clarifications I recommend for the final information blocking regulations – the Regulation, below.
Covered Entity – A hospital or technology provider subject to the Regulation and/or to Medicare conditions of participation.
Patient-directed vs. HIPAA TPO – Information is shared by a covered entity either as directed by the patient or, without patient consent, under the HIPAA Treatment, Payment, or Operations (TPO) provisions.
FHIR – The standard for information to follow the patient is FHIR. The FHIR standard will evolve under industry direction, primarily to meet the needs of large hospitals and large EHR vendors. The FHIR standard serves both patient-directed and HIPAA TPO sharing.
FHIR API – FHIR is necessary but not synonymous with a standard Application Programming Interface (API). The same FHIR API can serve both patient-directed and TPO sharing. Under the Regulation, all patient information available for sharing under TPO will also be available for sharing under patient direction. Information sharing that does not use the FHIR API, such as bulk transfers or private interfaces with business partners, will be regulated according to the information blocking provisions of the Regulation.
Server FHIR API – The FHIR API operated by a Covered Entity.
Client FHIR API – The FHIR API operated by a patient-designee. The patient designee can be anyone (doctor, family, service provider, research institution) anywhere in the world.
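To make the last two definitions concrete, here is a minimal sketch of a Client FHIR API reading from a Server FHIR API over the standard FHIR REST interface; the base URL, token, and query are illustrative placeholders of mine, not values from the NPRM.

```python
# Minimal sketch: a Client FHIR API reading from a Server FHIR API.
# The base URL, token, and resource query are illustrative placeholders.
import requests

FHIR_BASE = "https://hospital.example.org/fhir"  # hypothetical Server FHIR API
ACCESS_TOKEN = "..."  # obtained through the authorization steps defined below

def get_medications(patient_id: str) -> list:
    """Fetch a patient's MedicationRequest resources from a FHIR Bundle."""
    resp = requests.get(
        f"{FHIR_BASE}/MedicationRequest",
        params={"patient": patient_id},
        headers={
            "Authorization": f"Bearer {ACCESS_TOKEN}",
            "Accept": "application/fhir+json",
        },
        timeout=30,
    )
    resp.raise_for_status()
    bundle = resp.json()
    return [entry["resource"] for entry in bundle.get("entry", [])]
```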
Patient-designee – A patient can direct a Covered Entity to connect to any Client FHIR API by specifying either the responsible user of a Client FHIR API or the responsible institution operating a Client FHIR API. Under no circumstances does the Regulation require the patient to use an intermediary such as a personal health record or data bank in order to designate a Client FHIR API connection. Patient-controlled intermediaries such as personal health records or data banks are just another Client FHIR API that happen to be owned, operated, or controlled by the patient themselves.
Dynamic Client Registration – The Server FHIR API will register the Client FHIR API without special effort as long as the patient clearly designates the operator of the Client. Examples of a clear designation would include: (a) a National Provider Identifier (NPI) as published in the NPPES https://npiregistry.cms.hhs.gov; (b) an email address; (c) an https://… FHIR API endpoint; (d) any other standardized identifier that is provided by the patient as part of a declaration digitally signed by the patient.
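One plausible realization of this step is OAuth 2.0 Dynamic Client Registration (RFC 7591), sketched below; the endpoint, metadata values, and NPI are illustrative placeholders, not anything the NPRM specifies.

```python
# Sketch of OAuth 2.0 Dynamic Client Registration (RFC 7591).
# Endpoint and metadata values are illustrative placeholders.
import requests

REGISTRATION_ENDPOINT = "https://hospital.example.org/oauth/register"

client_metadata = {
    "client_name": "Example Clinic Client FHIR API",
    "redirect_uris": ["https://clinic.example.net/fhir/callback"],
    "grant_types": ["authorization_code", "refresh_token"],
    "token_endpoint_auth_method": "private_key_jwt",
    # The patient's clear designation of the operator, e.g. an NPI as
    # published in NPPES (placeholder value):
    "contacts": ["npi:0000000000"],
}

resp = requests.post(REGISTRATION_ENDPOINT, json=client_metadata, timeout=30)
resp.raise_for_status()
client_id = resp.json()["client_id"]  # used in all subsequent token requests
```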
Digital Signature – The Client FHIR API must present a valid signed authorization token to the Server FHIR API. The authorization token may be digitally signed by the patient. The patient can sign such a token using: (a) a patient portal operated by the Server FHIR API; (b) a standard Authorization Server designated by the patient using the patient portal of the server operator (e.g., the UMA standard referenced in the Interoperability Standards Advisory); (c) a software statement from the Client FHIR API that is digitally signed by the Patient-designee.
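Option (c) could plausibly be rendered as a JSON Web Token signed with the patient’s own private key. A minimal sketch, assuming the PyJWT library and invented claim names:

```python
# Sketch of a patient-signed authorization token as a JWT.
# Claim names, key file, and identifiers are illustrative assumptions.
import time
import jwt  # pip install PyJWT[crypto]

with open("patient_private_key.pem") as f:  # hypothetical patient key file
    PATIENT_PRIVATE_KEY = f.read()

now = int(time.time())
authorization_token = jwt.encode(
    {
        "iss": "patient:example",                    # the signing patient
        "aud": "https://hospital.example.org/fhir",  # the Server FHIR API
        "sub": "npi:0000000000",                     # the Patient-designee
        "scope": "patient/*.read",                   # SMART-style scope
        "iat": now,
        "exp": now + 365 * 24 * 3600,                # e.g. one year
    },
    PATIENT_PRIVATE_KEY,
    algorithm="RS256",
)
# The Server FHIR API verifies the signature against the patient's
# registered public key before honoring the connection.
```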
Refresh Tokens – Once the patient provides a digital signature that enables a FHIR API connection, that signed authorization should suffice for multiple future connections by the same Client FHIR API, typically for a year or until revoked. The patient can set the duration of the authorization, and revoke it, using the patient portal of the Server FHIR API.
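This maps naturally onto the standard OAuth 2.0 refresh flow: the one-time patient signature yields a long-lived refresh token that the Client FHIR API redeems repeatedly for short-lived access tokens. A minimal sketch, with a hypothetical token endpoint:

```python
# Sketch of the standard OAuth 2.0 refresh-token exchange.
# The token endpoint is a hypothetical placeholder.
import requests

TOKEN_ENDPOINT = "https://hospital.example.org/oauth/token"

def refresh_access_token(refresh_token: str, client_id: str) -> dict:
    """Exchange a long-lived refresh token for a fresh access token."""
    resp = requests.post(
        TOKEN_ENDPOINT,
        data={
            "grant_type": "refresh_token",
            "refresh_token": refresh_token,
            "client_id": client_id,
        },
        timeout=30,
    )
    resp.raise_for_status()
    # Response holds access_token, expires_in, and possibly a rotated
    # refresh_token; revocation by the patient invalidates the grant.
    return resp.json()
```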
Patient-designated Authorization Servers – The draft NPRM correctly recognizes the problem of patients having to visit multiple patient portals in order to review which Clients are authorized to receive what data and to revoke access authorization. A patient may not even know how many patient portals they have enabled or how to reach them to check for sharing authorizations. By allowing the patient to designate the FHIR Authorization Server, a Server FHIR API operator would enable the patient to choose one service provider that would then manage authorizations in one place. This would also benefit the operator of the Server FHIR API by reducing the cost and risk of operating an authorization server. UMA, as referenced in the Interoperability Standards Advisory, is one candidate standard for enhancing FHIR APIs to enable a patient-designated authorization server.
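A minimal sketch of the UMA 2.0 interaction this enables, with hypothetical URLs: the hospital’s Server FHIR API only issues a permission ticket, while the authorization decision moves to the server the patient chose.

```python
# Sketch of the UMA 2.0 grant (urn:ietf:params:oauth:grant-type:uma-ticket).
# URLs are hypothetical; client authentication is omitted for brevity.
import requests

RESOURCE = "https://hospital.example.org/fhir/Observation?patient=example"

# 1. The unauthorized request returns 401 with a permission ticket.
first = requests.get(RESOURCE, timeout=30)
challenge = first.headers["WWW-Authenticate"]  # e.g. 'UMA ... ticket="..."'
ticket = challenge.split('ticket="')[1].split('"')[0]

# 2. Redeem the ticket at the authorization server the patient designated.
AS_TOKEN_ENDPOINT = "https://authz.patient-choice.example/uma/token"
rpt = requests.post(
    AS_TOKEN_ENDPOINT,
    data={
        "grant_type": "urn:ietf:params:oauth:grant-type:uma-ticket",
        "ticket": ticket,
    },
    timeout=30,
).json()["access_token"]

# 3. Retry the request with the Requesting Party Token (RPT).
second = requests.get(
    RESOURCE, headers={"Authorization": f"Bearer {rpt}"}, timeout=30
)
```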
Big Win for Patients and Physicians
As I read it, the 11 definitions above are consistent with the draft NPRM. Entrepreneurs, private investors, educators, and licensing boards stand ready to offer patients and physicians innovative services that compete with each other and with the incumbents that were so heavily subsidized by HITECH. To encourage this private-sector investment and provide a visible win to their constituents, Federal health architecture regulators and managers, including ONC, CMS, VA, and DoD, would do well to reorganize the Regulations in a way that makes the opportunity to compete on the basis of patient-directed exchange as clear as possible. As an alternative to reorganizing the Regulations, guidance could be provided that makes the 11 definitions above clear. Furthermore, although it could take years for private-sector covered entities to fully deploy patient-directed sharing, deployments directly controlled by the Federal government, such as access to the Medicare database and VA-DoD information sharing, could begin to implement patient-directed information sharing “without special effort” immediately. Give patients and doctors the power of modern technology.
Adrian Gropper, MD, is the CTO of Patient Privacy Rights, a national organization representing 10.3 million patients and among the foremost open data advocates in the country. 
A National Health Encounter Surveillance System
By ADRIAN GROPPER, MD
Trust is essential for interoperability. One way to promote trust is to provide transparency and accountability for the proposed national system. People have come to expect email or equivalent notification when a significant transaction is made on their personal data. From a patient’s perspective, all health records transactions involving TEFCA are likely significant. When a significant transaction occurs, we expect contemporaneous notification (not the expectation that you have to ask first), a monthly statement of all transactions, and a clear indication of how an error or dispute can be resolved. We also expect the issuer of the notification to be accountable for the transaction and to assist in holding other participants accountable if that becomes necessary. Each such notification should identify who accessed the data and how the patient can review the data that was accessed. Each time, the patient should be informed of the procedure to flag errors, report abuse, and opt out of further participation at either the individual source or at the national level.
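A minimal sketch of what such a contemporaneous notice might contain, using only the Python standard library; the field names and mail addresses are my own illustrative assumptions, not anything specified by TEFCA:

```python
# Sketch of a per-transaction disclosure notice sent to the patient.
# Field names, addresses, and the mail relay are illustrative assumptions.
import smtplib
from dataclasses import dataclass
from email.message import EmailMessage

@dataclass
class DisclosureNotice:
    patient_email: str
    accessed_by: str       # who accessed the data
    review_url: str        # how the patient can review the data accessed
    dispute_url: str       # how to flag errors, report abuse, or opt out

def notify(notice: DisclosureNotice) -> None:
    """Send the contemporaneous email at the time of the transaction."""
    msg = EmailMessage()
    msg["To"] = notice.patient_email
    msg["From"] = "notices@qhin.example.org"
    msg["Subject"] = "Your health record was accessed"
    msg.set_content(
        f"Accessed by: {notice.accessed_by}\n"
        f"Review the data: {notice.review_url}\n"
        f"Dispute, report abuse, or opt out: {notice.dispute_url}\n"
    )
    with smtplib.SMTP("localhost") as smtp:  # hypothetical mail relay
        smtp.send_message(msg)
```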
Recommendation 1: Add Principle 2D as: Every transaction over the TEFCA network, including bulk access, is to be accompanied by a contemporaneous email to each individual patient and a monthly statement delivered via email or post if there is activity in that month.
Make Patient-Directed Exchange the Baseline for a National API
Application Programming Interfaces (APIs) are the future of interoperability and mandated by law applicable to TEFCA. The scope of APIs is broad. An API can serve inside a single legal entity to connect one information system to another, it can provide access to one patient per transaction or to a batch of patients, it can connect under the direction of an authorized entity, or it can connect two parties directly on demand by the patient herself. We are familiar with patient-directed interoperability as the paper Release of Information Form submitted to hospital records departments. This fundamental patient right must be preserved and enhanced as we move from paper and fax to APIs. Using a separate API for entity-directed vs. patient-directed exchange increases the attack surface, confuses patients, and increases cost. If separated, the patient-directed exchange API is likely to be less supported and less functional than the entity-directed API. Errors and security breaches in both APIs are likely to be harder to detect if the two APIs are separate.
Specify that the same API is to be used for both entity-directed and patient-directed exchange. Treat bulk transfers of multiple patients in one transaction as a special case of the API that is not patient-directed, but still notifies the individual patients involved. Ensure that the API does not require more paper-based or in-person processes for patient-directed exchange than are required for entity-directed exchange.
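To illustrate the point, here is a minimal sketch of a single access check serving both exchange types; the claim names are invented for illustration. Only the authorization grant differs between the two flows, never the API itself:

```python
# Sketch of one access check serving both entity-directed (TPO) and
# patient-directed exchange. Claim names ("grant", "panel") are invented;
# a real deployment would use standard OAuth scopes.
def authorize_request(token_claims: dict, resource_patient_id: str) -> bool:
    """Same endpoint, same code path; only the authorization grant differs."""
    grant = token_claims.get("grant")
    if grant == "patient-directed":
        # The patient signed an authorization naming this client.
        return token_claims.get("patient_id") == resource_patient_id
    if grant == "tpo":
        # Entity-directed sharing under HIPAA Treatment, Payment, or
        # Operations; the patient is still notified (Recommendation 1).
        return resource_patient_id in token_claims.get("panel", [])
    return False  # unknown grant: deny rather than fork a second API
```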
Recommendation 2: Amend Principle 3A to specify that the same API is to be used for both entity-directed and patient-directed exchange.
Recommendation 3: Amend Principle 5A to specify that the same API is to be used for both entity-directed and patient-directed exchange.
Recommendation 4: Amend Principle 6A to clarify that the multiple patient record functionality does not reduce the responsibility to contemporaneously notify individual patients.
Recommendation 5: The definition of Individual Access 2) is confusing. The API Task Force was clear that blocking any patient-directed sharing, other than one that endangers other patients, is prohibited. For example, a patient-directed request to move information to a destination via plain, unsecured email or to a foreign country is acceptable under Applicable Law.
Separate Identity and Authorization Standards from Data Model Standards
Introducing interoperability into a system as large and diverse as healthcare is a tremendous challenge that the draft TEFCA clearly recognizes and seeks to address. Much of this regulation is, quite appropriately, devoted to standards. Some of the standards relate to how health records are encoded. Let’s call this the data model. Other standards relate to how access to a record is controlled. Let’s call this authorization. It is common practice for standards-dependent efforts such as SMART, Argonaut, or TEFCA to combine both data model and authorization concerns because there is some overlap in their scope. For example, the data model includes demographic information that is critical to the discovery aspects of authorization around an encounter. Unfortunately, blending projects that seek to standardize the data model with those that seek to standardize the authorization model makes scaling interoperability much harder, because it leaves healthcare practices less able to benefit from large-scale authorization standards outside of the healthcare domain.
Identity, demographics, and authorization standards are not specific to healthcare. To achieve broad interoperability on a national scale, adopt the Postal Service model of separating what’s on the envelope (authorization) from what’s in the envelope (the data model) and manage the corresponding standards, policies, and practices separately.
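A minimal sketch of that separation, with invented names: the authorization “envelope” is a generic wrapper that knows nothing about health data, and the FHIR “contents” handler knows nothing about authorization, so each layer can evolve with its own standards.

```python
# Sketch of envelope/contents separation: a generic authorization wrapper
# around a FHIR-specific handler. All names are illustrative.
from typing import Callable

def with_authorization(check: Callable[[dict], bool]):
    """Envelope: any identity/authorization standard can plug in here."""
    def wrap(handler: Callable[[str], dict]):
        def guarded(request_claims: dict, resource_id: str) -> dict:
            if not check(request_claims):
                raise PermissionError("authorization denied")
            return handler(resource_id)
        return guarded
    return wrap

def uma_check(claims: dict) -> bool:
    # Accept only a verified UMA Requesting Party Token (verification elided).
    return claims.get("token_type") == "RPT"

@with_authorization(uma_check)
def read_fhir_resource(resource_id: str) -> dict:
    """Contents: the FHIR data model lives here, unaware of the envelope."""
    return {"resourceType": "Observation", "id": resource_id}

# Usage: read_fhir_resource({"token_type": "RPT"}, "obs-1")
```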
We are especially mindful of the multiple portals problem that forces patients to manage consent separately by accessing every separate provider using different protocols and procedures. Just as TEFCA seeks to provide a single on-ramp for providers as End Users, it should encourage and ideally offer a single on-ramp for individual patients by allowing the patient to specify their preferred UMA / HEART Authorization Server.
Recommendation 6: Amend Principle 1A to encourage separation of authorization and data model standards.
Recommendation 7: Reference Kantara UMA and the profile work of Health Relationship Trust (HEART) as components of ISA.
Recommendation 8: Any QHIN, Participant, or End User that offers access to Individuals via an API, including the TEFCA-specified API, must allow the Individual to specify and delegate control to a standards-based authorization server of their choosing.
Be Clear About Creating a National Health Encounter Surveillance System
TEFCA is creating a national health encounter surveillance system under the control of the Federal Government. Regardless of the reasons why this might be desirable, the Federal Government needs to be clear that this is a new national agency that manages personal information on substantially everyone, just like the IRS, TSA, and FBI. The draft TEFCA is very confusing in this respect. It is hard to draw an analogy to existing systems of identity. State drivers’ licenses are the only common example of a distributed identity system that allows for broadcast queries to some extent, but that system is operated by government, founded on coercive biometric databases, and controversial when subjected to federal policy such as Real ID.
Recommendation 9: State clearly in the introduction of the proposed regulation that it is national in scope and subject to federal government policy. State also that the system is identity-based, and that a person can have zero, one, or multiple identities in the system.
Recommendation 10: Amend the definition of Broadcast Query to make clear that it is national in scope and may include encounters outside of HIPAA Covered Entities.
Recommendation 11: Amend the definition of Recognized Coordinating Entity (RCE) to make clear that it is controlled by the federal government and subject to the policies of the federal government.
Recommendation 12: In Section 3.3, describe how patients can choose to be tracked, identity-matched, and notified of a match, in a voluntary and non-coercive way.
Recommendation 13: In Section 6.2.4, describe how patients identified only as “known to the practice” under HIPAA, or receiving an anonymous service from a laboratory, may voluntarily participate in the national system without Identity Proofing.
If TEFCA is Voluntary, Explain How Patients Can Opt-Out
TEFCA is introduced as voluntary but the draft document is not clear about how a patient can avoid participation in the national surveillance system. Consider, for example, a 15-year-old with a severe anxiety attack requiring mental health care. Will this patient be entered into the national system by the emergency department, the psychiatrist, the laboratory, the pharmacy, the insurance company, or all of the above? When this patient turns 18, will he or she be able to delete the record of this episode of care from the national system, and what is the process to effect this deletion?
Define how the system is voluntary from the perspective of the patient, and describe how a patient opts out of having an encounter entered into the system, how a patient is notified when an encounter is added to the system, and how an encounter is deleted from the system.
Recommendation 14: Amend Principle 5B to replace the reference to Qualified HIN with a broad statement of participation in the national health encounter surveillance network.
Avoid Introduction of a Hidden Data Brokerage Layer
Current patient rights regulations tend to focus on the right of access to a service provider such as a HIPAA Covered Entity, combined with a limit on the ability of Business Associates to aggregate information about an individual across multiple service providers. When a data aggregator or broker is introduced, as with some state health information exchanges or the Surescripts network, these entities are not well known to the patient and have no customer relationship with the patient. The result is that these intermediary data brokers are effectively hidden from, and unaccountable to, the patient.
By analogy, we are familiar with the national surveillance system of credit bureaus. Equifax, Experian, and TransUnion are limited in number to three so people can know all of them, they are regulated to be accessible and responsive to people, and they are required to accept and redistribute comments from the individual. The credit surveillance system also has the benefit of a unique person identifier, the Social Security Number, which reduces the number of errors that are propagated. Nonetheless, having to deal with three separate data brokers in cases such as identity theft, to impose a credit freeze, is a major hardship for the individual.
There is, however, a major difference between credit surveillance and health surveillance. As individuals we access credit voluntarily, but we are compelled to access health care by illness, accident, and misfortune. At a time of suffering and stress, US patients already have to worry about the scope of their insurance network, large unknowable out-of-pocket costs, and the impact of their misfortune on employment, disability coverage, and life insurance. These are all hidden consequences of seeking health care. It is imperative that TEFCA not add another hidden layer to an already stressful system.
To the extent TEFCA envisions a layer of QHINs responsible for managing the location of encounters and consent to access personal information, it is critical that they be accessible and accountable directly to the individual at least as much as the hospitals and service providers are. To the extent TEFCA is establishing a single national data brokerage system like the TSA or the IRS, it is imperative that people know exactly who they are dealing with and how they are identified in the system. Decentralized, private-sector surveillance such as we have for advertising tracking is not appropriate for healthcare.
Recommendation 15: In Section 7, Access, make the RCE the single patient-facing entity, accountable for a consistent policy and a consistent patient identifier across all hospitals, labs, payers, and other service providers. To avoid coercion, allow patients to have multiple, separate RCE identifiers in order to voluntarily segment sensitive encounters from routine ones.