#architecture principles
codeonedigest · 2 years
Video
youtube
Liskov Substitution Principle Tutorial with Java Coding Example for Begi...
Hello friends, new #video on #liskovsubstitutionprinciple #solidprinciples with #Java #coding #example is published on #codeonedigest #youtube channel. Learn #lsp #principle #programming #coding with codeonedigest.
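For a quick taste of the principle before watching: a subtype must be usable anywhere its supertype is expected without surprising the caller. Below is a minimal Java sketch using the classic Rectangle/Square example, which is a standard illustration and not necessarily the one used in the video.

class Rectangle {
    protected int width, height;

    public void setWidth(int w)  { this.width = w; }
    public void setHeight(int h) { this.height = h; }
    public int area()            { return width * height; }
}

// LSP violation: Square silently changes behavior that callers rely on.
class Square extends Rectangle {
    @Override public void setWidth(int w)  { this.width = w; this.height = w; }
    @Override public void setHeight(int h) { this.width = h; this.height = h; }
}

public class LspDemo {
    // Written against Rectangle's contract: width and height vary independently.
    static int stretch(Rectangle r) {
        r.setWidth(5);
        r.setHeight(10);
        return r.area(); // callers expect 50
    }

    public static void main(String[] args) {
        System.out.println(stretch(new Rectangle())); // 50, as expected
        System.out.println(stretch(new Square()));    // 100: substitution broke the contract
    }
}

The usual fix is to stop modeling Square as a subtype of Rectangle and give both a common Shape abstraction instead, so no supertype contract is violated.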
@java #java #awscloud @awscloud @AWSCloudIndia #Cloud #CloudComputing @YouTube #youtube #liskovsubstitutionprinciple #liskovsubstitutionprinciplesolid #lsp #lspprinciple #liskovsubstitutionprinciple #liskov #liskovprinciple #solidprinciples #solidprinciplesinterviewquestions #solidprinciplesjavainterviewquestions #solidprinciplesreact #solidprinciplesinandroid #solidprinciplestutorial #solidprinciplesexplained #solidprinciplesjava #singleresponsibilityprinciple #openclosedprinciple #liskovsubstitutionprinciple #interfacesegregationprinciple #dependencyinversionprinciple #objectorientedprogramming #objectorienteddesignandmodelling #objectorienteddesign #objectorienteddesignsoftwareengineering #objectorienteddesigninterviewquestions #objectorienteddesignandanalysis #objectorienteddesigninjava #objectorienteddesignmodel #objectorienteddesignapproach #objectorienteddesignparadigm #objectorienteddesignquestions
2 notes · View notes
a-very-tired-jew · 3 months
Text
I was just reminded that the art collective Forensic Architecture exists and once again I’m disgusted.
For those of you who don’t know, it’s a collective of various artists who play at forensic science, conduct “forensic investigations”, and then make art exhibits of their “results”. Their reports and exhibits will make statements such as “the evidence shows that X is linked to Y” but the statistical output that they share will show something like a 5% confidence in the match.
That's right. They make art exhibits of their "investigations".
You want to talk about fandomizing tragedy? Making “forensic investigations” into art exhibits is the bougiest version I can think of, and it's only to serve an echelon of people who enjoy that kind of stuff. If any of the people in this art collective had a background in forensic science they would have taken ethics courses that would tell them how horrid putting on an art exhibit like this actually is. You don't honor the victims by putting on an art show for the rich and powerful to gasp and faint over so that you can fundraise for your next show.
Their founder has even stated that they're not in forensics but "counter-forensics" and "counter-investigation". They eschew the practices and norms of the scientific community in favor of telling their own version of investigative "truth". They've even gone so far as to quote post-truth philosophies in their work, including the controversial Nietzsche quote about there being no facts, only interpretations. Both are dangerous philosophies to hold in forensic science, as they present evidence as subjective rather than objective. This is why they're an art collective and not the forensic science research group they purport to be: they're rejecting objective scientific outcomes for subjective interpretation.
You can go to the group's website and they have profiles on all of their team members. Almost every person is labeled as a "researcher", but once you click on their profile it quickly tells you that they're an artist, designer, activist, or some combination of the three. No mention of any scientific background whatsoever. That indicates their ability to actually conduct forensic science research is not great, as they don't have any training or education in the methods involved. In fact, their entire program and personnel are out of an arts college with no science programs or faculty outside of anthropology.
That's weird, right?
A group that supposedly made a new discipline of forensic science, according to them, has no members with actual backgrounds in forensic science or scientific disciplines relating to it?
None of the team member profiles detail any scientific background that would be relevant to forensics outside of a few people with engineering and computer science degrees. Neither of those disciplines typically trains you in forensic practices anyway unless you take certain courses. Because these profiles are public, you can go and check LinkedIn profiles and find the CVs for each member as well. Guess what? No forensic science or relevant scientific backgrounds listed there either.
But for some reason this art collective has received funding from governments and NGOs for "creating" a new discipline of forensic science. They're a "trusted" source for forensic investigations. That's worrying. That's terrifying.
I'm a forensic scientist, and turning an objective field built on methodology and empirically supported practice into one that is subjective and throws out the empirical aspects is terrifying. Everyone should have klaxons going off in their head whenever they see Forensic Architecture's name appear in a publication. I've reviewed a few of their "investigations" and they are rife with bad practice, manipulation, and misinformation. In fact, they appear to present their work in art exhibits more than they testify to it in court, because their methods are questionable and their intent is not to help the investigation but to mount a "counter-investigation" that can be judged in the court of public opinion.
What do I mean by this? In many of their investigations the collective does not actually have personnel at the scene, meaning they are not getting firsthand physical evidence and measurements. Now, it's not always possible to be there personally, and in those cases you rely upon crime scene techs, investigators, and other personnel to collect this material. Typically, if you're a consultant or outside firm, you receive the evidence for analysis after it has been collected. You want the physical evidence in your hands as much as possible so that you can analyze it properly; sometimes you have to request to go to the scene yourself to get the measurements and evidence you need. Honestly, the worst type of evidence to receive is digital images of the scene, because you are then analyzing something that a general investigator, who likely does not have specialized training, took a picture of.
In situations where you cannot have the physical evidence for analysis and are left with only photographs, a forensic expert should temper their responses and conclusions. You cannot confidently come to conclusions based simply on looking at photos. This is something that is hammered home repeatedly in forensic programs and by working professionals.
In the case of warzone crime scene analysis, which is FA's typical subject, they are usually not collecting evidence firsthand from the scene, nor are they receiving evidence secondhand from actual trained investigators (and when they are there in person, they also rely excessively upon expensive technology instead of best practices). They rely upon third-party photos and satellite imagery to do their analysis.
Time and time again, forensic experts who rely solely upon digital photos and media to make their analysis get ripped apart by a good lawyer. Being confident in conclusions based upon photographs is the easiest way to lose your credibility. But again, the art collective playing forensic scientist primarily puts their work in art exhibits where they are not scrutinized by experts. Hell, I don't think I've ever seen them present at one of our professional conferences nationally or internationally (I would love to be a fly on the wall when that happens).
And finally, if this were an actual credible scientific group that produced credible investigations and had created a brand new field with methodology that stood up to scrutiny, there would be publications in the forensic journals detailing this. Especially from the "creator" of the field, Eyal Weizman.
Guess what there isn't?
But in the end, this is all just crime scene reconstruction by people who want to cosplay as forensic scientists.
(for more reading on the group see this article that highlights issues with FA from another perspective https://www.artnews.com/art-in-america/features/forensic-architecture-fake-news-1234661013/)
55 notes · View notes
assassin1513 · 7 months
Text
🌲The Beauty of TTP 2 part two🌲
41 notes · View notes
bornt-urnge · 5 months
Text
Monuments
16 notes · View notes
door · 2 years
Text
went to see the Alex Katz retrospective at the Guggenheim today and am forced to concede that it is, in fact, an incredible gallery space, especially for monumental works.
93 notes · View notes
obseletrix · 3 months
Text
osamu mikumo: World's Squishiest Wizard
6 notes · View notes
skylordhorus · 1 year
Text
i already do not like brutalist architecture but now i dislike it even more out of grumpiness after i saw a frankly kinda pretentious and self-aggrandising post chain that acted like brutalism is THE Leftist Architectural Style(tm), that ornamentation is for the bourgeoisie, and that a preference for older or ancient architecture is absolutely a capitalist and/or neo-nazi red flag/dogwhistle
3 notes · View notes
thingstol00kat · 1 year
Photo
Sea Ranch principles
7 notes · View notes
notarealwelder · 2 years
Text
Please don't tell me the next tier of software dev class is centered on interpersonals and diplomacy
5 notes · View notes
isubhamdas · 1 month
Text
How to Effectively Apply Behavioral Economics for Consumer Engagement?
I never thought behavioral economics would revolutionize my marketing strategy.
But here I am, telling you how it changed everything.
It all started when our company's engagement rates plummeted. We were losing customers faster than we could acquire them.
That's when I stumbled upon behavioral economics.
I began by implementing subtle changes. We reframed our pricing strategy using the decoy effect.
Suddenly, our premium package became more attractive. Sales increased by 15% in the first month.
Next, we tapped into loss aversion. Our email campaigns highlighted what customers might miss out on. Open rates soared from 22% to 37%.
But the real game-changer was social proof. We showcased user testimonials prominently on our website. Conversion rates jumped by 28%.
As we delved deeper, we encountered challenges. Some team members worried about ethical implications.
Were we manipulating consumers?
We addressed this by prioritizing transparency.
Every nudge we implemented was designed to benefit both the customer and our business.
This approach not only eased internal concerns but also built trust with our audience.
The results spoke for themselves. Overall engagement increased by 45% within six months. Customer retention improved by 30%.
But it wasn't just about numbers. We were creating meaningful connections. Customers felt understood and valued.
Looking back, I realize behavioral economics isn't about tricks or gimmicks. It's about understanding human behavior and using that knowledge to create win-win situations.
So, how can you improve your consumer engagement using behavioral economics?
Start by observing your customers' behaviors. What motivates them? What holds them back?
Use these insights to craft strategies that resonate.
Remember, the goal is to guide, not manipulate.
How are you applying behavioral economics in your business?
Get Tips, Suggestions, & Workarounds, in 2-3 mins, on How to Effectively Apply Behavioral Economics for Consumer Engagement?
0 notes
Text
Top Mobile App Architecture Best Practices Every Developer Should Know
Discover essential mobile app architecture best practices that every developer should know. Learn to build scalable, efficient, and robust apps with expert tips and strategies.
0 notes
pjoshi12 · 4 months
Text
Maximize Your Growth: Unlock The Power Of Zero Trust Architecture
Boosting Scalability and Growth: The Quantifiable Impact of Zero Trust Architecture on Organizations
Amid rising security breaches, the limitations of traditional cybersecurity models highlight the need for a robust, adaptive framework. Zero Trust Architecture (ZTA), operating on the principle of "trust no one, verify everything," offers enhanced protection aligned with modern tech trends like remote work and IoT. This article explores ZTA's core components, implementation strategies, and transformative impacts for stronger cyber defenses.
Understanding Zero Trust Architecture
Zero Trust Architecture is a cybersecurity strategy built on the belief that organizations should not automatically trust anything inside or outside their perimeters. Instead, they must verify anything and everything trying to connect to their systems before granting access. This approach protects modern digital environments by leveraging network segmentation, preventing lateral movement, providing Layer 7 threat prevention, and simplifying granular user-access control.
Core Principles of Zero Trust
Explicit Verification: Regardless of location, every user, device, application, and data flow is authenticated and authorized under the strictest possible conditions. This ensures that security does not rely on static, network-based perimeters.
Least Privilege Access: Users are only given access to the resources needed to perform their job functions. This minimizes the risk of attackers accessing sensitive data through compromised credentials or insider threats.
Micro-segmentation: The network is divided into secure zones, and security controls are enforced on a per-segment basis. This limits an attacker's ability to move laterally across the network.
Continuous Monitoring: Zero Trust systems continuously monitor and validate the security posture of all owned and associated devices and endpoints. This helps detect and respond to threats in real-time.
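To make these principles concrete, here is a minimal, hypothetical Java sketch of a Zero Trust access decision; the roles, resources, and checks are illustrative assumptions, not any particular product's API.

import java.util.Map;
import java.util.Set;

// A minimal sketch of a Zero Trust access decision over a toy in-memory
// model. Nothing is trusted because of where the request comes from:
// identity and device posture are verified explicitly on every request.
public class ZeroTrustGate {

    // Least privilege: each role maps to the minimal set of resources it needs.
    private static final Map<String, Set<String>> ROLE_PERMISSIONS = Map.of(
            "analyst", Set.of("reports"),
            "admin",   Set.of("reports", "user-management"));

    static boolean authorize(String userRole, boolean identityVerified,
                             boolean deviceHealthy, String resource) {
        // Explicit verification: identity AND device posture, every time.
        if (!identityVerified || !deviceHealthy) {
            return false;
        }
        // Least privilege: only resources granted to this role are reachable.
        return ROLE_PERMISSIONS.getOrDefault(userRole, Set.of()).contains(resource);
    }

    public static void main(String[] args) {
        System.out.println(authorize("analyst", true, true, "reports"));         // true
        System.out.println(authorize("analyst", true, true, "user-management")); // false: least privilege
        System.out.println(authorize("admin",   true, false, "reports"));        // false: unhealthy device
    }
}

In a real deployment the permission map and verification signals would come from an identity provider and a device-management service, and continuous monitoring would re-evaluate the decision throughout the session rather than once at login.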
Historical Development
With the advent of mobile devices, cloud technology, and the dissolution of conventional perimeters, Zero Trust offered a more realistic model of cybersecurity that reflects the modern, decentralized network environment.
Zero Trust Architecture reshapes how we perceive and implement cybersecurity measures in an era where cyber threats are ubiquitous and evolving. By understanding these foundational elements, organizations can better plan and transition towards a Zero Trust model, reinforcing their defenses against sophisticated cyber threats comprehensively and adaptively.
The Need for Zero Trust Architecture
No matter how spooky the expression 'zero trust' might sound, we must acknowledge that the rapidly advancing technology landscape has dramatically transformed how businesses operate, creating new vulnerabilities and increasing the complexity of maintaining secure environments. The escalation in the frequency and sophistication of cyber-attacks necessitates a shift from traditional security models to more dynamic, adaptable frameworks like Zero Trust Architecture. Here, we explore why this shift is not just beneficial but essential.
Limitations of Traditional Security Models
Traditional security models often operate under the premise of a strong perimeter defense, commonly referred to as the "castle-and-moat" approach. This method assumes that threats can be kept out by fortifying the outer defenses. However, this model falls short in several ways:
Perimeter Breach: Once a breach occurs, the attacker has relatively free rein over the network, leading to potential widespread damage.
Insider Threats: It inadequately addresses insider threats, where the danger comes from within the network—either through malicious insiders or compromised credentials.
Network Perimeter Dissolution: The increasing adoption of cloud services and remote workforces has blurred the boundaries of traditional network perimeters, rendering perimeter-based defenses less effective.
Rising Cybersecurity Challenges
Increased Data Breaches: In recent years, annual data breaches have exploded, with billions of records exposed each year, affecting organizations of all sizes.
Cost of Data Breaches: The average cost of a data breach has risen, significantly impacting the financial health of affected organizations.
Zero Trust: The Ultimate Response to Modern Challenges
Zero Trust Architecture arose to address the vulnerabilities inherent in modern network environments:
Remote Work: With more talent working remotely, traditional security boundaries became obsolete. Zero Trust ensures secure access regardless of location.
Cloud Computing: As more data and applications move to the cloud, Zero Trust provides rigorous access controls that secure cloud environments effectively.
Advanced Persistent Threats (APTs): Zero Trust's continuous verification model is ideal for detecting and mitigating sophisticated attacks that employ long-term infiltration strategies.
The Shift to Zero Trust
Organizations increasingly recognize the limitations of traditional security measures and shift towards Zero Trust principles. Several needs drive this transition:
Enhance Security Posture: Implement robust, flexible security measures that adapt to the evolving IT landscape.
Minimize Attack Surfaces: Limit the potential entry points for attackers, thereby reducing overall risk.
Improve Regulatory Compliance: Meet stringent data protection regulations that demand advanced security measures.
In the face of ever-evolving threats and changing business practices, it becomes clear that Zero Trust Architecture is no longer optional but a practical necessity.
By adopting Zero Trust, organizations can not only withstand current threats more effectively but also position themselves to adapt to future challenges in the cybersecurity landscape. This proactive approach is critical to maintaining the integrity and resilience of modern digital enterprises.
Critical Components of Zero Trust Architecture
Zero Trust Architecture (ZTA) redefines security by systematically addressing the challenges of a modern digital ecosystem. The architecture comprises several vital components that ensure robust protection against internal and external threats. Understanding these components provides insight into how Zero Trust operates and why it is effective.
Multi-factor Authentication (MFA)
A cornerstone of Zero Trust is Multi-factor Authentication (MFA), which enhances security by requiring multiple proofs of identity before granting access. Unlike traditional security that might rely solely on passwords, MFA can include a combination of:
Something you know: a password or PIN.
Something you have: a hardware token or an authenticator app on a trusted device.
Something you are: a biometric such as a fingerprint or face scan.
By integrating MFA, organizations significantly reduce the risk of unauthorized access due to credential theft or simple password breaches.
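As a hedged illustration of the "something you have" factor, here is a minimal Java sketch of time-based one-time password (TOTP) verification in the style of RFC 6238; a production deployment would use a vetted library, constant-time comparison, and replay protection.

import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.ByteBuffer;

// A minimal TOTP verifier in the style of RFC 6238 (HMAC-SHA1, 30 s steps,
// 6 digits). Illustrative only: real deployments should use a vetted
// library, constant-time comparison, and replay protection.
public class TotpSketch {

    static int totp(byte[] secret, long epochSeconds) throws Exception {
        long counter = epochSeconds / 30;                      // 30-second time step
        byte[] msg = ByteBuffer.allocate(8).putLong(counter).array();

        Mac mac = Mac.getInstance("HmacSHA1");
        mac.init(new SecretKeySpec(secret, "HmacSHA1"));
        byte[] hash = mac.doFinal(msg);

        int offset = hash[hash.length - 1] & 0x0f;             // dynamic truncation
        int binary = ((hash[offset] & 0x7f) << 24)
                   | ((hash[offset + 1] & 0xff) << 16)
                   | ((hash[offset + 2] & 0xff) << 8)
                   |  (hash[offset + 3] & 0xff);
        return binary % 1_000_000;                             // 6-digit code
    }

    static boolean verify(byte[] secret, int submittedCode) throws Exception {
        long now = System.currentTimeMillis() / 1000;
        // Accept the current step and one step either side for clock drift.
        for (long drift = -30; drift <= 30; drift += 30) {
            if (totp(secret, now + drift) == submittedCode) return true;
        }
        return false;
    }
}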
Least Privilege Access Control
At the heart of the Zero Trust model is the principle of least privilege, which dictates that users and devices only get the minimum access necessary for their specific roles. This approach limits the potential damage from compromised accounts and reduces the attack surface within an organization. Implementing least privilege requires:
Rigorous user and entity behavior analytics (UEBA) to understand typical access patterns.
Dynamic policy enforcement to adapt permissions based on the changing context and risk level.
Microsegmentation
Microsegmentation divides network resources into separate, secure zones. Each zone requires separate authentication and authorization to access, which prevents an attacker from moving laterally across the network even if they breach one segment. This strategy is crucial in minimizing the impact of an attack by:
Isolating critical resources and sensitive data from broader network access.
Applying tailored security policies specific to each segment's function and sensitivity.
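As a toy illustration of the idea, here is a hypothetical Java sketch in which each zone enforces its own allow-list, so a foothold in one segment does not automatically grant lateral movement into another; the zone names are invented for the example.

import java.util.Map;
import java.util.Set;

// A toy model of microsegmentation: each zone enforces its own allow-list,
// so compromising one segment does not grant lateral movement into another.
// Zone and workload names are invented for illustration.
public class Microsegmentation {

    // Per-segment policy: which zones may initiate connections into each zone.
    private static final Map<String, Set<String>> ZONE_ALLOW_LISTS = Map.of(
            "web-tier",      Set.of("load-balancer"),
            "app-tier",      Set.of("web-tier"),
            "database-tier", Set.of("app-tier"));   // only the app tier reaches the DB

    static boolean allowConnection(String fromZone, String toZone) {
        return ZONE_ALLOW_LISTS.getOrDefault(toZone, Set.of()).contains(fromZone);
    }

    public static void main(String[] args) {
        System.out.println(allowConnection("app-tier", "database-tier")); // true
        System.out.println(allowConnection("web-tier", "database-tier")); // false: no lateral hop
    }
}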
Continuous Monitoring and Validation
Zero Trust insists on continuously monitoring and validating all devices and user activities within its environment. This proactive stance ensures that anomalies or potential threats are quickly identified and responded to. Key aspects include:
Real-time threat detection using advanced analytics, machine learning, and AI.
Automated response protocols that can isolate threats and mitigate damage without manual intervention.
Device Security
In Zero Trust, security extends beyond the user to their devices. Every device attempting to access resources must be secured and authenticated, including:
The assurance that devices meet security standards before they can connect.
Continuously assessing device health to detect potential compromises or anomalies.
Integration of Security Policies and Governance
Implementing Zero Trust requires a cohesive integration of security policies and governance frameworks that guide the deployment and operation of security measures. This integration helps in:
Standardizing security protocols across all platforms and environments.
Ensuring compliance with regulatory requirements and internal policies.
Implementing Zero Trust Components
Implementing Zero Trust involves assessing needs, defining policies, and integrating solutions, requiring cross-departmental collaboration. This proactive approach creates a resilient security posture, adapting to evolving threats and transforming security strategy.
Implementing Zero Trust Architecture
Implementing Zero Trust Architecture (ZTA) is a strategic endeavor that requires careful planning, a detailed understanding of existing systems, and a clear roadmap for integration. Here's a comprehensive guide to deploying Zero Trust in an organization, ensuring a smooth transition and lasting security improvements.
Step 1: Define the Protect Surface
The first step in implementing Zero Trust is to identify and define the 'protect surface'—the critical data, assets, applications, and services that need protection. Such an implementation will involve the following:
Data Classification: Identify where sensitive data resides, how it moves, and who accesses it.
Asset Management: Catalog and manage hardware, software, and network resources to understand the full scope of the digital environment.
Step 2: Map Transaction Flows
Understanding how data and requests flow within the network is crucial. Mapping transaction flows helps in the following:
Identifying legitimate traffic patterns: This aids in designing policies that allow normal business processes while blocking suspicious activities.
Establishing baselines for network behavior: Anomalies from these baselines can be quickly detected and addressed.
Step 3: Architect a Zero Trust Network
With a clear understanding of the protected surface and transaction flows, the next step is to design the network architecture based on Zero Trust principles:
Microsegmentation: Design network segments based on the sensitivity and requirements of the data they contain.
Least Privilege Access Control: Implement strict access controls and enforce them consistently across all environments.
Step 4: Create a Zero Trust Policy
Zero Trust policies dictate how identities and devices access resources, including:
Policy Engine Creation: Develop a policy engine that uses dynamic security rules to make access decisions based on the trust algorithm.
Automated Rules and Compliance: Utilize automation to enforce policies efficiently and ensure compliance with regulatory standards.
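One hedged way to picture such a policy engine is as a scorer over contextual signals: identity, device posture, location, and recent behavior feed a risk score that decides whether to allow, step up authentication, or deny. The signals, weights, and thresholds in this Java sketch are invented for illustration.

// A hypothetical dynamic policy engine: each access decision is scored from
// contextual signals rather than granted by network location. Weights and
// thresholds are invented for illustration.
public class PolicyEngine {

    record AccessRequest(boolean mfaPassed, boolean deviceCompliant,
                         boolean knownLocation, int recentFailedLogins) {}

    enum Decision { ALLOW, STEP_UP_AUTH, DENY }

    static Decision evaluate(AccessRequest req) {
        int risk = 0;
        if (!req.mfaPassed())       risk += 50;   // unverified identity is high risk
        if (!req.deviceCompliant()) risk += 30;   // unhealthy device posture
        if (!req.knownLocation())   risk += 10;   // unusual network or geography
        risk += Math.min(req.recentFailedLogins() * 5, 20);

        if (risk >= 50) return Decision.DENY;
        if (risk >= 20) return Decision.STEP_UP_AUTH; // e.g., re-prompt for MFA
        return Decision.ALLOW;
    }

    public static void main(String[] args) {
        System.out.println(evaluate(new AccessRequest(true, true, true, 0)));  // ALLOW
        System.out.println(evaluate(new AccessRequest(true, false, true, 0))); // STEP_UP_AUTH
        System.out.println(evaluate(new AccessRequest(false, true, true, 0))); // DENY
    }
}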
Step 5: Monitor and Maintain
Zero Trust requires ongoing evaluation and adaptation to remain effective. Continuous monitoring and maintenance involve:
Advanced Threat Detection: Use behavioral analytics, AI, and machine learning to detect and respond to anomalies in real-time.
Security Posture Assessment: Regularly assess the security posture to adapt to new threats and incorporate technological advancements.
Feedback Loops: Establish mechanisms to learn from security incidents and continuously improve security measures.
Step 6: Training and Culture Change
Implementing Zero Trust affects all aspects of an organization and requires a shift in culture and mindset:
Comprehensive Training: Educate staff about the principles of Zero Trust, their roles within the system, and the importance of security in their daily activities.
Promote Security Awareness: Foster a security-first culture where all employees are vigilant and proactive about security challenges.
Challenges in Implementation
The transition to Zero Trust is not without its challenges:
Complexity in Integration: Integrating Zero Trust with existing IT and legacy systems can be complex and resource-intensive.
Resistance to Change: Operational disruptions and skepticism from stakeholders can impede progress.
Cost Implications: Initial setup, especially in large organizations, can be costly and require significant technological and training investments.
Successfully implementing Zero Trust Architecture demands a comprehensive approach beyond technology, including governance, behavior change, and continuous improvement. By following these steps, organizations can enhance their cybersecurity defenses and build a more resilient and adaptive security posture equipped to handle the threats of a dynamic digital world.
Impact and Benefits of Zero Trust Architecture
Implementing Zero Trust Architecture (ZTA) has far-reaching implications for an organization's cybersecurity posture. This section evaluates the tangible impacts and benefits that Zero Trust provides, supported by data-driven outcomes and real-world applications.
Reducing the Attack Surface
Zero Trust minimizes the organization's attack surface by enforcing strict access controls and network segmentation. With the principle of least privilege, access is granted only based on necessity, significantly reducing the potential pathways an attacker can exploit.
Statistical Impact
Organizations employing Zero Trust principles have observed a marked decrease in the incidence of successful breaches. For instance, a report by Forrester noted that Zero Trust adopters saw a 30% reduction in security breaches.
Case Study
A notable financial institution implemented Zero Trust strategies and reduced the scope of breach impact by 40%, significantly lowering their incident response and recovery costs.
Enhancing Regulatory Compliance
Zero Trust aids in compliance with stringent data protection regulations such as GDPR, HIPAA, and PCI-DSS by providing robust mechanisms to protect sensitive information and report on data access and usage.
Compliance Metrics
Businesses that transition to Zero Trust report higher compliance rates, with improved audit performance due to better visibility and control over data access and usage.
Improving Detection and Response Times
The continuous monitoring component of Zero Trust ensures that anomalies are detected swiftly, enabling quicker response to potential threats. This dynamic approach helps in adapting to emerging threats more effectively.
Operational Efficiency
Studies show that organizations using Zero Trust frameworks have improved their threat detection and response times by up to 50%, enhancing operational resilience.
Cost-Effectiveness
While the initial investment in Zero Trust might be considerable, the architecture can lead to significant cost savings in the long term through reduced breach-related costs and more efficient IT operations.
Economic Benefits
Analysis indicates that organizations implementing Zero Trust save on average 30% in incident response costs due to the efficiency and efficacy of their security operations.
Future-Proofing Security
Zero Trust architectures aim to be flexible and adaptable, which makes them particularly suited to evolving alongside emerging technologies and changing business models, thus future-proofing an organization's security strategy.
Strategic Advantage
Adopting Zero Trust provides a strategic advantage in security management, positioning organizations to quickly adapt to new technologies and business practices without compromising security.
The impacts and benefits of Zero Trust Architecture make a compelling case for its adoption. As the digital landscape continues to evolve, the principles of Zero Trust provide a resilient and adaptable framework that addresses current security challenges and anticipates future threats. By embracing Zero Trust, organizations can significantly enhance their security posture, ensuring robust defense mechanisms that scale with their growth and technological advancements.
Future Trends and Evolution of Zero Trust
With digital transformation come highly sophisticated cybersecurity threats, pushing Zero Trust Architecture (ZTA) to evolve in response to these dynamic challenges. In this final section, we explore future Zero Trust trends, their ongoing development, and the potential challenges organizations may face as they continue to implement this security model.
Evolution of Zero Trust Principles
Zero Trust is not a static model and must continuously be refined as new technologies and threat vectors emerge. Critical areas of evolution include:
Integration with Emerging Technologies
As organizations increasingly adopt technologies like 5G, IoT, and AI, Zero Trust principles must be adapted to secure these environments effectively. For example, the proliferation of IoT devices increases the attack surface, necessitating more robust identity verification and device security measures within a Zero Trust framework.
Advanced Threat Detection Using AI
Artificial Intelligence and Machine Learning will play pivotal roles in enhancing the predictive capabilities of zero-trust systems. AI can analyze vast amounts of data to detect patterns and anomalies that signify potential threats, enabling proactive threat management and adaptive response strategies.
Challenges in Scaling Zero Trust
As Zero Trust gains visibility, organizations may encounter several challenges in scaling it, many of them the same integration, cultural, and cost hurdles described in the implementation section above.
Future Research and Standardization
Continued research and standardization efforts are needed to address gaps in Zero Trust methodologies and to develop best practices for their implementation. Industry collaboration and partnerships will be vital in creating standardized frameworks that effectively guide organizations in adopting Zero Trust.
Developing Zero Trust Maturity Models
Future efforts could focus on developing maturity models that help organizations assess their current capabilities and guide their progression toward more advanced Zero Trust implementations.
Legal and Regulatory Considerations
As Zero Trust impacts data privacy and security, future legal frameworks must consider how Zero Trust practices align with global data protection regulations. Ensuring compliance while implementing Zero Trust will be an ongoing challenge.
The future of Zero Trust Architecture is one of continual adaptation and refinement. By staying ahead of technological advancements and aligning with emerging security trends, Zero Trust can provide organizations with a robust framework capable of defending against the increasingly sophisticated cyber threats of the digital age. As this journey unfolds, embracing Zero Trust will enhance security and empower organizations to innovate and grow confidently.
Concluding Thoughts:
As cyber threats keep evolving, Zero Trust Architecture (ZTA) emerges as the most effective cybersecurity strategy, pivotal for safeguarding organizational assets in an increasingly interconnected world. The implementation of Zero Trust not only enhances security postures but also prompts a significant shift in organizational culture and operational frameworks. How will integrating advanced technologies like AI and blockchain influence the evolution of zero-trust policies? Can Zero Trust principles keep pace with the rapid expansion of IoT devices across corporate networks?
Furthermore, questions about their scalability and adaptability remain at the forefront as Zero Trust principles evolve. How will organizations overcome the complexities of deploying Zero Trust across diverse and global infrastructures? Addressing these challenges and questions will be crucial for organizations that leverage Zero Trust Architecture effectively.
How Coditude can help you
For businesses looking to navigate the complexities of Zero Trust and fortify their cybersecurity measures, partnering with experienced technology providers like Coditude offers a reassuring pathway to success. Coditude's expertise in cutting-edge security solutions can help demystify Zero Trust implementation and tailor a strategy that aligns with your business objectives. Connect with Coditude today to secure your digital assets and embrace the future of cybersecurity with confidence.
0 notes
assassin1513 · 7 months
Text
🌲The Beauty of TTP 2 part three🌲
38 notes · View notes
habilelabs · 4 months
Text
"What is Universal Design? Learn here, how accessibility plays a role in web designing, and what points are to be remembered for the perfect design. So, let’s dive into"
0 notes
Text
Building Works
We undertake all aspects of building works such as extensions, loft conversions, garage conversions, porches, and internal structural and non-structural modifications. We can liaise with planning and building control on your behalf and advise you throughout the process. All trades are in-house, so we can manage the project from start to finish for you.
1 note · View note
jcmarchi · 6 months
Text
ScreenAI: A visual language model for UI and visually-situated language understanding
New Post has been published on https://thedigitalinsider.com/screenai-a-visual-language-model-for-ui-and-visually-situated-language-understanding/
Posted by Srinivas Sunkara and Gilles Baechler, Software Engineers, Google Research
Screen user interfaces (UIs) and infographics, such as charts, diagrams and tables, play important roles in human communication and human-machine interaction as they facilitate rich and interactive user experiences. UIs and infographics share similar design principles and visual language (e.g., icons and layouts), that offer an opportunity to build a single model that can understand, reason, and interact with these interfaces. However, because of their complexity and varied presentation formats, infographics and UIs present a unique modeling challenge.
To that end, we introduce “ScreenAI: A Vision-Language Model for UI and Infographics Understanding”. ScreenAI improves upon the PaLI architecture with the flexible patching strategy from pix2struct. We train ScreenAI on a unique mixture of datasets and tasks, including a novel Screen Annotation task that requires the model to identify UI element information (i.e., type, location and description) on a screen. These text annotations provide large language models (LLMs) with screen descriptions, enabling them to automatically generate question-answering (QA), UI navigation, and summarization training datasets at scale. At only 5B parameters, ScreenAI achieves state-of-the-art results on UI- and infographic-based tasks (WebSRC and MoTIF), and best-in-class performance on Chart QA, DocVQA, and InfographicVQA compared to models of similar size. We are also releasing three new datasets: Screen Annotation to evaluate the layout understanding capability of the model, as well as ScreenQA Short and Complex ScreenQA for a more comprehensive evaluation of its QA capability.
ScreenAI
ScreenAI’s architecture is based on PaLI, composed of a multimodal encoder block and an autoregressive decoder. The PaLI encoder uses a vision transformer (ViT) that creates image embeddings and a multimodal encoder that takes the concatenation of the image and text embeddings as input. This flexible architecture allows ScreenAI to solve vision tasks that can be recast as text+image-to-text problems.
On top of the PaLI architecture, we employ a flexible patching strategy introduced in pix2struct. Instead of using a fixed-grid pattern, the grid dimensions are selected such that they preserve the native aspect ratio of the input image. This enables ScreenAI to work well across images of various aspect ratios.
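As a rough sketch of what aspect-ratio-aware patching can look like (the exact selection rule in pix2struct and ScreenAI may differ), the following Java snippet picks grid dimensions for a fixed patch budget while staying close to the image's native width-to-height ratio.

// A rough sketch of aspect-ratio-preserving grid selection: given a budget
// of N patches, choose rows x cols so that rows * cols <= N and cols/rows
// approximates the image's width/height ratio. The actual rule used by
// pix2struct/ScreenAI may differ; this only illustrates the idea.
public class PatchGrid {

    static int[] selectGrid(int imageWidth, int imageHeight, int maxPatches) {
        double aspect = (double) imageWidth / imageHeight;
        int bestRows = 1, bestCols = 1;
        double bestError = Double.MAX_VALUE;

        for (int rows = 1; rows <= maxPatches; rows++) {
            int cols = Math.max(1, maxPatches / rows);   // spend the budget for this row count
            double error = Math.abs((double) cols / rows - aspect);
            if (error < bestError) {
                bestError = error;
                bestRows = rows;
                bestCols = cols;
            }
        }
        return new int[] { bestRows, bestCols };
    }

    public static void main(String[] args) {
        int[] grid = selectGrid(1920, 1080, 64);   // wide desktop screenshot
        System.out.println(grid[0] + " rows x " + grid[1] + " cols"); // 6 rows x 10 cols
    }
}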
The ScreenAI model is trained in two stages: a pre-training stage followed by a fine-tuning stage. First, self-supervised learning is applied to automatically generate data labels, which are then used to train ViT and the language model. ViT is frozen during the fine-tuning stage, where most data used is manually labeled by human raters.
ScreenAI model architecture.
Data generation
To create a pre-training dataset for ScreenAI, we first compile an extensive collection of screenshots from various devices, including desktops, mobile, and tablets. This is achieved by using publicly accessible web pages and following the programmatic exploration approach used for the RICO dataset for mobile apps. We then apply a layout annotator, based on the DETR model, that identifies and labels a wide range of UI elements (e.g., image, pictogram, button, text) and their spatial relationships. Pictograms undergo further analysis using an icon classifier capable of distinguishing 77 different icon types. This detailed classification is essential for interpreting the subtle information conveyed through icons. For icons that are not covered by the classifier, and for infographics and images, we use the PaLI image captioning model to generate descriptive captions that provide contextual information. We also apply an optical character recognition (OCR) engine to extract and annotate textual content on screen. We combine the OCR text with the previous annotations to create a detailed description of each screen.
A mobile app screenshot with generated annotations that include UI elements and their descriptions, e.g., TEXT elements also contain the text content from OCR, IMAGE elements contain image captions, LIST_ITEMs contain all their child elements.
LLM-based data generation
We enhance the pre-training data’s diversity using PaLM 2 to generate input-output pairs in a two-step process. First, screen annotations are generated using the technique outlined above, then we craft a prompt around this schema for the LLM to create synthetic data. This process requires prompt engineering and iterative refinement to find an effective prompt. We assess the generated data’s quality through human validation against a quality threshold.
You only speak JSON. Do not write text that isn't JSON. You are given the following mobile screenshot, described in words. Can you generate 5 questions regarding the content of the screenshot as well as the corresponding short answers to them? The answer should be as short as possible, containing only the necessary information. Your answer should be structured as follows:
questions: [
  question: the question,
  answer: the answer,
  ...
]
THE SCREEN SCHEMA
A sample prompt for QA data generation.
By combining the natural language capabilities of LLMs with a structured schema, we simulate a wide range of user interactions and scenarios to generate synthetic, realistic tasks. In particular, we generate three categories of tasks:
Question answering: The model is asked to answer questions regarding the content of the screenshots, e.g., “When does the restaurant open?”
Screen navigation: The model is asked to convert a natural language utterance into an executable action on a screen, e.g., “Click the search button.”
Screen summarization: The model is asked to summarize the screen content in one or two sentences.
Block diagram of our workflow for generating data for QA, summarization and navigation tasks using existing ScreenAI models and LLMs. Each task uses a custom prompt to emphasize desired aspects, like questions related to counting, involving reasoning, etc.
LLM-generated data. Examples for screen QA, navigation and summarization. For navigation, the action bounding box is displayed in red on the screenshot.
Experiments and results
As previously mentioned, ScreenAI is trained in two stages: pre-training and fine-tuning. Pre-training data labels are obtained using self-supervised learning and fine-tuning data labels comes from human raters.
We fine-tune ScreenAI using public QA, summarization, and navigation datasets and a variety of tasks related to UIs. For QA, we use well established benchmarks in the multimodal and document understanding field, such as ChartQA, DocVQA, Multi page DocVQA, InfographicVQA, OCR VQA, Web SRC and ScreenQA. For navigation, datasets used include Referring Expressions, MoTIF, Mug, and Android in the Wild. Finally, we use Screen2Words for screen summarization and Widget Captioning for describing specific UI elements. Along with the fine-tuning datasets, we evaluate the fine-tuned ScreenAI model using three novel benchmarks:
Screen Annotation: Enables the evaluation model layout annotations and spatial understanding capabilities.
ScreenQA Short: A variation of ScreenQA, where its ground truth answers have been shortened to contain only the relevant information that better aligns with other QA tasks.
Complex ScreenQA: Complements ScreenQA Short with more difficult questions (counting, arithmetic, comparison, and non-answerable questions) and contains screens with various aspect ratios.
The fine-tuned ScreenAI model achieves state-of-the-art results on various UI and infographic-based tasks (WebSRC and MoTIF) and best-in-class performance on Chart QA, DocVQA, and InfographicVQA compared to models of similar size. ScreenAI achieves competitive performance on Screen2Words and OCR-VQA. Additionally, we report results on the new benchmark datasets introduced to serve as a baseline for further research.
Comparing model performance of ScreenAI with state-of-the-art (SOTA) models of similar size.
Next, we examine ScreenAI’s scaling capabilities and observe that across all tasks, increasing the model size improves performances and the improvements have not saturated at the largest size.
Model performance increases with size, and the performance has not saturated even at the largest size of 5B params.
Conclusion
We introduce the ScreenAI model along with a unified representation that enables us to develop self-supervised learning tasks leveraging data from all these domains. We also illustrate the impact of data generation using LLMs and investigate improving model performance on specific aspects by modifying the training mixture. We apply all of these techniques to build multi-task trained models that perform competitively with state-of-the-art approaches on a number of public benchmarks. However, we also note that our approach still lags behind large models and further research is needed to bridge this gap.
Acknowledgements
This project is the result of joint work with Maria Wang, Fedir Zubach, Hassan Mansoor, Vincent Etter, Victor Carbune, Jason Lin, Jindong Chen and Abhanshu Sharma. We thank Fangyu Liu, Xi Chen, Efi Kokiopoulou, Jesse Berent, Gabriel Barcik, Lukas Zilka, Oriana Riva, Gang Li, Yang Li, Radu Soricut, and Tania Bedrax-Weiss for their insightful feedback and discussions, along with Rahul Aralikatte, Hao Cheng and Daniel Kim for their support in data preparation. We also thank Jay Yagnik, Blaise Aguera y Arcas, Ewa Dominowska, David Petrou, and Matt Sharifi for their leadership, vision and support. We are very grateful to Tom Small for helping us create the animation in this post.
0 notes