#Sge-Ceo
Text
This article is mostly informational and brings us up to date on the changes Google has rolled out and on the SGE, Gemini, and Ads updates announced directly by Google's CEO. https://www.sitiweb.re/il-ceo-di-google-parla-del-futuro-di-sge-gemini-e-ads/ #seo #google #gemini #ads
4 notes
Text
SGE Optimization: An Opportunity to Diversify Your Local Search Strategy
Search Generative Experience (SGE) is continuing to drive a shift in Google search results. As AI innovation continues to deliver more comprehensive and informative answers to Google searches, businesses must ensure their digital presence remains relevant – especially as we can expect SGE results to be integrated directly into most Search Engine Results Pages (SERPs), as well as within Local/Maps results, sometime this year.
SGE is no longer a trend, but a major component of Google’s search business. As Google CEO Sundar Pichai noted in the company’s recent earnings call, the organization is “seeing an increase in Search usage among people who use the new AI overviews, as well as increased user satisfaction with the results.”
The question is – how much will businesses need to overhaul their Google Business Profiles in order to remain relevant in local search?
My answer: it depends.
There are many factors to consider. To start, do you have a deep understanding of your customer and the path they take to even begin the search that results in finding your business? Where are you at with other online marketing strategies? How much time have you spent already optimizing your business for local search?
To really understand the best path forward for your business, let’s dive in a bit deeper.
Understanding SGE’s Impact on Local Search
SGE has changed local search and will continue to play a role in its evolution, so businesses must understand it in order to optimize for the future. Where traditional search results offer a list of links to business listings, websites and sponsored pages from Google, SGE uses generative AI to create snapshots that answer user queries by summarizing content from Google Business Profiles, websites, social media and customer reviews. This approach allows search to unfold as a series of refined questions and follow-ups, similar to the way ChatGPT functions.
When a potential customer conducts a local business search, SGE provides a comprehensive summary of businesses, including customer ratings and reviews, presenting a more informative search result. While traditional search usually leads with ads, SGE (for now) focuses on providing users with more detailed information regardless of whether a business is paying for it to be there.
As it stands today, SGE snippets are more likely to be triggered when a searcher enters a longer query. According to a recent study from SERanking, single-word queries triggered an SGE snippet in approximately 12% of cases, while five-word queries triggered an SGE snippet in nearly 20% of cases.
Let’s look at an example. Say you search for “Subaru dealers near me.” If you use traditional Google search, results will consist of a bunch of sponsored links and a 3-pack (a format for displaying top results for business pages):
When you use SGE to perform the same search, you get a concise summary of nearby dealers – and customer review ratings appear alongside each location. SGE has pulled from those dealers’ websites, business listings, rating pages and other digital channels to summarize this information. Most interestingly, there are zero ads anywhere on the page featuring SGE results – although ads will almost certainly appear soon, given how dependent Google is on ad revenue.
Now let’s try a more refined search in SGE – this time, Chevrolet service center:
The results here are busy and hard to parse even without any ads on the page. SGE returns multiple results from the same dealer without clearly delineating which is a service-center page versus a dealer/sales page. The additional information it chooses to show is also inconsistent: some descriptions appear to be drawn from user reviews (“some say”), while others seem to summarize the webpage or rewrite the business description.
Traditional search also surfaces service listings, but only three appear, with no ratings or other details attached. SGE, by contrast, is improving its understanding of these listings: it now recognizes that the service center is a department within the dealership, so service-oriented reviews can be attributed to the service listing.
This busy experience is similar for non-branded searches, but the execution varies with the query. For example, “best coffee shops in Lincoln Square to work” returns relevant, descriptive answers about local coffee shops – and remember, these results appear above all other organic and paid search results. Also note the comments about food and drink offerings as well as atmosphere.
When you compare the SGE results to the traditional Featured Snippet from TripAdvisor that appears below it, you can see how this type of experience is likely to keep searchers more engaged, especially if they take advantage of the refined follow-up searches Google is encouraging. Granted, people who simply need to complete utility searches (such as movie times nearby) won’t necessarily care to have a deep conversation in order to complete their search. But when SGE is integrated into every SERP, utility users may still benefit from a better experience thanks to more relevant results and, as a bonus, the lack of ads (for now).
Optimizing and Diversifying to Navigate the Future of SGE
Given the changes – and media coverage – resulting from SGE innovations, it might seem like every business must rush to overhaul its local search strategy. I would advise anyone tempted to rush to first take a step back and conduct a thoughtful audit of their local search strategy.
Here are the steps to take to optimize your business’ local search strategy for SGE:
1. Prioritize Google Business Profile:
Ensure that your business listings are accurate and reflect the scale and depth of your offerings. Noting details that might seem small – like free Wi-Fi or accessible parking – will play a big role in making your business findable in conversational search.
2. Speak to your Customers:
Opt for a conversational tone in your listings. Make the customer feel like they’re in a one-on-one conversation. Incorporate longtail keywords naturally into your content and avoid keyword stuffing.
3. Proactively Seek Reviews:
Encourage customers to leave reviews on your GBP listings. These reviews directly influence the AI-backed SGE results, which often surface the most representative ones.
4. Elicit Detailed Insights:
Encourage customers to provide detailed feedback when they leave reviews. As AI technologies evolve, users will start posing more nuanced inquiries about local businesses, and details about the Wi-Fi strength at your coffee shop or ambiance at your restaurant will help improve your ranking.
5. Make the Most of Multiple GBP Listings:
Businesses, especially car dealerships, can maintain separate listings for sales, parts, and service, which enhances the search experience for potential customers.
6. Consistency is Key:
Your website’s information needs to be in sync with the listing content. SGE also includes a website carousel in its results, giving the user the option to visit your site directly.
Navigating the SGE Future
SGE is changing local search, but businesses can adapt to the evolving dynamics by mastering the basics and diversifying their strategies: optimizing their business listings, maintaining up-to-date, consistent content across platforms, and seeking feedback from customers. Businesses that are able to adapt and evolve will reap the benefits of higher visibility, more engagement and, ultimately, growth.
0 notes
Text
What does Google CEO Sundar Pichai have to say about small publishers that have lost significant organic search traffic?
The following excerpt is from an interview published today that also covers SGE and public sentiment on AI Overviews.
0 notes
Text
A new step into the Gemini era
Google has invested heavily in AI for many years as the best way to improve all of our products, including Google Search. As one example of that progress, we are pleased to offer the Search Generative Experience (SGE), an experimental generative AI search experience available in Google Labs. AI is also at the center of two of our fastest-growing businesses in recent years – Google Cloud and Google Workspace – as well as Google One, our subscription service approaching 100 million subscribers.
A new state of the art
Last December, as the start of the Gemini era – making AI more helpful for everyone – Gemini achieved state-of-the-art results on benchmarks across text, images, audio, and video. But Gemini has become more than an AI model. It supports an entire ecosystem, from products used daily by billions of people to the APIs and platforms that help developers and businesses innovate.
Ultra 1.0, our largest and most capable model, is the first to outperform human experts on MMLU (Measuring Massive Multitask Language Understanding), which tests knowledge and problem-solving ability across a combination of 57 subjects including math, physics, history, law, medicine, and ethics. Today we are taking the next step and bringing Ultra to Google products for users around the world.
Introducing Gemini Advanced
To make clear that Gemini is Google's most capable AI model and that these products reflect that core technology, Bard is becoming Gemini starting today. Gemini is available on the web in 40 languages, through a new Gemini app on Android, and in the Google app on iOS.
Gemini Advanced, powered by Ultra (available in English only), offers a new experience with significantly improved reasoning, comprehension and summarization, coding, and more. For example, it can act as a personal tutor tailored to your learning style, or as a creative partner that helps you plan a content strategy or build a business plan. See this blog post for details.
Anyone can use Gemini Advanced by subscribing to the new Google One AI Premium Plan, which brings Google's best AI features together in one place. The AI Premium Plan builds on Google One, which provides expanded storage and other product benefits.
Bringing Gemini to more products
Gemini models are also coming to products that people and businesses use every day, such as Google Workspace and Google Cloud.
* Google Workspace: More than a million people are already using features like Help me write through Duet AI to boost their productivity and creativity. Duet AI for Workspace is becoming Gemini for Workspace, and soon Google One AI Premium Plan subscribers will be able to use Gemini in Gmail, Docs, Sheets, Slides, and Meet.
* Google Cloud: In the coming weeks, Duet AI in Google Cloud will also become Gemini for Google Cloud. For enterprises, Gemini for Google Cloud brings countless benefits, from improving productivity and accelerating developers' coding to protecting organizations from cyberattacks.
What's next
Developers are at the heart of every major technology, and they will remain just as important in the Gemini ecosystem. Hundreds of thousands of developers and businesses are already building with Gemini models. Today we introduced Gemini Advanced and its new capabilities; next week we will share more about what's coming for developers and Cloud customers.
Google is advancing this technology boldly and responsibly. We are already training the next generation of Gemini models, so stay tuned.
Posted by Sundar Pichai, CEO, Google/Alphabet
http://japan.googleblog.com/2024/02/gemini.html?utm_source=dlvr.it&utm_medium=tumblr
Google Japan Blog
0 notes
Text
2023: A year of groundbreaking advances in AI and computing
Posted by Jeff Dean, Chief Scientist, Google DeepMind & Google Research, Demis Hassabis, CEO, Google DeepMind, and James Manyika, SVP, Google Research, Technology & Society
This has been a year of incredible progress in the field of Artificial Intelligence (AI) research and its practical applications.
As ongoing research pushes AI even farther, we look back to our perspective published in January of this year, titled “Why we focus on AI (and to what end),” where we noted:
We are committed to leading and setting the standard in developing and shipping useful and beneficial applications, applying ethical principles grounded in human values, and evolving our approaches as we learn from research, experience, users, and the wider community.
We also believe that getting AI right — which to us involves innovating and delivering widely accessible benefits to people and society, while mitigating its risks — must be a collective effort involving us and others, including researchers, developers, users (individuals, businesses, and other organizations), governments, regulators, and citizens.
We are convinced that the AI-enabled innovations we are focused on developing and delivering boldly and responsibly are useful, compelling, and have the potential to assist and improve lives of people everywhere — this is what compels us.
In this Year-in-Review post we’ll go over some of Google Research’s and Google DeepMind’s efforts putting these paragraphs into practice safely throughout 2023.
Advances in products & technologies
This was the year generative AI captured the world’s attention, creating imagery, music, stories, and engaging conversation about everything imaginable, at a level of creativity and a speed almost implausible a few years ago.
In February, we first launched Bard, a tool that you can use to explore creative ideas and explain things simply. It can generate text, translate languages, write different kinds of creative content and more.
In May, we watched the results of months and years of our foundational and applied work announced on stage at Google I/O. Principally, this included PaLM 2, a large language model (LLM) that brought together compute-optimal scaling, an improved dataset mixture, and model architecture to excel at advanced reasoning tasks.
By fine-tuning and instruction-tuning PaLM 2 for different purposes, we were able to integrate it into numerous Google products and features, including:
An update to Bard, which enabled multilingual capabilities. Since its initial launch, Bard is now available in more than 40 languages and over 230 countries and territories, and with extensions, Bard can find and show relevant information from Google tools used every day — like Gmail, Google Maps, YouTube, and more.
Search Generative Experience (SGE), which uses LLMs to reimagine both how to organize information and how to help people navigate through it, creating a more fluid, conversational interaction model for our core Search product. This work extended the search engine experience from primarily focused on information retrieval into something much more — capable of retrieval, synthesis, creative generation and continuation of previous searches — while continuing to serve as a connection point between users and the web content they seek.
MusicLM, a text-to-music model powered by AudioLM and MuLAN, which can create music from text, humming, images, or video, as well as musical accompaniments to singing.
Duet AI, our AI-powered collaborator that provides users with assistance when they use Google Workspace and Google Cloud. Duet AI in Google Workspace, for example, helps users write, create images, analyze spreadsheets, draft and summarize emails and chat messages, and summarize meetings. Duet AI in Google Cloud helps users code, deploy, scale, and monitor applications, as well as identify and accelerate resolution of cybersecurity threats.
And many other developments.
In June, following last year’s release of our text-to-image generation model Imagen, we released Imagen Editor, which provides the ability to use region masks and natural language prompts to interactively edit generative images to provide much more precise control over the model output.
Later in the year, we released Imagen 2, which improved outputs via a specialized image aesthetics model based on human preferences for qualities such as good lighting, framing, exposure, and sharpness.
In October, we launched a feature that helps people practice speaking and improve their language skills. The key technology that enabled this functionality was a novel deep learning model developed in collaboration with the Google Translate team, called Deep Aligner. This single new model has led to dramatic improvements in alignment quality across all tested language pairs, reducing average alignment error rate from 25% to 5% compared to alignment approaches based on Hidden Markov models (HMMs).
In November, in partnership with YouTube, we announced Lyria, our most advanced AI music generation model to date. We released two experiments designed to open a new playground for creativity, DreamTrack and music AI tools, in concert with YouTube’s Principles for partnering with the music industry on AI technology.
Then in December, we launched Gemini, our most capable and general AI model. Gemini was built to be multimodal from the ground up across text, audio, image and videos. Our initial family of Gemini models comes in three different sizes, Nano, Pro, and Ultra. Nano models are our smallest and most efficient models for powering on-device experiences in products like Pixel. The Pro model is highly-capable and best for scaling across a wide range of tasks. The Ultra model is our largest and most capable model for highly complex tasks.
In a technical report about Gemini models, we showed that Gemini Ultra’s performance exceeds current state-of-the-art results on 30 of the 32 widely-used academic benchmarks used in LLM research and development. With a score of 90.04%, Gemini Ultra was the first model to outperform human experts on MMLU, and achieved a state-of-the-art score of 59.4% on the new MMMU benchmark.
Building on AlphaCode, the first AI system to perform at the level of the median competitor in competitive programming, we introduced AlphaCode 2 powered by a specialized version of Gemini. When evaluated on the same platform as the original AlphaCode, we found that AlphaCode 2 solved 1.7x more problems, and performed better than 85% of competition participants.
At the same time, Bard got its biggest upgrade with its use of the Gemini Pro model, making it far more capable at things like understanding, summarizing, reasoning, coding, and planning. In six out of eight benchmarks, Gemini Pro outperformed GPT-3.5, including in MMLU, one of the key standards for measuring large AI models, and GSM8K, which measures grade school math reasoning. Gemini Ultra will come to Bard early next year through Bard Advanced, a new cutting-edge AI experience.
Gemini Pro is also available on Vertex AI, Google Cloud’s end-to-end AI platform that empowers developers to build applications that can process information across text, code, images, and video. Gemini Pro was also made available in AI Studio in December.
To best illustrate some of Gemini’s capabilities, we produced a series of short explainer videos showing how Gemini can be used.
ML/AI Research
In addition to our advances in products and technologies, we’ve also made a number of important advancements in the broader fields of machine learning and AI research.
At the heart of the most advanced ML models is the Transformer model architecture, developed by Google researchers in 2017. Originally developed for language, it has proven useful in domains as varied as computer vision, audio, genomics, protein folding, and more. This year, our work on scaling vision transformers demonstrated state-of-the-art results across a wide variety of vision tasks, and has also been useful in building more capable robots.
Expanding the versatility of models requires the ability to perform higher-level and multi-step reasoning. This year, we approached this target following several research tracks. For example, algorithmic prompting is a new method that teaches language models reasoning by demonstrating a sequence of algorithmic steps, which the model can then apply in new contexts. This approach improves accuracy on one middle-school mathematics benchmark from 25.9% to 61.1%.
By providing algorithmic prompts, we can teach a model the rules of arithmetic via in-context learning.
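To make the idea concrete, here is a hypothetical sketch of what an algorithmic prompt for multi-digit addition might look like. It only illustrates the general pattern – spelling out every intermediate step so the model can reuse the procedure in-context – and is not the exact prompt or demonstrations used in the paper.

```python
# A hypothetical algorithmic prompt for addition: the demonstration walks
# through every carry explicitly instead of jumping to the answer, so the
# model can apply the same procedure to the new problem at the end.
# (Illustrative only; not the exact prompt from the paper.)
ALGORITHMIC_PROMPT = """\
Problem: 128 + 367
Explanation: Add digit by digit from right to left, tracking the carry.
Step 1 (ones): 8 + 7 = 15. Write 5, carry 1.
Step 2 (tens): 2 + 6 + 1 (carry) = 9. Write 9, carry 0.
Step 3 (hundreds): 1 + 3 + 0 (carry) = 4. Write 4.
Answer: 495

Problem: 254 + 189
Explanation:"""

# The prompt would then be sent to a language model; `model.generate` below
# is a placeholder for whatever client is used.
# completion = model.generate(ALGORITHMIC_PROMPT)
```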
In the domain of visual question answering, in a collaboration with UC Berkeley researchers, we showed how we could better answer complex visual questions (“Is the carriage to the right of the horse?”) by combining a visual model with a language model trained to answer visual questions by synthesizing a program to perform multi-step reasoning.
We are now using a general model that understands many aspects of the software development life cycle to automatically generate code review comments, respond to code review comments, make performance-improving suggestions for pieces of code (by learning from past such changes in other contexts), fix code in response to compilation errors, and more.
In a multi-year research collaboration with the Google Maps team, we were able to scale inverse reinforcement learning and apply it to the world-scale problem of improving route suggestions for over 1 billion users. Our work culminated in a 16–24% relative improvement in global route match rate, helping to ensure that routes are better aligned with user preferences.
We also continue to work on techniques to improve the inference performance of machine learning models. In work on computationally-friendly approaches to pruning connections in neural networks, we were able to devise an approximation algorithm to the computationally intractable best-subset selection problem that is able to prune 70% of the edges from an image classification model and still retain almost all of the accuracy of the original.
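The approximation algorithm itself is not reproduced here, but as a rough illustration of what pruning 70% of a model’s edges means in practice, here is a minimal magnitude-pruning sketch – a deliberately simpler criterion than the best-subset approximation described above:

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, fraction: float = 0.7) -> np.ndarray:
    """Zero out the smallest-magnitude `fraction` of weights in a layer.

    This simple heuristic stands in for the (much more involved) best-subset
    approximation mentioned above; it only illustrates the sparsity level.
    """
    threshold = np.quantile(np.abs(weights), fraction)
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256))            # a dense weight matrix
W_pruned = magnitude_prune(W, 0.7)         # keep only the largest 30% of edges
print(f"sparsity: {np.mean(W_pruned == 0):.2f}")  # ~0.70
```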
In work on accelerating on-device diffusion models, we were also able to apply a variety of optimizations to attention mechanisms, convolutional kernels, and fusion of operations to make it practical to run high quality image generation models on-device; for example, enabling “a photorealistic and high-resolution image of a cute puppy with surrounding flowers” to be generated in just 12 seconds on a smartphone.
Advances in capable language and multimodal models have also benefited our robotics research efforts. We combined separately trained language, vision, and robotic control models into PaLM-E, an embodied multi-modal model for robotics, and Robotic Transformer 2 (RT-2), a novel vision-language-action (VLA) model that learns from both web and robotics data, and translates this knowledge into generalized instructions for robotic control.
RT-2 architecture and training: We co-fine-tune a pre-trained vision-language model on robotics and web data. The resulting model takes in robot camera images and directly predicts actions for a robot to perform.
Furthermore, we showed how language can also be used to control the gait of quadrupedal robots and explored the use of language to help formulate more explicit reward functions to bridge the gap between human language and robotic actions. Then, in Barkour we benchmarked the agility limits of quadrupedal robots.
Algorithms & optimization
Designing efficient, robust, and scalable algorithms remains a high priority. This year, our work included: applied and scalable algorithms, market algorithms, system efficiency and optimization, and privacy.
We introduced AlphaDev, an AI system that uses reinforcement learning to discover enhanced computer science algorithms. AlphaDev uncovered a faster algorithm for sorting, a method for ordering data, which led to improvements in the LLVM libc++ sorting library that were up to 70% faster for shorter sequences and about 1.7% faster for sequences exceeding 250,000 elements.
We developed a novel model to predict the properties of large graphs, enabling estimation of performance for large programs. We released a new dataset, TPUGraphs, to accelerate open research in this area, and showed how we can use modern ML to improve ML efficiency.
The TPUGraphs dataset has 44 million graphs for ML program optimization.
We developed Prequal, a new load balancing algorithm for distributing queries across servers, which minimizes a combination of requests-in-flight and estimated latency. Deployments across several systems have significantly reduced CPU usage, latency, and RAM consumption. We also designed a new analysis framework for the classical caching problem with capacity reservations.
Heatmaps of normalized CPU usage transitioning to Prequal at 08:00.
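As a rough sketch of the stated objective – choosing a replica by combining in-flight load with a latency estimate – the following toy selector gives the flavor. The linear weighting and the small random candidate sample are illustrative assumptions, not the published Prequal design, which relies on asynchronous probing.

```python
import random
from dataclasses import dataclass

@dataclass
class Replica:
    name: str
    requests_in_flight: int   # queries this replica is currently serving
    est_latency_ms: float     # recent latency estimate (e.g., from probes)

def pick_replica(replicas, ms_per_inflight_request=5.0, sample_size=3):
    """Pick the replica with the lowest combined load/latency score.

    The linear combination and the random candidate sample are illustrative
    assumptions; the real system's probing and scoring details differ.
    """
    candidates = random.sample(replicas, min(sample_size, len(replicas)))
    return min(candidates,
               key=lambda r: r.est_latency_ms
               + ms_per_inflight_request * r.requests_in_flight)

replicas = [Replica("a", 4, 12.0), Replica("b", 1, 30.0), Replica("c", 2, 9.0)]
print(pick_replica(replicas).name)  # picks the cheapest sampled candidate ("c" here)
```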
We improved state-of-the-art in clustering and graph algorithms by developing new techniques for computing minimum-cut, approximating correlation clustering, and massively parallel graph clustering. Additionally, we introduced TeraHAC, a novel hierarchical clustering algorithm for trillion-edge graphs, designed a text clustering algorithm for better scalability while maintaining quality, and designed the most efficient algorithm for approximating the Chamfer Distance, the standard similarity function for multi-embedding models, offering >50× speedups over highly-optimized exact algorithms and scaling to billions of points.
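For reference, here is the brute-force, one-directional Chamfer distance that such approximation algorithms speed up – a minimal NumPy sketch of the standard definition, not the accelerated algorithm described above:

```python
import numpy as np

def chamfer_distance(A: np.ndarray, B: np.ndarray) -> float:
    """Exact one-directional Chamfer distance between point sets A (n, d) and B (m, d).

    For every point in A, find its nearest neighbor in B (Euclidean distance)
    and average those distances. This is the O(n*m*d) baseline; the work above
    is about approximating it much faster at billion-point scale.
    """
    diff = A[:, None, :] - B[None, :, :]          # (n, m, d) pairwise differences
    dists = np.sqrt((diff ** 2).sum(axis=-1))     # (n, m) Euclidean distances
    return float(dists.min(axis=1).mean())

rng = np.random.default_rng(0)
A = rng.normal(size=(100, 64))   # e.g., one item's multi-embedding set
B = rng.normal(size=(200, 64))
print(chamfer_distance(A, B))
```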
We continued optimizing Google’s large embedding models (LEMs), which power many of our core products and recommender systems. Some new techniques include Unified Embedding for battle-tested feature representations in web-scale ML systems and Sequential Attention, which uses attention mechanisms to discover high-quality sparse model architectures during training.
This year, we also continued our research in market algorithms to design computationally efficient marketplaces and causal inference. First, we remain committed to advancing the rapidly growing interest in ads automation for which our recent work explains the adoption of autobidding mechanisms and examines the effect of different auction formats on the incentives of advertisers. In the multi-channel setting, our findings shed light on how the choice between local and global optimizations affects the design of multi-channel auction systems and bidding systems.
Beyond auto-bidding systems, we also studied auction design in other complex settings, such as buy-many mechanisms, auctions for heterogeneous bidders, contract designs, and robust online bidding algorithms. Motivated by the application of generative AI in collaborative creation (e.g., a joint ad for advertisers), we proposed a novel token auction model in which LLMs bid for influence in the collaborative AI creation. Finally, we showed how to mitigate personalization effects in experimental design, which, for example, may cause recommendations to drift over time.
The Chrome Privacy Sandbox, a multi-year collaboration between Google Research and Chrome, has publicly launched several APIs, including for Protected Audience, Topics, and Attribution Reporting. This is a major step in protecting user privacy while supporting the open and free web ecosystem. These efforts have been facilitated by fundamental research on re-identification risk, private streaming computation, optimization of privacy caps and budgets, hierarchical aggregation, and training models with label privacy.
Science and society
In the not too distant future, there is a very real possibility that AI applied to scientific problems can accelerate the rate of discovery in certain domains by 10× or 100×, or more, and lead to major advances in diverse areas including bioengineering, materials science, weather prediction, climate forecasting, neuroscience, genetic medicine, and healthcare.
Sustainability and climate change
In Project Green Light, we partnered with 13 cities around the world to help improve traffic flow at intersections and reduce stop-and-go emissions. Early numbers from these partnerships indicate a potential for up to 30% reduction in stops and up to 10% reduction in emissions.
In our contrails work, we analyzed large-scale weather data, historical satellite images, and past flights. We trained an AI model to predict where contrails form and reroute airplanes accordingly. In partnership with American Airlines and Breakthrough Energy, we used this system to demonstrate contrail reduction by 54%.
Contrails detected over the United States using AI and GOES-16 satellite imagery.
We are also developing novel technology-driven approaches to help communities cope with the effects of climate change. For example, we have expanded our flood forecasting coverage to 80 countries, which directly impacts more than 460 million people. We have initiated a number of research efforts to help mitigate the increasing danger of wildfires, including real-time tracking of wildfire boundaries using satellite imagery, and work that improves emergency evacuation plans for communities at risk from rapidly spreading wildfires. Our partnership with American Forests puts data from our Tree Canopy project to work in their Tree Equity Score platform, helping communities identify and address unequal access to trees.
Finally, we continued to develop better models for weather prediction at longer time horizons. Improving on MetNet and MetNet-2, our work this year on MetNet-3 now outperforms traditional numerical weather simulations for forecasts up to twenty-four hours ahead. In the area of medium-term, global weather forecasting, our work on GraphCast showed significantly better prediction accuracy for up to 10 days compared to HRES, the most accurate operational deterministic forecast, produced by the European Centre for Medium-Range Weather Forecasts (ECMWF). In collaboration with ECMWF, we released WeatherBench-2, a benchmark for evaluating the accuracy of weather forecasts in a common framework.
A selection of GraphCast’s predictions rolling across 10 days showing specific humidity at 700 hectopascals (about 3 km above surface), surface temperature, and surface wind speed.
Health and the life sciences
The potential of AI to dramatically improve processes in healthcare is significant. Our initial Med-PaLM model was the first model capable of achieving a passing score on the U.S. medical licensing exam. Our more recent Med-PaLM 2 model improved by a further 19%, achieving an expert-level accuracy of 86.5%. These Med-PaLM models are language-based, enable clinicians to ask questions and have a dialogue about complex medical conditions, and are available to healthcare organizations as part of MedLM through Google Cloud.
In the same way our general language models are evolving to handle multiple modalities, we have recently shown research on a multimodal version of Med-PaLM capable of interpreting medical images, textual data, and other modalities, describing a path for how we can realize the exciting potential of AI models to help advance real-world clinical care.
Med-PaLM M is a large multimodal generative model that flexibly encodes and interprets biomedical data including clinical language, imaging, and genomics with the same model weights.
We have also been working on how best to harness AI models in clinical workflows. We have shown that coupling deep learning with interpretability methods can yield new insights for clinicians. We have also shown that self-supervised learning, with careful consideration of privacy, safety, fairness and ethics, can reduce the amount of de-identified data needed to train clinically relevant medical imaging models by 3×–100×, reducing the barriers to adoption of models in real clinical settings. We also released an open source mobile data collection platform for people with chronic disease to provide tools to the community to build their own studies.
AI systems can also discover completely new signals and biomarkers in existing forms of medical data. In work on novel biomarkers discovered in retinal images, we demonstrated that a number of systemic biomarkers spanning several organ systems (e.g., kidney, blood, liver) can be predicted from external eye photos. In other work, we showed that combining retinal images and genomic information helps identify some underlying factors of aging.
In the genomics space, we worked with 119 scientists across 60 institutions to create a new map of the human genome, or pangenome. This more equitable pangenome better represents the genomic diversity of global populations. Building on our ground-breaking AlphaFold work, our work on AlphaMissense this year provides a catalog of predictions for 89% of all 71 million possible missense variants as either likely pathogenic or likely benign.
Examples of AlphaMissense predictions overlaid on AlphaFold predicted structures (red – predicted as pathogenic; blue – predicted as benign; grey – uncertain). Red dots represent known pathogenic missense variants, blue dots represent known benign variants. Left: HBB protein. Variants in this protein can cause sickle cell anaemia. Right: CFTR protein. Variants in this protein can cause cystic fibrosis.
We also shared an update on progress towards the next generation of AlphaFold. Our latest model can now generate predictions for nearly all molecules in the Protein Data Bank (PDB), frequently reaching atomic accuracy. This unlocks new understanding and significantly improves accuracy in multiple key biomolecule classes, including ligands (small molecules), proteins, nucleic acids (DNA and RNA), and those containing post-translational modifications (PTMs).
On the neuroscience front, we announced a new collaboration with Harvard, Princeton, the NIH, and others to map an entire mouse brain at synaptic resolution, beginning with a first phase that will focus on the hippocampal formation — the area of the brain responsible for memory formation, spatial navigation, and other important functions.
Quantum computing
Quantum computers have the potential to solve big, real-world problems across science and industry. But to realize that potential, they must be significantly larger than they are today, and they must reliably perform tasks that cannot be performed on classical computers.
This year, we took an important step towards the development of a large-scale, useful quantum computer. Our breakthrough is the first demonstration of quantum error correction, showing that it’s possible to reduce errors while also increasing the number of qubits. To enable real-world applications, these qubit building blocks must perform more reliably, lowering the error rate from the ~1 in 10³ typically seen today to ~1 in 10⁸.
Responsible AI research
Design for Responsibility
Generative AI is having a transformative impact in a wide range of fields including healthcare, education, security, energy, transportation, manufacturing, and entertainment. Given these advances, the importance of designing technologies consistent with our AI Principles remains a top priority. We also recently published case studies of emerging practices in society-centered AI. And in our annual AI Principles Progress Update, we offer details on how our Responsible AI research is integrated into products and risk management processes.
Proactive design for Responsible AI begins with identifying and documenting potential harms. For example, we recently introduced a three-layered context-based framework for comprehensively evaluating the social and ethical risks of AI systems. During model design, harms can be mitigated with the use of responsible datasets.
We are partnering with Howard University to build high quality African-American English (AAE) datasets to improve our products and make them work well for more people. Our research on globally inclusive cultural representation and our publication of the Monk Skin Tone scale furthers our commitments to equitable representation of all people. The insights we gain and techniques we develop not only help us improve our own models, they also power large-scale studies of representation in popular media to inform and inspire more inclusive content creation around the world.
With advances in generative image models, fair and inclusive representation of people remains a top priority. In the development pipeline, we are working to amplify underrepresented voices and to better integrate social context knowledge. We proactively address potential harms and bias using classifiers and filters, careful dataset analysis, and in-model mitigations such as fine-tuning, reasoning, few-shot prompting, data augmentation, and controlled decoding. Our research showed that generative AI enables higher-quality safety classifiers to be developed with far less data. We also released a powerful way to better tune models with less data, giving developers more control over responsibility challenges in generative AI.
We have developed new state-of-the-art explainability methods to identify the role of training data on model behaviors. By combining training data attribution methods with agile classifiers, we found that we can identify mislabelled training examples. This makes it possible to reduce the noise in training data, leading to significant improvements in model accuracy.
We initiated several efforts to improve safety and transparency about online content. For example, we introduced SynthID, a tool for watermarking and identifying AI-generated images. SynthID is imperceptible to the human eye, doesn’t compromise image quality, and allows the watermark to remain detectable, even after modifications like adding filters, changing colors, and saving with various lossy compression schemes.
We also launched About This Image to help people assess the credibility of images, showing information like an image’s history, how it’s used on other pages, and available metadata about an image. And we explored safety methods that have been developed in other fields, learning from established situations where there is low-risk tolerance.
SynthID generates an imperceptible digital watermark for AI-generated images.
Privacy remains an essential aspect of our commitment to Responsible AI. We continued improving our state-of-the-art privacy preserving learning algorithm DP-FTRL, developed the DP-Alternating Minimization algorithm (DP-AM) to enable personalized recommendations with rigorous privacy protection, and defined a new general paradigm to reduce the privacy costs for many aggregation and learning tasks. We also proposed a scheme for auditing differentially private machine learning systems.
On the applications front we demonstrated that DP-SGD offers a practical solution in the large model fine-tuning regime and showed that images generated by DP diffusion models are useful for a range of downstream tasks. We proposed a new algorithm for DP training of large embedding models that provides efficient training on TPUs without compromising accuracy.
We also teamed up with a broad group of academic and industrial researchers to organize the first Machine Unlearning Challenge to address the scenario in which training images are forgotten to protect the privacy or rights of individuals. We shared a mechanism for extractable memorization, and participatory systems that give users more control over their sensitive data.
We continued to expand the world’s largest corpus of atypical speech recordings to >1M utterances in Project Euphonia, which enabled us to train a Universal Speech Model that recognizes atypical speech 37% better on real-world benchmarks.
We also built an audiobook recommendation system for students with reading disabilities such as dyslexia.
Adversarial testing
Our work in adversarial testing engaged community voices from historically marginalized communities. We partnered with groups such as the Equitable AI Research Round Table (EARR) to ensure we represent the diverse communities who use our models and engage with external users to identify potential harms in generative model outputs.
We established a dedicated Google AI Red Team focused on testing AI models and products for security, privacy, and abuse risks. We showed that attacks such as “poisoning” or adversarial examples can be applied to production models and surface additional risks such as memorization in both image and text generative models. We also demonstrated that defending against such attacks can be challenging, as merely applying defenses can cause other security and privacy leakages. We also introduced model evaluation for extreme risks, such as offensive cyber capabilities or strong manipulation skills.
Democratizing AI through tools and education
As we advance the state-of-the-art in ML and AI, we also want to ensure people can understand and apply AI to specific problems. We released MakerSuite (now Google AI Studio), a web-based tool that enables AI developers to quickly iterate and build lightweight AI-powered apps. To help AI engineers better understand and debug AI, we released LIT 1.0, a state-of-the-art, open-source debugger for machine learning models.
Colab, our tool that helps developers and students access powerful computing resources right in their web browser, reached over 10 million users. We’ve just added AI-powered code assistance to all users at no cost — making Colab an even more helpful and integrated experience in data and ML workflows.
One of the most used features is “Explain error” — whenever the user encounters an execution error in Colab, the code assistance model provides an explanation along with a potential fix.
To ensure AI produces accurate knowledge when put to use, we also recently introduced FunSearch, a new approach that generates verifiably true knowledge in mathematical sciences using evolutionary methods and large language models.
For AI engineers and product designers, we’re updating the People + AI Guidebook with generative AI best practices, and we continue to design AI Explorables, which includes how and why models sometimes make incorrect predictions confidently.
Community engagement
We continue to advance the fields of AI and computer science by publishing much of our work and participating in and organizing conferences. We have published more than 500 papers so far this year, and have strong presences at conferences like ICML (see the Google Research and Google DeepMind posts), ICLR (Google Research, Google DeepMind), NeurIPS (Google Research, Google DeepMind), ICCV, CVPR, ACL, CHI, and Interspeech. We are also working to support researchers around the world, participating in events like the Deep Learning Indaba, Khipu, supporting PhD Fellowships in Latin America, and more. We also worked with partners from 33 academic labs to pool data from 22 different robot types and create the Open X-Embodiment dataset and RT-X model to better advance responsible AI development.
Google has spearheaded an industry-wide effort to develop AI safety benchmarks under the MLCommons standards organization with participation from several major players in the generative AI space including OpenAI, Anthropic, Microsoft, Meta, Hugging Face, and more. Along with others in the industry we also co-founded the Frontier Model Forum (FMF), which is focused on ensuring safe and responsible development of frontier AI models. With our FMF partners and other philanthropic organizations, we launched a $10 million AI Safety Fund to advance research into the ongoing development of the tools for society to effectively test and evaluate the most capable AI models.
In close partnership with Google.org, we worked with the United Nations to build the UN Data Commons for the Sustainable Development Goals, a tool that tracks metrics across the 17 Sustainable Development Goals, and supported projects from NGOs, academic institutions, and social enterprises on using AI to accelerate progress on the SDGs.
The items highlighted in this post are a small fraction of the research work we have done throughout the last year. Find out more at the Google Research and Google DeepMind blogs, and our list of publications.
Future vision
As multimodal models become even more capable, they will empower people to make incredible progress in areas from science to education to entirely new areas of knowledge.
Progress continues apace, and as the year advances, and our products and research advance as well, people will find more and more interesting, creative uses for AI.
Ending this Year-in-Review where we began, as we say in Why We Focus on AI (and to what end):
If pursued boldly and responsibly, we believe that AI can be a foundational technology that transforms the lives of people everywhere — this is what excites us!
This Year-in-Review is cross-posted on both the Google Research Blog and the Google DeepMind Blog.
0 notes
Text
Today’s News
Tech
-Qualcomm has unveiled its new Snapdragon X Elite platform, which it claims will deliver superior performance over multiple Intel Core i7 processors, Apple's M2 chip, and AMD's Ryzen 9 7940HS, according to Geekbench 6 benchmark tests.
-Nvidia and AMD are planning to produce Arm-based CPUs by 2025 in direct competition with Intel.
-Amazon has introduced passkey support for customers using browsers, with rollouts planned for the iOS and Android Amazon Shopping apps. Passkeys allow users to sign in using the method they use to unlock their devices, such as a fingerprint, face scan, or a lock screen PIN.
AI
-Artificial intelligence is increasingly being used to generate illegal child sexual abuse images online, posing a significant and growing threat to child safety, according to a report by the Internet Watch Foundation.
-Adult film star Riley Reid has launched Clona.AI, an AI chatbot platform offering fans intimate conversations with their favorite adult stars for $30 a month, using Meta’s Llama 2 large language model, which is heavily refined and retrained to mimic the personality of each star, allowing content creators to directly control the discourse of their AI clones to ensure it's in line with their personal preferences.
-The US government has ordered tech company Nvidia to immediately halt shipping its advanced artificial intelligence chips to China, a move originally planned to take effect 30 days from 17 October.
-Humane's first gadget, the AI Pin, which uses a mix of proprietary software and OpenAI’s GPT-4, will launch on November 9th. It will attach magnetically to clothing and has various features, including making calls, translating speech, and understanding nutritional information. A feature called the "Trust Light" illuminates when the device's camera, microphone, or sensors are recording data.
-A new tool called Nightshade could help artists prevent unauthorized use of their work in generative AI models.
-IBM has acquired Manta Software Inc., a leading data lineage platform, to enhance its capabilities in AI and data governance. The platform aids businesses in understanding their data flows, sources, transformations, and dependencies, thereby increasing transparency and ensuring correct data is used in AI models.
-YouTube Music now allows users to create custom playlist cover art using generative AI, rather than merging cover art for the first four songs of a playlist. The feature, initially available to English-speaking users in the US, enables the creation of AI-generated art in several categories, presenting five options to choose from.
-Sundar Pichai, Alphabet's CEO, shed light on Google's AI search advancements, including SGE, a new search tool that aims to revolutionize the user experience. He also reported that Alphabet's revenue rose 11 percent year over year to $76.7 billion.
-Microsoft's cloud services are seeing impressive growth, and OpenAI plays a significant role: that growth is helping Microsoft's cloud business outpace Google Cloud. As Microsoft leverages OpenAI's technology, it is setting new standards in the cloud industry.
0 notes
Text
Generative artificial intelligence (AI) tools like ChatGPT are seeing rapid adoption, growing faster than smartphones and tablets did after their launches, according to new research from Emarketer. Emarketer reports that generative AI will reach 77.8 million users within two years of ChatGPT's launch in November 2022 – more than double the adoption rate of tablets and smartphones over a similar period. The key factor driving the rapid uptake is the low barrier to entry: unlike smartphones and tablets, generative AI does not require buying new hardware. People can access ChatGPT and similar tools on devices they already own, which has let curiosity flourish even though the technology remains new to many consumers and businesses. Emarketer expects the growth rate to slow after the initial surge, but generative AI use will persist, especially among millennials and Gen Z.
More context on the growth rate
Generative AI reached more than 100 million users just two months after ChatGPT launched. By comparison, smartphones took more than two years to reach 100 million users worldwide after the iPhone launched in 2007, and tablets reached 75 million users worldwide roughly two years after the iPad launched in 2010. Generative AI adoption has therefore far outpaced both categories.
Screenshot from: insiderintelligence.com/content/generative-ai-adoption-climbed-faster-than-smartphones-tablets, August 2023.
Demographic trends
According to Emarketer, generative AI adoption among people aged 18 to 34 is nearly double that of older demographics: 13.5% of millennials and Gen Z have already adopted tools like ChatGPT. Younger demographics are more digitally native and are the first to adopt new technologies like generative AI, and their comfort level helps drive the initial surge in adoption.
Use cases for generative AI
Although generative AI has existed for some time, it exploded in popularity this year as the tech industry's hottest novelty. Consumers enthusiastically experimented with tools like ChatGPT for writing poetry, Midjourney for generating art, and the AI chatbots from Google and Microsoft built into search. Now that the initial frenzy has calmed, questions remain about which generative AI applications will have staying power for businesses. Marketers in particular are asking how best to put these new capabilities to impactful use going forward.
Opportunities and risks for marketers
For SEO professionals and digital marketers, generative AI presents both opportunities and risks. AI-powered search, such as Google's Search Generative Experience (SGE), aims to improve search quality, but there is concern that SGE could surface more low-quality content. Google CEO Sundar Pichai called SGE a "clear quality win" during a recent earnings call, though not all experts agree.
Chatbots such as ChatGPT and Google Bard show promise for marketing applications like customer service, data analysis, brainstorming, and more. Still, human oversight remains critical.
Looking ahead
As generative AI continues to evolve, marketers should monitor developments closely. With careful testing and evaluation, these tools can be leveraged while the potential downsides are mitigated. For now, curiosity and experimentation will likely drive further adoption.
Featured image: sizsus art/Shutterstock
0 notes
Text
trust you, huh?
(heartsick) no, it’s not what you—
hi, i’m carmen or calamity, welcome to my blog! i hope you enjoy your stay!
most people just call me carmen or cal, but i’m also known as:
the ‘monarch of stockholm’, as written by duane capizzi himself
the ceo of the enby carmen agenda
the ceo of the history besties (player and jules!)
the ceo of the boba besties (player and xifeng, and jules by association)
the ceo of writing angst and making everyone around me suffer
lead enforcer and ceo of the #letjuliasayfuck agenda
someone who thinks li xifeng has never done anything wrong in their life ever
ceo of eda and raeda analysis
and any other title people deem fit for me!
my pronouns are they/she, and i’m a nonbinary bisexual :) i post about cartoons, mainly carmen sandiego and the owl house, as well as whatever asks people send me. my carrd is linked in my bio with more information about my interests and socials, so you can check that out!
#my art -> takes you to my artwork
#my writing -> takes you to my writing, you can also find me on ao3 here
#my edits -> takes you to my video edits
#carmen.txt -> takes you to my original textposts
#carmen’s script adventures -> takes you to the scanned images of the stockholm and egyptian decryption caper scripts that duane sent me
#asking carmen -> takes you to my answers to asks
#sge carmen au, #heartstomper au -> takes you to two CS alternate universes i’ve created
feel free to send me an ask or DM if you have any other questions or want to chat! i’ll try my best to answer as soon as possible!
keep fighting the good fight :)
love, carmen ♥️
27 notes
Text
Really can’t stop imagining the SGE sister squad on here singing songs together, can I? Now I keep thinking of us singing What The H*ll by OR3O (which is a fan song for Helltaker which is a video game I’ve never played but looked up on Wikipedia and TV Tropes after listening to the song), with me as Beelzebub the Great Fly and Cerberus the Triple Demon, Kenzie as Pandemonica the Tired Demon and Azazel the Curious Angel, Scuba as Modeus the Lustful Demon and Malina the Sour Demon, Sparkle as Zdrada the B-word Demon and Lucifer the CEO of Hell, and Berry as Justice the Awesome Demon and Judgment the High Prosecutor. I do not know why I feel the need to keep posting about these when I’m pretty sure y’all don’t even know what songs or characters I’m talking about.
#should i go off in the tags about my reasons for the casting again?#beelzebub's apparently the narrator for the game and has some dramatic flair and has been banished to the abyss for doing who knows what#tells the story from there; it's unclear whether it's true or whether she was making it up to entertain herself while stuck in the abyss#i dunno i feel like i could swing that#cerberus apparently comes off as the most energetic and kind of puppy-like; which works for me#why did a coffee-loving demon and an angel studying demons to write a thesis somehow seem to fit with our brain cell kenzie in my mind?#i dunno man#it just did#i put scuba as modeus mainly because of like a few kinda love-thirsty poop posts from her and the fact that they're both cozy sweater girls#as for malina NO i'm not saying you're sour scuba it was mostly 'cause of her stay-at-home video game nerd personality#i don't know if you're a gamer and i don't know how that fits either i dunno how my mind works it just seemed right somehow i dunno#zdrada is her more outgoing and punk sister who's super smug and sadistic#seemed best-fitting for sparkle#and lucifer...literally has the line ''prove to me you can be my most treasured slave''#what; was i NOT gonna give sparkle the role that had lines like that in the song?#justice is cool and laid-back and friendly; she's apparently the one demon that DOESN'T try to kill you#but like she also used to be the high prosecutor back in the day so if that's anything to go by she totally COULD if she wanted to#yeah that works for berry#and judgment the current high prosecutor DOES seem the most threatening and puts you through the sin machine to destroy you#she's also a VERY large ham#but apparently all that's more because of her job than anything and once you get beat that part of the game she's actually pretty nice#berry definitely has some experience playing the big bad villain and nearly destroying people but dropping the act and being nice afterwards#sge sisters
0 notes
Link
MOST PEOPLE REMEMBER the last day of a temp job. Maybe colleagues take you out to lunch; maybe you send that goodbye email promising to keep in touch. But for Keith Noreika, the temporary head of the Office of the Comptroller of the Currency, his final temp day is just like any other Tuesday. Because he doesn’t plan on leaving.
Noreika’s unusual stint running the OCC, a top bank regulatory agency, as a “special government employee” enabled him to sidestep congressional vetting and ethics rules for members of the executive branch. But a watchdog group believes that unique status runs out today, and they want Noreika investigated for illegally overstaying his welcome.
“The more the Trump administration gets away with breaking the rules on ethics, the more they create precedents,” said Jeff Hauser, executive director of the Revolving Door Project at the Center for Economic and Policy Research, who filed the complaint against Noreika with the Treasury Department’s inspector general. “We’re only seeing the tip of the corrupt iceberg.”
Noreika was a corporate lawyer defending Wells Fargo, JPMorgan Chase, and others, until being selected to head the second most important bank regulator in the federal government. Instead of taking Noreika through the Senate confirmation process, the Trump administration named him “first deputy comptroller,” a non-confirmable position. When Comptroller Thomas Curry stepped down on May 5, Noreika slid into the top slot.
The position was considered temporary until Trump got a permanent replacement for Curry confirmed. This enabled Noreika to spend time at the OCC, learn key details of the bank examination process from the inside, and use that knowledge to assist corporate clients when he spun back out to Big Law. The temp gig would be a huge resume-builder, as Noreika could command mind-boggling legal fees with an OCC directorship under his belt.
But there was a catch: Federal ethics laws would create conflict of interest burdens in office, and the Trump ethics pledge would prohibit Noreika from communicating with OCC colleagues for a year after exiting, or from lobbying the agency for five years.
So administration officials found a loophole. If Noreika was made a “special government employee,” a status typically reserved for part-time members of advisory committees — or for Huma Abedin — he would be exempt from ethics statutes or the Trump pledge, enabling him to flow smoothly between the OCC and a law firm partnership without restrictions. The only condition? Special government employees can only work “one hundred and thirty days during any period of three hundred and sixty-five consecutive days,” per law.
The end of that 130-day cycle is September 12. This means that, according to critics, either today is Noreika’s last day, or he must transition to being a permanent federal employee, thereby subject to all applicable ethics laws both in office and after he leaves.
The nominee to take over the OCC, former OneWest Bank CEO Joseph Otting, just advanced through the Senate Banking Committee last week; he has yet to receive a vote on the Senate floor. Noreika has said he would remain acting comptroller until Otting was confirmed.
Hauser believes that’s not good enough. In his complaint to the inspector general, he contends that Noreika “is continuing to serve as a special government employee longer than legally permissible.”
OCC spokesperson Bryan Hubbard argued in comments to The Intercept that “the 130 days are business days not calendar days, which translates to sometime in November.” But the phrase “business days” appears nowhere in the statute. Hauser added that the reference to a “period of 365 consecutive days” clearly refers to a year, meaning that “the word ‘days’ is being used in a dictionary sense, rather than as shorthand for business days.”
When pressed on this, Hubbard claimed that the Office of Government Ethics has long interpreted the language in the U.S. Code to refer only to working days on which services were performed. He referred to this 2007 guidance from the OGE on how to count special government employee days.
The OGE does say there that the 130-day test reflects a “good-faith estimate” of days of service that can be exceeded to an unspecified degree. It adds that days when the employee only performed “de minimis” operations for an agency — like a one-minute telephone call — wouldn’t count toward the total. (A Kennedy-era presidential memorandum, by contrast, states that “a part of a day should be counted as a full day… and a Saturday, Sunday or holiday on which duty is to be performed should be counted equally with a regular work day.”)
A separate 2007 Office of Legal Counsel memo, denying a request to tally up hours worked in calculating days of service, states “The term ‘days’ of service … is better read to mean calendar days on which the SGE performed work for the agency.” It further says that “context also supports the conclusion that the term refers to calendar days.”
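To make the disagreement concrete, here is a minimal sketch of the two counting conventions in Python. It is purely illustrative: it assumes the May 5 start date reported above counts as day one and ignores federal holidays, so both endpoints are approximate rather than authoritative.

```python
from datetime import date, timedelta

import numpy as np

# Assumed start of Noreika's service as acting comptroller (from the article).
start = date(2017, 5, 5)

# Reading 1: 130 calendar days, counting the start date as day one.
# Lands around September 11-12, 2017, roughly when the complaint was filed.
calendar_end = start + timedelta(days=129)
print("calendar-day reading ends:", calendar_end)

# Reading 2 (the OCC's position): 130 business days, skipping weekends
# (federal holidays ignored in this sketch). Lands in early November,
# consistent with the "sometime in November" claim.
business_end = np.busday_offset(np.datetime64(start), 129, roll="forward")
print("business-day reading ends:", business_end)
```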
Continue Reading
Phroyd
10 notes
·
View notes
Text
EDC Las Vegas To Be Broadcast Across China via Tencent Video
LiveXLive and Tencent are partnering up in a big way. LiveXLive Media just announced a distribution agreement with Tencent Corp’s video division.
For the EDM world, the agreement gives Tencent Video exclusive rights to distribute Insomniac’s EDC Las Vegas throughout China. The music festival’s live stream is produced by LiveXLive.
Tencent is one of the biggest media companies in the world and China’s leading video platform, with 89 million subscribers and more than 900 million monthly active users.
Robert S. Ellin, LiveXLive’s Chairman and CEO, further explains:
Our agreement with Tencent marks a milestone as we continue to expand our global distribution partnerships. Last year, we streamed EDC Las Vegas to more than 10 million people. Our partnership with Tencent provides us with a unique opportunity to deliver one of the best dance music events in the world to an even larger audience.
People outside of China can access the live stream via LiveXLive.com, or through LiveXLive’s mobile and OTT apps, available for iOS, Android, Roku, Amazon Fire TV, and Apple TV. Footage from EDC Las Vegas should be more accessible than ever.
LiveXLive also recently confirmed a distribution agreement with Synergy Global Entertainment (SGE), which will bring select performances from 11 upcoming SGE festivals and events to the platform.
Source: Digital Music News | Photo via Jake West for Insomniac Events
This article was first published on Your EDM. Source: EDC Las Vegas To Be Broadcast Across China via Tencent Video
0 notes
Text
Subscribe To My YouTube‼️‼️‼️‼️
Like, Share, Comment On All Videos‼️‼️‼️
#youtubechannel#youtubegaming#youtuber#twitchgamer#twitchgaming#twitchpromotion#money#nba2k19#freemoney#nba#nba2k20#nba2k#madden#madden20#madden nfl 20#nfl#ps4#psn#playstation
0 notes
Text
2 stocks I’d buy today with high growth prospects and dividend income
Today I’d like to discuss the outlook for PayPoint (LSE: PAY), the payment processing firm and The Sage Group (LSE: SGE), the FTSE 100 accounting and enterprise software group.
I regard both of them as shares with robust growth prospects that may deserve a place in a diversified portfolio.
Organic growth and dividends
Many of our readers will have noticed PayPoint’s yellow logo, or used its services, at their local convenience store or supermarket.
On 24 January, the company released a trading update and reported a “solid quarter”. Net revenue from its core UK retail services business increased by 4%.
And there was plenty more good news. Its core business, over-the-counter utility bill payments, is a steady earner. Its aggressive rollout of the updated PayPoint One terminals has gone better than expected: on average, shops pay a weekly service fee of £14.89 to use PayPoint One. Its flagship EPoS Pro terminal, launched over a year ago, is now in 520 convenience retailers, and management is hopeful of growing that number.
The group, which boasts a 43% share of the UK convenience sector, is working to up its game in mobile payments, offering customers plenty of choice, from app to in-store. And its parcel delivery and collection service, Collect+, is profitable as more customers turn to convenience stores to receive and/or return purchased items.
The UK click-and-collect market handles about 120m parcels a year, a number that online retail association IMRG expects to double within the decade, so Collect+ is likely to contribute to the bottom line with growing momentum.
For income investors, the group’s dividend yield is almost 5.5% and PayPoint also has a policy of paying out special dividends.
And the firm isn’t only exposed to the UK market. It has similar operations in Romania. After Brexit, this small but profitable base could serve as an important gateway into the EU and further contribute to the bottom line.
Subscription-based monetisation
Investors are increasingly paying attention to software-as-a-service (SaaS) companies with high recurring revenues and strong client retention.
In January, the UK’s largest listed software business group, Sage, released its trading update for the three months to 31 December. CEO Steve Hare noted the “strong start to FY19” and focused on the “high-quality subscription and recurring revenue” as the group worked on “becoming a great SaaS business.”
Organic revenue grew by 7.6% to £465m. The solid results were driven by 28% growth in subscription revenues. North American operations were also up 10.4%, turning over £154m in sales.
Most of Sage’s customers are small and medium-sized enterprises that tend to stay as customers for years. Therefore the group’s revenues are quite predictable, a big attraction for investors who look for reliable companies.
Meanwhile Making Tax Digital (MTD), the UK government’s flagship scheme to move the tax system online, will begin to affect most businesses from April. MTD-compatible software enterprises, such as Sage, will help many customers become MTD-ready. And this process could see many of them inclined to stay on as long-term customers.
The company’s shares now trade around 660p, about 25% below a high of 820p seen in January 2018. This lower price may offer long-term investors a good entry point into the shares as I believe they’d be rewarded handsomely within three to four years.
tezcang has no position in any of the shares mentioned. The Motley Fool UK owns shares of PayPoint. The Motley Fool UK has recommended Sage Group. Views expressed on the companies mentioned in this article are those of the writer and therefore may differ from the official recommendations we make in our subscription services such as Share Advisor, Hidden Winners and Pro. Here at The Motley Fool we believe that considering a diverse range of insights makes us better investors.
0 notes
Link
Twitter is and remains an exciting medium for information, even if some politicians and journalists see it differently! We stand by our view: startups, founders, entrepreneurs and investors should absolutely be on Twitter. Why? Because Twitter is a brilliant medium for fast exchange. Here we present more than 50 active and influential venture capital investors who are on Twitter and well worth following, in alphabetical order.

Over 50 VCs that founders should definitely follow

Jan Alberti / @Jan_Alberti Managing Partner @bmp_ventures #VC #BusinessAngel #investor: #SaaS #mobile #mobility #automotive and any other tech / auto & bike geek
Dominik Alvermann / @domalv Investor at @ActonCapital, Munich.
Jörg Binnenbrücker / @bibrue #VentureCapital(ist) from Cologne/ Berlin/ Germany for tech, Founding Partner #capnamic #dumontventure & big time football supporter (1.FC Köln)
Otto Birnbaum / @ojbirnbaum VC @partechpartners supporting visionary entrepreneurs in building outstanding companies! Lives in #berlin loves #changingtheworld & #kitesurfing
Pawel Chudzinski / @pawell Early-stage (seed, small A) #VC @pointninecap; #b2b #saas #marketplaces #crypto
Louis Coppey / @louicop VC @PointNineCap | I like and tweet mainly about SaaS, ML and random stuffs related to politics
Filip Dames / @filipdames Entrepreneur, investor and professional optimist. Founder @cherryventures, ex @zalando, @bvb fan #foundersfirst
Roland Dennert / @rgwd67 Tech Investor, Marathon Runner, Managing Partner at Cipio Partners
Iskender Dirik / @iskenderdirik MD/CEO Microsoft ScaleUp/Microsoft for Startups • Venture Partner EQT Ventures • #VC + #Entrepreneur • Loving the intersection of #AI and #Creativity
Rouven Dresselhaus / @RDresselhaus Founding Partner @CavalryVentures supporting (pre)Seed tech startups
Robert Gallenberger / @Gallrob Industrial Tech at btov partners. Early stage investments in innovation across the industrial value chain (IIoT, Data&Analytics, Additive Manuf., robotics,..)
Stefan Glänzer / @StefanGlaenzer Co-founder since ever, entrepreneur turned angel turned VC turned … In waiting line. Like it
Markus Grundmann / @markusgrundmann Startups, entrepreneurship and technology. Partner at B2B SaaS VC @senovovc
Henrik Grosse Hokamp / @HenrikGrH Dreamer of a Tech Utopia. Investor @partechpartners
Alexander Hahn / @HAHNmeint Berlin and the world, 31, early-stage investor at seed+speed Ventures, former Young Liberals (JuLi) chairman, FDP federal board member and true believer. Here: personal opinions only.
Luis Hanemann / @LuisHanemann Partner @eventuresVC, Founder of digital marketing agency Trust Agents. Passion for change.
Fabian Hansmann / @FabianHansmann Managing Director Netz Holding; Co-Founder Netzsieger, Netzshopping, RatePAY, Steganos, Talentory; Seed investor MisterSpex, Seedcamp…
Benedikt Herles / @BenediktHerles Venture Capitalist & Author
Uwe Horstmann / @uwehorstmann #venturecapital investor, current interests #digitalhealth and #industrie40 lived in Saudi Arabia, Mexico, Eastern Africa, North Korea, Japan West Wing
Olaf Jacobi / @olafjacobi Venture Capital Investor @Capnamic, Business Angel, Repeat Entrepreneur, Athlete
Paul Jozefak / @pjozefak Founder at Receeve with thoughts on VC, tech & startups. Previous gigs: Otto Group, Neuhaus Partners, SAP Ventures, etc. Thoughts are purely propoganda!
Christoph Janz / @chrija Entrepreneur turned #saas investor at @pointninecap. Seed investor in @zendesk, @algolia, @typeform & others. Looking for the next SaaS
Massi Kamali / @MasoudMassi VC @WestTechVC
Jens-Philipp Klein / @jensphilipp Partner at Atlantic Labs, based in Berlin.
Alexander Kölpin / @AlexKoelpin Early Stage Tech Investor, active as PreSeed & SeedVC Investor in Berlin, Germany
Maurice Kügler / @maurice_kuegler VC @htgf – investing in early stage tech start-ups
Ralph Eric Kunz / @RalphEKunz Managing Director at Catagonia – Berlin-based VC firm investing in mobile internet, social network apps, location and internet based services.
Alex Krausse / @alex_krausse – Views are my own! – Investor @P7S1Group // Ex @CherryVentures & @Zalando // @WHU_edu @HerthaBSC // Into #b2c #marketplaces #ecommerce #DNVBs
Fabian Leipelt / @fleipelt @WestTechVC & @pfe_incubator, prev @Groupon
Koen Lenssen / @koenlenssen Venture Capital @ Tengelmann Ventures / Dutchie in Germany / Runner / Views are my own
Jasper Masemann / @Jasamaze Principal at hvventures.com, loving B2B & complicated solutions to complex problems
Simon Math / @SimonMathVC VC @HTGF – investing in SaaS, AI, Consumer and Quantum tech @ Berlin Office
Gabriel Matuschka / @gmatuschka Partner at Fly Ventures
Christian Meermann / @chrismeermi Entrepreneur and investor. Founder of @cherryventures previously: @zalando @bcg @whu
Christian Miele / @miele Partner eventuresVC | Board of Trustees @Startupverband | 4x operator & founder | 100% startup blood
Dominik Mühl / @dominikmuehl Investor, tech lover … nsfw account
Barbod Namini / @b_namini Venture Capitalist @ Holtzbrinck Ventures. Love tech gadgets, no matter how useless.
Friedrich A. Neuman / @faneuman founder/ceo @makersdo, purpose over profit, long termism over short termism, building lasting cultures
Max Niederhofer / @maxniederhofer General Partner, Sunstone. @lastfm @onefinestay @getyourguide @travelperk @dubsmash @lillydoo_de & more. Prev @accel @qwerly AMDG/H+. Twitter hiatus interruptus
Ciarán O’Leary / @ciaranoleary VC @blueyard
Mathias Ockenfels / @ockenrock I love network effects @Speedinvest | Alumni: @Uniplaces @PointNineCap @Naspers @ricardo_ch | Passion for startups | VAMO
Marcus Polke / @polkemarcus Venture Partner @henqinvest, hands-on advisor, avid long distance runner. Mentor @Xpreneurs_incub & @TechFoundersMUC. Ex @ActonCapital, @Amazon, @AOL, @Abebooks
Benedict Rodenstock / @Benedetto09 Observing and Commenting: The Digital Economy and beyond: on current affairs in Germany, Europe, and the world. Politics, Economics, Business.
David Rosskamp / @drosskamp Venture Capitalist & Managing Partner at June Fund | drosskamp.com | @Junefund
Nikolas Samios / @BerlinVC Venture Capital Entrepreneur | Founder + CEO @CoopHQ & @PropTech1 | 200+ VC deals | co-author http://DEALTERMS.VC | Music Geek, 25% Greek, Father of 2
Pawel Schapiro / @pawel_vc Partner @ TA Ventures. All things VC and Tech. Focused on Mobility & Digital Health. Proud dad.
Daria Saharova / @daria_saharova Early stage investor at @VitoOneVC
Tobias Schirmer / @TobiasSchirmer VC in Berlin @join_capital #enterprisetech #industry40
Carlo Schmid / @casch31 VC @CherryVentures – Supporter @Eintracht #SGE #Pokalsieger
Simon Schmincke / @simonschmincke VC @creandum. Fishing with flies. Dad and Developer. Believer in Data. Stuck on an Airplane.
Simon Schneider / @simon_schneider Start-Ups, VC, Seed, FCB, Karneval, Rheinland
Sebastian Schüller / @sbstnschllr Early Stage #VC @Capnamic interested in #AI #data #privacy #mobile #software #venturecapital @Google, @dogpatchlabs, @AaltoITP, @WiSoUniCologne alumnus
Jan Sessenhausen / @jsessenhausen Software & Tech Investments at TEV | Tengelmann Ventures, Basketball and other random stuff
Samuli Sirén / @SamuliRedstone Managing Director @RedstoneVC | Redstone manages multiple corporate venture funds. We connect industry leaders and ambitious founders.
Peter Specht / @specht_p VC @Creandum // X-Bain, Google // Passionate about tech and football // Berlin // Munich // Stockholm
Stefan Tirtey / @toetoe MD at CommerzVentures, fintech investor in @etoro, @mambu_com, @marqeta, @omnyway, @fraugster, and @payworks. Early investor in @Soundcloud, views are my own.
Andreas Thümmler / @Athuemmler Managing Partner at ACXIT Capital Partners, an independent, privately owned corporate finance firm based in Frankfurt / Germany.
Dominik Tobschall / @DominikTo Developer & Lawyer; ¯\_(ツ)_/¯; Early-stage VC @targetpartners; @fruux, @sabredav; This is not legal advice.
Alexander von Frankenberg / @Justdoitalex Investing in high-tech start-ups in Germany and running around the world @runaroundaworld
Berthold von Freyberg / @bvfreyberg 1989, 1993 ETH Zürich Diploma, PhD; 1997 Microsoft Office PM, Seattle; 1999 TVM Capital, Munich; since 2000 Target Partners, Munich
Federico Wengi / @federicowengi Partner at @pauaventures. Food and Basketball enthusiast. Europe’s Citizen. FC Internazionale fan.
Marie Wennergren / @mariewennergren Early stage investor @FlyVC, earlier PM @skiveapp. Tech & techno enthusiast
Chris White / @ChrisWeiss84 Tech enthusiast and early-stage #VC @CreathorVenture
Jason Whitmire / @jason_whitmire Early stage VC @blueyard, inspired to work with founders every day. Eyeing Kepler-186f & -452b.
Andreas Winiarski / @andwin_official Chairman & Founding Partner @awecapital, Venture Partner @earlybirdvc #supercalifragilisticexpialidocious

Photo (top): Shutterstock
0 notes