Microsoft acquires JClarity to bolster Java workloads on Azure

Source: venturebeat.com

Microsoft today announced that it has acquired JClarity, a leading contributor to the AdoptOpenJDK project, for an undisclosed amount. In a blog post published this morning, VP of program management for developer tools and services John Montgomery said the purchase will bolster Microsoft’s Azure cloud computing platform by increasing performance for Java workloads.

“Microsoft Azure and JClarity engineers will be working together to make Azure a better platform for our Java customers and internal teams, improving the experience … of the platform for Java developers and end-users,” said Montgomery. “At Microsoft, we strongly believe that we can do more for our customers by working alongside the Java community … The [JClarity] team, formed by Java champions and data scientists with proven expertise in data-driven Java Virtual Machine (JVM) optimizations, will help teams at Microsoft to leverage advancements in the Java platform.”

Microsoft and JClarity aren’t exactly strangers. Since June 2018, Microsoft has sponsored the AdoptOpenJDK project to help build binaries of OpenJDK — a free and open source implementation of the Java Platform Standard Edition — for platforms such as Linux and Windows. Microsoft is a platinum-level sponsor of AdoptOpenJDK through 2020, and it recently worked with the project to build and deliver a Java installer for its popular Visual Studio Code lightweight code editor.

JClarity CEO Martijn Verburg — now a Java principal engineering group manager at Microsoft — said JClarity will continue to contribute to various Java communities post-purchase, adding that the company’s support team will reach out to customers in the coming weeks to provide guidance on “product and support matters.”

“It’s always been JClarity’s core mission to support the Java ecosystem. We started with our world-class performance tooling and then later became a leader in the AdoptOpenJDK project,” said Verburg in a statement. “Microsoft leads the world in backing developers and their communities, and after speaking to their engineering and program leadership it was a no-brainer to enter formal discussions. With the passion and deep expertise of Microsoft’s people, we’ll be able to support the Java ecosystem better than ever before.”

Microsoft’s acquisition of JClarity comes as the Redmond tech giant is increasing its usage of Java. Azure’s open source analytics service HDInsight and Minecraft both use Java, as do big-name Azure clients like Adobe, Daimler, and Société Générale.

According to an April report published by SlashData, approximately 7.6 million developers actively code using Java worldwide.

Microsoft hires former Siri boss for AI leadership role

After leading Apple’s Siri team for six and a half years, Bill Stasior left the company for greener pastures elsewhere. Now he’ll be traversing Microsoft’s bucolic meadows, where The Information reports that he’ll head up an artificial intelligence group for the Redmond-based software giant.

Stasior’s latest resume confirms that he joined Microsoft this month as a corporate VP of technology, working under CTO Kevin Scott on unspecified projects, and today’s report suggests that he’ll “work to help align technology strategies across the company.” Given Stasior’s background, that could mean either a wholesale revisiting of Microsoft’s digital assistant Cortana, which has recently started to fade out of the company’s consumer offerings, or something else entirely.

While Stasior presided over years of Siri’s well-documented and troubled history, he was also the executive responsible for directing its under-publicized growth within Apple. He came on board a year after Siri launched for the iPhone 4S and helped to grow the early 70-person engineering team to a group of 1,100 people, including acquiring and integrating 10 small technology companies within Apple. Stasior also takes credit for “bringing modern machine learning to Siri and Apple” and leading the team to expand Siri’s footprint to seven platforms and over 30 languages.

Apple began consolidating its machine learning and Siri teams under former Google AI head John Giannandrea in July 2018 and has since been hiring other AI experts to advance its work in the area. Stasior’s departure from Apple was reported in February, but his resume suggests that he stayed with the company until May, possibly in a non-competitive consulting role.

Before coming to Apple for Siri, Stasior spent six and a half years leading Amazon’s A9 team, which was responsible for providing the core search, advertising, personalization, and image recognition services used by Amazon; before that, he was involved in search and navigation work at Amazon and AltaVista. Microsoft’s Bing search engine has remained a distant rival to Google’s core service and, like Cortana, could stand to be improved with some outside expertise and perspective.

Twitter open-sources Rezolus telemetry tool

Above: Twitter’s profile page on Twitter.com

It just became easier to diagnose runtime performance issues at scale, thanks to Twitter. The tech giant today open-sourced Rezolus, a “high-resolution” telemetry agent designed to uncover anomalies and utilization spikes too brief to be captured by conventional observability and metrics systems. Twitter says it has been running Rezolus in production for over a year and will continue development in the public GitHub repository.

“Rezolus provides a collection of signals to help us make sense of fine-grained runtime behavior. We’ve found it particularly helpful in understanding and optimizing performance,” wrote Twitter staff site reliability engineer Brian Martin in a blog post. “With a single agent, we’re able to get telemetry from a wide range of sources. To our knowledge, no other open source project offers such comprehensive insight in a single package.”

According to Martin, Rezolus arose from an internal need to observe systems performance on a fine-grained timescale. Twitter engineers running high-throughput synthetic benchmarks frequently ran into seconds-long performance anomalies, which the company’s existing telemetry solutions failed to reflect because their sampling rates were too low relative to the length of said anomalies. Sampling theory dictates that, to accurately capture a burst’s intensity, a system must take at least two samples per burst, so the sampling interval can be no longer than half the burst’s duration.

By contrast, Rezolus can precisely measure performance degradation on a fine timescale.
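
To make the sampling constraint concrete, here is a minimal back-of-the-envelope sketch (ours, not Twitter’s) relating burst duration to the minimum sampling rate needed to capture it; it also checks the 10Hz/200ms figures cited below.

```python
# Minimal sketch (not from Twitter's post): the sampling interval must be
# at most half the burst duration, i.e. at least two samples per burst,
# so the minimum sampling rate is 2 / burst_duration.

def min_sampling_rate_hz(burst_duration_s: float) -> float:
    """Minimum sampling rate (Hz) needed to reliably capture a burst."""
    return 2.0 / burst_duration_s

print(min_sampling_rate_hz(0.2))  # 10.0 -- matches the 10Hz/200ms Rezolus figures below
print(min_sampling_rate_hz(1.0))  # 2.0  -- a one-second anomaly needs only 2Hz
```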

Rezolus samples at a configurable rate and aggregates metrics on a minutely basis, letting developers match the resolution to spike length. Toggleable plug-in samplers enable it to collect telemetry from a variety of sources, including counters and gauges exposed by the Linux kernel for CPU, network, and disk utilization. Additionally, Rezolus can tap hardware and software performance counters to measure things like the number of cycles per instruction, cache hit rates, and branch predictor performance. And the tool supports eBPF (extended Berkeley Packet Filter) for kernel instrumentation using kprobes and tracepoints, allowing it to capture metrics like scheduler latency, block I/O size distribution, file system latency, and more.
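
Rezolus itself is written in Rust and its samplers are considerably more sophisticated; purely to illustrate the polling idea behind a kernel-counter sampler, here is a hypothetical Python sketch (the /proc/stat layout is standard Linux, but the structure and names are ours, not Rezolus’s):

```python
import time

def read_cpu_jiffies() -> int:
    """Sum the jiffy counters on the aggregate 'cpu' line of /proc/stat (Linux)."""
    with open("/proc/stat") as f:
        fields = f.readline().split()  # ['cpu', user, nice, system, idle, ...]
    return sum(int(v) for v in fields[1:])

def sample_cpu(rate_hz: float = 10.0, samples: int = 20) -> list:
    """Poll the kernel counter at a fixed rate and keep per-interval deltas,
    which is what lets short bursts show up instead of being averaged away."""
    interval = 1.0 / rate_hz
    deltas, prev = [], read_cpu_jiffies()
    for _ in range(samples):
        time.sleep(interval)
        cur = read_cpu_jiffies()
        deltas.append(cur - prev)  # work done during this sampling window
        prev = cur
    return deltas

if __name__ == "__main__":
    print(sample_cpu())  # spikes appear as outlier deltas at 10Hz resolution
```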

At 10Hz sampling, Rezolus can capture consecutive bursts lasting 200 milliseconds or more while consuming no more than 15% processor utilization and 60MB of memory. In one recent incident in which several Twitter products were throttled by a backend service, it revealed bursts of more than five times the baseline traffic, during which processor utilization hit 100%.

“Open-sourcing Rezolus marks an important milestone for the project,” wrote Martin. “We hope that Rezolus will be useful to others outside of Twitter, and look forward to building a community around it.”

GitHub expands token scanning to Atlassian, Dropbox, Discord, and other formats

Above: GitHub CEO Nat Friedman. Image Credit: GitHub

Roughly a year ago, GitHub expanded token scanning — a feature that identifies cryptographic secrets so they can be revoked before malicious hackers abuse them — to support a wider range of credential types. More recently, the Microsoft-owned company teamed up with third-party cloud providers to enable scanning on all public repositories, and today it revealed that new partners will soon enter the fray.

Starting sometime this week, Atlassian, Dropbox, Discord, Proctorio, and Pulumi will join Alibaba Cloud, Amazon Web Services, Azure, Google Cloud, Mailgun, NPM, Slack, Stripe, and Twilio in facilitating scanning for their token formats. Now, if someone accidentally checks in a token for products like Jira or Discord, the corresponding partner will be notified about a possible match and receive metadata, including the name of the affected code repository and the offending commit.

As GitHub product security engineering manager Patrick Toomey explains in a blog post, most commits and repositories are scanned within seconds of becoming public. (Token scanning doesn’t currently support private codebases.) When a match to a known unencrypted SSH private key, GitHub OAuth token, personal access token, or other credential is detected, the appropriate service provider is notified, giving it time to respond by revoking tokens and notifying potentially compromised users.
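
GitHub hasn’t published the details of its matching engine; as a rough illustration of format-based token scanning, here is a hypothetical sketch (the provider names and token patterns below are invented for illustration, not real formats):

```python
import re

# Hypothetical provider-registered token formats -- real partners register
# their own precise patterns with GitHub; these are invented for illustration.
TOKEN_PATTERNS = {
    "example-cloud": re.compile(r"\bexc_[A-Za-z0-9]{32}\b"),
    "example-chat": re.compile(r"\bxch-[A-Za-z0-9]{24}\b"),
}

def scan_commit(repo: str, commit_sha: str, text: str) -> list:
    """Return one notification record per match, carrying the metadata the
    article describes: the affected repository and the offending commit."""
    hits = []
    for provider, pattern in TOKEN_PATTERNS.items():
        for match in pattern.finditer(text):
            hits.append({
                "provider": provider,
                "repository": repo,
                "commit": commit_sha,
                "token_prefix": match.group()[:8],  # never log the full secret
            })
    return hits

print(scan_commit("octo/app", "a1b2c3d", "config: key=exc_" + "A" * 32))
```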

“Composing cloud services like this is the norm going forward, but it comes with inherent security complexities,” wrote Toomey. “Each cloud service a developer typically uses requires one or more credentials, often in the form of API tokens. In the wrong hands, they can be used to access sensitive customer data — or vast computing resources for mining cryptocurrency, presenting significant risks to both users and cloud service providers.”

GitHub also announced today that it has sent partners more than a billion token matches since October 2018.

The milestone and new token scanning partnerships come months after GitHub revealed that it had acquired Dependabot, a third-party tool that automatically opens pull requests to update dependencies in popular programming languages. Around the same time, GitHub made dependency insights generally available to GitHub Enterprise Cloud subscribers, and it broadly launched security notifications that flag exploits and bugs in dependencies for GitHub Enterprise Server customers.

In May, GitHub revealed beta availability of maintainer security advisories and security policy, which offers a private place for developers to discuss and publish security advisories to select users within GitHub without risking an information breach. That same month, the company said it would collaborate with open source security and license compliance management platform WhiteSource to “broaden” and “deepen” its coverage of and remediation suggestions for potential vulnerabilities in .NET, Java, JavaScript, Python, and Ruby dependencies.

MIT CSAIL’s Minerva video protocol reduces buffering and pixelation

Above: A diagram outlining MIT CSAIL’s Minerva protocol. Image Credit: MIT CSAIL

Video viewing is on an upswing, thanks in large part to the relative ubiquity of speedy, affordable internet connectivity. By 2021, a million minutes (17,000 hours) of video content will cross worldwide networks every second, according to Cisco. And it’s estimated that video streams accounted for 75% of all traffic in 2017, a share anticipated to rise to 82% by 2022.

In an effort to develop tech suited to delivering tens of thousands of petabytes of video each month, scientists at the Massachusetts Institute of Technology’s Computer Science and Artificial Intelligence Laboratory (CSAIL) recently investigated a system that leverages video player state data and file characteristics to optimize congestion control for fairness across viewers. (In this context, “fairness” refers to how similar the viewing experience is for different users.) They report that their end-to-end protocol, Minerva, substantially cuts down on both buffering and pixelation without requiring changes to underlying infrastructure.

“The growth of video traffic makes it increasingly likely that multiple clients share a bottleneck link, giving video content providers an opportunity to optimize the experience of multiple users jointly,” wrote the researchers in a preprint paper. “But today’s transport protocols are oblivious to video streaming applications and provide only connection-level fairness.”

Above: A comparison between video streaming with Minerva (right) and the baseline (left). Image Credit: MIT CSAIL

As the team explained further, most video content providers are beholden to bandwidth decisions made by congestion-control algorithms like Reno and Cubic, which seek connection-level fairness by giving competing flows an equal share of a link’s capacity. As a result, providers fine-tune viewing experiences in isolation rather than allocating bandwidth among clients, and they don’t take into account factors like genre, screen size, screen resolution, device type, or playback buffer size.

By contrast, Minerva dynamically adjusts video streaming rates for fairness even without explicit information about competing video clients. When several of these clients share a bottleneck link, their rates converge to a bandwidth allocation that doesn’t interfere with other internet traffic.

Specifically, Minerva implements distributed algorithms that capture the relationship between bandwidth and quality of experience. Each client computes a dynamic weight for its video over the course of playback, based on network conditions and other variables, and the bottleneck bandwidth is allocated to clients in proportion to those weights.
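
The paper’s actual weight-update rule is more involved; purely as a sketch of the weighted proportional allocation it converges to (client names and numbers below are illustrative):

```python
def allocate_bandwidth(link_capacity_mbps: float, weights: dict) -> dict:
    """Split a bottleneck link across clients in proportion to their dynamic
    weights -- the weighted-fairness idea described above."""
    total = sum(weights.values())
    return {client: link_capacity_mbps * w / total for client, w in weights.items()}

# A client close to rebuffering gets a high weight; one already at its top
# resolution gets a low one.
shares = allocate_bandwidth(100.0, {"phone": 0.5, "tv_4k": 2.0, "laptop": 1.0})
print(shares)  # {'phone': 14.28..., 'tv_4k': 57.14..., 'laptop': 28.57...}
```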

In experiments involving a real-world residential Wi-Fi network and two Amazon Web Services instances connected to eight clients, the researchers report that a quarter of the time Minerva improved quality for 15-32% of the videos by “an amount equivalent to a bump in resolution from 720p to 1080p.” Moreover, they say the protocol reduced total rebuffering time by an average of 47%, even with unpredictable arrivals and departures, by allocating bandwidth to videos at risk of rebuffering.

“If five people in your house are all streaming video at once, [Minerva] can analyze how the various videos’ visuals are affected by download speed,” said MIT professor Mohammad Alizadeh, a senior author on a related paper that’s scheduled to be presented at the Association for Computing Machinery’s Special Interest Group on Data Communications (SIGCOMM) in Los Angeles later this month. “It then uses that information to provide each video with the best possible visual quality without degrading the experience for others.”

Tim Cook tells Donald Trump that U.S. tariffs on Chinese imports could hurt Apple, help Samsung

Reuters, August 18, 2019, 08:44 PM

(Reuters) — President Donald Trump said on Sunday that he had spoken with Apple’s Chief Executive Tim Cook about the impact of U.S. tariffs on Chinese imports as well as competition from South Korean company Samsung.

Trump said Cook “made a good case” that tariffs could hurt Apple, given that Samsung’s products would not be subject to those same tariffs. Tariffs on an additional $300 billion worth of Chinese goods, including consumer electronics, are scheduled to go into effect in two stages on September 1 and December 15.

By contrast, the United States and South Korea struck a trade agreement last September.

“I thought he made a very compelling argument, so I’m thinking about it,” Trump said of Cook, speaking with reporters at a New Jersey airport.

U.S. stock futures rose upon opening on Sunday after Trump’s comments. In addition to his comments on Apple, Trump said on Twitter earlier in the day that his administration was “doing very well with China.”

Apple’s MacBook laptops and iPhones would not face the additional tariffs until December 15, but some of the company’s other products, including its AirPods, Apple Watch and HomePod, would be subject to the levies on September 1.

Apple was not immediately available for comment outside normal business hours.
