VideoBlocks goes beyond moving pictures and announces its stock photo marketplace

VideoBlocks made a name for itself a few years ago when it launched its subscription-based stock video service and, later on, a members-only marketplace for buying one-off clips, too. Now the company is following a very similar model for its expansion into the stock photo space. The new service is now open for photographers who want to contribute their images and will open for users later this year.

As with its video offering, VideoBlocks plans to return all the money it makes from selling images in its marketplace directly to the photographer. It can offer this 100 percent commission because its business model relies more on selling subscriptions than one-off photos. This has worked out well for videos, and the company's CEO TJ Leonard tells me that VideoBlocks now has 200,000 subscribers and that the marketplace has paid content creators $6 million since its launch in 2015. Its video library now consists of four million videos, and Leonard expects that number to hit six million by the end of the year.

Another interesting aspect of the new photo marketplace is that every image will cost a flat $3.99.

As Leonard told me, this is a big change but not much of a change at all. The company, after all, is following its video playbook here, but in reverse. It's first opening up this marketplace for photos for its members and plans to use the relationship it builds with photographers now to build a member library for a future subscription service.

“We have a model that is so disruptive for both the contributors and members that we believe we can build what is not just a me-too marketplace,” Leonard told me.

The motivation for expanding into photos is pretty straightforward. Now that the company has built a large enough user base for its video service, those users are also looking for photos. Currently, the company offers a small set of images through its GraphicStock service, but that product has traditionally focused on illustration and vector graphics, with less than 100,000 photos in its library.

Stop and search the flowers in this wily Easter puzzle

Remember all that time you spent looking for the cat among the owls? And the panda in the crowd of snowmen?

Get ready to do it all again with the latest illustration from Gergely Dudás, known as Dudolf. His Where’s Wally-style illustrations are extremely vexing for things that are so cute.

It’s like he doesn’t want us to just sit back and enjoy the chocolate.

Admittedly, we reckon this one is easier than some of his others, but judging by the Facebook comments on it a lot of people think it’s a doozy (linked here, beware of spoilers in the comments).

And if you’ve really had enough, you can find the solution here.

Happy hunting!

If Animals Could Talk (10+ New Pics)

Illustrator Jimmy Craig is back with even more of his hilarious comics from his They Can Talk series (previously here), showing what animals would say if they could communicate the way humans do.

So, if you struggle to explain why your cat keeps on breaking glasses, or you can’t get your head around why your pet is always disturbing your sleep in the morning, look no more for answers because Craig’s comics have got it all covered!

And they do so in a way that not only makes you see the world through the eyes of an animal; they’re also sure to crack you up.

Here’s how Apple could bring MagSafe back to MacBooks

Apple's new MacBook Pros dropped the MagSafe for USB-C.
Image: lili sams/mashable

The new MacBook Pros dropped a weight class and gained a Touch Bar, but they also killed the beloved MagSafe charger.

Instead of a charging port that gently breaks away when tugged (or tripped over), Apple replaced it with USB-C. Sure, USB-C is more versatile (it’s reversible, and capable of transferring data and video in addition to power), but it’s no MagSafe.

A newly publicized patent, however, suggests Apple could have a fix in the works.

According to patent 20170093104, Apple is entertaining the idea of a MagSafe-to-USB-C adapter for MacBooks, which would bring back the magnetic charging functionality (sorta).

From what we can tell, Apple’s MagSafe adapter would actually work like an existing product: Griffin’s BreakSafe Magnetic USB-C Power Cable. The only difference would be, you’d be using a real MagSafe plug instead of Griffin’s magnetic plug, which has been called flimsy.

An illustration of the MagSafe adapter. One part takes the MagSafe plug and the other goes into the MacBook.

Image: screenshot: USPTO

Though Apple’s only ever used MagSafe in its MacBooks, the adapter wouldn’t be limited to laptops. The patent lists a slew of potential uses for a MagSafe adapter:

“Embodiments of the present invention may provide adapters that may connect to connector receptacles on various types of devices, such as portable computing devices, tablet computers, desktop computers, laptops, all-in-one computers, wearable computing devices, cell phones, smart phones, media phones, storage devices, portable media players, navigation systems, monitors, power supplies, adapters, remote control devices, chargers, and other devices.”

In theory, Apple could create a bunch of different MagSafe adapters with different male-end plugs, such as Lightning and micro USB, for devices such as iPhones and Android phones.

It’s not a great or elegant solution (no dongles will ever be), but it could save your MacBook from doom. I can’t tell you how many times MagSafe has saved my old 2014 MacBook Pro from damage.

Still, don’t get your hopes up just yet. It’s just a patent, and patents rarely morph into anything beyond elementary drawings.

How Samsung’s new voice assistant, Bixby, is different from Siri

Image: ILLUSTRATION by Ambar Del Moral/Mashable

Samsung has a new voice. And it has world-changing ambitions.

In the upcoming Galaxy S8, users will find an extra button on the left side of the phone, just below the volume controls. Pressing it will activate Bixby, Samsung’s new voice assistant. Once activated, Bixby will help you navigate what’s arguably the most sophisticated piece of technology you own: the smartphone in your hand.

If Samsung gets its wish, though, Bixby will eventually do much more than just help you order Lyfts or set up complex calendar appointments. The long-term vision is for Bixby to act as a kind of uber-interface for all of Samsung’s products: TVs, wearables, washing machines, even remote controls.

Samsung designed Bixby with a specific goal in mind, one that veers away from its fellow voice assistants: Apple’s Siri, Amazon’s Alexa, Microsoft’s Cortana and the Google Assistant. Those platforms were generally built to help users quickly perform common tasks (“Remind me to buy milk”) and perform search queries (“What’s the capital of Brazil?”). Bixby, on the other hand, is all about making the phone itself easier to use, replicating the functions of many apps with voice commands.

Yes, Siri et al. already do that to a certain extent (you can easily set a reminder with your voice, for example), but the voice integration typically only handles the basics. The goal of Bixby is to voice-enable every single action in an app that you’d normally do via touch, starting with Samsung’s apps. So, not just “set a reminder to buy pickles at 6 p.m.,” but “Set a reminder on my Shopping List to buy pickles at 6 p.m. and make it repeat every week, then share the list with my wife.”

Bixby speaks

Injong Rhee, CTO of Samsung Mobile and the architect behind Bixby, says the voice assistant is nothing short of an “interface revolution,” freeing users from hunting down hidden functionality within menus and hard-to-find screens.

“Bixby is an intelligent user interface, emphasis… on ‘interface,'” Rhee says. “A lot of agents are looking at being knowledgeable, meaning that you can ask questions like, ‘Who’s president of the U.S.?’ A lot of these are glorified extensions of search. What we are doing with Bixby, and what Bixby is capable of doing, is developing a new interface to our devices.”

Bixby architect Injong Rhee, CTO of Samsung Mobile.

Image: Pete Pachal/Mashable

Although Bixby makes its debut on the Galaxy S8, it won’t stay confined to phones for long. Rhee sees the Bixby button eventually spreading to all kinds of smart-home devices, from TVs to refrigerators to air conditioners.

“Anywhere there is an internet connection and a microphone, Bixby can be used,” he says. “There is some technology in the device, but a lot of it lives in the cloud. That’s why the range of devices goes beyond just a smartphone. It means it can be in any device we produce.”

Samsung began work on Bixby about 18 months ago, Rhee says. It grew out of the company’s S Voice tool, which has been on Samsung phones since 2012. (The timing might explain why Samsung’s smart fridge, announced right around then, failed to deliver on its planned integration with Alexa.) S Voice hadn’t progressed much over the years, but last year Samsung acquired the much-hyped Viv Labs and its sophisticated assistant, a strong indicator of the company’s renewed interest in voice control. However, Rhee says Viv’s technology is planned for future updates to Bixby and doesn’t have a role in the initial release.

The name Bixby came out of Samsung’s focus groups, but it was actually the third choice overall. It was the top pick among millennials (a demographic the company is specifically targeting with the Galaxy S8), so it won out. (Rhee declined to say what the other names were.) It’s also distinctive enough, with hard consonants, to work well as an activation word. Bixby, which will initially speak just English and Korean, is intended to be a user’s “bright sidekick,” helping them navigate their devices in a more natural way.

“[What came before], it’s been people trying to learn how the machine interacts with the world, but… it should be the machine learns how the human interacts with the world,” Rhee says. “The learning curve shouldn’t be steep.”

All talk, all action

For an app to be considered Bixby-supported, every possible touch action needs to be mapped to a voice command. Rhee explains that, for a typical app, there are about 300 different actions the user can perform. That doesn’t sound too bad until you consider there are around 15,000 different ways to perform them. And the ways to verbalize those actions number in the millions. That’s a lot of stuff to map out.
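The many-to-one mapping Rhee describes can be sketched in miniature. The snippet below is a hypothetical illustration only — none of these function or action names come from Samsung’s actual SDK, which hadn’t been released at the time of writing — showing how many verbalizations funnel down into a much smaller set of app actions.

```python
# Hypothetical sketch of the utterance-to-action mapping problem.
# All names are invented for illustration; real assistants use
# statistical language understanding rather than literal phrase
# tables, but the shape is the same: millions of phrasings map to
# thousands of "ways," which map to hundreds of app actions.
from typing import Callable

ACTIONS: dict[str, Callable[[], str]] = {
    "reminder.create": lambda: "reminder created",
    "photo.share": lambda: "photo shared",
}

# Each action accepts many verbalizations.
UTTERANCES: dict[str, str] = {
    "set a reminder": "reminder.create",
    "remind me": "reminder.create",
    "share this photo": "photo.share",
    "send this picture": "photo.share",
}

def handle(utterance: str) -> str:
    """Route a spoken command to an app action, if one matches."""
    for phrase, action in UTTERANCES.items():
        if phrase in utterance.lower():
            return ACTIONS[action]()
    return "Sorry, I didn't catch that."
```

Even this toy version shows why the numbers balloon: every new action multiplies the phrase table, which is presumably why Samsung plans an SDK to make the mapping easier for third parties.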

Still, Samsung says it’s up for the challenge, at least as far as its built-in apps are concerned. But what about third-party apps? Considering the amount of development work, will Snapchat or Facebook ever work as well with Bixby as Samsung’s apps?

Rhee says Samsung has a plan to get third-party apps talking to Bixby, and an SDK to be released at a later date will introduce tools that make the mapping much easier. He also suggests Viv’s technology can help here, too.

“Viv Labs is coming in by way [of] expanding our vision into third-party ecosystems. It doesn’t necessarily have to be all of the touch commands that they can perform. At a minimum, [Bixby will perform] the basic functionalities: like the settings, or changing the language from English to French.”

On the Galaxy S8, a total of 10 apps will be Bixby-supported, Rhee says, with a second “wave” coming a few weeks later. Out of the gate, users will be able to use Bixby with Contacts, Gallery, Settings, Camera, Reminders and a few others.

Another way Bixby is different from its peers: it will be aware of what you’re doing on the phone and suggest different actions depending on what’s on screen. So if you press the button while, say, looking at a single photo in the Gallery, editing and sharing controls are probably more relevant to you than searching. And if Bixby doesn’t understand every aspect of a complex command, it will take you as far as it can rather than just hitting you with a “Sorry, I didn’t catch that.”

All this “awareness” brings up an important question: How much data is Samsung collecting about you? Rhee says most user-specific data is kept on the device, but, as a cloud service, Bixby needs to store some information in the cloud. It’s not yet clear what the exact breakdown is.

The button

Having a dedicated button for Bixby brings a number of advantages. For starters, it means Samsung won’t have any need for Clippy-style pop-ups directing users to the assistant; people will inevitably find it on their own. It also ensures there will be far fewer accidental activations than if Bixby were mixed into a home button, something users of Siri are all too familiar with.

“We actually have done a lot of research to have the Bixby button as part of the home button, like our friends in Cupertino,” Rhee says. “A lot of people find it a little awkward to use it in public. The home button is a very overloaded place; there’s a lot of functionality in it. Having a dedicated button really removes a lot of friction.”

And since the idea is to press and hold, lifting your finger when you’re done, Bixby will know definitively when you’re done speaking. Still, there will also be a wake-up phrase: you can just say “Hi Bixby” to activate the assistant at any time.

It’s the dedicated button that really epitomizes Samsung’s approach, and if it indeed ends up on all Samsung products, Bixby will become much more than just a smartphone assistant; it’ll become the gateway for Samsung to finally, truly become a major player in the internet of things.

Sure, Samsung has had its “Smart” devices for a long time, and its low-power Tizen OS is ideal for powering the many products with connections to the internet. It also acquired SmartThings in 2014 to strengthen its IoT brand.

But until now, Samsung has lacked a gateway for its customers to really take advantage of that interconnectivity. For most, it’s hard work hunting down the right settings on your phone to connect a smart TV to an air conditioner, but what if you could just tell Bixby to do it? And if you can talk to it from all those devices (asking any question or even making phone calls), then you’re really onto something.

“It’s actually omnipresent in a sense,” Rhee says. “Even if I speak to Bixby in, say, a washing machine, you can still do a lot of things that you do on your phone. For instance, you can say, ‘Bixby, send a text to my friend Michael,’ or ‘Make a phone call.’ That’s the vision.”

The more capable assistant

Amazon and Google already know this, and the success of Alexa and buzz around Home are a testament to the unquestionable efficiency of adding voice control to devices. But Samsung, with its high standard of controlling all functions of a device via Bixby, might end up with the advantage. Alexa, for all of its “skills,” often falls short of full control (you can turn on or dim LED lights, for example, but might not be able to select specific colors), so the market has room for a more capable competitor. Of course, how and when Bixby will mix with third-party products and services remains an open question.

“Philosophically, what we are looking at is revolutionizing phone interfaces,” Rhee says. “We understand our applications better than anybody else out there that’s why we started with our own technology, but going forward we have plans to work with our partners.”

Eventually, Rhee says a Bixby app might come to non-Samsung Android phones and even iOS, possibly partnering with Google Assistant for search-related queries (though he cautions Google and Samsung haven’t “gotten to the specifics” on how that would work).

At the same time, Bixby control could extend to all kinds of smart products, not just Samsung ones. That would probably take a level of cooperation with competitors that Samsung hasn’t really shown before, but if Bixby becomes ubiquitous in the long term, whatever OS this or that device is running will become less relevant.

That’s a future Samsung is clearly hoping for, since software has traditionally been its weakness. Samsung may be a chief Android partner, but it’s struggled to differentiate its many services from Google’s, and the company lacks an OS of its own (Tizen notwithstanding). Samsung’s browser, Samsung Pay, S Health: they’re all duplicates of Google products, and are widely regarded as inferior.

That’s why Bixby may be the best thing to happen to Samsung software in a long time. If customers respond, Bixby could, in the long term, finally get Samsung users to think of its phones as Samsung phones rather than just the best-performing Android phones on the market. All Android vendors try to differentiate to some extent, but Bixby’s app-simplifying skills and potential IoT capabilities are a compelling sell.

Bixby represents an important step for Samsung when it comes to services: finally, a good answer to “Why should I use your software?” Effortless voice control of everything, not just your phone, is a tantalizing promise, and if Samsung can pull it off in the long term, its “bright sidekick” might end up being the only assistant we actually want to talk to.

White House demands deep cuts to State, UN funds

(CNN)The White House has instructed the State Department and the US mission to the United Nations to cut their budgets for UN programs nearly in half, including US peacekeeping and development assistance, two senior US officials told CNN on Monday.

The dramatic cuts, which include a 37%, or $20 billion, slash in funding for the State Department and the US Agency for International Development, reflect a desire by the Trump administration to reduce US commitments to international organizations.
Foreign Policy first reported the details of the White House’s proposal to dramatically reduce spending on foreign aid.
US diplomats in New York had warned their UN counterparts about the likely “steep” cuts to US funding for the UN, one Western diplomat said, but had not provided any details.
    The White House wants to cut the programs funded out of the State Department’s Bureau of International Organization Affairs by half, the US officials said. While the cuts would impact UN programs the most, the White House also wants to reduce US dues to other international organizations and ask other member states to pick up the slack.
    For example, the US pays 21% of the operating budget of the Organization for Economic Co-operation and Development, which promotes democracy and good governance, particularly in Europe. Japan pays the second-highest dues at about 12%. The White House wants to drop US dues to Japan’s level.
    “Everyone else is going to have to step up,” one senior official said.
    The White House also wants to reduce funding for voluntary assessments and programs for all international organizations. The United States funds certain programs or positions in organizations such as the UN, the Organization of American States and other international bodies beyond its regular dues as a member state. Officials said such expenditures would end under the new plan.
    It is unclear what the exact timeline is for the cuts, officials said. Secretary of State Rex Tillerson has proposed making the reductions over three years, arguing that he needs more than one budget cycle.
    After “several tough exchanges” with Office of Management and Budget Director Mick Mulvaney, one official said Tillerson has been granted some flexibility as to where the State Department budget cuts are directed.
    “He said, ‘You give me a number and I will make the cuts,’ ” one senior administration official said. “He doesn’t want to be told what to cut.”
    Trump and Mulvaney warned that deep cuts were coming to foreign aid programs last month while previewing the administration’s first budget proposal at the Conservative Political Action Conference.
    “The President said we’re going to spend less money overseas and spend more of it here,” Mulvaney noted while referencing the proposal last month. “That’s going to be reflected with the number we send to the State Department.”
    But Trump could face a fight when it comes to trimming the wings of state — possibly from within his own administration.
    Defense Secretary James Mattis, for instance, warned against cutting diplomatic resources during congressional testimony in 2013.
    And last month, several prominent generals like David Petraeus and admirals like James Stavridis, the former supreme commander of NATO, wasted little time in mobilizing to challenge his proposals.
    They joined a list of 121 military figures who warned that State Department diplomacy, aid and programs were vital to preventing conflict overseas and could mitigate the need for costly and bloody military deployments.

    The best guesses we have for who’s flying to the moon with SpaceX

    Who will it be?
    Image: mashable/Christopher Mineses

    From the moment that SpaceX’s Elon Musk announced the company’s intention to send two unnamed people in a long loop around the moon in 2018, people started speculating about who those mystery passengers might be.

    Musk didn’t give out many clues about the individuals who contracted the company for the flight, aside from saying that they put down a hefty deposit and they know each other.

    However, that won’t stop us from wildly speculating about who the maybe famous and definitely rich folks flying to the moon with SpaceX might be.

    Richard Branson

Yes, yes, it’s true that Richard Branson has his own commercial spaceflight programs in Virgin Galactic and Virgin Orbit, but Virgin isn’t aiming for the moon right now.

    Therefore, this SpaceX offering wouldn’t be in direct competition with his own favorite space plans as of right now.

    Branson is eccentric and daring enough to want to fly to the moon, so it would follow that he could be the one contracting SpaceX to fly him in a long loop above the lunar surface.

    A hover test of the Dragon capsule built for crew.

    Image: spacex/flickr

    James Cameron

    Even though Musk specifically said that the people contracting SpaceX to fly them to the moon aren’t from Hollywood, we’re still leaving Titanic director James Cameron on this list.

    Cameron has been a space fan for a long while, and in 2011, it was reported that he shelled out more than $100 million for a flight around the moon with Space Adventures, a private firm that pairs would-be space tourists with their rides to orbit. He has yet to take his trip, so who knows, maybe he’s one of the people who’s opted to ride with SpaceX.

    Cameron is also an adventurer who supports scientific inquiry. In 2012 he dove deep into the Mariana Trench, breaking a world record for the deepest solo dive in the process.

South Park has even made fun of his somewhat odd penchant for exploration, so this doesn’t seem outside the realm of possibility for “the bravest pioneer.”

    Random billionaires

In all likelihood, the two people who already put down a deposit with SpaceX are folks we’ve never heard of.

To fly on a flight like this one, you basically just need a lot of disposable income (millions and millions of dollars of it) and a will to head out into the unknown. Plus, you probably need a lot of time on your hands for training and the like.

    Don’t be surprised if Musk announces that a couple of CEOs for huge international corporations are the ones asking to head to the moon on this first flight.


    Artist’s illustration of the Falcon Heavy rocket.

    Image: spacex

NASA astronauts

Even though Musk said that a couple of private individuals were the ones contracting SpaceX for this flight, it’s still possible that NASA astronauts could be the first people to fly on SpaceX’s system.

    Musk made it clear that if NASA wanted to take the flight profile for itself, then SpaceX would absolutely let them fly the first flight of the Dragon and Falcon Heavy bound for the moon.

    SpaceX owes a lot to NASA, particularly because the space agency’s significant investments in the company have helped it stay afloat since its founding in 2002.

    NASA already has an uncrewed mission to circumnavigate the moon on the books for 2018 or 2019, so it’s possible that the agency will want to cooperate with SpaceX on some kind of moon venture in the future.

    Sergey Brin

    Google co-founder and current president Sergey Brin might be one of the best guesses we have for the person heading to the moon with SpaceX.

    Brin once put down some money with Space Adventures for a flight to the International Space Station, but he has yet to fly.

    Brin is also involved with the Google Lunar X Prize, a competition designed to spark commercial development of the moon by awarding a $20 million prize to the first private company to fly to and land a spacecraft on the moon and perform a series of specific tasks.

    Please just let one of them be a woman

    The only people who have ever flown to the surface of the moon or its general vicinity have been men.

    I’d say it’s about time a woman made it there, don’t you?

    The future of space science hinges on gravitational waves

    Artist's illustration of two neutron stars merging.
    Image: nasa

    Billions of years ago, two black holes merged in a violent explosion that rippled the fabric of our universe.

Those cosmic ripples, known as gravitational waves, spread far and wide in all directions, carrying with them information about the black holes that brought them into being.

    In September 2015, that information made it to Earth. While these weren’t the first gravitational waves to reach our planet, they were the first we could observe.

Two powerful tools known as the Laser Interferometer Gravitational-Wave Observatories (LIGO) were able to directly observe the gravitational waves sent out by the two black holes, opening up a new way for scientists to study the inner workings of some of the most extreme objects in the universe.

    Until now, scientists studying the cosmos were limited to just staring at our universe using different wavelengths of light.

    Artist’s illustration of colliding black holes.

    Image: LIGO

    While this type of investigation has completely transformed our understanding of how stars, galaxies, planets and other objects work, it also has left us in the dark when trying to understand the inner lives of black holes and other exotic objects.

    All of that is changing now, however.

In the not too distant future, scientists should be able to peer into the hearts of exploding stars, figure out how matter is changed within the hot, high-pressure center of a neutron star, and better characterize what a black hole really is, all thanks to barely detectable waves sent out to the far ends of the observable universe.

Being an astronomer right now, as gravitational wave science begins in earnest, is “kind of the equivalent of being there when Galileo put together his first telescope,” scientist Edo Berger, who is involved in LIGO-related research, said in an interview.

    A light turning on

    The entire history of astronomy has hinged on studying the universe with light, but now, we have an entirely different way to peer out into the cosmos. It’s as if astronomy as we know it has gained a new sense.

Instead of trying to look directly at something like a black hole that doesn’t give off light, astronomers can now pick apart the “chirps” of gravitational waves to learn more about the masses, sizes and lives of the objects that created them.

    “… Using gravitational waves we can probe environments that are enshrouded with a lot of matter which blocks our view,” Harvard University astronomer Avi Loeb said via email.

“For example, when a massive star collapses or when a neutron star gets swallowed by a stellar-mass black hole, or when two massive black holes coalesce while being surrounded by gas during the merger of two galaxies, we cannot easily probe the center of the action because it is hidden behind a veil of matter,” Loeb added.

“But gravitational waves can penetrate easily through matter and reveal the inner working of such engines.”

    Image: Bob Al-greene/mashable

    You can’t feel or see gravitational waves move through Earth’s part of space, but they do affect us nonetheless.

In fact, the signal discovered in September warped all of the matter on Earth (including all of the matter in our bodies) by just a fraction of the width of a proton.

And that’s what LIGO had to measure. Both observatories (one located in Louisiana, the other in Washington) recorded the moment the gravitational waves passed through Earth’s part of space at the same time.

    The twin “L”-shaped observatories both have a laser that runs down each arm of the L to mirrors located at the end of the arms. If no gravitational waves pass through Earth, the lasers should each bounce back to the middle at precisely the same time, but if a wave were to pass through, that timing would be off.

    This is because the matter around the laser stretches ever so slightly as the wave passes through, changing the length of the arms but not affecting the light itself.

    “What LIGO had to do to detect the waves was to measure the motion of mirrors (due to the passing gravitational wave) that was smaller than a single proton,” LIGO researcher Nergis Mavalvala said.

    “Imagine that, put mirrors 4km (2.5 miles) apart and watch them get closer or farther to each other by a distance one-one-thousandth the size of a proton.”
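Mavalvala’s numbers can be sanity-checked with some back-of-envelope arithmetic. The inputs below are assumptions for the sketch, not figures from the article: a strain amplitude on the order of 1e-21 (typical of the first detection) and a proton diameter of roughly 1.7e-15 m.

```python
# Back-of-envelope check of the LIGO sensitivity quoted above.
# Assumed figures: strain h ~ 1e-21 (order of the 2015 detection)
# and a proton diameter of ~1.7e-15 m.
ARM_LENGTH_M = 4_000.0        # each interferometer arm is 4 km long
STRAIN = 1e-21                # fractional length change from the wave
PROTON_DIAMETER_M = 1.7e-15

delta_l = STRAIN * ARM_LENGTH_M          # absolute arm-length change
fraction = delta_l / PROTON_DIAMETER_M   # compare to a proton's width

print(f"arm-length change: {delta_l:.1e} m")
print(f"about 1/{1 / fraction:.0f} of a proton's diameter")
```

With these assumed inputs the mirrors move about 4×10⁻¹⁸ meters, a few hundredths of a percent of a proton’s width, which lands in the same ballpark as the one-one-thousandth figure Mavalvala describes.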

    Discoveries already pouring in

    Scientists have already analyzed data brought to Earth by the gravitational waves discovered in September, characterizing the black holes that created those ripples like never before.

    A study published in June 2016 found that the two black holes which gave rise to the gravitational waves actually began their lives as massive stars orbiting one another.

    Eventually, after millions of years in orbit around one another, the stars collapsed, forming two black holes about 30 times the mass of our sun. And one day, those black holes merged, rippling the fabric of space and time like a bowling ball spinning around on a bed sheet.

    The authors of the study used data gathered by LIGO to create a computer model of the universe that would have given rise to the gravitational waves detected here on Earth billions of years after the black hole merger.

    “The black holes were monsters, and the results show that their progenitor stars would have been some of the brightest and most massive in the universe,” physicist J.J. Eldridge wrote in a piece accompanying the study at the time.

    Neutron stars and a new state of matter

LIGO should eventually do even more than reveal the secret lives of black holes.

    In the future, astrophysicists are hoping to use gravitational wave tools to figure out what’s going on in the intensely hot, high pressure middle of a very mysterious class of stars known as neutron stars.

    Neutron stars are more massive than the sun but packed down into an area the size of the city of Boston. These types of stars form when stars about four to eight times the size of our sun die.

The hearts of these stars might actually be so dense and under such high pressure that they squeeze matter into a totally different state than what can be observed in labs on Earth.

    “In this case, of course, it [the matter in a neutron star] exists in a state that we’re not familiar with from our own personal experience because we’ve never witnessed those kinds of pressures,” Berger said.

    At the moment, LIGO can’t easily detect neutron star mergers, which are somewhat less energetic than black hole collisions. But as its sensitivity improves, it should be able to, revealing the hearts of those dense objects.

    Simulation of gravitational waves. Image: NASA/C. Henze

    Gravitational wave science also has the ability to add to the already rich tapestry of science done by looking at light in the universe.

    Some astronomers are already attempting to pinpoint the optical sources of gravitational waves to see if there’s any kind of light signal that goes along with mergers of black holes.

    At the moment, LIGO isn’t very good at pinpointing exactly where in the sky a signal is coming from. Other technologies could be developed to meet that challenge, allowing scientists to gather precise data on the sources of gravitational waves, LIGO’s Mavalvala said.

    And one day, LIGO and the host of new technology that will be produced around it may even hear a new signal that scientists can’t even imagine now.

    “You build an instrument for things you want to measure, and then you see things that you didn’t expect to see,” Mavalvala said.

    “I think that’s the true promise of these instruments.”


    Zakaria: Trump has ‘hardly done anything’

    (CNN) CNN’s Fareed Zakaria had strong words on Sunday for President Donald Trump’s performance so far, imploring viewers to “not confuse motion with progress,” and arguing that Trump has “hardly done anything.”

    “The first few weeks of the Trump administration have been an illustration of that line from the writer Alfred Montapert: ‘Do not confuse motion and progress. A rocking horse keeps moving but does not make any progress,’” Zakaria said.
    “We are witnessing a rocking horse presidency,” he added.
      Though the host of CNN’s “GPS” acknowledged that since winning the election the President has “dominated the news,” he couldn’t fathom what Trump had “actually done” over the past month.
      “This week, Trump said at a news conference, ‘There’s never been a presidency that’s done so much in such a short period of time,’” Zakaria noted.
      But the reality of the Trump White House, he said, is that it “has not even begun serious discussions with Congress on major legislation.”
      Trump had issued a series of “executive orders with great fanfare,” Zakaria acknowledged.
      However, he argued, “they are mostly hot air, lofty proclamations that direct some agency to ‘review’ a law, ‘report’ back to him, ‘consider’ some action or reaffirm some long-standing practice.”

      Not much happening on serious policy

      The one order that actually “did something,” the temporary travel ban, was unsuccessful, Zakaria said, and “so poorly conceived and phrased that it got stuck in the court system and will have to be redone or abandoned.”
      As for many of Trump’s campaign promises, from the reindustrialization of the Midwest to reviving the coal and steel industries, to imposing term limits on all members of Congress?
      “All were promised, none has been done,” said Zakaria.
      There are “two aspects” to the Trump presidency, said the CNN host. The “freak show” and “the savvy businessman.”
      “For many people, the bargain of the Trump presidency was that they would put up with the freak show in order to get tax reform, infrastructure projects and deregulation,” he said.
      Though Zakaria acknowledged that Trump may still fulfill some of his campaign promises, for now, “not much is happening in the realm of serious policy.”
      “The Romans said the way to keep people happy was to give them ‘bread and circus’ — sustenance and entertainment,” said Zakaria.
      “So far all we have gotten is the circus.”


      Making VR less painful for the vision-impaired

      You don’t have to have perfect vision to enjoy VR, but brother, it helps. Otherwise, you’re looking at having to worry about accommodating glasses, eye tracking not working, ocular distances maxing out and so on. Stanford researchers want to make it easier for people with vision problems to use VR, but it’s not going to be easy.

      Vision is a complicated process, and a lot of things can go wrong, but common afflictions like nearsightedness or an inability to focus on objects close up affect millions. Combined with how VR presents depth of field and other effects, this leads to a variety of optical problems and inconsistencies that can produce headaches, nausea and disorientation.

      VR headsets often allow for adjusting things like the distance from your eye to the screen, how far apart your eyes are and other factors. But for many, it’s not enough.


      An illustration from the paper shows how even with perfect vision a vergence-accommodation conflict can arise. With vision problems, this and other effects could be more common and more intense.

      “Every person needs a different optical mode to get the best possible experience in VR,” said Stanford’s Gordon Wetzstein in a news release.

      His team’s research, published today in the Proceedings of the National Academy of Sciences, describes a set of mechanisms that together comprise what they call an adaptive focus display.


      One prototype used a modified Samsung Gear VR.

      One approach uses a liquid lens whose shape can be adjusted on the fly to compensate for certain circumstances, say, when the focus of the game is on an object that the viewer normally wouldn’t be able to focus on. The screen itself could also be moved in order to better fit the optical requirements of someone with a given condition.
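The paper’s optics are far more sophisticated, but the basic refocusing idea can be sketched with the standard relation that optical power in diopters is the reciprocal of focal distance in meters. The function name and the example distances below are illustrative assumptions, not values from the Stanford study:

```python
# Hypothetical sketch of the focus adjustment an adaptive display might make.
# Optical power in diopters (D) is 1 / distance in meters, so a tunable lens
# needs to supply roughly the difference between the current and desired
# focal planes. All names and numbers here are illustrative assumptions.

def lens_power_adjustment(screen_focus_m: float, target_focus_m: float) -> float:
    """Diopters a tunable lens must add to move the apparent focal plane
    from screen_focus_m to target_focus_m."""
    return 1.0 / target_focus_m - 1.0 / screen_focus_m

# Many fixed-focus headsets place the virtual image around 1.5 m away;
# refocusing on an object rendered 0.5 m from the viewer needs about +1.3 D.
adjustment = lens_power_adjustment(screen_focus_m=1.5, target_focus_m=0.5)
print(f"required lens adjustment: {adjustment:+.2f} D")
```

A liquid lens changes its power by changing shape, so driving it to supply this difference, guided by eye tracking that reports where the viewer is looking, is the gist of a gaze-contingent focus display.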

      “The technology we propose is perfectly compatible with existing head-mounted displays,” wrote Wetzstein in an email to TechCrunch. “However, one also needs eye tracking for this to work properly. Eye tracking is a technology that everyone in the industry is working on, and we expect eye trackers to be part of the next wave of [head-mounted displays]. Thus, our gaze-contingent focus displays would be directly compatible with those.”

      In the paper, both commercial and built-from-scratch headsets are used to prototype various methods of adjusting optical qualities of the displays. The team tested these with 173 participants at (among other places) last year’s SIGGRAPH conference; the news release reports “improved viewing experiences across a wide range of vision characteristics.”

      This is still early-stage research: simple vision correction is one thing, but more complex conditions like astigmatism require more complex solutions. (They’re looking into it, but it will not be quite as straightforward.)

      Wetzstein confirmed to TechCrunch that the team is in contact with “pretty much all” VR headset makers.

      “I cannot reveal any specific details about these collaborations,” he wrote, “but I can say that there is a huge amount of interest and technology developments in industry are closely aligned with our research.”

      It seems likely, then, that we can expect headsets in the next generation not just to be better optically and ergonomically, but to be more inclusive and accommodating (so to speak) of those with vision problems.
