summary is a generic function used to produce result summaries
of the results of various model fitting functions. The function
invokes particular methods which depend on the
class of the first argument.
summary(object, ...)

## Default S3 method:
summary(object, ..., digits = max(3, getOption("digits")-3))

## S3 method for class 'data.frame':
summary(object, maxsum = 7,
        digits = max(3, getOption("digits")-3), ...)

## S3 method for class 'factor':
summary(object, maxsum = 100, ...)

## S3 method for class 'matrix':
summary(object, ...)
For factors, the frequency of the first maxsum - 1
most frequent levels is shown, and the less frequent levels are
summarized in "(Others)" (resulting in at most maxsum levels).
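A minimal sketch of this behavior, using a small hypothetical factor (base R only):

```r
## With maxsum = 3, only the maxsum - 1 = 2 most frequent levels are
## shown individually; the rest are lumped into "(Others)".
f <- factor(c("a", "a", "a", "b", "b", "c", "d"))
summary(f, maxsum = 3)  # a and b kept; c and d collapsed into "(Others)"
```

Note that the kept levels are ordered by decreasing frequency when collapsing occurs.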
The form of the value returned by summary depends on the class of
its argument. See the documentation of the particular methods for
details of what is produced by that method.

The default method returns an object of class
c("summaryDefault", "table") which has specialized format and print
methods. The factor method returns an integer vector.

The matrix and data frame methods return a matrix of class
"table", obtained by applying summary to each column and collating
the results.
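A short sketch (base R, illustrative inputs) confirming the return classes described above:

```r
## Default method: class c("summaryDefault", "table")
class(summary(c(1, 2, 3, NA)))

## Factor method: a plain integer vector of level counts
class(summary(factor(c("a", "a", "b"))))

## Data frame method: a matrix of class "table"
class(summary(data.frame(x = 1:3, y = letters[1:3])))
```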
Chambers, J. M. and Hastie, T. J. (1992) Statistical Models in S. Wadsworth & Brooks/Cole.
summary(attenu, digits = 4) #-> summary.data.frame(...), default precision
summary(attenu$station, maxsum = 20) #-> summary.factor(...)

lst <- unclass(attenu$station) > 20 # logical with NAs
## summary.default() for logicals -- different from *.factor:
summary(lst)
summary(as.factor(lst))
summary.factor

You almost certainly already rely on technology to help you be a moral, responsible human being. From old-fashioned tech like alarm clocks and calendars to newfangled diet trackers or mindfulness apps, our devices nudge us to show up to work on time, eat healthy, and do the right thing. But it's nearly impossible to create a technological angel on your right shoulder without also building in a workaround that is vulnerable to the devil on your left. Put another way: any alarm clock user who denies that he has heard the siren song of the snooze button is lying. There must always be an opt-out mechanism, and fallible, foolish humans will always use it to thwart the original intent of safety measures. Technology can help us make good decisions, but outsourcing good decision-making to technology, tech companies, or the government isn't just a bad idea: it's impossible.

People already know that distracted driving is dangerous; they tell pollsters so all the time. Because of this clear customer demand, smartphone makers offer safety-conscious drivers a variety of ways to minimize distraction, from hands-free headsets and voice commands to mute buttons and airplane mode. But automatically disabling certain apps in a fast-moving vehicle, as the grieving family of 5-year-old distracted-driving victim Moriah Modisette is suing to force Apple to do, won't work. One of the great glories of the smartphone era is the ability to work, chat, and read while on mass transit or riding shotgun, so there's no way to build an accelerometer-based shutdown unless you also add an opt-out. And if there's an opt-out, then fallible, foolish humans will always use it to thwart the original intent.

What's more, legally mandated technological fixes tend to be even less effective than their market-driven counterparts: think of the "Are You 18?" queries that pop up on sites peddling liquor, cigarettes, or other adult products. (Has anyone in the history of the internet ever clicked "No"?)
Judges and regulators consistently overvalue their ability to prevent catastrophe and undervalue the costs they impose on innocent users. The most wide-reaching effect of any kind of mandatory distracted-driving safety provision will simply be to force every user of every smartphone, on every bus, train, and plane, to click "I am not the driver" every day unto eternity, without actually dissuading the kind of jerks who are determined to FaceTime while driving down the interstate.

Technology Can Save Us From Drivers Using Social Media
JASON MARS, 3:20 AM

While the untimely death of an innocent 5-year-old is tragic, it's clear that Apple shouldn't be legally responsible for the irresponsible driver who killed her. Almost any distraction can lead to an accident. If a driver slammed his car into someone because he took his hands off the steering wheel to unwrap a taco, surely we wouldn't hold Taco Bell responsible, or outlaw the eating of tacos while driving.

That said, companies do have a social responsibility to be mindful of hazards that arise from misuse of their products and to take sensible precautions. In Apple's case, it would be entirely reasonable to use a non-intrusive mechanism that detects, with near-perfect accuracy, when a user is driving, in order to prevent hazardous distractions. The challenge is whether the technology can actually achieve near-perfect accuracy in driver detection. From a technical standpoint, it's straightforward to sense the rate at which a phone is moving. For example, Apple provides a software framework called CoreMotion that lets programmers glean insights about the phone's movement; it even has an "automotive" property to predict whether the user is in a vehicle. However, detecting whether the user of the phone is the driver or a passenger is trickier with this approach alone.
In the case of FaceTime and other camera-based apps, there is an opportunity to use the camera, along with deep-learning algorithms, to look at the user and the environment and discern whether the person in view is driving. There has been a wealth of research on detecting driver fatigue and other driver attributes, some of which has been discussed at the IEEE Intelligent Vehicles Symposium. I would expect such a solution to be readily adopted by users only if its accuracy is high enough, as mispredictions create frustration and discourage use. The state of deep-learning technology is at a place where companies like Apple should explore its use for safety purposes. While a staunch libertarian would oppose the infringement on freedom, I simply can't think of a situation where someone should ever be FaceTiming while driving.