
El Dorado Hills, CA map

Current weather forecast for El Dorado Hills, CA

Population in 2010: 42,108. Population change since 2000: +133.7%
Males: 21,319   (50.6%)
Females: 20,789   (49.4%)
Median resident age:   44.4 years
California median age:   36.4 years

Zip codes: 95762.

Estimated median household income in 2016: 0,997 (it was ,483 in 2000)
El Dorado Hills: 0,997
CA: ,739

Estimated per capita income in 2016: ,166 (it was ,239 in 2000)
El Dorado Hills CDP income, earnings, and wages data
Estimated median house or condo value in 2016: 4,071 (it was 7,900 in 2000)
El Dorado Hills: 4,071
CA: 7,500

Mean prices in 2016: All housing units: 7,586; Detached houses: 0,111; Townhouses or other attached units: 6,031; In 2-unit structures: 5,909; In 3-to-4-unit structures: 2,772; In 5-or-more-unit structures: 7,719; Mobile homes: 6,121; Occupied boats, RVs, vans, etc.: ,158

Median gross rent in 2016: ,133.

Recent home sales, real estate maps, and home value estimator for zip code 95672
Recent home sales, real estate maps, and home value estimator for zip code 95762

El Dorado Hills, CA residents, houses, and apartments details


El Dorado Hills races:
  • White alone: 32,238 (73.1%)
  • Asian alone: 5,397 (12.2%)
  • Hispanic: 3,980 (9.0%)
  • Two or more races: 1,246 (2.8%)
  • Black alone: 882 (2.0%)
  • Other race alone: 541 (1.2%)
  • American Indian alone: 306 (0.7%)
  • Native Hawaiian and Other Pacific Islander alone: 147 (0.3%)

Races in El Dorado Hills detailed stats: ancestries, foreign born residents, place of birth

Mar. 2016 cost of living index in El Dorado Hills: 102.2 (near average, U.S. average is 100)
Recent forum discussions about El Dorado Hills:

  • El Dorado Hills / Granite Bay commute to Santa Clara vs. living in Salt Lake City (28 replies)
  • Watermark and Serrano - El Dorado Hills (HELP!!!:) (4 replies)
  • El Dorado Hills or Elk Grove? (10 replies)
  • Carmichael, Folsom, El Dorado Hills, Orangeville, Fair Oaks Elementary School advice (7 replies)
  • folsom vs. el dorado hills (68 replies)
  • Roseville, El Dorado Hills, Rocklin, Folsom? Anywhere Else? (19 replies)


Ancestries: German (8.1%), English (6.7%), Italian (5.1%), American (4.8%), Irish (4.0%), European (3.8%).

Time zone: Pacific (PST)

Elevation: 765 feet

Land area: 17.9 square miles.

Population density: 2,352 people per square mile   (low).

El Dorado Hills, CA real estate house value index trend (chart)

El Dorado Hills, California map

For population 25 years and over in El Dorado Hills:

  • High school or higher: 96.9%
  • Bachelor's degree or higher: 53.5%
  • Graduate or professional degree: 20.2%
  • Unemployed: 3.1%
  • Mean travel time to work (commute): 27.5 minutes

For population 15 years and over in El Dorado Hills CDP:

  • Never married: 23.0%
  • Now married: 63.5%
  • Separated: 0.9%
  • Widowed: 3.6%
  • Divorced: 8.9%

6,401 residents are foreign born (2.6% Europe, 1.6% Latin America).

Foreign-born share of the population:
This place: 14.5%
California: 27.0%

According to our research of California and other state lists there were 16 registered sex offenders living in El Dorado Hills, California as of November 14, 2018.
The ratio of number of residents in El Dorado Hills to the number of sex offenders is 2,757 to 1.
The number of registered sex offenders compared to the number of residents in this city is much lower than the state average.


Median real estate property taxes paid for housing units with mortgages in 2016: ,116 (1.0%)
Median real estate property taxes paid for housing units with no mortgage in 2016: ,005 (0.7%)

El Dorado Hills satellite photo by USGS

Nearest city with pop. 50,000+: Folsom, CA (4.4 miles, pop. 51,884).

Nearest city with pop. 200,000+: Sacramento, CA (23.0 miles, pop. 407,018).

Nearest city with pop. 1,000,000+: Los Angeles, CA (352.0 miles, pop. 3,694,820).

Nearest cities: Folsom, CA (2.1 miles), Cameron Park, CA (2.2 miles), Granite Bay, CA (2.6 miles), Shingle Springs, CA (2.7 miles), Orangevale, CA (2.7 miles), Loomis Basin-Folsom Lake, CA (3.1 miles), Fair Oaks, CA (3.2 miles), Gold River, CA (3.2 miles).

Latitude: 38.69 N, Longitude: 121.08 W

Daytime population change due to commuting: -4,574 (-10.4%)
Workers who live and work in this place: 5,328 (27.0%)

El Dorado Hills household income distribution (chart)
El Dorado Hills home values distribution (chart)

This place's Wikipedia profile

Unemployment in September 2015:
Here: 4.8%
California: 5.5%

Unemployment by year (%)

Most common industries in 2016 (%)

  • Manufacturing (14%)
  • Professional, scientific, and technical services (12%)
  • Retail trade (10%)
  • Public administration (10%)
  • Finance and insurance (9%)
  • Health care and social assistance (7%)
  • Construction (5%)

Most common occupations in 2016 (%)

  • Management occupations (21%)
  • Sales and related occupations (16%)
  • Business and financial operations occupations (8%)
  • Architecture and engineering occupations (7%)
  • Computer and mathematical occupations (6%)
  • Office and administrative support occupations (6%)
  • Health diagnosing and treating practitioners and other technical occupations (4%)

Work and jobs in El Dorado Hills: detailed stats about occupations, industries, unemployment, workers, commute

Average climate in El Dorado Hills, California

Based on data reported by over 4,000 weather stations

Charts: El Dorado Hills, California average temperatures, average precipitation, humidity, wind speed, snowfall, sunshine, and clear and cloudy days

Earthquake activity:

El Dorado Hills-area historical earthquake activity is near the California state average. It is 762% greater than the overall U.S. average.
On 4/18/1906 at 13:12:21, a magnitude 7.9 (7.9 UK, Class: Major, Intensity: VIII - XII) earthquake occurred 114.6 miles away from the city center, causing 4,000,000 total damage.
On 10/3/1915 at 06:52:48, a magnitude 7.6 (7.6 UK) earthquake occurred 228.2 miles away from El Dorado Hills center.
On 10/18/1989 at 00:04:15, a magnitude 7.1 (6.5 MB, 7.1 MS, 6.9 MW, 7.0 ML) earthquake occurred 115.5 miles away from the city center, causing 62 deaths (62 shaking deaths), 3,757 injuries, and ,305,032,704 total damage.
On 12/21/1932 at 06:10:09, a magnitude 7.2 (7.2 UK) earthquake occurred 162.3 miles away from the city center.
On 7/21/1952 at 11:52:14, a magnitude 7.7 (7.7 UK) earthquake occurred 279.8 miles away from El Dorado Hills center, causing ,000,000 total damage.
On 1/31/1922 at 13:17:28, a magnitude 7.6 (7.6 UK) earthquake occurred 275.8 miles away from the city center.
Magnitude types: body-wave magnitude (MB), local magnitude (ML), surface-wave magnitude (MS), moment magnitude (MW)

Natural disasters:

The number of natural disasters in El Dorado County (12) is near the US average (13).
Major Disasters (Presidential) Declared: 8
Emergencies Declared: 2

Causes of natural disasters: Floods: 8, Storms: 5, Landslides: 4, Fires: 2, Mudslides: 2, Winter Storms: 2, Drought: 1, Heavy Rain: 1, Hurricane: 1 (Note: Some incidents may be assigned to more than one category).

El Dorado Hills topographic map

Birthplace of: Hiram Thompson - College basketball player (Hawaii Warriors).

Hospitals and medical centers in El Dorado Hills:

  • ACTION HOME NRSG SRVS (897 EMBARCADERO DRIVE STE #213)

Other hospitals and medical centers near El Dorado Hills:

  • MERCY HOSPITAL OF FOLSOM (Acute Care Hospital, about 4 miles away; FOLSOM, CA)
  • ST JUDES OF FOLSOM (Hospital, about 5 miles away; FOLSOM, CA)
  • FOLSOM CONVALESCENT HOSPITAL (Nursing Home, about 5 miles away; FOLSOM, CA)
  • CAMERON PARK DIALYSIS (Dialysis Facility, about 7 miles away; CAMERON PARK, CA)
  • ORANGEVALE DIALYSIS CENTER (Dialysis Facility, about 8 miles away; ORANGEVALE, CA)
  • ALWAYS HOME NRSG SERVICES, INC (Home Health Center, about 9 miles away; ORANGEVALE, CA)
  • ESKATON CARE CENTER FAIR OAKS (Nursing Home, about 11 miles away; FAIR OAKS, CA)

Political contributions by individuals in El Dorado Hills, CA

Amtrak stations near El Dorado Hills:

  • 7 miles: CAMERON PARK (US HWY. 50 & CAMERON PARK DR.) - Bus Station. Services: fully wheelchair accessible, enclosed waiting area, public restrooms, public payphones, full-service food facilities, free short-term parking.
  • 12 miles: ROCKLIN (ROCKLIN RD. & RAILROAD AVE.) - Bus Station. Services: partially wheelchair accessible, free short-term parking.
  • 12 miles: ROSEVILLE (201 PACIFIC ST.). Services: partially wheelchair accessible, public payphones, free short-term parking, free long-term parking, taxi stand, intercity bus service.

Colleges/universities with over 2000 students nearest to El Dorado Hills:

  • Folsom Lake College (about 4 miles; Folsom, CA; Full-time enrollment: 5,308)
  • Sierra College (about 11 miles; Rocklin, CA; FT enrollment: 11,488)
  • American River College (about 15 miles; Sacramento, CA; FT enrollment: 20,452)
  • California State University-Sacramento (about 21 miles; Sacramento, CA; FT enrollment: 22,234)
  • University of Phoenix-Sacramento Valley Campus (about 25 miles; Sacramento, CA; FT enrollment: 2,855)
  • Sacramento City College (about 25 miles; Sacramento, CA; FT enrollment: 15,963)
  • Universal Technical Institute of Northern California Inc (about 25 miles; Sacramento, CA; FT enrollment: 3,193)

Public high school in El Dorado Hills:

  • OAK RIDGE HIGH (Students: 1,632, Location: 1120 HARVARD WAY, Grades: 9-12)

Public elementary/middle schools in El Dorado Hills:

  • ROLLING HILLS MIDDLE (Students: 972, Location: 7141 SILVA VALLEY PKWY., Grades: 6-8)
  • MARINA VILLAGE MIDDLE (Students: 708, Location: 1901 FRANCISCO DR., Grades: 6-8)
  • OAK MEADOW ELEMENTARY (Students: 473, Location: 7701 SILVA VALLEY PKWY., Grades: KG-5)
  • SILVA VALLEY ELEMENTARY (Students: 429, Location: 3001 GOLDEN EAGLE LN., Grades: KG-5)
  • LAKE FOREST ELEMENTARY (Students: 389, Location: 2240 SAILSBURY DR., Grades: KG-5)
  • WILLIAM BROOKS ELEMENTARY (Students: 312, Location: 3610 PARK DR., Grades: KG-5)
  • JACKSON ELEMENTARY (Students: 296, Location: 2561 FRANCISCO DR., Grades: KG-5)
  • RISING SUN MONTESSORI (Location: 7006 ROSSMORE LN., Grades: 1-8, Charter school)
  • LAKEVIEW ELEMENTARY (Location: 3371 BRITTANY WAY, Grades: KG-5)

Private elementary/middle schools in El Dorado Hills:

  • HOLY TRINITY SCHOOL (Students: 301, Location: 3115 TIERRA DE DIOS DR, Grades: KG-8)
  • GHS ACADEMY- GOLDEN HILLS SCHOOL (Students: 205, Location: 1060 SUNCAST LN, Grades: PK-8)
  • MARBLE VALLEY CENTER FOR EXCELLENCE (Students: 157, Location: 5005 HILLSDALE CIRCLE, Grades: PK-8)
  • GUIDING HANDS SCHOOL, INC (Students: 147, Location: 4900 WINDPLAY DR, Grades: UG-8)
  • MADRONE MONTESSORI SCHOOL (Students: 23, Location: 5001 WINDPLAY DR STE 1, Grades: UG-1)
See full list of schools located in El Dorado Hills

User submitted facts and corrections:

  • Private school: Golden Hills Academy, 1060 Suncast Lane; grades K-8; approximately 180 students
  • KGBY (92.5), KSSJ (94.7), and KYMX (96.1) are three of our strongest FM radio stations, although they are not listed.


Notable locations in El Dorado Hills: El Dorado Marina (A), El Dorado Hills Golf Course (B), El Dorado Hills Business Park (C), Oakridge-Eldorado Hills Branch El Dorado County Library (D), El Dorado Hills Fire Department Station 86 (E), El Dorado Hills Fire Department Station 84 (F), El Dorado Hills Fire Department Station 85 Headquarters (G), El Dorado Hills Fire Department Station 87 (H).

Shopping Center: El Dorado Hills Village Center Shopping Center (1).

Cemetery: Clarksville Cemetery (1).

Creek: Allegheny Creek (A).

Parks in El Dorado Hills include: Art Weisburg Park (1), Browns Ravine Recreation Area (2), Crescent Ridge Park (3), El Dorado Hills Archery Range (4), El Dorado Hills Community Park (5), Ridgeview Park (6), Saint Andrews Park (7).

Tourist attraction: Bounceopolis (Amusement & Theme Parks; 5041 Robert J Mathews Pkw Suite 200) (1).

El Dorado County has a predicted average indoor radon screening level between 2 and 4 pCi/L (picocuries per liter) - Moderate Potential

Air pollution and air quality trends
(lower is better)

Air Quality Index (AQI) level in 2013 was 108. This is significantly worse than average.

City: 108
U.S.: 75

Carbon Monoxide (CO) [ppm] level in 2013 was 0.309. This is about average. Closest monitor was 5.9 miles away from the city center.

City: 0.309
U.S.: 0.293

Nitrogen Dioxide (NO2) [ppb] level in 2013 was 6.60. This is about average. Closest monitor was 4.7 miles away from the city center.

City: 6.60
U.S.: 6.40

Sulfur Dioxide (SO2) [ppb] level in 2010 was 0.459. This is significantly better than average. Closest monitor was 9.4 miles away from the city center.

City: 0.459
U.S.: 1.912

Ozone [ppb] level in 2013 was 26.9. This is about average. Closest monitor was 5.9 miles away from the city center.

City: 26.9
U.S.: 32.0

Particulate Matter (PM10) [µg/m3] level in 2013 was 19.9. This is about average. Closest monitor was 9.4 miles away from the city center.

City: 19.9
U.S.: 20.0

Particulate Matter (PM2.5) [µg/m3] level in 2013 was 7.20. This is better than average. Closest monitor was 4.7 miles away from the city center.

City: 7.20
U.S.: 9.35

Lead (Pb) [µg/m3] level in 2012 was 0.00169. This is significantly better than average. Closest monitor was 9.4 miles away from the city center.

City: 0.00169
U.S.: 0.01376
Percentage of residents living in poverty in 2016: 4.1%
(3.7% for White Non-Hispanic residents, 5.1% for Black residents, 7.5% for Hispanic or Latino residents, 16.5% for American Indian residents, 7.2% for other race residents, 13.5% for two or more races residents)

Detailed information about poverty and poor residents in El Dorado Hills, CA

Average household size:
This place:   2.9 people
California:   2.9 people

Percentage of family households:
This place:   84.0%
Whole state:   68.7%

Percentage of households with unmarried partners:
This place:   3.8%
Whole state:   7.2%

Likely homosexual households (counted as self-reported same-sex unmarried-partner households)
  • Lesbian couples: 0.3% of all households
  • Gay men: 0.3% of all households
10 people in workers' group living quarters and job corps centers in 2010
6 people in group homes intended for adults in 2010

Banks with branches in El Dorado Hills (2011 data):

  • JPMorgan Chase Bank, National Association: El Dorado Town Center Banking Center at 4363 Town Center Blvd, branch established on 2010/09/07; Green Valley And Francisco Branch at 2215 Francisco Dr, Ste 110, branch established on 2010/11/30. Info updated 2011/11/10: Bank assets: ,811,678.0 mil, Deposits: ,190,738.0 mil, headquarters in Columbus, OH, positive income, International Specialization, 5577 total offices, Holding Company: Jpmorgan Chase & Co.
  • Wells Fargo Bank, National Association: El Dorado Hills Raley's Branch at 3935 Park Drive, branch established on 1995/01/20; El Dorado Hills Town Center Branch at 4355 Town Center Boulevard, Suite 110, branch established on 2004/06/08. Info updated 2011/04/05: Bank assets: ,161,490.0 mil, Deposits: 5,653.0 mil, headquarters in Sioux Falls, SD, positive income, 6395 total offices, Holding Company: Wells Fargo & Company
  • Umpqua Bank: El Dorado Hills Branch at 3880 El Dorado Hills Blvd, branch established on 1990/08/23. Info updated 2011/09/02: Bank assets: ,556.7 mil, Deposits: ,325.3 mil, headquarters in Roseburg, OR, positive income, Commercial Lending Specialization, 193 total offices, Holding Company: Umpqua Holdings Corporation
  • Mechanics Bank: El Dorado Hills Branch at 4354 Town Center Blvd, branch established on 2004/06/28. Info updated 2011/03/24: Bank assets: ,991.7 mil, Deposits: ,573.6 mil, headquarters in Richmond, CA, positive income, Commercial Lending Specialization, 31 total offices
  • Bank of America, National Association: El Dorado Hills Banking Center Branch at 3901 Park Drive, Building A, branch established on 2003/03/17. Info updated 2009/11/18: Bank assets: ,451,969.3 mil, Deposits: ,077,176.8 mil, headquarters in Charlotte, NC, positive income, 5782 total offices, Holding Company: Bank Of America Corporation
  • U.S. Bank National Association: El Dorado Hills Safeway at 2207 Francisco Drive, branch established on 2006/12/15. Info updated 2012/01/30: Bank assets: 0,470.8 mil, Deposits: 6,091.5 mil, headquarters in Cincinnati, OH, positive income, 3121 total offices, Holding Company: U.S. Bancorp
  • Bank of the West: El Dorado Hills at 2211 Francisco Drive #100, branch established on 2007/04/02. Info updated 2009/11/16: Bank assets: ,408.3 mil, Deposits: ,995.2 mil, headquarters in San Francisco, CA, positive income, 647 total offices, Holding Company: Bnp Paribas
  • El Dorado Savings Bank, F.S.B.: El Dorado Hills Branch at 3963 Park Dr, branch established on 1956/01/01. Info updated 2011/07/21: Bank assets: ,706.7 mil, Deposits: ,536.5 mil, headquarters in Placerville, CA, positive income, Mortgage Lending Specialization, 35 total offices

Educational Attainment (%) in 2016

School Enrollment by Level of School (%) in 2016

Education Gini index (Inequality in education)
Here: 9.9
California average: 15.3

El Dorado Hills travel time to work (commute) chart
El Dorado Hills mode of transportation to work chart

Presidential Elections Results (1996, 2000, 2004, 2008, 2012, 2016)

Graphs represent county-level data. Detailed 2008 Election Results

Religion statistics for El Dorado Hills CDP (based on El Dorado County data)

Religion Adherents Congregations
Catholic 23,298 6
Evangelical Protestant 16,538 69
Other 8,928 30
Mainline Protestant 2,937 11
Orthodox 315 2
None 129,042 -
Source: Clifford Grammich, Kirk Hadaway, Richard Houseal, Dale E. Jones, Alexei Krindatch, Richie Stanley and Richard H. Taylor. 2012. 2010 U.S. Religion Census: Religious Congregations & Membership Study. Association of Statisticians of American Religious Bodies. Jones, Dale E., et al. 2002. Congregations and Membership in the United States 2000. Nashville, TN: Glenmary Research Center. Graphs represent county-level data.

Food Environment Statistics:

Number of grocery stores: 33
El Dorado County: 1.88 / 10,000 pop.
California: 2.14 / 10,000 pop.
Number of supercenters and club stores: 1
This county: 0.06 / 10,000 pop.
State: 0.04 / 10,000 pop.
Number of convenience stores (no gas): 17
El Dorado County: 0.97 / 10,000 pop.
California: 0.62 / 10,000 pop.
Number of convenience stores (with gas): 40
El Dorado County: 2.28 / 10,000 pop.
State: 1.49 / 10,000 pop.
Number of full-service restaurants: 173
Here: 9.87 / 10,000 pop.
State: 7.42 / 10,000 pop.
Adult diabetes rate:
This county: 6.7%
California: 7.3%
Adult obesity rate:
El Dorado County: 19.9%
California: 21.3%
Low-income preschool obesity rate:
This county: 13.1%
State: 17.9%

Health and Nutrition:

Healthy diet rate:
El Dorado Hills: 51.8%
California: 48.8%

Average overall health of teeth and gums:
This city: 50.6%
California: 47.1%

Average BMI:
El Dorado Hills: 28.7
California: 28.1

People feeling badly about themselves:
Here: 21.3%
State: 20.9%

People not drinking alcohol at all:
This city: 8.5%
California: 11.4%

Average hours sleeping at night:
Here: 6.8
California: 6.8

Overweight people:
This city: 36.9%
California: 31.3%

General health condition:
This city: 58.5%
California: 55.7%

Average condition of hearing:
Here: 80.2%
State: 80.6%

More about Health and Nutrition of El Dorado Hills, CA Residents

6.66% of this county's 2011 resident taxpayers lived in other counties in 2010 (,235 average adjusted gross income)

Here: 6.66%
California average: 5.03%

0.03% of residents moved from foreign countries (7 average AGI)

El Dorado County: 0.03%
California average: 0.07%

Top counties from which taxpayers relocated into this county between 2010 and 2011:
from Sacramento County, CA   2.00% (,387 average AGI)
from Placer County, CA   0.40% (,996)
from Santa Clara County, CA   0.22% (,841)

7.14% of this county's 2010 resident taxpayers moved to other counties in 2011 (,628 average adjusted gross income)

Here: 7.14%
California average: 5.12%

0.05% of residents moved to foreign countries (9 average AGI)

El Dorado County: 0.05%
California average: 0.10%

Top counties to which taxpayers relocated from this county between 2010 and 2011:
to Sacramento County, CA   2.11% (,329 average AGI)
to Placer County, CA   0.51% (,861)
to Douglas County, NV   0.28% (,367)

Strongest AM radio stations in El Dorado Hills:

  • KLIB (1110 AM; 10 kW; ROSEVILLE, CA; Owner: WAY BROADCASTING, INC.)
  • KFIA (710 AM; 25 kW; CARMICHAEL, CA; Owner: VISTA BROADCASTING INC.)
  • KSTE (650 AM; 25 kW; RANCHO CORDOVA, CA; Owner: AMFM RADIO LICENSES, L.L.C.)
  • KHTK (1140 AM; 50 kW; SACRAMENTO, CA; Owner: INFINITY RADIO SUBSIDIARY OPERATIONS INC.)
  • KFSG (1690 AM; 10 kW; ROSEVILLE, CA; Owner: WAY BROADCASTING, INC.)
  • KAHI (950 AM; 10 kW; AUBURN, CA; Owner: IHR EDUCATIONAL BROADCASTING)
  • KFBK (1530 AM; 50 kW; SACRAMENTO, CA; Owner: AMFM RADIO LICENSES, L.L.C.)
  • KTKZ (1380 AM; 5 kW; SACRAMENTO, CA; Owner: VISTA BROADCASTING, INC.)
  • KEBR (1210 AM; 5 kW; ROCKLIN, CA; Owner: FAMILY STATIONS, INC.)
  • KCBC (770 AM; 50 kW; RIVERBANK, CA; Owner: KIERTRON, INC.)
  • KSMH (1620 AM; 10 kW; WEST SACRAMENTO, CA; Owner: IHR EDUCATIONAL BROADCASTING)
  • KCBS (740 AM; 50 kW; SAN FRANCISCO, CA; Owner: INFINITY BROADCASTING OPERATIONS, INC.)
  • KIID (1470 AM; 5 kW; SACRAMENTO, CA; Owner: ABC, INC.)

Strongest FM radio stations in El Dorado Hills:

  • KRXQ (98.5 FM; SACRAMENTO, CA; Owner: ENTERCOM SACRAMENTO LICENSE, LLC)
  • KNCI (105.1 FM; SACRAMENTO, CA; Owner: INFINITY RADIO SUBSIDIARY OPERATIONS INC.)
  • K256AG (99.1 FM; CLARKSVILLE, CA; Owner: EDUCATIONAL MEDIA FOUNDATION)
  • KWOD (106.5 FM; SACRAMENTO, CA; Owner: ENTERCOM SACRAMENTO LICENSE, LLC)
  • KZZO (100.5 FM; SACRAMENTO, CA; Owner: INFINITY RADIO OPERATIONS INC.)
  • KCCL-FM (101.9 FM; SHINGLE SPRINGS, CA; Owner: ENTRAVISION HOLDINGS, LLC)
  • KXCL (103.9 FM; YUBA CITY, CA; Owner: HARLAN COMMUNICATIONS, INC.)
  • KEDR (88.1 FM; SACRAMENTO, CA; Owner: FAMILY STATIONS, INC.)
  • KHYL (101.1 FM; AUBURN, CA; Owner: AMFM RADIO LICENSES, L.L.C.)
  • KXOA (93.7 FM; ROSEVILLE, CA; Owner: INFINITY RADIO SUBSIDIARY OPERATIONS INC.)
  • KZSA (92.1 FM; PLACERVILLE, CA; Owner: FIRST BROADCASTING INVESTMENTS, L.P.)
  • KWYL (102.9 FM; SOUTH LAKE TAHOE, CA; Owner: CITADEL BROADCASTING COMPANY)
  • KOSL (94.3 FM; JACKSON, CA; Owner: HBC LICENSE CORPORATION)
  • KKSF-FM1 (103.7 FM; PLEASANTON, ETC., CA; Owner: AMFM RADIO LICENSES, L.L.C.)
  • KRCX-FM (99.9 FM; MARYSVILLE, CA; Owner: ENTRAVISION HOLDINGS, LLC)
  • KBLX-FM2 (102.9 FM; PLEASANTON, CA; Owner: ICBC BROADCAST HOLDINGS -CA, INC.)
  • KSOL-FM3 (98.9 FM; PLEASANTON, CA; Owner: TMS LICENSE CALIFORNIA, INC)
  • KFRC-FM3 (99.7 FM; WALNUT CREEK, CA; Owner: INFINITY KFRC-FM, INC.)
  • KZBR-FM1 (95.7 FM; WALNUT CREEK, CA; Owner: BONNEVILLE HOLDING COMPANY)
  • KMJE (101.5 FM; GRIDLEY, CA; Owner: RESULTS RADIO LICENSEE, LLC)

TV broadcast stations around El Dorado Hills:

  • KMMK-LP (Channel 14; SACRAMENTO, CA; Owner: CABALLERO TELEVISION TEXAS, L.L.C.)
  • KEZT-CA (Channel 23; SACRAMENTO, CA; Owner: TELEFUTURA SACRAMENTO LLC)
  • KSPX (Channel 29; SACRAMENTO, CA; Owner: PAXSON SACRAMENTO LICENSE, INC.)
  • KGTN-LP (Channel 62; PLACERVILLE, CA; Owner: PRAISE THE LORD STUDIO CHAPEL)
  • KTXL (Channel 40; SACRAMENTO, CA; Owner: CHANNEL 40, INC.)
  • KQCA (Channel 58; STOCKTON, CA; Owner: KCRA HEARST-ARGYLE TELEVISION, INC.)
  • KCRA-TV (Channel 3; SACRAMENTO, CA; Owner: KCRA HEARST-ARGYLE TELEVISION, INC.)
  • KOVR (Channel 13; STOCKTON, CA; Owner: SCI - SACRAMENTO LICENSEE, LLC)
  • KXTV (Channel 10; SACRAMENTO, CA; Owner: KXTV, INC.)
  • KUVS (Channel 19; MODESTO, CA; Owner: KUVS LICENSE PARTNERSHIP, G.P.)
  • KVIE (Channel 6; SACRAMENTO, CA; Owner: KVIE, INC.)
  • KMAX-TV (Channel 31; SACRAMENTO, CA; Owner: UPN STATIONS GROUP INC.)
  • K27EU (Channel 27; SACRAMENTO, CA; Owner: ABUNDANT LIFE BROADCASTING, INC.)
  • K22FR (Channel 22; SACRAMENTO, CA; Owner: NATIONAL MINORITY T.V., INC.)
  • KCSO-LP (Channel 33; SACRAMENTO, CA; Owner: SAINTE 51, L.P.)
  • K17EH (Channel 17; EUREKA, CA; Owner: MS COMMUNICATIONS, LLC)
  • K69FB (Channel 69; SACRAMENTO, CA; Owner: TRINITY BROADCASTING NETWORK)
  • KSAO-LP (Channel 49; SACRAMENTO, CA; Owner: GARY M. COCOLA FAMILY TRUST, GARY M. COCOLA TRUSTEE)
  • K27FX (Channel 27; EUREKA, CA; Owner: MS COMMUNICATIONS, LLC)
  • KBTV-LP (Channel 8; SACRAMENTO, CA; Owner: INCISOR COMMUNICATIONS, L.L.C.)
  • KMUM-CA (Channel 15; SACRAMENTO, CA; Owner: CABALLERO TELEVISION TEXAS, L.L.C.)
  • KRJR-LP (Channel 47; SACRAMENTO, CA; Owner: WORD OF GOD FELLOWSHIP, INC.)

El Dorado Hills fatal accident list:

Jul 27, 2011 03:01 AM, Green Valley Rd, Lat: 38.69867, Lon: -121.01164, Vehicles: 2, Persons: 2, Fatalities: 1

National Bridge Inventory statistics:

  • Number of bridges: 11
  • Total length: 48 m (157 ft)
  • Total costs: 2,000
  • Total average daily traffic: 293,577
  • Total average daily truck traffic: 17,062
  • Bridges built by decade: 1930-1939: 1; 1960-1969: 2; 1990-1999: 2; 2000-2009: 4; 2010-2015: 2

See Full National Bridge Inventory Statistics for El Dorado Hills, CA
Home Mortgage Disclosure Act Aggregated Statistics For Year 2009
(Based on 1 full and 2 partial tracts)

Loan categories: A) FHA, FSA/RHS & VA home purchase loans; B) Conventional home purchase loans; C) Refinancings; D) Home improvement loans; F) Non-occupant loans on < 5 family dwellings (A, B, C & D). Figures are number of loans / average value.

  • Loans originated: A: 79 / 9,914; B: 234 / 1,514; C: 938 / 9,258; D: 47 / 3,456; F: 50 / 6,785
  • Applications approved, not accepted: A: 6 / 2,163; B: 30 / 4,955; C: 107 / 8,398; D: 4 / ,293; F: 3 / 4,887
  • Applications denied: A: 17 / 8,461; B: 29 / 3,876; C: 228 / 7,821; D: 18 / 2,522; F: 21 / 9,868
  • Applications withdrawn: A: 16 / 7,937; B: 31 / 4,152; C: 193 / 2,768; D: 11 / 7,799; F: 9 / 9,817
  • Files closed for incompleteness: A: 2 / 3,160; B: 4 / 5,150; C: 46 / 2,344; D: 4 / 2,438; F: 1 / ,850
Aggregated Statistics For Year 2008
(Based on 1 full and 2 partial tracts)

Loan categories: A) FHA, FSA/RHS & VA home purchase loans; B) Conventional home purchase loans; C) Refinancings; D) Home improvement loans; F) Non-occupant loans on < 5 family dwellings (A, B, C & D); G) Loans on manufactured home dwellings (A, B, C & D). Figures are number of loans / average value.

  • Loans originated: A: 44 / 9,051; B: 302 / 4,772; C: 365 / 0,086; D: 48 / 5,574; F: 33 / 0,275; G: 1 / ,210
  • Applications approved, not accepted: A: 6 / 1,248; B: 76 / 2,024; C: 90 / 1,551; D: 5 / 3,252; F: 10 / 8,547; G: 0


  • Applications denied: A: 17 / 0,209; B: 73 / 3,557; C: 190 / 1,253; D: 23 / 1,326; F: 9 / 5,169; G: 0

Convolutional neural network

In deep learning, a convolutional neural network (CNN, or ConvNet) is a class of deep, feed-forward artificial neural networks, most commonly applied to analyzing visual imagery.

CNNs use a variation of multilayer perceptrons designed to require minimal preprocessing.[1] They are also known as shift invariant or space invariant artificial neural networks (SIANN), based on their shared-weights architecture and translation invariance characteristics.[2][3]

Convolutional networks were inspired by biological processes[4] in that the connectivity pattern between neurons resembles the organization of the animal visual cortex. Individual cortical neurons respond to stimuli only in a restricted region of the visual field known as the receptive field. The receptive fields of different neurons partially overlap such that they cover the entire visual field.

CNNs use relatively little pre-processing compared to other image classification algorithms. This means that the network learns the filters that in traditional algorithms were hand-engineered. This independence from prior knowledge and human effort in feature design is a major advantage.

They have applications in image and video recognition, recommender systems,[5] image classification, medical image analysis, and natural language processing.[6]


A CNN consists of an input and an output layer, as well as multiple hidden layers. The hidden layers of a CNN typically consist of convolutional layers, pooling layers, fully connected layers and normalization layers[citation needed].

Describing the operation as a convolution in neural networks is a matter of convention. Mathematically it is a cross-correlation rather than a convolution. This only has significance for the indices in the matrix, and thus for which weights are placed at which index.
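
To make the distinction concrete, here is a minimal NumPy sketch (written for this article, not taken from any library's implementation): the two operations differ only in whether the kernel is flipped before it is slid over the input.

```python
import numpy as np

def cross_correlate2d(image, kernel):
    """Slide the (unflipped) kernel over the image -- the operation most
    deep-learning frameworks actually compute under the name "convolution"."""
    kh, kw = kernel.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def convolve2d(image, kernel):
    """A true mathematical convolution: flip the kernel on both axes first.
    Only the placement of the weights differs."""
    return cross_correlate2d(image, kernel[::-1, ::-1])
```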

Convolutional

Convolutional layers apply a convolution operation to the input, passing the result to the next layer. The convolution emulates the response of an individual neuron to visual stimuli.[7]

Each convolutional neuron processes data only for its receptive field. Although fully connected feedforward neural networks can be used to learn features as well as classify data, it is not practical to apply this architecture to images. A very high number of neurons would be necessary, even in a shallow (opposite of deep) architecture, due to the very large input sizes associated with images, where each pixel is a relevant variable. For instance, a fully connected layer for a (small) image of size 100 x 100 has 10000 weights for each neuron in the second layer. The convolution operation brings a solution to this problem as it reduces the number of free parameters, allowing the network to be deeper with fewer parameters.[8] For instance, regardless of image size, tiling regions of size 5 x 5, each with the same shared weights, requires only 25 learnable parameters. In this way, it resolves the vanishing or exploding gradients problem in training traditional multi-layer neural networks with many layers by using backpropagation[citation needed].
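
The parameter counts quoted above can be checked with a couple of lines of Python (a sketch using only the sizes mentioned in the paragraph):

```python
# Fully connected: a neuron in the next layer sees every pixel of a
# (small) 100 x 100 image, so it needs one weight per pixel.
weights_per_fc_neuron = 100 * 100      # 10,000 weights per neuron

# Convolutional: one 5 x 5 filter is shared across every tiled region,
# so the layer learns just 25 weights, regardless of image size.
weights_per_shared_filter = 5 * 5      # 25 learnable parameters

print(weights_per_fc_neuron, weights_per_shared_filter)  # 10000 25
```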

Pooling

Convolutional networks may include local or global pooling layers[clarification needed], which combine the outputs of neuron clusters at one layer into a single neuron in the next layer.[9][10] For example, max pooling uses the maximum value from each of a cluster of neurons at the prior layer.[11] Another example is average pooling, which uses the average value from each of a cluster of neurons at the prior layer.[12]

Fully connected

Fully connected layers connect every neuron in one layer to every neuron in another layer. It is in principle the same as the traditional multi-layer perceptron neural network (MLP).

Receptive field

In neural networks, each neuron receives input from some number of locations in the previous layer. In a fully connected layer, each neuron receives input from every element of the previous layer. In a convolutional layer, neurons receive input from only a restricted subarea of the previous layer. Typically the subarea is of a square shape (e.g., size 5 by 5). The input area of a neuron is called its receptive field. So, in a fully connected layer, the receptive field is the entire previous layer. In a convolutional layer, the receptive area is smaller than the entire previous layer.

Weights

Each neuron in a neural network computes an output value by applying some function to the input values coming from the receptive field in the previous layer. The function that is applied to the input values is specified by a vector of weights and a bias (typically real numbers). Learning in a neural network progresses by making incremental adjustments to the biases and weights. The vector of weights and the bias are called a filter and represents some feature of the input (e.g., a particular shape). A distinguishing feature of CNNs is that many neurons share the same filter. This reduces memory footprint because a single bias and a single vector of weights is used across all receptive fields sharing that filter, rather than each receptive field having its own bias and vector of weights.[1]

History

CNN design follows vision processing in living organisms[citation needed].

Receptive fields in the visual cortex

Work by Hubel and Wiesel in the 1950s and 1960s showed that cat and monkey visual cortexes contain neurons that individually respond to small regions of the visual field. Provided the eyes are not moving, the region of visual space within which visual stimuli affect the firing of a single neuron is known as its receptive field[citation needed]. Neighboring cells have similar and overlapping receptive fields[citation needed]. Receptive field size and location varies systematically across the cortex to form a complete map of visual space[citation needed]. The cortex in each hemisphere represents the contralateral visual field[citation needed].

Their 1968 paper[13] identified two basic visual cell types in the brain:

  • simple cells, whose output is maximized by straight edges having particular orientations within their receptive field
  • complex cells, which have larger receptive fields, whose output is insensitive to the exact position of the edges in the field.

Neocognitron

The neocognitron[14] was introduced in 1980.[11] The neocognitron does not require units located at multiple network positions to have the same trainable weights. This idea appears in 1986 in the book version of the original backpropagation paper.[16]:Figure 14 Neocognitrons were developed in 1988 for temporal signals.[clarification needed][17] Their design was improved in 1998,[18] generalized in 2003,[19] and simplified in the same year.[20]

Time delay neural networks

The time delay neural network (TDNN) was the first convolutional network.[21][22]

TDNNs are fixed-size convolutional networks that share weights along the temporal dimension.[23] They allow speech signals to be processed time-invariantly, analogous to the translation invariance offered by CNNs.[22] They were introduced in the early 1980s. The tiling of neuron outputs can cover timed stages.[24]

Trainable weights

A system to recognize hand-written zip code numbers[25] involved convolutions in which the kernel coefficients had been laboriously hand designed.[26]

In 1989, LeCun et al.[26] used back-propagation to learn the convolution kernel coefficients directly from images of hand-written numbers. Learning was thus fully automatic, performed better than manual coefficient design, and was suited to a broader range of image recognition problems and image types.

LeNet-5

LeNet-5, a pioneering 7-level convolutional network by LeCun et al. in 1998[18] that classifies digits, was applied by several banks to recognise hand-written numbers on checks (cheques) digitized in 32x32 pixel images. Processing higher-resolution images requires larger and more numerous convolutional layers, so the technique is constrained by the availability of computing resources.

Shift-invariant neural network

Similarly, a shift invariant neural network was proposed for image character recognition in 1988.[2][3] The architecture and training algorithm were modified in 1991[27] and applied for medical image processing[28] and automatic detection of breast cancer in mammograms.[29]

A different convolution-based design was proposed in 1988[30] for application to decomposition of one-dimensional electromyography convolved signals via de-convolution. This design was modified in 1989 to other de-convolution-based designs.[31][32]

Neural abstraction pyramid

The feed-forward architecture of convolutional neural networks was extended in the neural abstraction pyramid[33] by lateral and feedback connections. The resulting recurrent convolutional network allows for the flexible incorporation of contextual information to iteratively resolve local ambiguities. In contrast to previous models, image-like outputs at the highest resolution were generated.

GPU implementations

Following the 2005 paper that established the value of GPGPU for machine learning,[34] several publications described more efficient ways to train convolutional neural networks using GPUs.[35][36][37][38] In 2011, these methods were refined and implemented on a GPU, with impressive results.[9] In 2012, Ciresan et al. significantly improved on the best performance in the literature for multiple image databases, including the MNIST database, the NORB database, the HWDB1.0 dataset (Chinese characters) and the CIFAR10 dataset (60,000 labeled 32x32 RGB images).[11]

Distinguishing features

While traditional multilayer perceptron (MLP) models were successfully used for image recognition[example needed], due to the full connectivity between nodes they suffer from the curse of dimensionality, and thus do not scale well to higher resolution images.

CNN layers arranged in 3 dimensions

For example, in CIFAR-10, images are only of size 32x32x3 (32 wide, 32 high, 3 color channels), so a single fully connected neuron in a first hidden layer of a regular neural network would have 32 x 32 x 3 = 3,072 weights. A 200x200 image, however, would lead to neurons that have 200 x 200 x 3 = 120,000 weights.

Also, such network architecture does not take into account the spatial structure of data, treating input pixels which are far apart in the same way as pixels that are close together. Thus, full connectivity of neurons is wasteful for purposes such as image recognition that are dominated by spatially local input patterns.

Convolutional neural networks are biologically inspired variants of multilayer perceptrons that are designed to emulate the behavior of a visual cortex[citation needed]. These models mitigate the challenges posed by the MLP architecture by exploiting the strong spatially local correlation present in natural images. As opposed to MLPs, CNNs have the following distinguishing features:

  • 3D volumes of neurons. The layers of a CNN have neurons arranged in 3 dimensions: width, height and depth. The neurons inside a layer are connected to only a small region of the layer before it, called a receptive field. Distinct types of layers, both locally and completely connected, are stacked to form a CNN architecture.
  • Local connectivity: following the concept of receptive fields, CNNs exploit spatial locality by enforcing a local connectivity pattern between neurons of adjacent layers. The architecture thus ensures that the learnt "filters" produce the strongest response to a spatially local input pattern. Stacking many such layers leads to non-linear filters that become increasingly global (i.e. responsive to a larger region of pixel space) so that the network first creates representations of small parts of the input, then from them assembles representations of larger areas.
  • Shared weights: In CNNs, each filter is replicated across the entire visual field. These replicated units share the same parameterization (weight vector and bias) and form a feature map. This means that all the neurons in a given convolutional layer respond to the same feature within their specific response field. Replicating units in this way allows for features to be detected regardless of their position in the visual field, thus constituting the property of translation invariance.

Together, these properties allow CNNs to achieve better generalization on vision problems. Weight sharing dramatically reduces the number of free parameters learned, thus lowering the memory requirements for running the network and allowing the training of larger, more powerful networks.

Building blocks

A CNN architecture is formed by a stack of distinct layers that transform the input volume into an output volume (e.g. holding the class scores) through a differentiable function. A few distinct types of layers are commonly used. These are further discussed below:

Neurons of a convolutional layer (blue), connected to their receptive field (red)

Convolutional layer

The convolutional layer is the core building block of a CNN. The layer's parameters consist of a set of learnable filters (or kernels), which have a small receptive field, but extend through the full depth of the input volume. During the forward pass, each filter is convolved across the width and height of the input volume, computing the dot product between the entries of the filter and the input and producing a 2-dimensional activation map of that filter. As a result, the network learns filters that activate when it detects some specific type of feature at some spatial position in the input.

Stacking the activation maps for all filters along the depth dimension forms the full output volume of the convolution layer. Every entry in the output volume can thus also be interpreted as an output of a neuron that looks at a small region in the input and shares parameters with neurons in the same activation map.
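
A naive NumPy sketch of this forward pass (illustrative; biases are omitted and real implementations use optimized routines). Each filter spans the full input depth and produces one 2-dimensional activation map; the maps are stacked along the depth dimension:

    import numpy as np

    def conv_forward(x, filters, stride=1):
        """Naive convolutional layer forward pass (cross-correlation).

        x:       input volume, shape (H, W, D)
        filters: bank of kernels, shape (N, K, K, D)
        returns: output volume of activation maps, shape (H_out, W_out, N)
        """
        H, W, D = x.shape
        N, K, _, _ = filters.shape
        H_out = (H - K) // stride + 1
        W_out = (W - K) // stride + 1
        out = np.zeros((H_out, W_out, N))
        for n in range(N):                  # one activation map per filter
            for i in range(H_out):
                for j in range(W_out):
                    patch = x[i*stride:i*stride+K, j*stride:j*stride+K, :]
                    out[i, j, n] = np.sum(patch * filters[n])  # dot product
        return out

    # Example: a 7x7 RGB input and 4 filters of size 3x3x3.
    x = np.random.randn(7, 7, 3)
    w = np.random.randn(4, 3, 3, 3)
    print(conv_forward(x, w).shape)   # (5, 5, 4)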

Local connectivity

When dealing with high-dimensional inputs such as images, it is impractical to connect neurons to all neurons in the previous volume because such a network architecture does not take the spatial structure of the data into account. Convolutional networks exploit spatially local correlation by enforcing a local connectivity pattern between neurons of adjacent layers: each neuron is connected to only a small region of the input volume. The extent of this connectivity is a hyperparameter called the receptive field of the neuron. The connections are local in space (along width and height), but always extend along the entire depth of the input volume. Such an architecture ensures that the learnt filters produce the strongest response to a spatially local input pattern.

Spatial arrangement

Three hyperparameters control the size of the output volume of the convolutional layer: the depth, stride and zero-padding.

  • The depth of the output volume controls the number of neurons in a layer that connect to the same region of the input volume. These neurons learn to activate for different features in the input. For example, if the first convolutional layer takes the raw image as input, then different neurons along the depth dimension may activate in the presence of various oriented edges, or blobs of color.
  • Stride controls how depth columns around the spatial dimensions (width and height) are allocated. When the stride is 1 then we move the filters one pixel at a time. This leads to heavily overlapping receptive fields between the columns, and also to large output volumes. When the stride is 2 (or rarely 3 or more) then the filters jump 2 pixels at a time as they slide around. The receptive fields overlap less and the resulting output volume has smaller spatial dimensions.[39]
  • Sometimes it is convenient to pad the input with zeros on the border of the input volume. The size of this padding is a third hyperparameter. Padding provides control of the output volume spatial size. In particular, sometimes it is desirable to exactly preserve the spatial size of the input volume.

The spatial size of the output volume can be computed as a function of the input volume size W, the kernel field size K of the convolutional layer neurons, the stride S with which they are applied, and the amount of zero padding P used on the border. The number of neurons that "fit" in a given volume is given by (W − K + 2P)/S + 1. If this number is not an integer, then the strides are set incorrectly and the neurons cannot be tiled to fit across the input volume in a symmetric way. In general, setting the zero padding to P = (K − 1)/2 when the stride is S = 1 ensures that the input volume and output volume have the same spatial size. It is not always necessary to use all of the neurons of the previous layer; for example, only a portion of the padding may be used.
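
A small helper (an illustration) implementing this formula:

    def conv_output_size(W, K, S, P):
        """Number of output positions along one spatial dimension:
        (W - K + 2P) / S + 1."""
        size = (W - K + 2 * P) / S + 1
        assert size == int(size), "stride does not tile the input symmetrically"
        return int(size)

    print(conv_output_size(W=32, K=5, S=1, P=2))    # 32: P = (K-1)/2 preserves size
    print(conv_output_size(W=227, K=11, S=4, P=0))  # 55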

Parameter sharing

A parameter sharing scheme is used in convolutional layers to control the number of free parameters. It relies on one reasonable assumption: if a patch feature is useful to compute at some spatial position, then it should also be useful to compute at other positions. In other words, denoting a single 2-dimensional slice of depth as a depth slice, we constrain the neurons in each depth slice to use the same weights and bias.

Since all neurons in a single depth slice share the same parameters, then the forward pass in each depth slice of the CONV layer can be computed as a convolution of the neuron's weights with the input volume (hence the name: convolutional layer). Therefore, it is common to refer to the sets of weights as a filter (or a kernel), which is convolved with the input. The result of this convolution is an activation map, and the set of activation maps for each different filter are stacked together along the depth dimension to produce the output volume. Parameter sharing contributes to the translation invariance of the CNN architecture.

Sometimes, the parameter sharing assumption may not make sense. This is especially the case when the input images to a CNN have some specific centered structure, in which we expect completely different features to be learned on different spatial locations. One practical example is when the input are faces that have been centered in the image: we might expect different eye-specific or hair-specific features to be learned in different parts of the image. In that case it is common to relax the parameter sharing scheme, and instead simply call the layer a "locally connected layer".

Pooling layer

Max pooling with a 2x2 filter and stride = 2

Another important concept of CNNs is pooling, which is a form of non-linear down-sampling. There are several non-linear functions to implement pooling among which max pooling is the most common. It partitions the input image into a set of non-overlapping rectangles and, for each such sub-region, outputs the maximum. The intuition is that the exact location of a feature is less important than its rough location relative to other features. The pooling layer serves to progressively reduce the spatial size of the representation, to reduce the number of parameters and amount of computation in the network, and hence to also control overfitting. It is common to periodically insert a pooling layer between successive convolutional layers in a CNN architecture. The pooling operation provides another form of translation invariance.

The pooling layer operates independently on every depth slice of the input and resizes it spatially. The most common form is a pooling layer with filters of size 2x2 applied with a stride of 2, which downsamples every depth slice in the input by 2 along both width and height, discarding 75% of the activations. In this case, every max operation is over 4 numbers. The depth dimension remains unchanged.
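
A NumPy sketch of this 2x2, stride-2 max pooling (illustrative; assumes even height and width):

    import numpy as np

    def max_pool_2x2(x):
        """2x2 max pooling with stride 2 on a (H, W, D) volume.

        Width and height are halved; the depth dimension is unchanged,
        and 75% of the activations are discarded.
        """
        H, W, D = x.shape
        return x.reshape(H // 2, 2, W // 2, 2, D).max(axis=(1, 3))

    x = np.random.randn(4, 4, 3)
    print(max_pool_2x2(x).shape)   # (2, 2, 3)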

In addition to max pooling, the pooling units can use other functions, such as average pooling or L2-norm pooling. Average pooling was often used historically but has recently fallen out of favor compared to max pooling, which works better in practice.[40]

Due to the aggressive reduction in the size of the representation, the trend is towards using smaller filters[41] or discarding the pooling layer altogether.[42]

RoI pooling to size 2x2. In this example, the region proposal (an input parameter) has size 7x5.

Region of Interest pooling (also known as RoI pooling) is a variant of max pooling, in which the output size is fixed and the input rectangle is a parameter.[43]

Pooling is an important component of convolutional neural networks for object detection based on the Fast R-CNN architecture.[44]

ReLU layer

ReLU is the abbreviation of rectified linear unit. This layer applies the non-saturating activation function f(x) = max(0, x). It increases the nonlinear properties of the decision function and of the overall network without affecting the receptive fields of the convolution layer.

Other functions are also used to increase nonlinearity, for example the saturating hyperbolic tangent f(x) = tanh(x), its absolute value f(x) = |tanh(x)|, and the sigmoid function f(x) = (1 + e^−x)^−1. ReLU is often preferred to other functions because it trains the neural network several times faster[45] without a significant penalty to generalisation accuracy.
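
These activations, side by side in NumPy (illustrative):

    import numpy as np

    def relu(x):    return np.maximum(0.0, x)          # f(x) = max(0, x)
    def sigmoid(x): return 1.0 / (1.0 + np.exp(-x))    # f(x) = (1 + e^-x)^-1

    x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
    print(relu(x))     # non-saturating: unbounded for positive inputs
    print(np.tanh(x))  # saturating, range (-1, 1)
    print(sigmoid(x))  # saturating, range (0, 1)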

Fully connected layer

Finally, after several convolutional and max pooling layers, the high-level reasoning in the neural network is done via fully connected layers. Neurons in a fully connected layer have connections to all activations in the previous layer, as seen in regular neural networks. Their activations can hence be computed with a matrix multiplication followed by a bias offset.
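A fully connected layer reduced to that matrix multiplication and bias offset (a minimal sketch; the sizes are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    activations = rng.standard_normal(256)   # all activations of the previous layer
    W = rng.standard_normal((10, 256))       # one row of weights per output neuron
    b = rng.standard_normal(10)

    output = W @ activations + b             # matrix multiplication + bias offset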

Loss layer

The loss layer specifies how training penalizes the deviation between the predicted and true labels and is normally the final layer. Various loss functions appropriate for different tasks may be used there. Softmax loss is used for predicting a single class of K mutually exclusive classes. Sigmoid cross-entropy loss is used for predicting K independent probability values in [0, 1]. Euclidean loss is used for regressing to real-valued labels in (−∞, ∞).
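
A sketch of the softmax loss for K mutually exclusive classes (illustrative, with the usual max-shift for numerical stability):

    import numpy as np

    def softmax_loss(scores, true_class):
        """Cross-entropy loss for one of K mutually exclusive classes."""
        shifted = scores - scores.max()                  # numerical stability
        log_probs = shifted - np.log(np.exp(shifted).sum())
        return -log_probs[true_class]

    scores = np.array([2.0, 1.0, 0.1])   # unnormalized scores for K = 3 classes
    print(softmax_loss(scores, true_class=0))   # ~0.417: class 0 is favored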

Typical CNN architecture

Choosing hyperparameters

CNNs use more hyperparameters than a standard MLP. While the usual rules for learning rates and regularization constants still apply, the following should be kept in mind when optimising.

Number of filters

Since feature map size decreases with depth, layers near the input layer tend to have fewer filters while higher layers can have more. To equalize computation at each layer, the product of the number of feature maps and the number of pixel positions is kept roughly constant across layers. Preserving more information about the input would require keeping the total number of activations (number of feature maps times number of pixel positions) non-decreasing from one layer to the next.

The number of feature maps directly controls the capacity and depends on the number of available examples and task complexity.

Filter shape

Common filter shapes found in the literature vary greatly, and are usually chosen based on the dataset.

The challenge is, thus, to find the right level of granularity so as to create abstractions at the proper scale, given a particular dataset.

Max pooling shape

Typical values are 2x2. Very large input volumes may warrant 4x4 pooling in the lower layers. However, choosing larger shapes will dramatically reduce the dimension of the signal, and may result in excess information loss. Often, non-overlapping pooling windows perform best.[40]

Regularization methods

Main article: Regularization (mathematics)

Regularization is a process of introducing additional information to solve an ill-posed problem or to prevent overfitting. CNNs use various types of regularization.

Empirical

Dropout

Because a fully connected layer occupies most of the parameters, it is prone to overfitting. One method to reduce overfitting is dropout.[46][47] At each training stage, individual nodes are either "dropped out" of the net with probability 1 − p or kept with probability p, so that a reduced network is left; incoming and outgoing edges to a dropped-out node are also removed. Only the reduced network is trained on the data in that stage. The removed nodes are then reinserted into the network with their original weights.

In the training stages, the probability that a hidden node will be dropped is usually 0.5; for input nodes, this should be much lower, intuitively because information is directly lost when input nodes are ignored.

At testing time after training has finished, we would ideally like to find a sample average of all possible 2^n dropped-out networks; unfortunately this is unfeasible for large values of n. However, we can find an approximation by using the full network with each node's output weighted by a factor of p, so that the expected value of the output of any node is the same as in the training stages. This is the biggest contribution of the dropout method: although it effectively generates 2^n neural nets, and as such allows for model combination, at test time only a single network needs to be tested.
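
A minimal sketch of this classic train/test scheme (illustrative; modern implementations often use the equivalent "inverted" scaling at training time instead):

    import numpy as np

    p = 0.5                                  # keep probability for hidden nodes
    rng = np.random.default_rng(0)

    def dropout_train(h):
        mask = rng.random(h.shape) < p       # keep each node with probability p
        return h * mask                      # dropped nodes (and their edges)
                                             # contribute nothing this stage

    def dropout_test(h):
        return h * p                         # full network, outputs weighted by p,
                                             # matching the training-time expectation

    h = np.ones(8)
    print(dropout_train(h))                  # one random thinned sub-network
    print(dropout_test(h))                   # the single network used at test time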

By avoiding training all nodes on all training data, dropout decreases overfitting. The method also significantly improves training speed. This makes model combination practical, even for deep neural nets. The technique seems to reduce node interactions, leading them to learn more robust features that better generalize to new data.

DropConnect

DropConnect[48] is the generalization of dropout in which each connection, rather than each output unit, can be dropped with probability 1 − p. Each unit thus receives input from a random subset of units in the previous layer.

DropConnect is similar to dropout as it introduces dynamic sparsity within the model, but differs in that the sparsity is on the weights, rather than the output vectors of a layer. In other words, the fully connected layer with DropConnect becomes a sparsely connected layer in which the connections are chosen at random during the training stage.
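
The contrast with dropout, in a minimal sketch (illustrative): the random mask is applied to the weights rather than to the layer outputs.

    import numpy as np

    rng = np.random.default_rng(0)
    p = 0.5
    W = rng.standard_normal((4, 8))          # fully connected layer, 8 -> 4
    x = rng.standard_normal(8)

    mask = rng.random(W.shape) < p           # keep each *connection* w.p. p
    y = (W * mask) @ x                       # each unit sees a random subset
                                             # of units in the previous layer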

Stochastic pooling

A major drawback to Dropout is that it does not have the same benefits for convolutional layers, where the neurons are not fully connected.

In stochastic pooling,[49] the conventional deterministic pooling operations are replaced with a stochastic procedure, where the activation within each pooling region is picked randomly according to a multinomial distribution, given by the activities within the pooling region. The approach is hyperparameter free and can be combined with other regularization approaches, such as dropout and data augmentation.

An alternate view of stochastic pooling is that it is equivalent to standard max pooling but with many copies of an input image, each having small local deformations. This is similar to explicit elastic deformations of the input images,[50] which delivers excellent MNIST performance. Using stochastic pooling in a multilayer model gives an exponential number of deformations since the selections in higher layers are independent of those below.

Artificial data

Since the degree of model overfitting is determined by both its power and the amount of training it receives, providing a convolutional network with more training examples can reduce overfitting. Since these networks are usually trained with all available data, one approach is to either generate new data from scratch (if possible) or perturb existing data to create new examples. For example, input images could be asymmetrically cropped by a few percent to create new examples with the same label as the original.[51]
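
A sketch of that cropping perturbation (illustrative; the sizes are arbitrary):

    import numpy as np

    def random_crop(image, crop_h, crop_w, rng):
        """Perturb an existing image into a new example with the same label."""
        H, W, _ = image.shape
        top = rng.integers(0, H - crop_h + 1)
        left = rng.integers(0, W - crop_w + 1)
        return image[top:top + crop_h, left:left + crop_w, :]

    rng = np.random.default_rng(0)
    image = rng.standard_normal((32, 32, 3))
    print(random_crop(image, 30, 30, rng).shape)   # (30, 30, 3)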

Explicit

Early stopping

Main article: Early stopping

One of the simplest methods to prevent overfitting of a network is to simply stop the training before overfitting has had a chance to occur. It comes with the disadvantage that the learning process is halted.

Number of parameters

Another simple way to prevent overfitting is to limit the number of parameters, typically by limiting the number of hidden units in each layer or limiting network depth. For convolutional networks, the filter size also affects the number of parameters. Limiting the number of parameters restricts the predictive power of the network directly, reducing the complexity of the function that it can represent, and thus limits the amount of overfitting. This is equivalent to a "zero norm".

Weight decay

A simple form of added regularizer is weight decay, which adds an additional error, proportional to the sum of weights (L1 norm) or squared magnitude (L2 norm) of the weight vector, to the error at each node. The level of acceptable model complexity can be reduced by increasing the proportionality constant, thus increasing the penalty for large weight vectors.

L2 regularization is the most common form of regularization. It can be implemented by penalizing the squared magnitude of all parameters directly in the objective. The L2 regularization has the intuitive interpretation of heavily penalizing peaky weight vectors and preferring diffuse weight vectors. Due to multiplicative interactions between weights and inputs this has the useful property of encouraging the network to use all of its inputs a little rather than some of its inputs a lot.

L1 regularization is another common form. It is possible to combine L1 with L2 regularization (this is called Elastic net regularization). The L1 regularization leads the weight vectors to become sparse during optimization. In other words, neurons with L1 regularization end up using only a sparse subset of their most important inputs and become nearly invariant to the noisy inputs.
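
A sketch of these penalties added to the data loss (illustrative; setting both coefficients nonzero gives elastic net regularization):

    import numpy as np

    def regularized_loss(data_loss, weights, l1=0.0, l2=1e-4):
        """Data loss plus L1 and/or L2 weight penalties."""
        penalty = sum(l1 * np.abs(w).sum() + l2 * np.square(w).sum()
                      for w in weights)
        return data_loss + penalty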

Max norm constraints

Another form of regularization is to enforce an absolute upper bound on the magnitude of the weight vector for every neuron and use projected gradient descent to enforce the constraint. In practice, this corresponds to performing the parameter update as normal and then enforcing the constraint by clamping the weight vector w of every neuron to satisfy ‖w‖₂ < c. Typical values of c are on the order of 3–4. Some papers report improvements[52] when using this form of regularization.
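
The clamping step, sketched in NumPy (illustrative; applied after the usual parameter update, with one weight vector per row):

    import numpy as np

    def clamp_max_norm(W, c=3.0):
        """Project each neuron's weight vector back inside ||w||_2 <= c."""
        norms = np.linalg.norm(W, axis=1, keepdims=True)   # one norm per neuron
        return W * np.minimum(1.0, c / np.maximum(norms, 1e-12))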

Hierarchical coordinate frames

Pooling loses the precise spatial relationships between high-level parts (such as the nose and mouth in a face image). These relationships are needed for identity recognition. Overlapping the pools, so that each feature occurs in multiple pools, helps retain the information. Translation alone cannot extrapolate the understanding of geometric relationships to a radically new viewpoint, such as a different orientation or scale. On the other hand, people are very good at extrapolating; after seeing a new shape once, they can recognize it from a different viewpoint.[53]

Currently, the common way to deal with this problem is to train the network on transformed data in different orientations, scales, lighting, etc., so that the network can cope with these variations. This is computationally intensive for large data-sets. The alternative is to use a hierarchy of coordinate frames and a group of neurons to represent a conjunction of the shape of the feature and its pose relative to the retina. The pose relative to the retina is the relationship between the coordinate frame of the retina and the coordinate frame of the intrinsic features.[54]

Thus, one way of representing something is to embed the coordinate frame within it. Once this is done, large features can be recognized by using the consistency of the poses of their parts (e.g. nose and mouth poses make a consistent prediction of the pose of the whole face). Using this approach ensures that the higher-level entity (e.g. face) is present when the lower-level entities (e.g. nose and mouth) agree on their prediction of the pose. The vectors of neuronal activity that represent pose ("pose vectors") allow spatial transformations to be modeled as linear operations, which makes it easier for the network to learn the hierarchy of visual entities and generalize across viewpoints. This is similar to the way the human visual system imposes coordinate frames in order to represent shapes.[55]

Applications

Image recognition

CNNs are often used in image recognition systems. In 2012 an error rate of 0.23 percent on the MNIST database was reported.[11] Another paper on using CNN for image classification reported that the learning process was "surprisingly fast"; in the same paper, the best published results as of 2011 were achieved in the MNIST database and the NORB database.[9] Subsequently, a similar CNN called AlexNet[56] won the ImageNet Large Scale Visual Recognition Challenge 2012.

When applied to facial recognition, CNNs achieved a large decrease in error rate.[57] Another paper reported a 97.6 percent recognition rate on "5,600 still images of more than 10 subjects".[4] CNNs were used to assess video quality in an objective way after manual training; the resulting system had a very low root mean square error.[24]

The ImageNet Large Scale Visual Recognition Challenge is a benchmark in object classification and detection, with millions of images and hundreds of object classes. In the ILSVRC 2014,[58] a large-scale visual recognition challenge, almost every highly ranked team used a CNN as its basic framework. The winner, GoogLeNet[59] (the foundation of DeepDream), increased the mean average precision of object detection to 0.439329 and reduced classification error to 0.06656, the best result to date. Its network had more than 30 layers. The performance of convolutional neural networks on the ImageNet tests was close to that of humans.[60] The best algorithms still struggle with objects that are small or thin, such as a small ant on the stem of a flower or a person holding a quill in their hand. They also have trouble with images that have been distorted with filters, an increasingly common phenomenon with modern digital cameras. By contrast, those kinds of images rarely trouble humans. Humans, however, tend to have trouble with other issues: for example, they are not good at classifying objects into fine-grained categories such as the particular breed of dog or species of bird, whereas convolutional neural networks handle this well.

In 2015 a many-layered CNN demonstrated the ability to spot faces from a wide range of angles, including upside down, even when partially occluded, with competitive performance. The network was trained on a database of 200,000 images that included faces at various angles and orientations and a further 20 million images without faces. They used batches of 128 images over 50,000 iterations.[61]

Video analysis

Compared to image data domains, there is relatively little work on applying CNNs to video classification. Video is more complex than images since it has another (temporal) dimension. However, some extensions of CNNs into the video domain have been explored. One approach is to treat space and time as equivalent dimensions of the input and perform convolutions in both time and space.[62][63] Another way is to fuse the features of two convolutional neural networks, one for the spatial and one for the temporal stream.[64][65][66] LSTM units are typically incorporated after the CNN to account for inter-frame or inter-clip dependencies.[67][68] Unsupervised learning schemes for training spatio-temporal features have been introduced, based on Convolutional Gated Restricted Boltzmann Machines[69] and Independent Subspace Analysis.[70]

Natural language processing

CNNs have also been explored for natural language processing. CNN models are effective for various NLP problems and have achieved excellent results in semantic parsing,[71] search query retrieval,[72] sentence modeling,[73] classification,[74] prediction,[75] and other traditional NLP tasks.[76]

Drug discovery

CNNs have been used in drug discovery. Predicting the interaction between molecules and biological proteins can identify potential treatments. In 2015, Atomwise introduced AtomNet, the first deep learning neural network for structure-based rational drug design.[77] The system trains directly on 3-dimensional representations of chemical interactions. Similar to how image recognition networks learn to compose smaller, spatially proximate features into larger, complex structures,[78] AtomNet discovers chemical features, such as aromaticity, sp3 carbons and hydrogen bonding. Subsequently, AtomNet was used to predict novel candidate biomolecules for multiple disease targets, most notably treatments for the Ebola virus[79] and multiple sclerosis.[80]

Health Risk Assessment and Biomarkers of Aging discovery

CNNs can be naturally tailored to analyze a sufficiently large collection of time series, such as week-long human physical activity streams augmented by rich clinical data (including the death register, as provided by, e.g., the NHANES study). A simple CNN was combined with a Cox-Gompertz proportional hazards model and used to produce a proof-of-concept example of digital biomarkers of aging in the form of an all-causes-mortality predictor.[81]

Checkers

CNNs have been used in the game of checkers. From 1999 to 2001, Fogel and Chellapilla published papers showing how a convolutional neural network could learn to play checkers using co-evolution. The learning process did not use prior human professional games, but rather focused on a minimal set of information contained in the checkerboard: the location and type of pieces, and the piece differential[clarify]. Ultimately, the program (Blondie24) was tested on 165 games against players and ranked in the highest 0.4%.[82][83] It also earned a win against the program Chinook at its "expert" level of play.[84]

Go

CNNs have been used in computer Go. In December 2014, Clark and Storkey published a paper showing that a CNN trained by supervised learning from a database of human professional games could outperform GNU Go and win some games against Monte Carlo tree search Fuego 1.1 in a fraction of the time it took Fuego to play.[85] Later it was announced that a large 12-layer convolutional neural network had correctly predicted the professional move in 55% of positions, equalling the accuracy of a 6 dan human player. When the trained convolutional network was used directly to play games of Go, without any search, it beat the traditional search program GNU Go in 97% of games, and matched the performance of the Monte Carlo tree search program Fuego simulating ten thousand playouts (about a million positions) per move.[86]

AlphaGo, the first program to beat the best human player at the time, used a pair of CNNs to drive MCTS: one for choosing moves to try (the "policy network") and one for evaluating positions (the "value network").[87]

Fine-tuning

For many applications, little training data is available. Convolutional neural networks usually require a large amount of training data in order to avoid overfitting. A common technique is to train the network on a larger data set from a related domain. Once the network parameters have converged an additional training step is performed using the in-domain data to fine-tune the network weights. This allows convolutional networks to be successfully applied to problems with small training sets.[88]
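
A minimal transfer-learning sketch of this idea using the Keras API (listed under Common APIs below); the tensors x_small and y_small, the input shape, and the 10-class head are hypothetical placeholders for a small in-domain task:

    from tensorflow import keras

    # Convolutional base pre-trained on a larger, related dataset (ImageNet).
    base = keras.applications.VGG16(weights="imagenet", include_top=False,
                                    input_shape=(224, 224, 3))
    base.trainable = False          # freeze the converged feature extractor

    model = keras.Sequential([
        base,
        keras.layers.Flatten(),
        keras.layers.Dense(256, activation="relu"),
        keras.layers.Dense(10, activation="softmax"),   # hypothetical 10 classes
    ])
    model.compile(optimizer=keras.optimizers.Adam(1e-4),
                  loss="sparse_categorical_crossentropy")
    # model.fit(x_small, y_small, epochs=5)   # fine-tune on the small dataset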

Human interpretable explanations

End-to-end training and prediction are common practice in computer vision. However, human interpretable explanations are required for critical systems such as self-driving cars: "black-box models will not suffice".[89] With recent advances in visual salience and spatial and temporal attention, the most critical spatial regions/temporal instants can be visualized to justify CNN predictions.[90][91]

Extensions

Deep Q-networks

A deep Q-network (DQN) is a type of deep learning model that combines a deep CNN with Q-learning, a form of reinforcement learning. Unlike earlier reinforcement learning agents, DQNs can learn directly from high-dimensional sensory inputs.

Preliminary results were presented in 2014, with an accompanying paper in February 2015.[92] The research described an application to Atari 2600 gaming. Other deep reinforcement learning models preceded it.[93]

Deep belief networks

Main article: Deep belief network

Convolutional deep belief networks (CDBN) have structure very similar to convolutional neural networks and are trained similarly to deep belief networks. Therefore, they exploit the 2D structure of images, like CNNs do, and make use of pre-training like deep belief networks. They provide a generic structure that can be used in many image and signal processing tasks. Benchmark results on standard image datasets like CIFAR[94] have been obtained using CDBNs.[95]

Common libraries

  • Caffe: A popular library for convolutional neural networks. Created by the Berkeley Vision and Learning Center (BVLC). It supports both CPU and GPU. Developed in C++, and has Python and MATLAB wrappers.
  • Deeplearning4j: Deep learning in Java and Scala on multi-GPU-enabled Spark. A general-purpose deep learning library for the JVM production stack running on a C++ scientific computing engine. Allows the creation of custom layers. Integrates with Hadoop and Kafka.
  • deeplearning-hs: Deep learning in Haskell, supports computations with CUDA.
  • MatConvNet: A convnet implementation in MATLAB.
  • MXNet: An open-source deep learning framework which is scalable, including support for multiple GPUs and CPUs in distribution. It supports interfaces in multiple languages (C++, Python, Julia, Matlab, JavaScript, Go, R, Scala, Perl, Wolfram Language).
  • neon: A fast framework for convolutional neural networks and deep learning, with support for GPU and CPU backends. The front-end is in Python, while the fast kernels are written in custom shader assembly. Created by Nervana Systems, which was acquired by Intel.
  • TensorFlow: Apache 2.0-licensed Theano-like library with support for CPU, GPU, Google's proprietary TPU,[96] and mobile devices.
  • TensorLayer (github): A deep learning and reinforcement learning library. It supports both CPU and GPU. Developed in Python.
  • Theano: The reference deep-learning library for Python with an API largely compatible with the popular NumPy library. Allows user to write symbolic mathematical expressions, then automatically generates their derivatives, saving the user from having to code gradients or backpropagation. These symbolic expressions are automatically compiled to CUDA code for a fast, on-the-GPU implementation.
  • Torch (www.torch.ch): A scientific computing framework with wide support for machine learning algorithms, written in C and lua. The main author is Ronan Collobert, and it is now used at Facebook AI Research and Twitter.
  • Dlib: A toolkit for making real world machine learning and data analysis applications in C++.
  • Microsoft Cognitive Toolkit: A deep learning toolkit written by Microsoft with several unique features enhancing scalability over multiple nodes. It supports full-fledged interfaces for training in C++ and Python and with additional support for model inference in C# and Java.

Common APIs

  • roNNie.ai: A lightweight CNN framework intended to promote AI and to help programmers and data scientists work through the difficulties of deep learning.
  • Keras: A high-level API written in Python for TensorFlow and Theano convolutional neural networks.[97]

Popular culture

Convolutional neural networks are mentioned in the 2017 novel Infinity Born.[98]

References

  1. ^ a b LeCun, Yann. "LeNet-5, convolutional neural networks". Retrieved 16 November 2013.
  2. ^ a b Zhang, Wei (1988). "Shift-invariant pattern recognition neural network and its optical architecture". Proceedings of annual conference of the Japan Society of Applied Physics.
  3. ^ a b Zhang, Wei (1990). "Parallel distributed processing model with local space-invariant interconnections and its optical architecture". Applied Optics. 29 (32): 4790–7. Bibcode:1990ApOpt..29.4790Z. doi:10.1364/AO.29.004790. PMID 20577468.
  4. ^ a b Matusugu, Masakazu; Katsuhiko Mori; Yusuke Mitari; Yuji Kaneda (2003). "Subject independent facial expression recognition with robust face detection using a convolutional neural network" (PDF). Neural Networks. 16 (5): 555–559. doi:10.1016/S0893-6080(03)00115-1. Retrieved 17 November 2013.
  5. ^ van den Oord, Aaron; Dieleman, Sander; Schrauwen, Benjamin (2013-01-01). Burges, C. J. C.; Bottou, L.; Welling, M.; Ghahramani, Z.; Weinberger, K. Q., eds. Deep content-based music recommendation (PDF). Curran Associates, Inc. pp. 2643–2651.
  6. ^ Collobert, Ronan; Weston, Jason (2008-01-01). "A Unified Architecture for Natural Language Processing: Deep Neural Networks with Multitask Learning". Proceedings of the 25th International Conference on Machine Learning. ICML '08. New York, NY, USA: ACM: 160–167. doi:10.1145/1390156.1390177. ISBN 978-1-60558-205-4.
  7. ^ "Convolutional Neural Networks (LeNet) – DeepLearning 0.1 documentation". DeepLearning 0.1. LISA Lab. Retrieved 31 August 2013.
  8. ^ Aghdam, Hamed Habibi; Heravi, Elnaz Jahani. Guide to Convolutional Neural Networks: A Practical Application to Traffic-Sign Detection and Classification. Cham, Switzerland. ISBN 9783319575490. OCLC 987790957.
  9. ^ a b c Ciresan, Dan; Ueli Meier; Jonathan Masci; Luca M. Gambardella; Jurgen Schmidhuber (2011). "Flexible, High Performance Convolutional Neural Networks for Image Classification" (PDF). Proceedings of the Twenty-Second International Joint Conference on Artificial Intelligence, Volume Two. 2: 1237–1242. Retrieved 17 November 2013.
  10. ^ Krizhevsky, Alex. "ImageNet Classification with Deep Convolutional Neural Networks" (PDF). Retrieved 17 November 2013.
  11. ^ a b c d Ciresan, Dan; Meier, Ueli; Schmidhuber, Jürgen (June 2012). "Multi-column deep neural networks for image classification". 2012 IEEE Conference on Computer Vision and Pattern Recognition. New York, NY: Institute of Electrical and Electronics Engineers (IEEE): 3642–3649. arXiv:1202.2745. doi:10.1109/CVPR.2012.6248110. ISBN 978-1-4673-1226-4. OCLC 812295155. Retrieved 2013-12-09.
  12. ^ "A Survey of FPGA-based Accelerators for Convolutional Neural Networks", NCAA, 2018
  13. ^ Hubel, D. H.; Wiesel, T. N. (1968-03-01). "Receptive fields and functional architecture of monkey striate cortex". The Journal of Physiology. 195 (1): 215–243. doi:10.1113/jphysiol.1968.sp008455. ISSN 0022-3751. PMC 1557912. PMID 4966457.
  14. ^ LeCun, Yann; Bengio, Yoshua; Hinton, Geoffrey (2015). "Deep learning". Nature. 521 (7553): 436–444. Bibcode:2015Natur.521..436L. doi:10.1038/nature14539. PMID 26017442.
  15. ^ David E. Rumelhart; Geoffrey E. Hinton; Ronald J. Williams (1986). "Chapter 8: Learning Internal Representations by Error Propagation". In Rumelhart, David E.; McClelland, James L. Parallel Distributed Processing, Volume 1 (PDF). MIT Press. pp. 319–362. ISBN 9780262680530.
  16. ^ Homma, Toshiteru; Les Atlas; Robert Marks II (1988). "An Artificial Neural Network for Spatio-Temporal Bipolar Patterns: Application to Phoneme Classification" (PDF). Advances in Neural Information Processing Systems. 1: 31–40.
  17. ^ a b LeCun, Yann; Léon Bottou; Yoshua Bengio; Patrick Haffner (1998). "Gradient-based learning applied to document recognition" (PDF). Proceedings of the IEEE. 86 (11): 2278–2324. doi:10.1109/5.726791. Retrieved October 7, 2016.
  18. ^ S. Behnke. Hierarchical Neural Networks for Image Interpretation, volume 2766 of Lecture Notes in Computer Science. Springer, 2003.
  19. ^ Simard, Patrice, David Steinkraus, and John C. Platt. "Best Practices for Convolutional Neural Networks Applied to Visual Document Analysis." In ICDAR, vol. 3, pp. 958–962. 2003.
  20. ^ Waibel, Alex (December 1987). Phoneme Recognition Using Time-Delay Neural Networks. Meeting of the Institute of Electrical, Information and Communication Engineers (IEICE). Tokyo, Japan.
  21. ^ a b Alexander Waibel et al., "Phoneme Recognition Using Time-Delay Neural Networks", IEEE Transactions on Acoustics, Speech, and Signal Processing, Volume 37, No. 3, pp. 328–339, March 1989.
  22. ^ LeCun, Yann; Bengio, Yoshua (1995). "Convolutional networks for images, speech, and time series". In Arbib, Michael A. The handbook of brain theory and neural networks (Second ed.). The MIT press. pp. 276–278.
  23. ^ a b Le Callet, Patrick; Christian Viard-Gaudin; Dominique Barba (2006). "A Convolutional Neural Network Approach for Objective Video Quality Assessment" (PDF). IEEE Transactions on Neural Networks. 17 (5): 1316–1327. doi:10.1109/TNN.2006.879766. PMID 17001990. Retrieved 17 November 2013.
  24. ^ Denker, J. S.; Gardner, W. R.; Graf, H. P.; Henderson, D.; Howard, R. E.; Hubbard, W.; Jackel, L. D.; Baird, H. S.; and Guyon (1989). Neural network recognizer for hand-written zip code digits. AT&T Bell Laboratories.
  25. ^ a b Y. LeCun, B. Boser, J. S. Denker, D. Henderson, R. E. Howard, W. Hubbard, L. D. Jackel, Backpropagation Applied to Handwritten Zip Code Recognition; AT&T Bell Laboratories
  26. ^ Zhang, Wei (1991). "Error Back Propagation with Minimum-Entropy Weights: A Technique for Better Generalization of 2-D Shift-Invariant NNs". Proceedings of the International Joint Conference on Neural Networks.
  27. ^ Zhang, Wei (1991). "Image processing of human corneal endothelium based on a learning network". Applied Optics. 30 (29): 4211–7. Bibcode:1991ApOpt..30.4211Z. doi:10.1364/AO.30.004211. PMID 20706526.
  28. ^ Zhang, Wei (1994). "Computerized detection of clustered microcalcifications in digital mammograms using a shift-invariant artificial neural network". Medical Physics. 21 (4): 517–24. Bibcode:1994MedPh..21..517Z. doi:10.1118/1.597177. PMID 8058017.
  29. ^ Daniel Graupe, Ruey Wen Liu, George S. Moschytz. "Applications of neural networks to medical signal processing". In Proc. 27th IEEE Decision and Control Conf., pp. 343–347, 1988.
  30. ^ Daniel Graupe, Boris Vern, G. Gruener, Aaron Field, and Qiu Huang. "Decomposition of surface EMG signals into single fiber action potentials by means of neural network". Proc. IEEE International Symp. on Circuits and Systems, pp. 1008–1011, 1989.
  31. ^ Qiu Huang, Daniel Graupe, Yi Fang Huang, Ruey Wen Liu. "Identification of firing patterns of neuronal signals." In Proc. 28th IEEE Decision and Control Conf., pp. 266–271, 1989.
  32. ^ Behnke, Sven (2003). Hierarchical Neural Networks for Image Interpretation (PDF). Lecture Notes in Computer Science. 2766. Springer. doi:10.1007/b11963. ISBN 978-3-540-40722-5.
  33. ^ Dave Steinkraus; Patrice Simard; Ian Buck (2005). "Using GPUs for Machine Learning Algorithms". 12th International Conference on Document Analysis and Recognition (ICDAR 2005). pp. 1115–1119.
  34. ^ Kumar Chellapilla; Sid Puri; Patrice Simard (2006). "High Performance Convolutional Neural Networks for Document Processing". In Lorette, Guy. Tenth International Workshop on Frontiers in Handwriting Recognition. Suvisoft.
  35. ^ Hinton, GE; Osindero, S; Teh, YW (Jul 2006). "A fast learning algorithm for deep belief nets". Neural computation. 18 (7): 1527–54. doi:10.1162/neco.2006.18.7.1527. PMID 16764513.
  36. ^ Bengio, Yoshua; Lamblin, Pascal; Popovici, Dan; Larochelle, Hugo (2007). "Greedy Layer-Wise Training of Deep Networks". Advances in Neural Information Processing Systems: 153–160.
  37. ^ Ranzato, MarcAurelio; Poultney, Christopher; Chopra, Sumit; LeCun, Yann (2007). "Efficient Learning of Sparse Representations with an Energy-Based Model" (PDF). Advances in Neural Information Processing Systems.
  38. ^ "CS231n Convolutional Neural Networks for Visual Recognition". cs231n.github.io. Retrieved 2017-04-25.
  39. ^ a b Scherer, Dominik; Müller, Andreas C.; Behnke, Sven (2010). "Evaluation of Pooling Operations in Convolutional Architectures for Object Recognition" (PDF). Artificial Neural Networks (ICANN), 20th International Conference on. Thessaloniki, Greece: Springer. pp. 92–101.
  40. ^ Graham, Benjamin (2014-12-18). "Fractional Max-Pooling". arXiv:1412.6071 [cs.CV].
  41. ^ Springenberg, Jost Tobias; Dosovitskiy, Alexey; Brox, Thomas; Riedmiller, Martin (2014-12-21). "Striving for Simplicity: The All Convolutional Net". arXiv:1412.6806 [cs.LG].
  42. ^ Grel, Tomasz (2017-02-28). "Region of interest pooling explained". deepsense.io.
  43. ^ Girshick, Ross (2015-09-27). "Fast R-CNN". arXiv:1504.08083 [cs.CV].
  44. ^ Krizhevsky, A.; Sutskever, I.; Hinton, G. E. (2012). "Imagenet classification with deep convolutional neural networks" (PDF). Advances in Neural Information Processing Systems. 1: 1097–1105.
  45. ^ Srivastava, Nitish; C. Geoffrey Hinton; Alex Krizhevsky; Ilya Sutskever; Ruslan Salakhutdinov (2014). "Dropout: A Simple Way to Prevent Neural Networks from overfitting" (PDF). Journal of Machine Learning Research. 15 (1): 1929–1958.
  46. ^ Carlos E. Perez. "A Pattern Language for Deep Learning".
  47. ^ "Regularization of Neural Networks using DropConnect | ICML 2013 | JMLR W&CP". jmlr.org. Retrieved 2015-12-17.
  48. ^ Zeiler, Matthew D.; Fergus, Rob (2013-01-15). "Stochastic Pooling for Regularization of Deep Convolutional Neural Networks". arXiv:1301.3557 [cs.LG].
  49. ^ "Best Practices for Convolutional Neural Networks Applied to Visual Document Analysis – Microsoft Research". research.microsoft.com. Retrieved 2015-12-17.
  50. ^ Hinton, Geoffrey E.; Srivastava, Nitish; Krizhevsky, Alex; Sutskever, Ilya; Salakhutdinov, Ruslan R. (2012). "Improving neural networks by preventing co-adaptation of feature detectors". arXiv:1207.0580 [cs.NE].
  51. ^ "Dropout: A Simple Way to Prevent Neural Networks from Overfitting". jmlr.org. Retrieved 2015-12-17.
  52. ^ Hinton, Geoffrey (1979). "Some demonstrations of the effects of structural descriptions in mental imagery". Cognitive Science. 3 (3): 231–250. doi:10.1016/s0364-0213(79)80008-7.
  53. ^ Rock, Irvin. "The frame of reference." The legacy of Solomon Asch: Essays in cognition and social psychology (1990): 243–268.
  54. ^ J. Hinton, Coursera lectures on Neural Networks, 2012, Url: https://www.coursera.org/learn/neural-networks
  55. ^ Dave Gershgorn (18 June 2018). "The inside story of how AI got good enough to dominate Silicon Valley". Quartz. Retrieved 5 October 2018.
  56. ^ Lawrence, Steve; C. Lee Giles; Ah Chung Tsoi; Andrew D. Back (1997). "Face Recognition: A Convolutional Neural Network Approach". Neural Networks, IEEE Transactions on. 8 (1): 98–113. CiteSeerX 10.1.1.92.5813. doi:10.1109/72.554195.
  57. ^ "ImageNet Large Scale Visual Recognition Competition 2014 (ILSVRC2014)". Retrieved 30 January 2016.
  58. ^ Szegedy, Christian; Liu, Wei; Jia, Yangqing; Sermanet, Pierre; Reed, Scott; Anguelov, Dragomir; Erhan, Dumitru; Vanhoucke, Vincent; Rabinovich, Andrew (2014). "Going Deeper with Convolutions". Computing Research Repository. arXiv:1409.4842. Bibcode:2014arXiv1409.4842S.
  59. ^ Russakovsky, Olga; Deng, Jia; Su, Hao; Krause, Jonathan; Satheesh, Sanjeev; Ma, Sean; Huang, Zhiheng; Karpathy, Andrej; Khosla, Aditya; Bernstein, Michael; Berg, Alexander C.; Fei-Fei, Li (2014). "Image Net Large Scale Visual Recognition Challenge". arXiv:1409.0575 [cs.CV].
  60. ^ "The Face Detection Algorithm Set To Revolutionize Image Search". Technology Review. February 16, 2015. Retrieved 27 October 2017.
  61. ^ Baccouche, Moez; Mamalet, Franck; Wolf, Christian; Garcia, Christophe; Baskurt, Atilla (2011-11-16). "Sequential Deep Learning for Human Action Recognition". In Salah, Albert Ali; Lepri, Bruno. Human Behavior Understanding. Lecture Notes in Computer Science. 7065. Springer Berlin Heidelberg. pp. 29–39. doi:10.1007/978-3-642-25446-8_4. ISBN 978-3-642-25445-1.
  62. ^ Ji, Shuiwang; Xu, Wei; Yang, Ming; Yu, Kai (2013-01-01). "3D Convolutional Neural Networks for Human Action Recognition". IEEE Transactions on Pattern Analysis and Machine Intelligence. 35 (1): 221–231. doi:10.1109/TPAMI.2012.59. ISSN 0162-8828. PMID 22392705.
  63. ^ Huang, Jie; Zhou, Wengang; Zhang, Qilin; Li, Houqiang; Li, Weiping (2018). "Video-based Sign Language Recognition without Temporal Segmentation". arXiv:1801.10111 [cs.CV].
  64. ^ Karpathy, Andrej, et al. "Large-scale video classification with convolutional neural networks." IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2014.
  65. ^ Simonyan, Karen; Zisserman, Andrew (2014). "Two-Stream Convolutional Networks for Action Recognition in Videos". arXiv:1406.2199 [cs.CV]. (2014).
  66. ^ Wang, Le; Duan, Xuhuan; Zhang, Qilin; Niu, Zhenxing; Hua, Gang; Zheng, Nanning (2018-05-22). "Segment-Tube: Spatio-Temporal Action Localization in Untrimmed Videos with Per-Frame Segmentation" (PDF). Sensors. MDPI AG. 18 (5): 1657. doi:10.3390/s18051657. ISSN 1424-8220.
  67. ^ Duan, Xuhuan; Wang, Le; Zhai, Changbo; Zheng, Nanning; Zhang, Qilin; Niu, Zhenxing; Hua, Gang (2018). Joint Spatio-Temporal Action Localization in Untrimmed Videos with Per-Frame Segmentation. 25th IEEE International Conference on Image Processing (ICIP). doi:10.1109/icip.2018.8451692. ISBN 978-1-4799-7061-2.
  68. ^ Taylor, Graham W.; Fergus, Rob; LeCun, Yann; Bregler, Christoph (2010-01-01). "Convolutional Learning of Spatio-temporal Features". Proceedings of the 11th European Conference on Computer Vision: Part VI. ECCV'10. Berlin, Heidelberg: Springer-Verlag: 140–153. ISBN 3-642-15566-9.
  69. ^ Le, Q. V.; Zou, W. Y.; Yeung, S. Y.; Ng, A. Y. (2011-01-01). "Learning Hierarchical Invariant Spatio-temporal Features for Action Recognition with Independent Subspace Analysis". Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition. CVPR '11. Washington, DC, USA: IEEE Computer Society: 3361–3368. doi:10.1109/CVPR.2011.5995496. ISBN 978-1-4577-0394-2.
  70. ^ Grefenstette, Edward; Blunsom, Phil; de Freitas, Nando; Hermann, Karl Moritz (2014-04-29). "A Deep Architecture for Semantic Parsing". arXiv:1404.7296 [cs.CL].
  71. ^ "Learning Semantic Representations Using Convolutional Neural Networks for Web Search – Microsoft Research". research.microsoft.com. Retrieved 2015-12-17.
  72. ^ Kalchbrenner, Nal; Grefenstette, Edward; Blunsom, Phil (2014-04-08). "A Convolutional Neural Network for Modelling Sentences". arXiv:1404.2188 [cs.CL].
  73. ^ Kim, Yoon (2014-08-25). "Convolutional Neural Networks for Sentence Classification". arXiv:1408.5882 [cs.CL].
  74. ^ Collobert, Ronan, and Jason Weston. "A unified architecture for natural language processing: Deep neural networks with multitask learning."Proceedings of the 25th international conference on Machine learning. ACM, 2008.
  75. ^ Collobert, Ronan; Weston, Jason; Bottou, Leon; Karlen, Michael; Kavukcuoglu, Koray; Kuksa, Pavel (2011-03-02). "Natural Language Processing (almost) from Scratch". arXiv:1103.0398 [cs.LG].
  76. ^ Wallach, Izhar; Dzamba, Michael; Heifets, Abraham (2015-10-09). "AtomNet: A Deep Convolutional Neural Network for Bioactivity Prediction in Structure-based Drug Discovery". arXiv:1510.02855 [cs.LG].
  77. ^ Yosinski, Jason; Clune, Jeff; Nguyen, Anh; Fuchs, Thomas; Lipson, Hod (2015-06-22). "Understanding Neural Networks Through Deep Visualization". arXiv:1506.06579 [cs.CV].
  78. ^ "Toronto startup has a faster way to discover effective medicines". The Globe and Mail. Retrieved 2015-11-09.
  79. ^ "Startup Harnesses Supercomputers to Seek Cures". KQED Future of You. Retrieved 2015-11-09.
  80. ^ Tim Pyrkov, Konstantin Slipensky, Mikhail Barg, Alexey Kondrashin, Boris Zhurov, Alexander Zenin, Mikhail Pyatnitskiy, Leonid Menshikov, Sergei Markov, and Peter O. Fedichev (2018). "Extracting biological age from biomedical data via deep learning: too much of a good thing?". Scientific Reports. 8 (1): 5210. doi:10.1038/s41598-018-23534-9. PMID 29581467.CS1 maint: Multiple names: authors list (link)
  81. ^ Chellapilla, K; Fogel, DB (1999). "Evolving neural networks to play checkers without relying on expert knowledge". IEEE Trans Neural Netw. 10 (6): 1382–91. doi:10.1109/72.809083. PMID 18252639.
  82. ^ http://ieeexplore.ieee.org/document/942536/
  83. ^ Fogel, David (2001). Blondie24: Playing at the Edge of AI. San Francisco, CA: Morgan Kaufmann. ISBN 1558607838.
  84. ^ Clark, Christopher; Storkey, Amos (2014). "Teaching Deep Convolutional Neural Networks to Play Go". arXiv:1412.3409 [cs.AI].
  85. ^ Maddison, Chris J.; Huang, Aja; Sutskever, Ilya; Silver, David (2014). "Move Evaluation in Go Using Deep Convolutional Neural Networks". arXiv:1412.6564 [cs.LG].
  86. ^ "AlphaGo – Google DeepMind". Retrieved 30 January 2016.
  87. ^ Durjoy Sen Maitra; Ujjwal Bhattacharya; S.K. Parui, "CNN based common approach to handwritten character recognition of multiple scripts," in Document Analysis and Recognition (ICDAR), 2015 13th International Conference on, vol., no., pp.1021–1025, 23–26 Aug. 2015
  88. ^ "NIPS 2017". Interpretable ML Symposium. 2017-10-20. Retrieved 2018-09-12.
  89. ^ Zang, Jinliang; Wang, Le; Liu, Ziyi; Zhang, Qilin; Hua, Gang; Zheng, Nanning (2018). "Attention-Based Temporal Weighted Convolutional Neural Network for Action Recognition". IFIP Advances in Information and Communication Technology (PDF). Cham: Springer International Publishing. pp. 97–108. doi:10.1007/978-3-319-92007-8_9. ISBN 978-3-319-92006-1. ISSN 1868-4238.
  90. ^ Wang, Le; Zang, Jinliang; Zhang, Qilin; Niu, Zhenxing; Hua, Gang; Zheng, Nanning (2018-06-21). "Action Recognition by an Attention-Aware Temporal Weighted Convolutional Neural Network" (PDF). Sensors. MDPI AG. 18 (7): 1979. doi:10.3390/s18071979. ISSN 1424-8220.
  91. ^ Mnih, Volodymyr; et al. (2015). "Human-level control through deep reinforcement learning". Nature. 518 (7540): 529–533. Bibcode:2015Natur.518..529M. doi:10.1038/nature14236. PMID 25719670.
  92. ^ Sun, R.; Sessions, C. (June 2000). "Self-segmentation of sequences: automatic formation of hierarchies of sequential behaviors". IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics). 30 (3): 403–418. doi:10.1109/3477.846230. ISSN 1083-4419.
  93. ^ "Convolutional Deep Belief Networks on CIFAR-10" (PDF).
  94. ^ Lee, Honglak; Grosse, Roger; Ranganath, Rajesh; Ng, Andrew Y. (1 January 2009). "Convolutional Deep Belief Networks for Scalable Unsupervised Learning of Hierarchical Representations". Proceedings of the 26th Annual International Conference on Machine Learning – ICML '09. ACM: 609–616. doi:10.1145/1553374.1553453. ISBN 9781605585161 – via ACM Digital Library.
  95. ^ Cade Metz (May 18, 2016). "Google Built Its Very Own Chips to Power Its AI Bots". Wired.
  96. ^ "Keras Documentation". keras.io.
  97. ^ Richards, Douglas E. (2017-04-30). Infinity Born. Paragon Press. ISBN 1546406395.

External links[edit]

(Continuation of the preceding year's table; each column shows Number / Average Value)
APPLICATIONS WITHDRAWN                8 / 1,640     58 / 7,263     90 / 9,653     18 / 8,650     14 / 1,461     0
FILES CLOSED FOR INCOMPLETENESS       1 / 4,610     15 / 1,832     25 / 4,154     5 / 4,910      0              0
Aggregated Statistics For Year 2007 (Based on 1 full and 2 partial tracts)
Columns show Number / Average Value for: B) Conventional Home Purchase Loans; C) Refinancings; D) Home Improvement Loans; F) Non-occupant Loans on <5 Family Dwellings (A, B, C & D); G) Loans on Manufactured Home Dwellings (A, B, C & D)

                                      B              C              D              F              G
LOANS ORIGINATED                      506 / 8,898    609 / 6,674    107 / 5,477    69 / 4,726     0
APPLICATIONS APPROVED, NOT ACCEPTED   113 / 7,975    136 / 1,003    25 / 4,558     13 / 7,510     1 / ,760
APPLICATIONS DENIED                   141 / 2,814    294 / 4,242    50 / 7,652     30 / 0,608     2 / 1,615
APPLICATIONS WITHDRAWN                72 / 9,347     139 / 7,031    14 / 2,519     13 / 6,792     0
FILES CLOSED FOR INCOMPLETENESS       19 / 1,269     47 / 4,604     5 / 9,782      3 / 4,647      0
Aggregated Statistics For Year 2006 (Based on 1 full and 2 partial tracts)
Columns show Number / Average Value for: B) Conventional Home Purchase Loans; C) Refinancings; D) Home Improvement Loans; E) Loans on Dwellings For 5+ Families; F) Non-occupant Loans on <5 Family Dwellings (A, B, C & D); G) Loans on Manufactured Home Dwellings (A, B, C & D)

                                      B              C              D              E              F              G
LOANS ORIGINATED                      727 / 2,134    830 / 2,815    164 / 3,629    0              88 / 7,770     1 / ,480
APPLICATIONS APPROVED, NOT ACCEPTED   152 / 5,100    184 / 4,047    42 / 2,936     0              21 / 4,470     1 / ,000
APPLICATIONS DENIED                   161 / 1,151    340 / 5,511    69 / 4,333     0              27 / 5,971     2 / ,815
APPLICATIONS WITHDRAWN                124 / 2,509    241 / 2,043    23 / 6,961     1 / 2,760      17 / 6,299     0
FILES CLOSED FOR INCOMPLETENESS       21 / 1,149     48 / 0,333     1 / ,390       0              3 / 9,730      0
Aggregated Statistics For Year 2005 (Based on 1 full and 2 partial tracts)
Columns show Number / Average Value for: B) Conventional Home Purchase Loans; C) Refinancings; D) Home Improvement Loans; F) Non-occupant Loans on <5 Family Dwellings (A, B, C & D); G) Loans on Manufactured Home Dwellings (A, B, C & D)

                                      B                C                D              F              G
LOANS ORIGINATED                      1,057 / 7,149    1,081 / 4,547    184 / 6,487    109 / 5,509    1 / ,350
APPLICATIONS APPROVED, NOT ACCEPTED   250 / 8,201      138 / 7,661      28 / 4,395     17 / 6,862     2 / 3,110
APPLICATIONS DENIED                   231 / 4,021      296 / 5,590      65 / 9,263     32 / 3,769     1 / ,550
APPLICATIONS WITHDRAWN                223 / 4,889      262 / 6,242      43 / 3,679     16 / 8,460     0
FILES CLOSED FOR INCOMPLETENESS       27 / 2,403       68 / 5,243       7 / 8,517      5 / 9,288      0
Aggregated Statistics For Year 2004 (Based on 1 full and 2 partial tracts)
Columns show Number / Average Value for: B) Conventional Home Purchase Loans; C) Refinancings; D) Home Improvement Loans; E) Loans on Dwellings For 5+ Families; F) Non-occupant Loans on <5 Family Dwellings (A, B, C & D); G) Loans on Manufactured Home Dwellings (A, B, C & D)

                                      B                C                D              E               F              G
LOANS ORIGINATED                      1,140 / 2,607    1,033 / 6,949    142 / 3,439    2 / ,818,925    115 / 4,078    1 / 4,700
APPLICATIONS APPROVED, NOT ACCEPTED   174 / 7,938      142 / 6,679      19 / 1,018     0               14 / 8,419     1 / ,080
APPLICATIONS DENIED                   213 / 2,510      274 / 6,239      51 / 8,512     0               32 / 8,060     2 / 3,560
APPLICATIONS WITHDRAWN                149 / 5,828      199 / 3,388      17 / 5,032     0               13 / 7,485     1 / ,000
FILES CLOSED FOR INCOMPLETENESS       44 / 9,380       82 / 8,167       6 / 4,452      0               10 / 3,706     0
Aggregated Statistics For Year 2003 (Based on 1 full and 2 partial tracts)
Columns show Number / Average Value for: A) FHA, FSA/RHS & VA Home Purchase Loans; B) Conventional Home Purchase Loans; C) Refinancings; D) Home Improvement Loans; F) Non-occupant Loans on <5 Family Dwellings (A, B, C & D)

                                      A             B              C                D             F
LOANS ORIGINATED                      3 / 2,667     851 / 8,420    2,189 / 9,564    83 / ,912     106 / 1,955
APPLICATIONS APPROVED, NOT ACCEPTED   1 / 6,000     120 / 2,317    246 / 3,850      19 / 3,732    21 / 2,820
APPLICATIONS DENIED                   1 / 3,000     108 / 9,749    313 / 2,123      46 / ,052     19 / 9,942
APPLICATIONS WITHDRAWN                0             110 / 3,197    290 / 4,332      9 / 6,819     19 / 0,634
FILES CLOSED FOR INCOMPLETENESS       1 / 3,000     24 / 0,870     45 / 3,976       5 / ,234      2 / 0,825
Aggregated Statistics For Year 1999 (Based on 1 full and 1 partial tract)
Columns show Number / Average Value for: A) FHA, FSA/RHS & VA Home Purchase Loans; B) Conventional Home Purchase Loans; C) Refinancings; D) Home Improvement Loans; F) Non-occupant Loans on <5 Family Dwellings (A, B, C & D)

                                      A             B              C              D             F
LOANS ORIGINATED                      6 / 0,925     173 / 9,148    164 / 0,420    24 / ,972     11 / 0,585
APPLICATIONS APPROVED, NOT ACCEPTED   1 / 7,300     23 / 2,698     27 / 3,014     12 / ,082     2 / 2,435
APPLICATIONS DENIED                   0             16 / 4,314     38 / 2,413     7 / ,209      1 / 8,670
APPLICATIONS WITHDRAWN                0             27 / 9,874     31 / 0,426     1 / 8,080     0
FILES CLOSED FOR INCOMPLETENESS       0             2 / 6,475      9 / 5,758      0             0

Detailed HMDA statistics for the following Tracts: 0307.01, 0307.02, 0307.03

Private Mortgage Insurance Companies
Aggregated Statistics For Year 2009 (Based on 1 full and 2 partial tracts)
Columns show Number / Average Value for: A) Conventional Home Purchase Loans; B) Refinancings

                                      A             B
LOANS ORIGINATED                      16 / 3,024    12 / 9,512
APPLICATIONS APPROVED, NOT ACCEPTED   5 / 3,488     5 / 9,800
APPLICATIONS DENIED                   2 / 9,785     4 / 2,183
APPLICATIONS WITHDRAWN                2 / 5,560     1 / 4,460
FILES CLOSED FOR INCOMPLETENESS       0             1 / 7,080
Aggregated Statistics For Year 2008 (Based on 1 full and 2 partial tracts)
Columns show Number / Average Value for: A) Conventional Home Purchase Loans; B) Refinancings; C) Non-occupant Loans on <5 Family Dwellings (A & B)

                                      A             B            C
LOANS ORIGINATED                      37 / 7,255    5 / 5,840    1 / 5,180
APPLICATIONS APPROVED, NOT ACCEPTED   15 / 0,781    3 / 3,393    2 / 9,750
APPLICATIONS DENIED                   4 / 0,415     3 / 4,553    1 / 2,000
APPLICATIONS WITHDRAWN                1 / 2,000     0            0
FILES CLOSED FOR INCOMPLETENESS       0             0            0
Aggregated Statistics For Year 2007 (Based on 1 full and 2 partial tracts)
Columns show Number / Average Value for: A) Conventional Home Purchase Loans; B) Refinancings; C) Non-occupant Loans on <5 Family Dwellings (A & B)

                                      A             B            C
LOANS ORIGINATED                      13 / 7,476    3 / 5,213    2 / 5,555
APPLICATIONS APPROVED, NOT ACCEPTED   2 / 7,510     1 / 8,140    0
APPLICATIONS DENIED                   1 / 5,000     1 / 2,000    1 / 5,000
APPLICATIONS WITHDRAWN                1 / 5,000     1 / 8,880    0
FILES CLOSED FOR INCOMPLETENESS       0             0            0
Aggregated Statistics For Year 2006 (Based on 1 full and 2 partial tracts)
Columns show Number / Average Value for: A) Conventional Home Purchase Loans; B) Refinancings

                                      A             B
LOANS ORIGINATED                      8 / 8,508     2 / 7,875
APPLICATIONS APPROVED, NOT ACCEPTED   0             3 / 0,330
APPLICATIONS DENIED                   1 / 0,260     0
APPLICATIONS WITHDRAWN                1 / 0,000     0
FILES CLOSED FOR INCOMPLETENESS       0             0
Aggregated Statistics For Year 2005 (Based on 1 full and 2 partial tracts)
Columns show Number / Average Value for: A) Conventional Home Purchase Loans; B) Refinancings; C) Non-occupant Loans on <5 Family Dwellings (A & B)

                                      A             B            C
LOANS ORIGINATED                      5 / 8,748     8 / 0,511    1 / 1,850
APPLICATIONS APPROVED, NOT ACCEPTED   3 / 4,600     1 / 1,810    1 / 3,800
APPLICATIONS DENIED                   0             1 / 6,030    0
APPLICATIONS WITHDRAWN                0             1 / 2,230    0
FILES CLOSED FOR INCOMPLETENESS       0             1 / 1,810    0
Aggregated Statistics For Year 2004 (Based on 1 full and 2 partial tracts)
Columns show Number / Average Value for: A) Conventional Home Purchase Loans; B) Refinancings; C) Non-occupant Loans on <5 Family Dwellings (A & B)

                                      A             B            C
LOANS ORIGINATED                      21 / 4,393    9 / 4,292    2 / 6,145
APPLICATIONS APPROVED, NOT ACCEPTED   9 / 8,346     6 / 0,808    1 / 2,120
APPLICATIONS DENIED                   0             0            0
APPLICATIONS WITHDRAWN                2 / 9,900     1 / 8,640    0
FILES CLOSED FOR INCOMPLETENESS       1 / 8,960     0            0
Aggregated Statistics For Year 2003 (Based on 1 full and 2 partial tracts)
Columns show Number / Average Value for: A) Conventional Home Purchase Loans; B) Refinancings; C) Non-occupant Loans on <5 Family Dwellings (A & B)

                                      A             B             C
LOANS ORIGINATED                      33 / 7,937    13 / 9,359    3 / 0,943
APPLICATIONS APPROVED, NOT ACCEPTED   8 / 4,015     9 / 9,276     1 / 8,170
APPLICATIONS DENIED                   1 / 3,000     0             0
APPLICATIONS WITHDRAWN                3 / 2,727     4 / 4,928     0
FILES CLOSED FOR INCOMPLETENESS       0             0             0
Aggregated Statistics For Year 1999 (Based on 1 partial tract)
Columns show Number / Average Value for: A) Conventional Home Purchase Loans; B) Refinancings

                                      A             B
LOANS ORIGINATED                      29 / 9,666    12 / 7,706
APPLICATIONS APPROVED, NOT ACCEPTED   3 / 8,510     2 / 0,385
APPLICATIONS DENIED                   0             1 / 6,580
APPLICATIONS WITHDRAWN                1 / 7,990     0
FILES CLOSED FOR INCOMPLETENESS       0             0

Detailed PMIC statistics for the following Tracts: 0307.01, 0307.02, 0307.03

Drinking water stations with addresses in El Dorado Hills that have no violations reported:

  • HUNTINGTON MOBILE HOME PARK WATER SYSTEM (Address: 2201 FRANCISCO DR SUITE 140-380, Serves LA, Population served: 291, Primary Water Source Type: Purchased surface water)
  • AUBURN RIDGE WOODS (Population served: 131, Primary Water Source Type: Groundwater)

2006 National Fire Incident Reporting System Incidents:

[Chart: Incident types - El Dorado Hills]

See full 2006 National Fire Incident Reporting System statistics for El Dorado Hills, CA

Most common first names in El Dorado Hills, CA among deceased individuals
Name       Count   Lived (average)
John       34      79.1 years
Robert     29      79.0 years
Mary       25      82.1 years
William    25      76.0 years
James      21      70.0 years
Margaret   20      85.6 years
Charles    17      77.1 years
Dorothy    15      82.6 years
Donald     14      75.1 years
George     11      78.8 years
Most common last names in El Dorado Hills, CA among deceased individuals
Last name   Count   Lived (average)
Smith       11      79.9 years
Davis       7       86.1 years
Williams    7       78.9 years
Wilson      7       73.7 years
Anderson    6       80.3 years
Clark       6       78.4 years
Martin      6       82.1 years
Johnson     5       80.6 years
Brown       5       72.4 years
Jones       5       79.2 years

Most commonly used house heating fuel:

  • Utility gas (60%)
  • Electricity (27%)
  • Bottled, tank, or LP gas (9%)
  • Wood (2%)
  • Solar energy (2%)
Businesses in El Dorado Hills, CA
Name                           Count   Name         Count
Baskin-Robbins                 1       McDonald's   1
Big O Tires                    1       RadioShack   1
CVS                            1       Safeway      1
Circle K                       1       Starbucks    4
Cold Stone Creamery            1       Subway       3
DHL                            1       T-Mobile     1
FedEx                          8       Taco Bell    1
Firestone Complete Auto Care   1       Target       1
GNC                            1       U-Haul       1
Holiday Inn                    1       UPS          4
Jamba Juice                    1       Vons         1
Browse common businesses in El Dorado Hills, CA

El Dorado Hills compared to California state average:

  • Median household income above state average.
  • Unemployed percentage significantly below state average.
  • Hispanic race population percentage below state average.
  • Median age significantly above state average.
  • Renting percentage significantly below state average.
  • Length of stay since moving in significantly below state average.
  • Number of rooms per house significantly above state average.
  • House age significantly below state average.
  • Percentage of population with a bachelor's degree or higher significantly above state average.

El Dorado Hills on our top lists:

  • #96 on the list of "Top 101 cities with the most residents born in Iran (population 500+)"
  • #19 on the list of "Top 101 counties with the lowest number of births per 1000 residents 2007-2013"
  • #54 on the list of "Top 101 counties with the largest decrease in the number of births per 1000 residents 2000-2006 to 2007-2013 (pop 50,000+)"
  • #84 on the list of "Top 101 counties with highest percentage of residents voting for 3rd party candidates in the 2012 Presidential Election (pop. 50,000+)"
There are 176 pilots and 89 other airmen in this city.
Top Patent Applicants

  • Hong Jiang (51)
  • Hong Li (49)
  • Manuel Antonio D'Abreu (45)
  • Mikal C. Hunsaker (29)
  • Altug Koker (23)
  • Ashish A. Pandya (22)
  • David J. Zimmerman (20)
  • Shekoufeh Qawami (15)
  • Timothy Andrew Lewis (14)
  • Sajol Ghoshal (13)

Total of 970 patent applications in 2008-2018.


