Search results for: Michael Liu
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 663

153 Airborne CO₂ Lidar Measurements for Atmospheric Carbon and Transport: America (ACT-America) Project and Active Sensing of CO₂ Emissions over Nights, Days, and Seasons 2017-2018 Field Campaigns

Authors: Joel F. Campbell, Bing Lin, Michael Obland, Susan Kooi, Tai-Fang Fan, Byron Meadows, Edward Browell, Wayne Erxleben, Doug McGregor, Jeremy Dobler, Sandip Pal, Christopher O'Dell, Ken Davis

Abstract:

The Active Sensing of CO₂ Emissions over Nights, Days, and Seasons (ASCENDS) CarbonHawk Experiment Simulator (ACES) is a NASA Langley Research Center instrument funded by NASA’s Science Mission Directorate that seeks to advance technologies critical to measuring atmospheric column carbon dioxide (CO₂) mixing ratios in support of the NASA ASCENDS mission. The ACES instrument, an Intensity-Modulated Continuous-Wave (IM-CW) lidar, was designed for high-altitude aircraft operations and can be directly applied to space instrumentation to meet the ASCENDS mission requirements. The ACES design demonstrates advanced technologies critical for developing an airborne simulator and a spaceborne instrument with lower demands on platform size, mass, and power, and with improved performance. Atmospheric Carbon and Transport – America (ACT-America) is an Earth Venture Suborbital-2 (EVS-2) mission sponsored by the Earth Science Division of NASA’s Science Mission Directorate. A major objective is to enhance knowledge of the sources/sinks and transport of atmospheric CO₂ through the application of remote and in situ airborne measurements of CO₂ and other atmospheric properties on regional spatial and temporal scales. ACT-America consists of five campaigns to measure regional carbon and evaluate transport under various meteorological conditions in three regions of the Continental United States. Regional CO₂ distributions in the lower atmosphere were observed from the C-130 aircraft by the Harris Corp. Multi-Frequency Fiber Laser Lidar (MFLL) and the ACES lidar. The airborne lidars provide unique data that complement the more traditional in situ sensors. This presentation shows the applications of CO₂ lidars in support of these science needs.

Keywords: CO₂ measurement, IM-CW, CW lidar, laser spectroscopy

Procedia PDF Downloads 164
152 Seagrass Biomass Distribution in Mangrove Fringed Creeks of Gazi Bay, Kenya

Authors: Gabriel A. Juma, Adiel M. Magana, Githaiga N. Michael, James G. Kairo

Abstract:

Seagrass meadows are important carbon sinks; understanding this role and conserving these meadows provides opportunities for their application in climate change mitigation and adaptation. This study aimed to quantify the seagrass contribution to ecosystem carbon at Gazi Bay by comparing carbon stocks in the seagrass beds of two mangrove-fringed creeks of the bay. Specifically, the objectives were to assess the distribution and abundance of seagrass in the fringed creeks and to estimate above- and below-ground biomass. The results will be added to the mangrove and open-bay carbon in estimating the total ecosystem carbon of Gazi Bay. A stratified random sampling strategy was applied. Transects were laid perpendicular to the waterline at intervals of 50 m, from the upper region near the mangroves to the deeper end of the creek, across the seagrass meadows. Along these transects, 0.25 m² quadrats were laid at 10 m intervals to assess the distribution and composition of seagrasses in the creeks. A total of 80 plots were sampled. Above-ground biomass was sampled by harvesting all seagrass material within the quadrat, while four sediment cores were obtained, one from each quarter of the quadrat, and sorted into necromass, rhizomes and roots to determine below-ground biomass. Samples were cleaned and oven-dried in the laboratory for 72 hours at 60 °C. Carbon stock was determined by multiplying dry biomass by a carbon conversion factor of 0.34. For all statistical tests, the significance level was set at α = 0.05. Eight species of seagrass were encountered in the western creek (WC) and seven in the eastern creek (EC). Based on importance value, the dominant species in the WC were Cymodocea rotundata and Halodule uninervis, while Thalassodendron ciliatum and Enhalus acoroides dominated the EC. Seagrass cover was 67.97% in the EC compared to 56.45% in the WC.
There was a significant difference in the abundance of seagrass species between the two creeks (t = 1.97, d.f. = 35, p < 0.05). Similarly, there were significant differences in total seagrass biomass (t = -8.44, d.f. = 53, p < 0.05) and species composition (F(7,79) = 14.6, p < 0.05) between the two creeks. Mean seagrass carbon in the creeks was 7.25 ± 4.2 Mg C ha⁻¹ (range: 4.1 - 12.9 Mg C ha⁻¹). The findings reveal variations in the biomass stocks of the two creeks of Gazi Bay, which have differing biophysical features. Habitat heterogeneity between the creeks contributes to the variation in seagrass abundance and biomass stocking. This improved understanding of these ecosystems supports the establishment of seagrass carbon offset projects for livelihood improvement and increased conservation.
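As a simple numerical illustration of the biomass-to-carbon conversion described above (the biomass figure below is invented for illustration; only the 0.34 conversion factor is from the abstract):

```python
# Illustrative sketch of the conversion step: carbon stock is estimated
# as dry biomass multiplied by a carbon conversion factor of 0.34.

CARBON_CONVERSION_FACTOR = 0.34  # fraction of dry biomass assumed to be carbon

def biomass_to_carbon(dry_biomass_mg_per_ha: float) -> float:
    """Convert dry biomass (Mg ha^-1) to carbon stock (Mg C ha^-1)."""
    return dry_biomass_mg_per_ha * CARBON_CONVERSION_FACTOR

# Hypothetical plot with 21.3 Mg ha^-1 of dry seagrass biomass:
carbon_stock = biomass_to_carbon(21.3)  # ~7.24 Mg C ha^-1
```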

Keywords: seagrass, above-ground, below-ground, creeks, Gazi Bay

Procedia PDF Downloads 132
151 The Changing Landscape of Fire Safety in Covered Car Parks with the Arrival of Electric Vehicles

Authors: Matt Stallwood, Michael Spearpoint

Abstract:

In 2020, the UK government announced that sales of new petrol and diesel cars would end in 2030, and battery-powered cars made up 1 in 8 new cars sold in 2021 – more than the total from the previous five years. The guidance across the UK for the fire safety design of covered car parks is changing in response to the projected rapid growth in electric vehicle (EV) use. This paper discusses the current knowledge on the fire safety concerns posed by EVs, in particular those powered by lithium-ion batteries, when considering the likelihood of vehicle ignition, fire severity and spread of fire to other vehicles. The paper builds on previous work that has investigated the frequency of fires starting in cars powered by internal combustion engines (ICE), the hazard posed by such fires in covered car parks and the potential for neighboring vehicles to become involved in an incident. Historical data has been used to determine the ignition frequency of ICE car fires, whereas such data is scarce when it comes to EV fires. Should a fire occur, then the fire development has conventionally been assessed to match a ‘medium’ growth rate and to have a 95th percentile peak heat release of 9 MW. The paper examines recent literature in which researchers have measured the burning characteristics of EVs to assess whether these values need to be changed. These findings are used to assess the risk posed by EVs when compared to ICE vehicles. The paper examines what new design guidance is being issued by various organizations across the UK, such as fire and rescue services, insurers, local government bodies and regulators and discusses the impact these are having on the arrangement of parking bays, particularly in residential and mixed-use buildings. 
For example, the paper illustrates how updated guidance published by the Fire Protection Association (FPA) on the installation of sprinkler systems has increased the hazard classification of parking buildings, which can have a considerable impact on the feasibility of a building meeting all its design intents when specifying water supply tanks. Guidance on the provision of smoke ventilation systems and on structural fire resistance is also presented. The paper points to where further research is needed on the fire safety risks posed by EVs in covered car parks. This will ensure that any guidance is commensurate with the need to provide an adequate level of life and property safety in the built environment.
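The 'medium' growth rate mentioned above refers to the conventional t-squared design fire, Q(t) = αt². A minimal sketch, assuming the standard 'medium' growth coefficient α = 0.01172 kW/s² (a conventional design value, not taken from this paper) together with the 9 MW peak quoted in the abstract:

```python
import math

# t-squared design fire: Q(t) = alpha * t^2, capped at the peak HRR.
ALPHA_MEDIUM = 0.01172   # kW/s^2, conventional 'medium' growth coefficient
PEAK_HRR_KW = 9000.0     # 95th percentile peak heat release rate (9 MW)

def heat_release(t_s: float) -> float:
    """Heat release rate (kW) at time t seconds, capped at the peak."""
    return min(ALPHA_MEDIUM * t_s ** 2, PEAK_HRR_KW)

# Time for a medium-growth fire to reach the 9 MW peak (~15 minutes):
t_peak = math.sqrt(PEAK_HRR_KW / ALPHA_MEDIUM)
```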

Keywords: covered car parks, electric vehicles, fire safety, risk

Procedia PDF Downloads 73
150 Feminine Gender Identity in Nigerian Music Education: Trends, Challenges and Prospects

Authors: Julius Oluwayomi Oluwadamilare, Michael Olutayo Olatunji

Abstract:

In traditional African societies, women have always played the role of a teacher, albeit informally. This is evident in the upbringing of their babies. As mothers, they serve as the first teachers, instructing their wards through day-to-day activities. Furthermore, women play the role of musician during naming ceremonies, in the singing of lullabies, during initiation rites of adolescent boys and girls into adulthood, and in preparing their children, daughters and sons alike, for marriage. They also perform this role during religious and cultural activities, chieftaincy title/coronation ceremonies, the singing of dirges during funeral ceremonies, and so forth. This traditional role puts African/Nigerian women at a vantage point to contribute maximally to the teaching and learning of music at every level of education. The need for more women in the field of music education in Nigeria cannot be overemphasized. Today, gender equality is a major discourse in most countries of the world, Nigeria included. Statistical data in the fields of education and music education reveal a high ratio of male to female teachers/lecturers in Nigerian tertiary institutions: approximately 80% male to 20% female. This paper therefore examines feminine gender in Nigerian music education by tracing the involvement of women in musical practice from the pre-colonial to the post-colonial periods. The study employed both primary and secondary sources of data collection. The primary source comprised interviews conducted with 19 music lecturers from 8 purposively selected tertiary institutions across 4 geo-political zones of Nigeria. In addition, the observation method was employed in the selected institutions.
The results show, inter alia, that although there is a remarkable improvement in the rate of admission of female students into the music programmes of Nigerian tertiary institutions, there is still an imbalance in job placement in these institutions, especially in the Colleges of Education, which are the main focus of this research. Religious and socio-cultural factors are largely traceable to this development. The paper recommends that more female music teachers be employed in Nigerian tertiary institutions, in line with the provisions of the Millennium Development Goals (MDGs) of the Federal Republic of Nigeria.

Keywords: gender, education, music, women

Procedia PDF Downloads 208
149 Structuring Highly Iterative Product Development Projects by Using Agile-Indicators

Authors: Guenther Schuh, Michael Riesener, Frederic Diels

Abstract:

Nowadays, manufacturing companies face the challenge of meeting heterogeneous customer requirements in short product life cycles with a variety of product functions. Some functional requirements often remain unknown until late stages of product development. One way to handle these uncertainties is the highly iterative product development (HIP) approach. By structuring the development project as a highly iterative process, this method provides customer-oriented and marketable products. Initial approaches exist for combined, hybrid models that integrate deterministic-normative methods such as the Stage-Gate process with empirical-adaptive development methods such as Scrum at the project-management level. However, the question of which development scopes are preferably realized with empirical-adaptive rather than deterministic-normative approaches remains largely unconsidered. In this context, a development scope constitutes a self-contained section of the overall development objective. This paper therefore focuses on a methodology that deals with the uncertainty of requirements within the early development stages and the corresponding selection of the most appropriate development approach. For this purpose, internal influencing factors such as a company’s technological capability, prototype manufacturability and the potential solution space, as well as external factors such as market accuracy, relevance and volatility, are analyzed and combined into an Agile-Indicator. The Agile-Indicator is derived in three steps. First, each internal and external factor is rated in terms of its importance for the overall development task. Second, each requirement is evaluated against every internal and external factor with respect to its suitability for empirical-adaptive development. Finally, the totals of the internal and external sides are combined into the Agile-Indicator.
Thus, the Agile-Indicator constitutes a company-specific and application-related criterion on which the allocation of empirical-adaptive and deterministic-normative development scopes can be based. In a final step, this indicator is used for a specific clustering of development scopes by applying the fuzzy c-means (FCM) clustering algorithm. The FCM method determines sub-clusters within functional clusters based on the empirical-adaptive environmental impact of the Agile-Indicator. By means of the methodology presented in this paper, it is possible to classify requirements that are subject to market uncertainty into empirical-adaptive or deterministic-normative development scopes.
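The fuzzy c-means step can be illustrated with a generic sketch. This is not the authors' implementation: the one-dimensional Agile-Indicator scores, the cluster count, and the fuzzifier m = 2 below are all invented for illustration; only the use of FCM itself comes from the abstract.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, max_iter=100, tol=1e-5, seed=0):
    """Minimal fuzzy c-means: returns cluster centers and the membership
    matrix U (one row per sample; each row sums to 1)."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)          # normalize initial memberships
    for _ in range(max_iter):
        Um = U ** m                            # fuzzified memberships
        centers = (Um.T @ X) / Um.sum(axis=0)[:, None]
        dist = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        dist = np.fmax(dist, 1e-12)            # guard against division by zero
        # Standard FCM membership update: u_ij proportional to d_ij^(-2/(m-1))
        inv = dist ** (-2.0 / (m - 1.0))
        new_U = inv / inv.sum(axis=1, keepdims=True)
        if np.abs(new_U - U).max() < tol:
            return centers, new_U
        U = new_U
    return centers, U

# Hypothetical Agile-Indicator scores for eight development scopes:
scores = np.array([[0.1], [0.2], [0.15], [0.3], [9.8], [10.1], [9.9], [9.7]])
centers, U = fuzzy_c_means(scores, n_clusters=2)
labels = U.argmax(axis=1)   # crisp cluster assignment per development scope
```

In this toy run, the four low-scoring scopes (candidates for deterministic-normative development) and the four high-scoring scopes (candidates for empirical-adaptive development) end up in separate clusters.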

Keywords: agile, highly iterative development, agile-indicator, product development

Procedia PDF Downloads 247
148 Transparency Obligations under the AI Act Proposal: A Critical Legal Analysis

Authors: Michael Lognoul

Abstract:

In April 2021, the European Commission released its AI Act Proposal, the first policy proposal at the European Union level to target AI systems comprehensively, in a horizontal manner. This Proposal notably aims to achieve an ecosystem of trust in the European Union, based on respect for fundamental rights, regarding AI. Among many other requirements, the AI Act Proposal aims to impose several generic transparency obligations on all AI systems to the benefit of natural persons facing those systems (e.g. information on the AI nature of systems, in case of an interaction with a human). The Proposal also provides for more stringent transparency obligations, specific to AI systems that qualify as high-risk, to the benefit of their users, notably on the characteristics, capabilities, and limitations of the AI systems they use. Against that background, this research firstly presents all such transparency requirements in turn, as well as related obligations, such as the proposed obligations on record keeping. Secondly, it focuses on a legal analysis of their scope of application, the content of the obligations, and their practical implications. On the scope of the transparency obligations tailored for high-risk AI systems, the research notes that it seems relatively narrow, given the proposed legal definition of the notion of users of AI systems. Hence, where end-users do not qualify as users, they may only receive very limited information. This element might raise concern regarding the objective of the Proposal. On the content of the transparency obligations, the research highlights that the information that should benefit users of high-risk AI systems is both very broad and specific, from a technical perspective. Therefore, the information required under those obligations seems to create, prima facie, an adequate framework to ensure trust for users of high-risk AI systems.
However, on the practical implications of these transparency obligations, the research notes that concern arises due to the potential illiteracy of high-risk AI system users. They might not have sufficient technical expertise to fully understand the information provided to them, despite the wording of the Proposal, which requires that information be comprehensible to its recipients (i.e. users). On this matter, the research points out that there could be, more broadly, an important divergence between the level of detail of the information required by the Proposal and the level of expertise of users of high-risk AI systems. As a conclusion, the research provides policy recommendations to tackle (part of) the issues highlighted. It notably recommends broadening the scope of transparency requirements for high-risk AI systems to encompass end-users. It also suggests that principles of explanation, as put forward in the Guidelines for Trustworthy AI of the High-Level Expert Group, be included in the Proposal in addition to transparency obligations.

Keywords: AI Act proposal, explainability of AI, high-risk AI systems, transparency requirements

Procedia PDF Downloads 324
147 Standardizing and Achieving Protocol Objectives for the Chest Wall Radiotherapy Treatment Planning Process Using an O-ring Linac in High-, Low- and Middle-Income Countries

Authors: Milton Ixquiac, Erick Montenegro, Francisco Reynoso, Matthew Schmidt, Thomas Mazur, Tianyu Zhao, Hiram Gay, Geoffrey Hugo, Lauren Henke, Jeff Michael Michalski, Angel Velarde, Vicky de Falla, Franky Reyes, Osmar Hernandez, Edgar Aparicio Ruiz, Baozhou Sun

Abstract:

Purpose: Radiotherapy departments in low- and middle-income countries (LMICs) like Guatemala have recently introduced intensity-modulated radiotherapy (IMRT). IMRT has become the standard of care in high-income countries (HICs) due to reduced toxicity and improved outcomes in some cancers. The purpose of this work is to show the agreement between the dosimetric results in the dose-volume histograms (DVHs) and the objectives of the adopted protocol. This is our initial experience with an O-ring Linac. Methods and Materials: An O-ring Linac was installed at our clinic in Guatemala in 2019 and has been used to treat approximately 90 patients daily with IMRT. This Linac is a fully image-guided device, since delivering each radiotherapy session requires a Mega-Voltage Cone Beam Computed Tomography (MVCBCT) scan. Each MVCBCT delivers 9 MU, which is taken into account during planning. To start the standardization, the TG-263 nomenclature was employed, and a hypofractionated protocol was adopted to treat the chest wall, including the supraclavicular nodes, delivering 40.05 Gy in 15 fractions. Planning used 4 semi-arcs from 179 to 305 degrees. The planner must create optimization volumes for targets and organs at risk (OARs); the difficulty for the planner was the dose base due to the MVCBCT. To evaluate this planning modality, 30 chest wall cases were used. Results: The manually created plans achieve the protocol objectives. The protocol objectives are the same as those of RTOG 1005, and the DVH curves are clinically acceptable. Conclusions: Although the O-ring Linac cannot acquire kV images and the cone beam CT is created using MV energy, the dose delivered by the daily image setup process does not affect the dosimetric quality of the plans, and the dose distribution is acceptable, achieving the protocol objectives.

Keywords: hypofractionation, VMAT, chest wall, radiotherapy planning

Procedia PDF Downloads 119
146 Using Nature-Based Solutions to Decarbonize Buildings in Canadian Cities

Authors: Zahra Jandaghian, Mehdi Ghobadi, Michal Bartko, Alex Hayes, Marianne Armstrong, Alexandra Thompson, Michael Lacasse

Abstract:

The Intergovernmental Panel on Climate Change (IPCC) report stated the urgent need to cut greenhouse gas emissions to avoid the adverse impacts of climatic changes. The United Nations has forecasted that nearly 70 percent of people will live in urban areas by 2050 resulting in a doubling of the global building stock. Given that buildings are currently recognised as emitting 40 percent of global carbon emissions, there is thus an urgent incentive to decarbonize existing buildings and to build net-zero carbon buildings. To attain net zero carbon emissions in communities in the future requires action in two directions: I) reduction of emissions; and II) removal of on-going emissions from the atmosphere once de-carbonization measures have been implemented. Nature-based solutions (NBS) have a significant role to play in achieving net zero carbon communities, spanning both emission reductions and removal of on-going emissions. NBS for the decarbonisation of buildings can be achieved by using green roofs and green walls – increasing vertical and horizontal vegetation on the building envelopes – and using nature-based materials that either emit less heat to the atmosphere thus decreasing photochemical reaction rates, or store substantial amount of carbon during the whole building service life within their structure. The NBS approach can also mitigate urban flooding and overheating, improve urban climate and air quality, and provide better living conditions for the urban population. For existing buildings, de-carbonization mostly requires retrofitting existing envelopes efficiently to use NBS techniques whereas for future construction, de-carbonization involves designing new buildings with low carbon materials as well as having the integrity and system capacity to effectively employ NBS. This paper presents the opportunities and challenges in respect to the de-carbonization of buildings using NBS for both building retrofits and new construction. 
This review documents the effectiveness of NBS in decarbonizing Canadian buildings, identifies the missing links to implementing these techniques in cold climatic conditions, and sets out a road map and immediate approaches to mitigate the adverse impacts of climate change, such as urban heat island effects. Recommendations are drafted for possible inclusion in the Canadian building and energy codes.

Keywords: decarbonization, nature-based solutions, GHG emissions, greenery enhancement, buildings

Procedia PDF Downloads 94
145 Novel Animal Drawn Wheel-Axle Mechanism Actuated Knapsack Boom Sprayer

Authors: Ibrahim O. Abdulmalik, Michael C. Amonye, Mahdi Makoyo

Abstract:

The manual knapsack sprayer is the most popular means of farm spraying in Nigeria, but it has its limitations. Apart from human fatigue, which leads to unsteady walking steps, its field capacity is small: it barely covers about 0.2 hectare per hour. Its small swath implies that a sizeable farm would take several days to cover. Weather changes are erratic, and it is often desired to spray a large farm within hours or a few days for an even, uniform application and to avoid adverse weather interference. It is also often required that a large farm be covered within a short period to avoid the re-emergence of weeds before crop emergence. Deploying many knapsack operators to large farms has not been successful: human error in maintaining equally spaced swaths usually results in overdosing where swaths overlap and in unsprayed areas at the edges. Spraying large farms requires boom equipment with a larger swath, which reduces swath-overlap error and allows spraying within the shortest possible time. Tractor boom sprayers would readily overcome these problems and achieve greater coverage, but they are not available in the country. Tractor hire for cultivation is very costly, with an attendant lack of spare parts and specialized maintenance technicians, so farmers find it difficult to engage tractors for cultivation and would not consider employing a tractor boom sprayer. Animal traction in farming is predominant in Nigeria, especially in the northern part of the country, so the development of boom sprayers drawn by work animals maximizes animal utilization in farming. The Hydraulic Equipment Development Institute, Kano, in keeping with its mandate of targeted R&D in hydraulic and pneumatic systems, has developed an Animal Drawn Knapsack Boom Sprayer with four nozzles, using the axle mechanism of a two-wheeled cart to actuate the piston pumps of two knapsack sprayers, in line with the appropriate-technology demands of the country.
It is hoped that the introduction of this novel contrivance shall enhance crop protection practice and lead to greater crop and food production in Nigeria.

Keywords: boom, knapsack, farm, sprayer, wheel axle

Procedia PDF Downloads 283
144 A Review of the Agroecological Farming System as a Viable Alternative Food Production Approach in South Africa

Authors: Michael Rudolph, Evans Muchesa, Katiya Yassim, Venkatesha Prasad

Abstract:

Input-intensive production systems characterise industrial agriculture, an unsustainable means of addressing food and nutrition security and sustainable livelihoods. There is extensive empirical evidence supporting the diversification and reorientation of industrial agriculture to incorporate ecological practices viewed as essential for achieving balanced and productive farming systems. An agroecological farming system is a viable alternative approach that can improve food production, especially for the most vulnerable communities and households. Furthermore, substantial supporting evidence shows that such a system holds the key to increasing dietary diversity at the local level and reducing the multiple health and environmental risks stemming from industrial agriculture. This paper, therefore, aims to demonstrate the benefits of the agroecological food system through an evidence-based approach that shows how broader agricultural network structures can play a meaningful role, particularly for impoverished households in today’s reality. The methodology is centered on a structured literature review that analyses urban agriculture, agroecology, and food insecurity; notably, ground-truthing, practical experience, and field observation of agroecological farming were also deployed. This paper places particular emphasis on the practical application of the agroecological approach in urban and peri-urban settings. Several evaluation reports on local and provincial initiatives clearly show that very few households engage in food gardens and urban agriculture. These households do not make use of their backyards or nearby open spaces for a number of reasons, such as stringent city by-laws, restricted access to land, little or no knowledge of innovative or alternative farming practices, and a general lack of interest.
Furthermore, limited resources such as water and energy, and a lack of capacity building and training, are additional constraints hampering small-scale food gardens and farms in other settings. The agroecology systems approach is viewed as one of the key solutions to these problems.

Keywords: agroecology, water-energy-food nexus, sustainable development goals, social, environmental and economic impact

Procedia PDF Downloads 114
143 Bioefficiency of Cinnamomum verum Loaded Niosomes and Its Microbicidal and Mosquito Larvicidal Activity against Aedes aegypti, Anopheles stephensi and Culex quinquefasciatus

Authors: Aasaithambi Kalaiselvi, Michael Gabriel Paulraj, Ekambaram Nakkeeran

Abstract:

The emergence of mosquito vector-borne diseases is considered a perpetual problem in tropical countries globally. Outbreaks of diseases such as chikungunya, Zika virus infection and dengue fever have created a massive threat to the living population. Frequent usage of synthetic insecticides like dichlorodiphenyltrichloroethane (DDT) has had adverse effects on humans as well as the environment. Since there are no lasting vaccines, preventions, treatments or drugs available for these pathogenic vectors, the WHO is concerned with eradicating their breeding sites effectively, without side effects on humans and the environment, through plant-derived, eco-friendly bio-insecticides. The aim of this study is to investigate the larvicidal potency of Cinnamomum verum essential oil (CEO) loaded niosomes. Cholesterol and the surfactant variants Span 20, 60 and 80 were used in synthesizing CEO-loaded niosomes by the transmembrane pH gradient method. The synthesized CEO-loaded niosomes were characterized by zeta potential, particle size, Fourier Transform Infrared Spectroscopy (FT-IR), GC-MS and SEM analyses to evaluate charge, size, functional properties, the composition of secondary metabolites and morphology. The Z-average size of the formed niosomes was 1870.84 nm, and they showed good stability with a zeta potential of -85.3 mV. The entrapment efficiency of the CEO-loaded niosomes was determined by UV-Visible spectrophotometry. The bio-potency of CEO-loaded niosomes was assessed against gram-positive (Bacillus subtilis) and gram-negative (Escherichia coli) bacteria and fungi (Aspergillus fumigatus and Candida albicans) at various concentrations. The larvicidal activity was evaluated against II to IV instar larvae of Aedes aegypti, Anopheles stephensi and Culex quinquefasciatus at various concentrations for 24 h. LC₅₀ and LC₉₀ values were calculated from the mortality data.
The results show that CEO-loaded niosomes have high mosquito larvicidal efficacy. The results suggest that niosomes could be used in various biotechnology and drug delivery applications with greater stability by altering the drug of interest.
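The abstract does not state how the LC₅₀ and LC₉₀ values were derived (probit analysis is the common choice for dose-mortality data). Purely as an illustration, a sketch using linear interpolation on a log-concentration scale, with invented dose-mortality numbers:

```python
import math

# Hypothetical concentrations (ppm) and observed larval mortality
# fractions; these figures are invented for illustration only.
concs = [10.0, 20.0, 40.0, 80.0, 160.0]
mortality = [0.05, 0.20, 0.55, 0.85, 0.98]

def lethal_concentration(p, concs, mortality):
    """Estimate the concentration producing mortality fraction p by
    linear interpolation on a log10(concentration) scale."""
    logs = [math.log10(c) for c in concs]
    for i in range(len(mortality) - 1):
        if mortality[i] <= p <= mortality[i + 1]:
            frac = (p - mortality[i]) / (mortality[i + 1] - mortality[i])
            return 10 ** (logs[i] + frac * (logs[i + 1] - logs[i]))
    raise ValueError("p lies outside the observed mortality range")

lc50 = lethal_concentration(0.50, concs, mortality)
lc90 = lethal_concentration(0.90, concs, mortality)
```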

Keywords: Cinnamomum verum, niosomes, entrapment efficiency, bactericidal and fungicidal, mosquito larvicidal activity

Procedia PDF Downloads 166
142 The Closed Cavity Façade (CCF): Optimization of CCF for Enhancing Energy Efficiency and Indoor Environmental Quality in Office Buildings

Authors: Michalis Michael, Mauro Overend

Abstract:

Buildings, in which we spend 87-90% of our time, act as shelters protecting us from environmental conditions and weather phenomena. A building's overall performance depends significantly on the envelope's glazing, which is particularly critical as it is the part most vulnerable to heat gain and heat loss; conventional glazing technologies have relatively poor thermo-optical characteristics. As a result, heat losses through the glazing part of a building envelope are significantly increased during winter, as are heat gains during the summer period. This study examines the contribution of an innovative glazing technology, the Closed Cavity Façade (CCF), to improving energy efficiency and IEQ in office buildings, aiming to optimize various CCF design configurations. Using the EnergyPlus and IDA ICE packages, the performance of several CCF configurations and geometries was investigated for various climate types to identify the optimum solution. The model used for the simulations and optimization represented MATELab, a recently constructed outdoor test facility at the University of Cambridge (UK), and had previously been calibrated experimentally. The study revealed that using CCF technology instead of conventional double or triple glazing brings important benefits. In particular, replacing the traditional glazing units used as the baseline with the optimal CCF configuration decreased energy consumption by 18-37% (depending on location). This occurs mainly through integrating shading devices in the cavity and applying appropriate glass coatings and control strategies, which improve the thermal transmittance and g-value of the glazing. Since solar gain through the façade is the main contributor to energy consumption during cooling periods, a higher energy improvement was observed in cooling-dominated locations.
Furthermore, it was shown that a suitable selection of the constituents of a closed cavity façade, such as the colour and type of shading devices and the type of coatings, leads to an additional improvement of its thermal performance, avoiding overheating phenomena and consequently ensuring temperatures in the glass cavity below the critical value, and reducing the radiant discomfort providing extra benefits in terms of Indoor Environmental Quality (IEQ).

Keywords: building energy efficiency, closed cavity façade, optimization, occupants comfort

Procedia PDF Downloads 65
141 Discussion of Blackness in Wrestling

Authors: Jason Michael Crozier

Abstract:

The wrestling territories of the mid-twentieth century in the United States are widely considered the birthplace of modern professional wrestling and, by many professional wrestlers, a beacon of hope for the easing of racial tensions during the civil rights era and beyond. The performers writing on this period speak of racial equality but fail to acknowledge the exploitation of black athletes as racialized capital commodities who suffered the challenges of systemic racism, codified by a false narrative of aspirational exceptionalism and equality measured by audience diversity. The promoters’ ability to equate racial and capital exploitation with equality leads to a broader discussion of the history of Muscular Christianity in the United States and the exploitation of black bodies. Narratives of racial erasure that dominate the historical discourse on athleticism and exceptionalism redefined how blackness existed and how physicality and race are conceived of in sport and entertainment spaces. When discussing the implications of race in professional wrestling, it is important to examine the role of promotions as ‘imagined communities’ where the social agency of wrestlers is defined and quantified based on their ‘desired elements’ as performers. The intentionally vague nature of this language masks a deep history of racialization that has been perpetuated by promoters and never fully examined by scholars. Sympathetic racism and the omission of cultural identity are also key factors in the limitations and racial barriers placed upon black athletes in the squared circle. The use of sympathetic racism within professional wrestling during the twentieth century confined black athletes to two distinct categorizations: the ‘black savage’ or the ‘black minstrel’. Black wrestlers of the twentieth century were defined by their strength as a capital commodity and their physicality rather than their knowledge of the business and in-ring skill.
These performers had little agency in shaping their own character development inside and outside the ring. Promoters would often create personas that heavily racialized the performer by tying them to a regional past or memory, such as that of slavery in the Deep South, using dog collar matches and adorning black characters in chains. Promoters softened cultural memory by satirizing the historic legacy of slavery and the black identity.

Keywords: sympathetic racism, social agency, racial commodification, stereotyping

Procedia PDF Downloads 135
140 Re-Evaluating the Hegemony of English Language in West Africa: A Meta-Analysis Review of the Research, 2003-2018

Authors: Oris Tom-Lawyer, Michael Thomas

Abstract:

This paper seeks to analyse the hegemony of the English language in West Africa through the lens of educational policies and the socio-economic functions of the language. It is based on the premise that there is a positive link between the English language and development contexts. The study aims to fill a gap in the research literature by examining the usefulness of hegemony as a concept to explain the role of the English language in the region, thus countering the negative connotations that often accompany it. The study identified four main research questions: i. What are the socio-economic functions of English in francophone/lusophone countries? ii. What factors promote the hegemony of English in anglophone countries? iii. To what extent is English hegemonic in West Africa? iv. What are the implications of the non-hegemony of English in West Africa? Based on a meta-analysis of the research literature between 2003 and 2018, the findings of the study revealed that in francophone/lusophone countries, English functions in the following socio-economic domains: peacekeeping missions, regional organisations, the commercial and industrial sectors, as an unofficial international language, and as a foreign language. The factors that promote the linguistic hegemony of English in anglophone countries are English as an official language, a medium of instruction, a lingua franca, a cultural language, the language of politics, the language of commerce, a channel of development, and English for media and entertainment. In addition, the extent of the hegemony of English in West Africa can be gauged from the factors that contribute to its non-hegemony in the region: the French language, the Portuguese language, French culture, neo-colonialism, the level of poverty, and the economic ties of France to its former colonies.
Finally, the implications of the non-hegemony of the English language in West Africa are industrial backwardness, high poverty rates, a lack of social mobility, high school drop-out rates, growing interest in English, access to only limited internet information, and a lack of extensive career opportunities. The paper concludes that the hegemony of English has contributed to the development of anglophone countries in West Africa, while in the francophone/lusophone regions of the continent, industrial backwardness and low literacy rates have been consequences of the marginalisation of the English language. In conclusion, the paper makes several recommendations, including the need for the early introduction of English into French curricula as part of a potential solution.

Keywords: developmental tool, English language, linguistic hegemony, West Africa

Procedia PDF Downloads 141
139 Identifying the Determinants of Compliance with Maritime Environmental Legislation in the North and Baltic Sea Area: A Model Developed from Exploratory Qualitative Data Collection

Authors: Thea Freese, Michael Gille, Andrew Hursthouse, John Struthers

Abstract:

Ship operators on the North and Baltic Sea have been experiencing increased political interest in marine environmental protection and cleaner vessel operations. Stricter legislation on SO2 and NOx emissions, ballast water management and other measures of protection are currently being phased in or will come into force in the coming years. These measures benefit the health of the marine environment while increasing companies’ operational costs. In times of excess shipping capacity and linked consolidation in the industry, non-compliance with environmental rules is one way companies might hope to stay competitive in both intra- and inter-modal trade. Around 5-15% of industry participants are believed to neglect laws on vessel-source pollution, willingly or unwillingly. Exploratory in-depth interviews conducted with 12 experts from various stakeholder groups informed the researchers about variables influencing compliance levels, including awareness and apprehension, willingness to comply, ability to comply and effectiveness of controls. The semi-structured expert interviews were evaluated using qualitative content analysis. A model of determinants of compliance was developed and is presented here. While most vessel operators endeavour to achieve full compliance with environmental rules, a lack of available technical solutions, of expediency of implementation and operation, and of economic feasibility might prove a hindrance. Ineffective control systems, on the other hand, foster willing non-compliance. With respect to motivations, a lack of time, a lack of financial resources and the absence of commercial advantages decrease compliance levels. These and other variables were inductively developed from qualitative data and integrated into a model of environmental compliance. The outcomes presented here form part of a wider research project on the economic effects of maritime environmental legislation.
Research on determinants of compliance might inform policy-makers about actual behavioural effects of shipping companies and might further the development of a comprehensive legal system for environmental protection.

Keywords: compliance, marine environmental protection, exploratory qualitative research study, clean vessel operations, North and Baltic Sea area

Procedia PDF Downloads 383
138 Comprehensive Longitudinal Multi-omic Profiling in Weight Gain and Insulin Resistance

Authors: Christine Y. Yeh, Brian D. Piening, Sarah M. Totten, Kimberly Kukurba, Wenyu Zhou, Kevin P. F. Contrepois, Gucci J. Gu, Sharon Pitteri, Michael Snyder

Abstract:

Three million deaths worldwide are attributed to obesity. However, the biomolecular mechanisms that describe the link between adiposity and subsequent disease states are poorly understood. Insulin resistance characterizes approximately half of obese individuals and is a major cause of obesity-mediated diseases such as Type II diabetes, hypertension and other cardiovascular diseases. This study makes use of longitudinal quantitative and high-throughput multi-omics (genomics, epigenomics, transcriptomics, glycoproteomics etc.) methodologies on blood samples to develop multigenic and multi-analyte signatures associated with weight gain and insulin resistance. Participants of this study underwent a 30-day period of weight gain via excessive caloric intake followed by a 60-day period of restricted dieting and return to baseline weight. Blood samples were taken at three different time points per patient: baseline, peak-weight and post weight loss. Patients were characterized as either insulin resistant (IR) or insulin sensitive (IS) before having their samples processed via longitudinal multi-omic technologies. This comparative study revealed a wealth of biomolecular changes associated with weight gain after using methods in machine learning, clustering, network analysis etc. Pathways of interest included those involved in lipid remodeling, acute inflammatory response and glucose metabolism. Some of these biomolecules returned to baseline levels as the patient returned to normal weight whilst some remained elevated. IR patients exhibited key differences in inflammatory response regulation in comparison to IS patients at all time points. These signatures suggest differential metabolism and inflammatory pathways between IR and IS patients. Biomolecular differences associated with weight gain and insulin resistance were identified on various levels: in gene expression, epigenetic change, transcriptional regulation and glycosylation. 
This study not only contributed new biology that could be of use in preventing or predicting obesity-mediated diseases, but also matured novel biomedical informatics technologies for producing and processing data across many comprehensive omics levels.

Keywords: insulin resistance, multi-omics, next generation sequencing, proteogenomics, type ii diabetes

Procedia PDF Downloads 429
137 Utilization of Standard Paediatric Observation Chart to Evaluate Infants under Six Months Presenting with Non-Specific Complaints

Authors: Michael Zhang, Nicholas Marriage, Valerie Astle, Marie-Louise Ratican, Jonathan Ash, Haddijatou Hughes

Abstract:

Objective: Young infants are often brought to the Emergency Department (ED) with a variety of complaints, some of which are non-specific and present a diagnostic challenge to the attending clinician. Whilst invasive investigations such as blood tests and lumbar puncture are necessary in some cases to exclude serious infections, some basic clinical tools, in addition to a thorough clinical history, can be useful to assess the risk of serious conditions in these young infants. This study aimed to examine the utilization of one such clinical tool. Methods: This retrospective observational study examined the medical records of infants under 6 months presenting to a mixed urban ED between January 2013 and December 2014. Infants deemed by the emergency clinicians to have non-specific complaints or diagnoses were selected for analysis; those with clear systemic diagnoses were excluded. Among all relevant clinical information and investigation results, utilization of the Standard Paediatric Observation Chart (SPOC) was particularly scrutinized in these medical records. This chart, developed by expert clinicians in the local health department, categorizes important clinical signs into colour-coded zones as a visual cue to the serious implications of some abnormalities. An infant is regarded as SPOC-positive when fulfilling one red-zone or two yellow-zone criteria, prompting the attending clinician to investigate and treat for potential serious conditions accordingly. Results: Eight hundred and thirty-five infants met the inclusion criteria for this project. Those admitted to the hospital for further management were more likely to be SPOC-positive than the discharged infants (odds ratio: 12.26, 95% CI: 8.04 – 18.69). Similarly, sepsis alert criteria on the SPOC were positive in a higher percentage of patients with serious infections (56.52%) than in those with mild conditions (15.89%) (p < 0.001).
The SPOC sepsis criteria had a sensitivity of 56.5% (95% CI: 47.0% - 65.7%) and a moderate specificity of 84.1% (95% CI: 80.8% - 87.0%) for identifying serious infections. Applied to this infant population, with a 17.4% prevalence of serious infection, the positive predictive value was only 42.8% (95% CI: 36.9% - 49.0%). However, the negative predictive value was high at 90.2% (95% CI: 88.1% - 91.9%). Conclusions: The Standard Paediatric Observation Chart has been applied as a useful clinical tool in clinical practice to help identify and manage young sick infants in the ED effectively.
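The reported predictive values follow from the sensitivity, specificity and prevalence via Bayes' theorem; a minimal sketch, using the point estimates quoted in the abstract (not the underlying patient data):

```python
def predictive_values(sens, spec, prev):
    """PPV and NPV from sensitivity, specificity and prevalence (Bayes' theorem)."""
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
    return ppv, npv

# SPOC sepsis criteria figures reported above
ppv, npv = predictive_values(sens=0.565, spec=0.841, prev=0.174)
print(round(ppv, 3), round(npv, 3))  # → 0.428 0.902
```

This reproduces the abstract's PPV of 42.8% and NPV of 90.2%, and makes explicit why the PPV is low at a 17.4% prevalence despite reasonable specificity.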

Keywords: clinical tool, infants, non-specific complaints, Standard Paediatric Observation Chart

Procedia PDF Downloads 253
136 Simons, Ehrlichs and the Case for Polycentricity – Why Growth-Enthusiasts and Growth-Sceptics Must Embrace Polycentricity

Authors: Justus Enninga

Abstract:

Enthusiasts and skeptics about economic growth have little in common in their preferences for institutional arrangements that resolve ecological conflicts. This paper argues that agreement between the two opposing schools can be found in the Bloomington School’s concept of polycentricity. Growth-enthusiasts, who will be referred to as Simons after the economist Julian Simon, and growth-skeptics, named Ehrlichs after the ecologist Paul R. Ehrlich, both profit from a governance structure in which many officials and decision structures are assigned limited and relatively autonomous prerogatives to determine, enforce and alter legal relationships. The paper advances this argument in four steps. First, it clarifies what Simons and Ehrlichs mean when they talk about growth and what the arguments for and against growth-enhancing or degrowth policies are for each side. Secondly, the paper advances the concept of polycentricity as first introduced by Michael Polanyi and later refined for the study of governance by the Bloomington School of institutional analysis around the Nobel Prize laureate Elinor Ostrom. The Bloomington School defines polycentricity as a non-hierarchical, institutional, and cultural framework that makes possible the coexistence of multiple centers of decision making with different objectives and values, and that sets the stage for an evolutionary competition between the complementary ideas and methods of those different decision centers. In the third and fourth parts, it is shown how the concept of polycentricity is of crucial importance for growth-enthusiasts and growth-skeptics alike. The shorter third part surveys the literature on growth-enhancing policies and argues that large parts of it already accept that polycentric forms of governance, such as markets, the rule of law and federalism, are an important part of economic growth.
Part four delves into the more nuanced question of how a stagnant steady-state economy or even an economy that de-grows will still find polycentric governance desirable. While the majority of degrowth proposals follow a top-down approach by requiring direct governmental control, a contrasting bottom-up approach is advanced. A decentralized, polycentric approach is desirable because it allows for the utilization of tacit information dispersed in society and an institutionalized discovery process for new solutions to the problem of ecological collective action – no matter whether you belong to the Simons or Ehrlichs in a green political economy.

Keywords: degrowth, green political theory, polycentricity, institutional robustness

Procedia PDF Downloads 184
135 Investigations of the Service Life of Different Material Configurations at Solid-lubricated Rolling Bearings

Authors: Bernd Sauer, Michel Werner, Stefan Emrich, Michael Kopnarski, Oliver Koch

Abstract:

Reducing friction is an important aspect of sustainability and the energy transition, and rolling bearings are used in many applications in which components move relative to each other. Conventionally lubricated rolling bearings serve a wide range of applications but are not suitable under certain conditions. Conventional lubricants such as grease or oil cannot be used at very high or very low temperatures. In addition, these lubricants evaporate at very low ambient pressure, e.g. in a high-vacuum environment, making the use of solid-lubricated bearings unavoidable. With solid-lubricated bearings, predicting the service life becomes more complex. While the end of the service life of conventionally lubricated bearings is mainly caused by the failure of the bearing components due to material fatigue, solid-lubricated bearings fail at the moment the lubrication layer is worn away and the rolling elements come into direct contact with the raceway during operation. In order to extend the service life of these bearings beyond that of the initial coating, the use of transfer lubrication is recommended, in which the balls run against pockets or sacrificial cages, absorbing lubricant that is then available in the tribological contact. This contribution presents the results of wear and service life tests on solid-lubricated rolling bearings with sacrificial cage pockets. The cage of the bearing consists of a polyimide (PI) matrix with 15% molybdenum disulfide (MoS₂) and serves as a lubrication depot alongside the silver-coated balls. The bearings are tested under high vacuum (pE < 10⁻² Pa) at a temperature of 300 °C on a four-bearing test rig. First, investigations of the bearing system within the bearing service life are presented, and the torque curve, the wear mass and surface analyses are discussed.
With regard to wear, it can be seen that the bearing rings tend to gain mass over the service life of the bearing, while the balls and the cage tend to lose mass. With regard to the elemental surface properties, the surfaces of the bearing rings and balls are examined in terms of the mass of the elements present on them. Furthermore, service life investigations with different material pairings are presented, with the focus here on the service life achieved, in addition to the torque curve, wear development and surface analysis. It was shown that MoS₂ in the cage leads to a longer service life, while a silver (Ag) coating on the balls has no positive influence on the service life and even appears to reduce it in combination with MoS₂.

Keywords: ball bearings, molybdenum disulfide, solid lubricated bearings, solid lubrication mechanisms

Procedia PDF Downloads 52
134 Comprehensive Geriatric Assessments: An Audit into Assessing and Improving Uptake on Geriatric Wards at King’s College Hospital, London

Authors: Michael Adebayo, Saheed Lawal

Abstract:

The Comprehensive Geriatric Assessment (CGA) is the multidimensional tool used to assess elderly, frail patients either on admission to hospital care or at a community level in primary care. It is a tool designed with the aim of using a holistic approach to managing patients. A Cochrane review of CGA use in 2011 found that the likelihood of patients being alive and living in their own home post-discharge rises by 30%. RCTs have also found 10–15% reductions in readmission rates, as well as reductions in institutionalization, resource use and costs. Past audit cycles at King’s College Hospital, Denmark Hill had shown inconsistent evidence of CGA completion in patient discharge summaries (less than 50%). Junior Doctors on the Health and Ageing (HAU) wards have struggled to sustain the efforts of past audit cycles due to the quick turnover in staff (four-month placements for trainees). This 7th cycle created a multi-faceted approach to solving this problem amongst staff and creating lasting change. Methods: 1. We adopted multidisciplinary team involvement to support Doctors. MDT staff, e.g. Nurses, Physiotherapists, Occupational Therapists and Dieticians, were actively encouraged to fill in the CGA document. 2. We added a CGA Document Pro-forma to “Sunrise EPR” (Trust computer system). These CGAs were to be automatically included in the discharge summary. 3. Prior to assessing uptake, we used a spot audit questionnaire to assess staff awareness/knowledge of what a CGA was. 4. We designed and placed posters highlighting the domains of the CGA and the MDT roles suited to each domain on the geriatric “Health and Ageing” (HAU) wards in the hospital. 5. We performed an audit of the percentage of discharge summaries which included a CGA and MDT role input. 6. We nominated ward champions on each ward from each multidisciplinary specialty to monitor and encourage colleagues to actively complete CGAs. 7.
We initiated further education of ward staff on CGA's importance by discussion at board rounds and weekly multidisciplinary meetings. Outcomes: 1. The majority of respondents to our spot audit were aware of what a CGA was, but fewer had used the EPR document to complete one. 2. We found that CGAs were not being commenced for nearly 50% of patients discharged on HAU wards and the Frailty Assessment Unit.

Keywords: comprehensive geriatric assessment, CGA, multidisciplinary team, quality of life, mortality

Procedia PDF Downloads 85
133 Efficient Estimation of Maximum Theoretical Productivity from Batch Cultures via Dynamic Optimization of Flux Balance Models

Authors: Peter C. St. John, Michael F. Crowley, Yannick J. Bomble

Abstract:

Production of chemicals from engineered organisms in a batch culture typically involves a trade-off between productivity, yield, and titer. However, strategies for strain design typically involve designing mutations to achieve the highest yield possible while maintaining growth viability. Such approaches tend to follow the principle of designing static networks with minimum metabolic functionality to achieve desired yields. While these methods are computationally tractable, optimum productivity is likely achieved by a dynamic strategy, in which intracellular fluxes change their distribution over time. One can use multi-stage fermentations to increase either productivity or yield. Such strategies range from simple manipulations (an aerobic growth phase followed by an anaerobic production phase) to more complex genetic toggle switches. Additionally, computational methods can be developed to aid in optimizing two-stage fermentation systems. One can assume an initial control strategy (i.e., a single reaction target) in maximizing productivity - but it is unclear how close this productivity would come to a global optimum. The calculation of maximum theoretical yield in metabolic engineering can help guide strain and pathway selection for static strain design efforts. Here, we present a method for the calculation of the maximum theoretical productivity of a batch culture system. This method follows the traditional assumptions of dynamic flux balance analysis: internal metabolite fluxes are governed by a pseudo-steady state, while external metabolite fluxes are represented by a dynamic system including Michaelis-Menten or Hill-type regulation. The productivity optimization is achieved via dynamic programming and accounts explicitly for an arbitrary number of fermentation stages and flux variable changes. We have applied our method to succinate production in two common microbial hosts: E. coli and A. succinogenes.
The method can be further extended to calculate the complete productivity versus yield Pareto surface. Our results demonstrate that nearly optimal yields and productivities can indeed be achieved with only two discrete flux stages.
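The intuition behind multi-stage strategies can be sketched with a toy model (not the authors' dynamic FBA formulation): a batch culture with Michaelis-Menten substrate uptake whose flux is split between growth and product formation, comparing a fixed split against a grow-then-produce schedule. All kinetic parameters here are invented for illustration.

```python
def simulate(alpha_schedule, s0=50.0, x0=0.1, vmax=10.0, km=1.0,
             y_x=0.1, y_p=1.0, dt=0.01, t_end=20.0):
    """Euler-integrate a toy batch culture; alpha_schedule(t) gives the
    fraction of substrate uptake diverted to product at time t."""
    s, x, p, t = s0, x0, 0.0, 0.0
    while t < t_end and s > 1e-6:
        v = vmax * s / (km + s)          # Michaelis-Menten specific uptake
        a = alpha_schedule(t)
        uptake = v * x * dt              # substrate consumed this step
        s = max(s - uptake, 0.0)
        x += y_x * (1.0 - a) * uptake    # growth flux -> biomass
        p += y_p * a * uptake            # production flux -> product
        t += dt
    return p / t                         # volumetric productivity

# Fixed 50/50 flux split vs. a two-stage schedule (growth, then production)
static_prod = simulate(lambda t: 0.5)
two_stage_prod = simulate(lambda t: 0.0 if t < 2.0 else 1.0)
```

In this toy setting the two-stage schedule builds biomass first and then converts the remaining substrate at a higher volumetric rate, which is the qualitative effect the dynamic programming formulation searches over systematically.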

Keywords: A. succinogenes, E. coli, metabolic engineering, metabolite fluxes, multi-stage fermentations, succinate

Procedia PDF Downloads 217
132 Exploring Mothers' Knowledge and Experiences of Attachment in the First 1000 Days of Their Child's Life

Authors: Athena Pedro, Zandile Batweni, Laura Bradfield, Michael Dare, Ashley Nyman

Abstract:

The rapid growth and development of an infant in the first 1000 days of life means that this time period provides the greatest opportunity for a positive developmental impact on a child’s life socially, emotionally, cognitively and physically. Current research focuses on children in the first 1000 days, but there is a lack of research on, and understanding of, mothers and their experiences during this crucial time period. Thus, it is imperative that more research is done to help better understand the experiences of mothers during the first 1000 days of their child’s life, as well as to gain more insight into mothers’ knowledge regarding this time period. The first 1000 days of life, from conception to two years, is a critical period, and the child’s attachment to his or her mother or primary caregiver during this period is crucial for a multitude of future outcomes. The aim of this study was to explore mothers’ understanding and experience of the first 1000 days of their child’s life, specifically looking at attachment in the context of Bowlby and Ainsworth’s attachment theory. Using a qualitative methodological framework, data were collected through semi-structured individual interviews with 12 first-time mothers from low-income communities in Cape Town. Thematic analysis of the data revealed that mothers articulated the importance of attachment within the first 1000 days of life and shared experiences of how they bond and form attachment with their babies. Furthermore, these mothers expressed their belief in the long-term benefits of early attachment and responsive, positive parenting, as well as the lasting effects of poor attachment and non-responsive parenting. This study has implications for new mothers and healthcare staff working with mothers of new-born babies, as well as for future contextual research.
By gaining insight into the mothers’ experiences, policies and intervention efforts can be formulated to assist mothers during this time, which ultimately promotes the healthy development of the nation’s children and future adult generation. If researchers are also able to understand the extent of mothers’ general knowledge regarding the first 1000 days and attachment, then there will be a better understanding of where gaps in knowledge may lie, and recommendations for effective and relevant intervention efforts may be provided. These interventions may increase the knowledge and awareness of new mothers and of health care workers at clinics and other service providers, creating a high impact on positive outcomes. Improving the developmental trajectory of many young babies in this way allows them the opportunity to pursue optimal development and reach their full potential.

Keywords: attachment, experience, first 1000 days, knowledge, mothers

Procedia PDF Downloads 179
131 Improving the Technology of Assembly by Use of Computer Calculations

Authors: Mariya V. Yanyukina, Michael A. Bolotov

Abstract:

Assembly accuracy is the degree of accordance between the actual values of the parameters obtained during assembly and the values specified in the assembly drawings and technical specifications. However, assembly accuracy depends not only on the quality of the production process but also on the correctness of the assembly process. Therefore, preliminary calculations of the assembly stages are carried out to verify the correspondence of the real geometric parameters to their acceptable values. In the aviation industry, most calculations involve interacting dimensional chains, which greatly complicates the task; solving such problems requires a special approach. The purpose of this article is to address the problem of improving the assembly technology of aviation units by means of computer calculations. One practical example of an assembly unit containing an interacting dimensional chain is the turbine wheel of a gas turbine engine. The dimensional chain of the turbine wheel is formed by the geometric parameters of the disk and the set of blades. The interaction consists in the formation of two chains. The first chain is formed by the dimensions that determine the location of the grooves for the installation of the blades and the dimensions of the blade roots. The second dimensional chain is formed by the dimensions of the airfoil shroud platform. The interaction of the dimensional chains of the turbine wheel is the interdependence of the first and second chains by means of power circuits formed by a plurality of the middle parts of the turbine blades. The need to improve the assembly technology of this unit makes the calculation of the dimensional chain of the turbine wheel timely.
The task at hand contains geometric and mathematical components; therefore, its solution can be implemented following this algorithm: 1) research and analysis of production errors in the geometric parameters; 2) development of a parametric model in a CAD system; 3) creation of a set of CAD models of the parts, taking into account actual or generalized distributions of the errors of the geometric parameters; 4) calculation of the model in a CAE system, loading various combinations of the part models; 5) accumulation of statistics and analysis. The main task is to pre-simulate the assembly process by calculating the interacting dimensional chains. The article describes the approach to the solution from the point of view of mathematical statistics, implemented in the software package Matlab. Within the framework of the study, measurement data on the components of the turbine wheel (blades and disks) were obtained, on the basis of which it is expected that the assembly process of the unit will be optimized by solving the dimensional chains.
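The statistical accumulation in step 5 can be illustrated with a minimal Monte Carlo tolerance stack-up. The study's implementation is in Matlab; this Python sketch conveys the same idea, and every dimension, tolerance and specification limit below is a hypothetical placeholder, not a value from the paper.

```python
import random

random.seed(42)

def stackup_samples(n=100_000):
    """Sample the closing link of a simple chain: gap = groove width - root width."""
    gaps = []
    for _ in range(n):
        groove = random.gauss(mu=20.00, sigma=0.010)  # hypothetical disk groove, mm
        root   = random.gauss(mu=19.95, sigma=0.012)  # hypothetical blade root, mm
        gaps.append(groove - root)
    return gaps

gaps = stackup_samples()
mean_gap = sum(gaps) / len(gaps)
# Fraction of assemblies falling outside a hypothetical 0.02-0.08 mm gap spec
out_of_spec = sum(g < 0.02 or g > 0.08 for g in gaps) / len(gaps)
```

Running many sampled part combinations through the chain, as in steps 3-4 with CAD/CAE models, yields the distribution of the closing link and hence the expected fraction of non-conforming assemblies before any parts are physically assembled.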

Keywords: accuracy, assembly, interacting dimension chains, turbine

Procedia PDF Downloads 373
130 Relativity in Toddlers' Understanding of the Physical World as Key to Misconceptions in the Science Classroom

Authors: Michael Hast

Abstract:

Within their first year, infants can differentiate between objects based on their weight. By at least 5 years of age, children hold consistent weight-related misconceptions about the physical world, such as that heavy things fall faster than lighter ones because of their weight. Such misconceptions are seen as a challenge for science education since they are often highly resistant to change through instruction. Understanding the time point of emergence of such ideas could, therefore, be crucial for early science pedagogy. The paper thus discusses two studies that jointly address the issue by examining young children’s search behaviour in hidden displacement tasks under consideration of relative object weight. In both studies, children were tested with a heavy or a light ball, and they either had information about one of the balls only or about both. In Study 1, 88 toddlers aged 2 to 3½ years watched a ball being dropped into a curved tube and were then allowed to search for the ball in three locations – one straight beneath the tube entrance, one where the curved tube led to, and one that corresponded to neither of the previous outcomes. Success and failure at the task were not impacted in any particular way by the weight of the balls alone. However, from around 3 years onwards, relative lightness, gained through tactile experience of both balls beforehand, enhanced search success. Conversely, relative heaviness increased search errors such that children increasingly searched in the location immediately beneath the tube entry – known as the gravity bias. In Study 2, 60 toddlers aged 2, 2½ and 3 years watched a ball roll down a ramp and behind a screen with four doors, with a barrier placed along the ramp after one of the four doors. Toddlers were allowed to open the doors to find the ball. While search accuracy generally increased with age, relative weight played no role in 2-year-olds’ search behaviour. Relative lightness improved 2½-year-olds’ searches.
At 3 years, both relative lightness and relative heaviness had a significant impact, with the former improving search accuracy and the latter reducing it. Taken together, both studies suggest that between 2 and 3 years of age, relative object weight is increasingly taken into consideration in navigating naïve physical concepts. In particular, it appears to contribute to the early emergence of misconceptions relating to object weight. This insight from developmental psychology research may have consequences for early science education and related pedagogy towards early conceptual change.

Keywords: conceptual development, early science education, intuitive physics, misconceptions, object weight

Procedia PDF Downloads 190
129 Microbiological Assessment of Soft Cheese (Wara), Raw Milk and Dairy Drinking Water from Selected Farms in Ido, Ibadan, Nigeria

Authors: Blessing C. Nwachukwu, Michael O. Taiwo, Wasiu A. Abibu, Isaac O. Ayodeji

Abstract:

Milk is an important source of micro- and macronutrients for humans, and soft cheese (Wara) is a by-product of milk. In addition, water is considered one of the most vital resources on cattle farms. Due to the high consumption rate of milk and soft cheese and the traditional techniques involved in their production in Nigeria, there was a need for a microbiological assessment, which is of utmost public health importance. The study thus investigated the microbial risks associated with consumption of milk and soft cheese (Wara). It also investigated common pathogens present in dairy water on farms, and antibiotic sensitivity profiling of the implicated pathogens was conducted. Samples were collected from three different Fulani dairy herds in Ido local government, Ibadan, Oyo State, Nigeria and subjected to microbiological evaluation and antimicrobial susceptibility testing. Aspergillus flavus was the only fungal isolate from Wara, while Staphylococcus aureus, Vibrio cholerae, Hafnia alvei, Proteus mirabilis, Escherichia coli, Pseudomonas aeruginosa, Citrobacter freundii, and Klebsiella pneumoniae were the bacterial species isolated from Wara, dairy milk and dairy drinking water. Bacterial counts in Wara from the three selected farms A, B and C were 3.5×10⁵ CFU/ml, 4.0×10⁵ CFU/ml and 5.3×10⁵ CFU/ml respectively, while the fungal count was 3 CFU/100 µl. The total bacterial counts in dairy milk from farms A, B and C were 2.0×10⁵ CFU/ml, 3.5×10⁵ CFU/ml and 6.5×10⁵ CFU/ml respectively; 1.4×10⁵ CFU/ml, 1.9×10⁵ CFU/ml and 4.9×10⁵ CFU/ml were the recorded bacterial counts in dairy water from farms A, B and C respectively. The highest antimicrobial resistance of 100% was recorded in Wara with Enrofloxacin, Gentamicin, Ceftriaxone and Colistin. The highest antimicrobial susceptibility of 100% was recorded in raw milk with Enrofloxacin and Gentamicin.
The highest intermediate antimicrobial response of 100% was recorded in raw milk with Streptomycin. The study revealed that most of the cheeses sold in Ido Local Government are contaminated with pathogens. Further research is needed on standardizing the production method to prevent pathogens from gaining access. The presence of bacteria in raw milk indicated contamination due to poor handling and unhygienic practices. Thus, drinking unpasteurized milk is hazardous as it increases the risk of zoonoses. Also, the provision of quality drinking water is crucial for optimum dairy productivity. Health education programs aimed at increasing awareness of the importance of clean water for animal health will be helpful.
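Counts of the kind reported above are derived from plate colony counts and dilution factors; a minimal sketch of the standard conversion (the colony count, dilution and plated volume below are illustrative assumptions, not the study's raw data):

```python
def cfu_per_ml(colonies: int, dilution_factor: float, plated_volume_ml: float) -> float:
    """Convert a plate colony count back to CFU/ml of the original sample."""
    # Scale the count up by the plated volume, then by the dilution
    return colonies / plated_volume_ml * dilution_factor

# Illustrative: 35 colonies on a 10^-3 dilution plate with 0.1 ml plated
# gives a count of the same order as farm A's Wara (3.5x10^5 CFU/ml)
count = cfu_per_ml(colonies=35, dilution_factor=1e3, plated_volume_ml=0.1)
```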

Keywords: dairy, raw milk, soft cheese, Wara

Procedia PDF Downloads 183
128 Considering Uncertainties of Input Parameters on Energy, Environmental Impacts and Life Cycle Costing by Monte Carlo Simulation in the Decision Making Process

Authors: Johannes Gantner, Michael Held, Matthias Fischer

Abstract:

The refurbishment of the building stock in terms of energy supply and efficiency is one of the major challenges of the German turnaround in energy policy. As the building sector accounts for 40% of Germany’s total energy demand, additional insulation is key for energy-efficient refurbished buildings. Nevertheless, despite the energetic benefits, the environmental and economic performance of insulation materials is often questioned. The methods of Life Cycle Assessment (LCA) and Life Cycle Costing (LCC) can form the standardized basis for addressing these doubts and are becoming increasingly important for material producers due to efforts such as the Product Environmental Footprint (PEF) or Environmental Product Declarations (EPD). Due to the increasing use of LCA and LCC information for decision support, the robustness and resilience of the results become crucial, especially for supporting decision and policy makers. LCA and LCC results are based on models that depend on technical parameters such as efficiencies, material and energy demand, and product output. Nevertheless, the influence of parameter uncertainties on lifecycle results is usually not considered, or is studied only superficially. Yet the effect of parameter uncertainties cannot be neglected: in the example of an exterior wall, the overall lifecycle results vary by more than a factor of three. As a result, the simple best-case/worst-case analyses used in practice are not sufficient. These analyses allow a first rough view of the results but do not take effects such as error propagation into account. Consequently, LCA practitioners cannot provide further guidance for decision makers. Probabilistic analyses enable LCA practitioners to gain a deeper understanding of LCA and LCC results and provide better decision support.
Within this study, the environmental and economic impacts of an exterior wall system over its whole lifecycle are illustrated, and the effects of different uncertainty analyses on the interpretation in terms of resilience and robustness are shown. Here, the approaches of error propagation and Monte Carlo simulation are applied and combined with statistical methods in order to allow a deeper understanding and interpretation. All in all, this study emphasizes the need for a deeper and more detailed probabilistic evaluation based on statistical methods. Only in this way can misleading interpretations be avoided and the results be used for resilient and robust decisions.
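The probabilistic evaluation described above can be sketched in a few lines. The following toy model propagates assumed input distributions through a hypothetical lifecycle-impact function via Monte Carlo simulation; the parameter names, distributions and values are illustrative assumptions, not the study's actual model:

```python
import random
import statistics

def lifecycle_impact(insulation_kg, impact_per_kg, lifetime_years, annual_savings):
    """Net lifecycle impact: production burden minus operational savings."""
    return insulation_kg * impact_per_kg - lifetime_years * annual_savings

random.seed(42)
N = 10_000
results = []
for _ in range(N):
    # Each input parameter is drawn from an assumed distribution
    results.append(lifecycle_impact(
        insulation_kg=random.gauss(500, 50),    # material demand
        impact_per_kg=random.gauss(2.0, 0.3),   # e.g. kg CO2-eq per kg material
        lifetime_years=random.uniform(30, 50),  # service life
        annual_savings=random.gauss(20, 5),     # avoided operational impact per year
    ))

mean = statistics.mean(results)
stdev = statistics.stdev(results)
# Percentiles give a more robust picture than best/worst case alone
srt = sorted(results)
lo, hi = srt[int(0.05 * N)], srt[int(0.95 * N)]
print(f"mean={mean:.0f}, sd={stdev:.0f}, 90% interval=[{lo:.0f}, {hi:.0f}]")
```

Unlike a best-case/worst-case pair, the resulting distribution shows how likely the extreme outcomes actually are.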

Keywords: uncertainty, life cycle assessment, life cycle costing, Monte Carlo simulation

Procedia PDF Downloads 286
127 Time of Week Intensity Estimation from Interval Censored Data with Application to Police Patrol Planning

Authors: Jiahao Tian, Michael D. Porter

Abstract:

Law enforcement agencies are tasked with crime prevention and crime reduction under limited resources. Having an accurate temporal estimate of the crime rate would be valuable in achieving such a goal. However, estimation is usually complicated by the interval-censored nature of crime data. We cast the problem of intensity estimation as a Poisson regression and use an EM algorithm to estimate the parameters. Two special penalties are added that provide smoothness over the time of day and day of the week. The approach presented here provides accurate intensity estimates and can also uncover day-of-week clusters that share the same intensity patterns. Anticipating where and when crimes might occur is a key element of successful policing strategies. However, this task is complicated by the presence of interval-censored data, in which the event time is known only to lie within an interval instead of being observed exactly. This type of data is prevalent in criminology because of the absence of victims for certain types of crime. Despite its importance, research on the temporal analysis of crime has lagged behind the spatial component. Inspired by the success of solving crime-related problems with statistical approaches, we propose a statistical model for the temporal intensity estimation of crime with censored data. The model is built on Poisson regression and has special penalty terms added to the likelihood. An EM algorithm was derived to obtain maximum likelihood estimates, and the resulting model shows superior performance to the competing model. Our research is in line with the Smart Policing Initiative (SPI) proposed by the Bureau of Justice Assistance (BJA) as an effort to support law enforcement agencies in building evidence-based, data-driven law enforcement tactics. The goal is to identify strategic approaches that are effective in crime prevention and reduction.
In our case, we allow agencies to deploy their resources for a relatively short period of time to achieve the maximum level of crime reduction. By analyzing a particular area within cities where data are available, our proposed approach can provide not only an accurate estimate of intensities for the time unit considered but also a time-varying crime incidence pattern. Both will be helpful in the allocation of limited resources, either by improving the existing patrol plan using the discovered day-of-week clusters or by guiding the deployment of any extra resources available.
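The E-step/M-step idea can be illustrated on a stripped-down version of the problem (hourly time-of-day bins, no smoothness penalties — those are the paper's addition): each interval-censored event is fractionally allocated across its interval in proportion to the current intensity estimates, and the intensities are then re-estimated from the expected counts. The data below are made up for illustration:

```python
def em_intensity(events, n_bins=24, n_iter=200):
    """events: list of (start_bin, end_bin) inclusive censoring intervals.
    Returns estimated expected event counts per time-of-day bin."""
    lam = [1.0] * n_bins  # flat initial intensity over bins
    for _ in range(n_iter):
        expected = [0.0] * n_bins
        for s, e in events:
            total = sum(lam[b] for b in range(s, e + 1))
            for b in range(s, e + 1):
                # E-step: share each censored event across its interval
                expected[b] += lam[b] / total
        lam = expected  # M-step: intensity proportional to expected counts
    return lam

# Illustrative data: 5 events observed exactly at hour 20, plus 10 events
# censored to the evening interval 18-22 (e.g. burglaries found next morning)
events = [(20, 20)] * 5 + [(18, 22)] * 10
lam = em_intensity(events)
```

Without the smoothness penalties, maximum likelihood concentrates the censored mass in bins that also contain exactly observed events — one motivation for the penalty terms added in the paper.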

Keywords: cluster detection, EM algorithm, interval censoring, intensity estimation

Procedia PDF Downloads 66
126 The Impact of Technology and Artificial Intelligence on Children in Autism

Authors: Dina Moheb Rashid Michael

Abstract:

A descriptive statistical analysis of the data showed that the most important factor evoking negative attitudes among teachers is student behavior. Genetic syndromes such as Down syndrome (DS) and fragile X syndrome (FXS) have been presented as useful models for understanding the risk factors and protective factors associated with the emergence of autistic traits. Although these "syndromic" forms of autism reach clinical thresholds, they appear to be distinctly different from the idiopathic or "non-syndromic" autism phenotype. Most teachers reported that their teacher education did not prepare them for the educational needs of children with autism, particularly in relation to non-verbal skills. The study is important and points the way toward improving teacher inclusion education in Thailand. Inclusive education for students with autism is still in its infancy in Thailand. Although the number of autistic children in schools has increased significantly since the Thai government introduced the Education Regulations for Persons with Disabilities Act in 2008, there is a general lack of services for autistic students and their families. This quantitative study used the Attitude and Preparedness to Teach Students with Autism Scale (APTSAS) to test the attitudes and readiness of 110 elementary school teachers when teaching students with autism in general education classrooms. To uncover the true nature of these comorbidities, it is necessary to expand the definition of autism to include the cognitive features of the disorder, and then apply this expanded conceptualization to examine patterns of autistic syndromes. This study used various established eye-tracking paradigms to assess the visual and attentional performance of children with DS and FXS who meet the autism thresholds defined in the Social Communication Questionnaire.
The aim is to study whether the autistic profiles of these children are associated with visual orienting difficulties ("sticky attention"), decreased social attention, and increased visual search performance, all of which are hallmarks of the idiopathic autism phenotype. Data will be collected from children with DS and FXS, aged 6 to 10 years, and two control groups matched for age and intellectual ability (i.e., children with idiopathic autism). To enable a comparison of visual attention profiles, cross-sectional analyses of developmental trajectories are carried out. Significant differences in the visual-attentional processes underlying the presentation of autism in children with FXS and DS have been suggested, supporting the concept of syndrome specificity. The study provides insights into the complex heterogeneity associated with syndromic autism symptoms and autism itself, with clinical implications for the utility of autism intervention programs in DS and FXS populations.

Keywords: attitude, autism, teachers, sports activities, movement skills, motor skills

Procedia PDF Downloads 57
125 MicroRNA-1246 Expression Associated with Resistance to Oncogenic BRAF Inhibitors in Mutant BRAF Melanoma Cells

Authors: Jae-Hyeon Kim, Michael Lee

Abstract:

Intrinsic and acquired resistance limits the therapeutic benefits of oncogenic BRAF inhibitors in melanoma. MicroRNAs (miRNAs) regulate the expression of target mRNAs by repressing their translation. Thus, we investigated miRNA expression patterns in melanoma cell lines to identify candidate biomarkers for acquired resistance to BRAF inhibition. Here, we used the Affymetrix miRNA V3.0 microarray profiling platform to compare miRNA expression levels in three cell lines: BRAF inhibitor-sensitive A375P BRAF V600E cells, their BRAF inhibitor-resistant counterparts (A375P/Mdr), and SK-MEL-2 BRAF-WT cells with intrinsic resistance to BRAF inhibition. miRNAs with at least a two-fold change in expression between BRAF inhibitor-sensitive and -resistant cell lines were identified as differentially expressed. Averaged intensity measurements identified 138 and 217 miRNAs that were differentially expressed by two-fold or more between (1) A375P and A375P/Mdr and (2) A375P and SK-MEL-2, respectively. Hierarchical clustering revealed differences in miRNA expression profiles between BRAF inhibitor-sensitive and -resistant cell lines for miRNAs involved in intrinsic and acquired resistance. In particular, 43 miRNAs were identified whose expression was consistently altered in both BRAF inhibitor-resistant cell lines, regardless of whether the resistance was intrinsic or acquired: 25 miRNAs were consistently upregulated and 18 downregulated by more than two-fold. Although some discrepancies were detected when the miRNA microarray data were compared with qPCR-measured expression levels, qRT-PCR results for five miRNAs (miR-3617, miR-92a1, miR-1246, miR-1936-3p, and miR-17-3p) showed excellent agreement with the microarray experiments. To further investigate the cellular functions of these miRNAs, we examined their effects on cell proliferation. Synthetic oligonucleotide miRNA mimics were transfected into the three cell lines, and proliferation was quantified using a colorimetric assay.
Of the five miRNAs tested, only miR-1246 altered the proliferation of A375P/Mdr cells. Transfection of the miR-1246 mimic strongly conferred PLX-4720 resistance on A375P/Mdr cells, implying that miR-1246 upregulation confers acquired resistance to BRAF inhibition. We also found that PLX-4720 caused much greater G2/M arrest in A375P/Mdr cells transfected with the miR-1246 mimic than in scrambled RNA-transfected cells. Additionally, the miR-1246 mimic partially conferred resistance to autophagy induction by PLX-4720. These results indicate that autophagy plays an essential death-promoting role in PLX-4720-induced cell death. Taken together, these results suggest that miRNA expression profiling in melanoma cells can provide valuable information on the network of BRAF inhibitor resistance-associated miRNAs.
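The two-fold-change screen described above amounts to a simple filter over averaged intensities between paired conditions; a minimal sketch with made-up expression values (the numbers are illustrative, not the study's measurements):

```python
import math

def differentially_expressed(sensitive, resistant, fold=2.0):
    """Flag miRNAs whose expression ratio between conditions changes by at
    least `fold` in either direction; returns {name: log2 fold change}."""
    cutoff = math.log2(fold)
    hits = {}
    for mirna, s_val in sensitive.items():
        log2fc = math.log2(resistant[mirna] / s_val)
        if abs(log2fc) >= cutoff:  # up- or down-regulated past the threshold
            hits[mirna] = log2fc
    return hits

# Illustrative averaged intensities (arbitrary units)
sensitive = {"miR-1246": 100.0, "miR-17-3p": 80.0, "miR-92a1": 60.0}
resistant = {"miR-1246": 450.0, "miR-17-3p": 90.0, "miR-92a1": 25.0}
hits = differentially_expressed(sensitive, resistant)
```

Here miR-1246 (4.5-fold up) and miR-92a1 (2.4-fold down) pass the filter, while the 1.1-fold change in miR-17-3p does not.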

Keywords: microRNA, BRAF inhibitor, drug resistance, autophagy

Procedia PDF Downloads 326
124 The Role of Glyceryl Trinitrate (GTN) in 99mTc-HIDA with Morphine Provocation Scan for the Investigation of Type III Sphincter of Oddi Dysfunction (SOD)

Authors: Ibrahim M Hassan, Lorna Que, Michael Rutland

Abstract:

Type I SOD is usually diagnosed by anatomical imaging such as ultrasound, CT and MRCP. However, types II and III SOD yield negative results despite the presence of significant symptoms. In particular, type III is difficult to diagnose due to the absence of significant biochemical or anatomical abnormalities. Nuclear medicine can aid in this diagnostic dilemma by demonstrating functional changes in bile flow. Low-dose morphine (0.04 mg/kg) increases the tone of the sphincter of Oddi (SO), and its usefulness in diagnosing SOD has been shown through the delay in bile flow it causes compared to a non-provoked baseline scan. This work expands on that process by using sublingual GTN at 60 minutes post tracer and morphine injection to relax the SO and induce an improvement in bile outflow, and in some cases immediate relief of morphine-induced abdominal pain. The criteria for a positive SOD study during the first hour of morphine provocation are: (1) delayed intrahepatic biliary duct tracer accumulation; (2) delayed appearance but persistent retention of activity in the common bile duct; and (3) delayed bile flow into the duodenum. In addition, a requirement for GTN within the first hour to relieve abdominal pain was regarded as highly supportive of the diagnosis. A retrospective analysis was performed of 85 patients (pts) (78F and 6M) referred for suspected type III SOD who had been intensively investigated because of recurrent right upper quadrant or abdominal pain post cholecystectomy. A 99mTc-HIDA scan with morphine provocation was performed, followed by GTN at 60 minutes post tracer injection, with a further thirty minutes of dynamic imaging acquired. 30 pts were negative. 55 pts were regarded as positive for SOD, and 38/55 (69%) of these patients with an abnormal result were further evaluated with a baseline 99mTc-HIDA. As expected, all 38 pts showed better bile flow characteristics than during morphine provocation.
20/55 (36%) patients were treated by ERCP sphincterotomy, and the rest were managed conservatively with medical therapy. In all cases regarded as positive for SOD, sublingual GTN at 60 minutes produced an immediate improvement in bile flow. 11/55 (20%) who developed severe post-morphine abdominal pain were relieved by GTN almost instantaneously. We propose that GTN is a useful agent in the diagnosis of SOD when performing a 99mTc-HIDA scan, and that a satisfactory response to sublingual GTN could offer additional information in patients who have severe pain at the time of the procedure or who present to the emergency unit because of biliary pain, as well as in determining whether a trial of medical therapy may be used before considering surgery.

Keywords: GTN, HIDA, morphine, SOD

Procedia PDF Downloads 306