Search results for: specific methane production
1201 Emergence of Neurodiversity and Awareness of Autism Among School Teachers: A Preliminary Survey
Authors: Tanvi Rajesh Sanghavi
Abstract:
Introduction: Neurodiversity is a concept which captures the different ways in which people's brains function and considers these differences part of normal variation. It is a strength-based approach which focuses on the individual's strengths and capabilities and provides support wherever necessary. In many parts of the world, those diagnosed with autism spectrum disorder have been ostracized and ridiculed due to their sensory and communication differences. Hence, it becomes important for teachers to have knowledge about autism and to understand the needs of children with autism. Need: India is rich in cultural, linguistic and religious diversity. It is important to study neurodiversity in such a population for a better understanding of neurodiverse individuals and for appropriate intervention. Aim & objectives: This study assesses teachers' knowledge of the causes, traits and educational requirements of children with autism spectrum disorder (ASD). It also aims to find out whether mainstream schools actually provide teachers with training programs to manage such children, along with the necessary accommodations. Method: The current study was a cross-sectional study conducted among school teachers. A total of 30 school teachers were enrolled after giving informed consent. The participants were directed to a Google Form consisting of objective questions. The first part of the questionnaire elicited information about the school, teaching experience, qualifications, etc., with specific questions on attending or conducting sensitization and professional programs regarding the care of autistic children. The second part consisted of basic questions on the teacher's understanding of the diagnosis, traits, causes and road to recovery, and of the educational and communication needs of autistic children from the teacher's perspective. The responses were tabulated and analyzed descriptively.
Results: Most of the teachers had 5–10 years of teaching experience. The majority used the term "special child" for autistic children. Around 54.8% of the teachers (17 teachers) felt that the parents of autistic children should teach their child adaptive skills, and 41.9% felt that they should seek medical intervention. About 50% of the teachers felt that the cause of autism is related to pre-natal maternal factors, and about 40% felt that its cause is genetic. Only a small percentage of teachers felt that they were trained to manage children with autism. More than 50% mentioned that their schools do not conduct training programs for managing these children. Discussion & Conclusion: In this study, the knowledge and perspectives of teachers on children with ASD were studied. The most widely held contemporary belief is that genetic factors play a major part in the development of ASD, although the existing evidence is muddled, with numerous opposing perspectives on the nature of this mechanism. It is worth noting that any culture's level of humanity is mirrored in how that society "treats" its vulnerable population.
Keywords: autism, neurodiversity, awareness, education
Procedia PDF Downloads 16
1200 Assessment of Surface Water Quality near Landfill Sites Using a Water Pollution Index
Authors: Alejandro Cittadino, David Allende
Abstract:
Landfilling of municipal solid waste is a common waste management practice in Argentina, as in many parts of the world. There is extensive scientific literature on the potential negative effects of landfill leachates on the environment, so it is necessary to be rigorous with control and monitoring systems. Due to the specific municipal solid waste composition in Argentina, local landfill leachates contain large amounts of organic matter (biodegradable, but also refractory to biodegradation), as well as ammonia-nitrogen, small traces of heavy metals, and inorganic salts. In order to investigate the surface water quality in the Reconquista river adjacent to the Norte III landfill, water samples both upstream and downstream of the dumpsite are collected quarterly and analyzed for 43 parameters, including organic matter, heavy metals, and inorganic salts, as required by local standards. The objective of this study is to apply a water quality index that considers the leachate characteristics in order to determine the quality status of the watercourse as it passes the landfill. The water pollution index method has been widely used in water quality assessments, particularly of rivers, and it has played an increasingly important role in water resource management, since it provides a number simple enough for the public to understand that states the overall water quality at a certain location and time. The chosen water quality index (ICA) is based on the values of six parameters: dissolved oxygen (in mg/l and percent saturation), temperature, biochemical oxygen demand (BOD5), ammonia-nitrogen, and chloride (Cl-) concentration. The ICA index was determined both upstream and downstream on the Reconquista river, with a rating scale between 0 (very poor water quality) and 10 (excellent water quality).
The monitoring results indicated that the water quality was unaffected by possible leachate runoff, since the index scores upstream and downstream were ranked in the same category, although in general most of the samples were classified as having poor water quality according to the index's scale. The annual averaged ICA scores (computed quarterly) were 4.9, 3.9, 4.4 and 5.0 upstream and 3.9, 5.0, 5.1 and 5.0 downstream during the study period between 2014 and 2017. Additionally, the water quality seemed to exhibit distinct seasonal variations, probably due to annual precipitation patterns in the study area. The ICA water quality index appears appropriate for evaluating landfill impacts, since it accounts mainly for organic pollution and inorganic salts and reflects the absence of heavy metals in the local leachate composition; however, the inclusion of other parameters could be more decisive in discerning the stream reaches affected by landfill activities. Future work may consider adding other parameters to the index, such as total organic carbon (TOC) and total suspended solids (TSS), since they are present in the leachate in high concentrations.
Keywords: landfill, leachate, surface water, water quality index
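The abstract does not reproduce the ICA formula itself, so any implementation must assume a scoring scheme. The following is a minimal Python sketch, assuming each of the six parameters is linearly mapped to a 0-10 sub-index between hypothetical "worst" and "best" bounds and the sub-indices are averaged; the bounds and the averaging rule are illustrative assumptions, not the actual ICA definition.

```python
# Hypothetical sketch of an ICA-style water quality index. The thresholds and
# the aggregation rule (mean of per-parameter sub-indices) are illustrative
# assumptions, since the source abstract does not state the ICA formula.

def sub_index(value, worst, best):
    """Linearly map a measured value onto a 0-10 sub-index.

    `worst` maps to 0 and `best` to 10; out-of-range values are clipped.
    Works whether quality improves with increasing values (e.g. dissolved
    oxygen) or with decreasing values (e.g. BOD5), by swapping the bounds.
    """
    score = 10.0 * (value - worst) / (best - worst)
    return max(0.0, min(10.0, score))

# Hypothetical (worst, best) bounds for the six ICA parameters.
BOUNDS = {
    "do_mg_l":       (0.0, 9.0),     # dissolved oxygen, mg/l (higher is better)
    "do_saturation": (0.0, 100.0),   # dissolved oxygen, % saturation
    "temperature_c": (35.0, 15.0),   # water temperature (lower is better here)
    "bod5_mg_l":     (30.0, 0.0),    # biochemical oxygen demand (lower is better)
    "ammonia_mg_l":  (5.0, 0.0),     # ammonia-nitrogen (lower is better)
    "chloride_mg_l": (500.0, 0.0),   # chloride (lower is better)
}

def ica_index(sample):
    """Aggregate the six sub-indices into one 0-10 score (simple mean)."""
    scores = [sub_index(sample[k], lo, hi) for k, (lo, hi) in BOUNDS.items()]
    return sum(scores) / len(scores)

# Invented downstream sample; every value sits at the midpoint of its range.
downstream = {"do_mg_l": 4.5, "do_saturation": 50.0, "temperature_c": 25.0,
              "bod5_mg_l": 15.0, "ammonia_mg_l": 2.5, "chloride_mg_l": 250.0}
print(ica_index(downstream))  # prints 5.0, i.e. mid-scale quality
```

A real implementation would replace the linear sub-indices with the rating curves of the chosen index and possibly weight the parameters unequally.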
Procedia PDF Downloads 151
1199 Comparison of Conventional and Microwave-Assisted Drying Methods on the Physicochemical Characteristics of Rice Bran Noodles
Authors: Chien-Chun Huang, Yi-U Chiou, Chiun-C.R. Wang
Abstract:
For a longer shelf life of noodles, air drying is the traditional method of noodle preparation. Microwave drying has the specific advantage of rapid and uniform heating due to the penetration of microwaves into the body of the product. A microwave-assisted facility offers a quick and energy-saving method of food dehydration compared to the conventional air-drying method. Recently, numerous studies of the rheological characteristics of pasta or spaghetti were carried out with microwave-assisted air dryers, and many agricultural products were dried successfully. However, there is little research evaluating the physicochemical characteristics and cooking quality of microwave-assisted air-dried salted noodles. The purpose of this study was to compare conventional and microwave-assisted drying methods with respect to the physicochemical properties and eating quality of rice bran noodles. Three microwave power levels (0.5, 0.75 and 1.0 kW), combined with 50℃ hot air, were applied for the dehydration of rice bran noodles. Three proportions of rice bran, ranging from 0 to 20%, were incorporated into the salted noodle formulation. The appearance, optimum cooking time, cooking yield and losses, textural profile analysis, and sensory evaluation of the rice bran noodles were measured. The results indicated that the high-power (1.0 kW) microwave facility caused partial burning and porosity on the surface of the rice bran noodles. However, no such defects appeared on the surface of noodles prepared with the low-power (0.5 kW) microwave facility. The optimum cooking time of the noodles decreased as microwave power or the proportion of rice bran increased. Noodles with the higher proportion of rice bran (20%) or dried at higher microwave power showed higher color intensity and higher cooking losses compared with conventionally air-dried noodles.
The firmness of cooked rice bran noodles slightly decreased when the noodles were dried by the high-power microwave-assisted method. The shearing force, tensile strength, elasticity and texture profiles of cooked rice noodles decreased as the proportion of rice bran increased. The results of the sensory evaluation indicated that conventionally dried noodles obtained higher springiness, cohesiveness and acceptability than noodles dried with high-power (1.0 kW) microwave assistance. However, low-power (0.5 kW) microwave-assisted dried noodles showed sensory attributes and acceptability comparable to conventionally dried noodles. Moreover, the sensory attributes of firmness, springiness and cohesiveness decreased, while stickiness increased, as the rice bran proportion rose. These results suggest that incorporating a lower proportion of rice bran and using low-power microwave-assisted drying could produce a shorter cooking time and acceptable cooked noodle quality compared to conventional drying.
Keywords: microwave-assisted drying method, physicochemical characteristics, rice bran noodles, sensory evaluation
Procedia PDF Downloads 481
1198 Getting to Know ICU Nurses and Their Duties
Authors: Masih Nikgou
Abstract:
ICU nurses, or intensive care nurses, are highly specialized and trained healthcare personnel. These nurses provide nursing care for patients with life-threatening illnesses or conditions, supplying the experience, knowledge and specialized skills that patients need to survive and recover. Intensive care (ICU) nurses are trained to make split-second decisions and act quickly when the patient's condition changes. Their primary work environment is the hospital intensive care unit. Typically, ICU patients require a high level of care. ICU nurses work in challenging and complex fields of the nursing profession, with the primary duty of caring for and saving patients who are fighting for their lives. They are highly trained to provide exceptional care to patients who depend on 24/7 nursing. A patient in the ICU is often intubated, on a ventilator, and connected to several life-support machines and medical equipment. ICU nurses are fully versed in all aspects of bringing their patients back to health. Some of their specific responsibilities include: (a) assessing and monitoring the patient's progress and identifying any sudden changes in the patient's medical condition; (b) administering drugs intravenously, by injection or through gastric tubes; (c) providing regular updates on patient progress to physicians, patients, and their families; (d) performing approved diagnostic or treatment procedures according to the clinical condition of the patient; (e) informing the relevant doctors in case of a health emergency; (f) evaluating laboratory data and patients' vital signs to determine the need for emergency interventions; (g) caring for patient needs during recovery in the ICU; (h) providing emotional support to patients and their families;
(i) regulating and monitoring medical equipment and devices such as ventilators, oxygen delivery devices, transducers, and pressure lines; (j) assessing patients' pain levels and sedation needs; and (k) maintaining patient reports and records. As the name suggests, critical care nurses work primarily in ICU healthcare units. ICUs are kept clean and properly lit, with strict adherence to the health and safety standards of medical centers. ICU nurses usually move between the intensive care unit, the emergency department, the operating room, and other special departments of the hospital. They usually follow a standard shift schedule that includes morning, afternoon, and night rotations, with other schedules depending on the hospital and region. Nurses who are passionate about data and about managing a patient's condition and outcomes typically do well as ICU nurses. An inquisitive mind and attention to processes are equally important. ICU nurses are deeply compassionate and are not afraid to advocate for their patients and for family members who are distressed.
Keywords: nursing, intensive care unit, pediatric intensive care unit, mobile intensive care unit, surgical intensive care unit
Procedia PDF Downloads 78
1197 Narrating Atatürk Cultural Center as a Place of Memory and a Space of Politics
Authors: Birge Yildirim Okta
Abstract:
This paper aims to narrate the story of the Atatürk Cultural Center in Taksim Square, which was demolished in 2018, and to discuss its architecture as a social place of memory and its existence and demolishment as a space of politics. The paper uses narrative discourse analysis to research the Atatürk Cultural Center (AKM) as a place of memory and a space of politics from the establishment of the Turkish Republic (1923) until today. After the establishment of the Turkish Republic, one of the most important implementations in Taksim Square, reflecting the internationalist style, was the construction of the Opera Building in the Prost Plan. The first design of the opera building belonged to Auguste Perret but could not be implemented due to economic hardship during World War II. The project was then designed by architects Feridun Kip and Rüknettin Güney in 1946 but could not be completed due to the 1960 military coup. Later the project passed to another architect, Hayati Tabanlıoglu, with a change in its function to a cultural center. Eventually, the construction of the building was completed in 1969 in a completely different design. AKM became a symbol of republican modernism not only through its modern architectural style but also through its function as the first opera building of the Republic, reflecting the western, modern cultural heritage of professional groups, artists, and the intelligentsia. In 2005, Istanbul's council for the protection of cultural heritage decided to list AKM as grade 1 cultural heritage, ending a period of controversy which had seen calls for the demolition of the center on the claim that it had reached the end of its useful lifespan. In 2008 it was announced that the building would be closed for repairs and restoration. Over the following years, the building was quietly demolished piece by piece while the Taksim mosque was built just in front of the Atatürk Cultural Center.
Belonging to the early republican period, AKM was a representation of the cultural production of modern society and of the emerging, westward-looking, secular public space in Turkey. Its erasure from the Taksim scene under the rule of the conservative governing Justice and Development Party, and the construction of the Taksim mosque in front of AKM's parcel, is equally representational. The question of governing the city through space has always been important for governments and those holding political power, since cities are chaotic environments that can be seen as a threat, carrying the tensions of the proletariat and of contradictory groups. The story of AKM as a dispositive, a regulatory apparatus, demonstrates how space itself becomes a political medium used to transform the socio-political condition. The paper narrates the existence and demolishment of the Atatürk Cultural Center by discussing the constructed and demolished building as a place of memory and a space of politics.
Keywords: space of politics, place of memory, Atatürk Cultural Center, Taksim square, collective memory
Procedia PDF Downloads 140
1196 Impact of Transitioning to Renewable Energy Sources on Key Performance Indicators and Artificial Intelligence Modules of Data Center
Authors: Ahmed Hossam ElMolla, Mohamed Hatem Saleh, Hamza Mostafa, Lara Mamdouh, Yassin Wael
Abstract:
Artificial intelligence (AI) is reshaping industries, and its potential to revolutionize renewable energy and data center operations is immense. By harnessing AI's capabilities, we can optimize energy consumption, predict fluctuations in renewable energy generation, and improve the efficiency of data center infrastructure. This convergence of technologies promises a future where energy is managed more intelligently, sustainably, and cost-effectively. The integration of AI into renewable energy systems unlocks a wealth of opportunities. Machine learning algorithms can analyze vast amounts of data to forecast weather patterns, solar irradiance, and wind speeds, enabling more accurate energy production planning. AI-powered systems can optimize energy storage and grid management, ensuring a stable power supply even during intermittent renewable generation. Moreover, AI can identify maintenance needs for renewable energy infrastructure, preventing costly breakdowns and maximizing system lifespan. Data centers, which consume substantial amounts of energy, are prime candidates for AI-driven optimization. AI can analyze energy consumption patterns, identify inefficiencies, and recommend adjustments to cooling systems, server utilization, and power distribution. Predictive maintenance using AI can prevent equipment failures, reducing energy waste and downtime. Additionally, AI can optimize data placement and retrieval, minimizing energy consumption associated with data transfer. As AI transforms renewable energy and data center operations, modified Key Performance Indicators (KPIs) will emerge. Traditional metrics like energy efficiency and cost-per-megawatt-hour will continue to be relevant, but additional KPIs focused on AI's impact will be essential. These might include AI-driven cost savings, predictive accuracy of energy generation and consumption, and the reduction of carbon emissions attributed to AI-optimized operations. 
By tracking these KPIs, organizations can measure the success of their AI initiatives and identify areas for improvement. Ultimately, the synergy between AI, renewable energy, and data centers holds the potential to create a more sustainable and resilient future. By embracing these technologies, we can build smarter, greener, and more efficient systems that benefit both the environment and the economy.
Keywords: data center, artificial intelligence, renewable energy, energy efficiency, sustainability, optimization, predictive analytics, energy consumption, energy storage, grid management, data center optimization, key performance indicators, carbon emissions, resiliency
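Two of the "traditional metrics" the abstract mentions can be made concrete. A brief sketch with invented numbers: power usage effectiveness (PUE) is a standard industry ratio of total facility energy to IT equipment energy, and cost per megawatt-hour is a simple normalisation.

```python
# Sketch of two conventional data-center KPIs referenced in the abstract.
# The formulas are standard; the monthly figures below are hypothetical.

def pue(total_facility_kwh, it_equipment_kwh):
    """PUE = total facility energy / IT equipment energy (ideal value: 1.0)."""
    return total_facility_kwh / it_equipment_kwh

def cost_per_mwh(total_cost, total_kwh):
    """Average energy cost normalised to one megawatt-hour."""
    return total_cost / (total_kwh / 1000.0)

facility_kwh, it_kwh = 1_500_000.0, 1_000_000.0   # one month, hypothetical
print(pue(facility_kwh, it_kwh))                  # prints 1.5
print(cost_per_mwh(120_000.0, facility_kwh))      # prints 80.0 (per MWh)
```

AI-driven KPIs such as predictive accuracy or attributed carbon savings would be layered on top of baselines like these.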
Procedia PDF Downloads 33
1195 Bending the Consciousnesses: Uncovering Environmental Issues Through Circuit Bending
Authors: Enrico Dorigatti
Abstract:
The pile of hazardous e-waste produced, especially by developed and wealthy countries, grows relentlessly bigger. It is composed of EEDs (electrical and electronic devices) that are often thrown away although still well functioning, mainly due to (programmed) obsolescence. As a consequence, e-waste has taken, over recent years, the shape of a frightful, uncontrollable, and unstoppable phenomenon, fuelled mainly by market policies aiming to maximize sales, and thus profits, at any cost. Against it, governments and organizations have put effort into developing ambitious frameworks and policies aiming to regulate, in some cases, the whole lifecycle of EEDs, from design to recycling. Incidentally, however, such regulations sometimes make the disposal of devices economically unprofitable, which often translates into growing illegal e-waste trafficking, an activity usually undertaken by criminal organizations. It seems that nothing, at least in the near future, can stop the phenomenon of e-waste production and accumulation. While a practical solution seems hard to find, much can be done regarding people's education, which means informing and promoting good practices such as reusing and repurposing. This research argues that circuit bending, an activity rooted in neo-materialist philosophy and post-digital aesthetics and based on repurposing EEDs into novel music instruments and sound generators, could have great potential here. In particular, it asserts that circuit bending can expose ecological, environmental, and social criticalities of current market policies and the economic model, thanks not only to its practical side (e.g., sourcing and repurposing devices) but also to its artistic one (e.g., employing bent instruments for ecologically aware installations and performances).
Currently, the relevant literature and debate lack interest in, and information about, the ecological aspects and implications of the practical and artistic sides of circuit bending. This research, although still at an early stage, therefore aims to fill this gap by investigating, on one side, the ecological potential of circuit bending and, on the other, its capacity to sensitize people, through artistic practice, to e-waste-related issues. The methodology articulates in three main steps. Firstly, field research will be undertaken to understand where and how to source discarded EEDs for circuit bending in an ecological and sustainable way. Secondly, artistic installations and performances will be organized to sensitize the audience to environmental concerns through sound art and music derived from bent instruments; data, such as audience feedback, will be collected at this stage. The last step will consist of holding workshops to spread ecologically aware circuit bending practice. Additionally, all the data and findings collected will be made available and disseminated as resources.
Keywords: circuit bending, ecology, sound art, sustainability
Procedia PDF Downloads 171
1194 Phantom and Clinical Evaluation of Block Sequential Regularized Expectation Maximization Reconstruction Algorithm in Ga-PSMA PET/CT Studies Using Various Relative Difference Penalties and Acquisition Durations
Authors: Fatemeh Sadeghi, Peyman Sheikhzadeh
Abstract:
Introduction: The Block Sequential Regularized Expectation Maximization (BSREM) reconstruction algorithm was recently developed to suppress excessive noise by applying a relative difference penalty. The aim of this study was to investigate the effect of various strengths of the noise penalization factor in the BSREM algorithm under different acquisition durations and lesion sizes, in order to determine an optimum penalty factor considering both quantitative and qualitative image evaluation parameters for clinical use. Materials and Methods: The NEMA IQ phantom and 15 clinical whole-body patients with prostate cancer were evaluated. The phantom and patients were injected with Gallium-68 Prostate-Specific Membrane Antigen (68Ga-PSMA) and scanned on a non-time-of-flight Discovery IQ Positron Emission Tomography/Computed Tomography (PET/CT) scanner with BGO crystals. The data were reconstructed using BSREM with β-values of 100 to 500 at intervals of 100. These reconstructions were compared to OSEM as a widely used reconstruction algorithm. Following the standard NEMA measurement procedure, background variability (BV), recovery coefficient (RC), contrast recovery (CR) and residual lung error (LE) were measured from the phantom data, and signal-to-noise ratio (SNR), signal-to-background ratio (SBR) and tumor SUV from the clinical data. Qualitative features of the clinical images were visually ranked by one nuclear medicine expert. Results: The β-value acts as a noise suppression factor, so BSREM showed decreasing image noise with increasing β-value. BSREM with a β-value of 400 at a decreased acquisition duration (2 min/bp) produced a noise level approximately equal to that of OSEM at an increased acquisition duration (5 min/bp). For a β-value of 400 at 2 min/bp, SNR increased by 43.7% and LE decreased by 62% compared with OSEM at 5 min/bp. In both phantom and clinical data, an increase in the β-value translated into a decrease in SUV.
The lowest levels of SUV and noise were reached with the highest β-value (β=500), resulting in the highest SNR and lowest SBR, because noise was reduced more than SUV at the highest β-value. When comparing BSREM reconstructions with different β-values, the relative difference in the quantitative parameters was generally larger for smaller lesions. As the β-value decreased from 500 to 100, the increase in CR was 160.2% for the smallest sphere (10 mm) and 12.6% for the largest sphere (37 mm), and the trend was similar for SNR (-58.4% and -20.5%, respectively). BSREM was visually ranked higher than OSEM on all qualitative features. Conclusions: The BSREM algorithm, using more iterations, leads to greater quantitative accuracy without excessive noise, which translates into higher overall image quality and lesion detectability. This improvement can be used to shorten acquisition time.
Keywords: BSREM reconstruction, PET/CT imaging, noise penalization, quantification accuracy
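For readers unfamiliar with the metrics named above, a rough sketch of how such image-quality figures are computed from region-of-interest (ROI) statistics. Conventions vary between papers; the definitions below are common NEMA-style forms, and the sample values are invented rather than taken from this study.

```python
# Sketch of NEMA-style image-quality metrics from ROI statistics.
# Definitions follow common conventions (which vary between papers);
# all numerical inputs below are invented for illustration.

import statistics

def background_variability(bg_roi_means):
    """Coefficient of variation (%) across background ROI means."""
    return 100.0 * statistics.stdev(bg_roi_means) / statistics.fmean(bg_roi_means)

def contrast_recovery(sphere_mean, bg_mean, true_ratio):
    """Percent contrast for a hot sphere with a known true activity ratio."""
    return 100.0 * (sphere_mean / bg_mean - 1.0) / (true_ratio - 1.0)

def snr(lesion_mean, bg_mean, bg_sd):
    """Signal-to-noise ratio of a lesion against the image background."""
    return (lesion_mean - bg_mean) / bg_sd

bg_means = [10.2, 9.8, 10.1, 9.9, 10.0]  # hypothetical background ROI means
print(round(background_variability(bg_means), 2))                         # 1.58
print(round(contrast_recovery(sphere_mean=35.0, bg_mean=10.0,
                              true_ratio=4.0), 1))                        # 83.3
print(round(snr(lesion_mean=35.0, bg_mean=10.0, bg_sd=1.5), 1))           # 16.7
```

Stronger penalization (higher β) lowers the background standard deviation, which raises SNR even as lesion SUV drops, consistent with the trade-off described in the results.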
Procedia PDF Downloads 96
1193 An Inquiry into the Usage of Complex Systems Models to Examine the Effects of the Agent Interaction in a Political Economic Environment
Authors: Ujjwall Sai Sunder Uppuluri
Abstract:
Group theory is a powerful tool that researchers can use to provide a structural foundation for their agent-based models, which this paper argues are the future of the social science disciplines. More specifically, researchers can use them to apply evolutionary theory to the study of complex social systems. This paper illustrates one example of how, theoretically, an agent-based model can be formulated from the application of group theory, systems dynamics, and evolutionary biology to analyze the strategies states pursue to mitigate risk and maximize the use of resources in achieving the objective of economic growth. This example can be applied to other social phenomena, and this is what makes group theory so useful to the analysis of complex systems: the theory provides the mathematical, formulaic proof for validating the complex-system models that researchers build, as the paper will discuss. The aim of this research is also to provide researchers with a framework that can be used to model political entities such as states in a three-dimensional space, with the x-axis representing the resources (tangible and intangible) available to them, y the risks, and z the objective. There also exist other states, with different constraints, pursuing different strategies to climb the mountain. This mountain's environment is made up of the risks the state faces and its resource endowments. The mountain is also layered, in the sense that it has multiple peaks that must be overcome to reach the tallest one. A state that sticks to a single strategy, or pursues a strategy not conducive to climbing the specific peak it has reached, is unable to continue its advancement. To overcome the obstacle in its path, the state must innovate. Based on the definition of a group, we can categorize each state as its own group.
Each state is a closed system made up of micro-level agents who have their own vectors and pursue strategies (actions) to achieve sub-objectives. The state also has an identity, the inverse being anarchy and/or inaction. Finally, the agents making up a state interact with each other through competition and collaboration to mitigate risks and achieve sub-objectives that fall within the primary objective. Researchers can thus treat the state as an organism that reflects the sum of the outputs of the interactions pursued by agents at the micro level. When states compete, each employs a strategy, and the state with the better strategy (reflected by the strategies pursued by her parts) is able to out-compete her counterpart to acquire some resource, mitigate some risk or fulfil some objective. This paper will attempt to illustrate how group theory, combined with evolutionary theory and systems dynamics, can allow researchers to model the long-run development, evolution, and growth of political entities through a bottom-up approach.
Keywords: complex systems, evolutionary theory, group theory, international political economy
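The mountain-climbing metaphor can be made concrete with a toy simulation. The following is a deliberately minimal sketch: each "state" agent holds resources and a strategy, and a strategy whose payoff falls below a threshold forces the state to innovate by switching strategies. The strategies, payoffs, and threshold are invented for illustration and are not the authors' actual model.

```python
# Toy agent-based sketch of a "state" climbing a resource landscape.
# Strategies, payoff functions, and the innovation threshold are all
# invented illustrations of the mechanism described in the abstract.

import random

STRATEGIES = {
    "extract":  lambda res: res * 0.10,         # proportional gains
    "trade":    lambda res: 5.0,                # flat payoff per step
    "innovate": lambda res: res * 0.02 + 8.0,   # higher baseline payoff
}

class State:
    def __init__(self, resources, strategy, rng):
        self.resources = resources
        self.strategy = strategy
        self.rng = rng

    def step(self):
        gain = STRATEGIES[self.strategy](self.resources)
        # A stalled strategy (payoff below threshold) forces innovation:
        # the state switches to a randomly chosen strategy.
        if gain < 2.0:
            self.strategy = self.rng.choice(list(STRATEGIES))
            gain = STRATEGIES[self.strategy](self.resources)
        self.resources += gain

rng = random.Random(0)  # seeded for reproducibility
state = State(resources=10.0, strategy="extract", rng=rng)
for _ in range(20):
    state.step()
print(state.strategy, round(state.resources, 1))
```

A fuller model would add multiple interacting states, risk shocks along the y-axis, and selection pressure replacing poorly performing strategies at the population level.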
Procedia PDF Downloads 139
1192 An Integrated Framework for Wind-Wave Study in Lakes
Authors: Moien Mojabi, Aurelien Hospital, Daniel Potts, Chris Young, Albert Leung
Abstract:
Wave analysis is an integral part of the hydrotechnical assessment carried out during the permitting and design phases for coastal structures such as marinas. This analysis aims at quantifying: i) the suitability of the coastal structure design against the Small Craft Harbour wave tranquility safety criterion; ii) potential environmental impacts of the structure (e.g., effects on waves, flow, and sediment transport); iii) mooring and dock design; and iv) requirements set by regulatory agencies (e.g., a WSA section 11 application). While a complex three-dimensional hydrodynamic modelling approach can be applied to large-scale projects, the need for an efficient and reliable wave analysis method suitable for smaller-scale marina projects was identified. As a result, Tetra Tech has developed and applied an integrated analysis framework (hereafter the TT approach), which takes advantage of state-of-the-art numerical models while preserving a level of simplicity that fits smaller-scale projects. The present paper aims to describe the TT approach and highlight the key advantages of using this integrated framework in lake marina projects. The core of this methodology integrates wind, water level, bathymetry, and structure geometry data. To respond to the needs of specific projects, several add-on modules have been added to this core. The main advantages of this method over simplified analytical approaches are: i) accounting for the proper physics of the lake by modelling the entire lake (capturing the real lake geometry) instead of using a simplified fetch approach; ii) providing a more realistic representation of the waves by modelling random waves instead of monochromatic waves; iii) modelling wave-structure interaction (e.g., wave transmission/reflection for floating structures and piles, amongst others); iv) accounting for wave interaction with the lakebed (e.g.
bottom friction, refraction, and breaking); v) providing the inputs for flow and sediment transport assessment at the project site; vi) taking into consideration historical and geographical variations of the wind field; and vii) independence from the scale of the reservoir under study. Overall, in comparison with simplified analytical approaches, this integrated framework provides a more realistic and reliable estimation of wave parameters (and their spatial distribution) in lake marinas, leading to a realistic hydrotechnical assessment accessible to any project size, from the development of a new marina to marina expansion and pile replacement. Tetra Tech has successfully utilized this approach for many years in the Okanagan area.
Keywords: wave modelling, wind-wave, extreme value analysis, marina
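For context on the "simplified fetch approach" the framework improves upon, here is a minimal sketch of a classic fetch-limited deep-water growth law of the JONSWAP type, in which dimensionless wave height grows with the square root of dimensionless fetch. The 0.0016 coefficient is the commonly quoted dimensionless value; the lake and wind numbers are hypothetical.

```python
# Simplified fetch-limited significant wave height (JONSWAP-type growth law):
#   Hs = 0.0016 * sqrt(g * F / U^2) * U^2 / g
# This is the kind of one-line estimate the integrated framework replaces
# with full-lake spectral modelling. Example inputs are hypothetical.

G = 9.81  # gravitational acceleration, m/s^2

def fetch_limited_hs(wind_speed, fetch):
    """Deep-water significant wave height (m) for wind speed U (m/s)
    and fetch F (m), assuming fetch-limited growth."""
    dimensionless_fetch = G * fetch / wind_speed**2
    return 0.0016 * dimensionless_fetch**0.5 * wind_speed**2 / G

# Hypothetical lake: 10 km fetch, 15 m/s design wind.
hs = fetch_limited_hs(wind_speed=15.0, fetch=10_000.0)
print(round(hs, 2))  # prints 0.77 (metres)
```

The limits of this estimate are exactly those the abstract lists: it assumes a single straight-line fetch, deep water, and a steady uniform wind, and it says nothing about wave direction, refraction, or structure interaction.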
Procedia PDF Downloads 84
1191 The Development of Local-Global Perceptual Bias across Cultures: Examining the Effects of Gender, Education, and Urbanisation
Authors: Helen J. Spray, Karina J. Linnell
Abstract:
Local-global bias in adulthood is strongly dependent on environmental factors, and a global bias is not the universal characteristic of adult perception it was once thought to be: whilst Western adults typically demonstrate a global bias, Namibian adults living in traditional villages possess a strong local bias. Furthermore, environmental effects on local-global bias have been shown to be highly gender-specific; whereas urbanisation promoted a global bias in urbanised Namibian women but not men, education promoted a global bias in urbanised Namibian men but not women. Adult populations, however, provide only a snapshot of the gene-environment interactions which shape perceptual bias. Yet, to date, there has been little work on the development of local-global bias across environmental settings. In the current study, local-global bias was assessed using a similarity-matching task with Navon figures in children aged between 4 and 15 years from across three populations: traditional Namibians, urban Namibians, and urban British. For the two Namibian groups, measures of urbanisation and education were obtained. Data were subjected to both between-group and within-group analyses. Between-group analyses compared developmental trajectories across population and gender. These analyses revealed a global bias from as early as age 4 in the British sample, and showed that the developmental onset of a global bias is not fixed. Urbanised Namibian children ultimately developed a global bias that was indistinguishable from that of British children; however, a global bias did not emerge until much later in development. For all populations, the greatest developmental effects were observed directly following the onset of formal education. No overall gender effects were observed; however, there was a significant gender by age interaction which was difficult to reconcile with existing biological-level accounts of gender differences in the development of local-global bias.
Within-group analyses compared the effects of urbanisation and education on local-global bias for traditional and urban Namibian boys and girls separately. For both traditional and urban boys, education mediated all effects of age and urbanisation; however, this was not the case for girls. Traditional Namibian girls retained a local bias regardless of age, education, or urbanisation, and in urbanised girls, the development of a global bias was not attributable to any one factor specifically. These results are broadly consistent with the aforementioned findings that education promoted a global bias in urbanised Namibian men but not women. The development of local-global bias does not follow a fixed trajectory but is subject to environmental control. Understanding how variability in the development of local-global bias might arise, particularly in the context of gender, may have far-reaching implications. For example, a number of educationally important cognitive functions (e.g., spatial ability) are known to show consistent gender differences in childhood, and local-global bias may mediate some of these effects. With education becoming an increasingly prevalent force across much of the developing world, it will be important to understand the processes that underpin its effects and their implications. Keywords: cross-cultural, development, education, gender, local-global bias, perception, urbanisation, urbanization
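The similarity-matching task with Navon figures is usually scored as the proportion of trials on which a child matches by global shape rather than by local elements. A minimal sketch of such scoring, with invented trial-level data (the function and the chance-correction against 0.5 are illustrative assumptions, not the authors' analysis):

```python
def global_bias_index(choices):
    """choices: 1 = matched by global shape, 0 = matched by local elements.

    Returns the proportion of global matches minus chance (0.5):
    positive = global bias, negative = local bias.
    """
    return sum(choices) / len(choices) - 0.5

# Hypothetical trial-level responses for two children
british_child = [1, 1, 0, 1, 1, 1, 0, 1]      # mostly global matches
traditional_child = [0, 0, 1, 0, 0, 0, 1, 0]  # mostly local matches

british_index = global_bias_index(british_child)        # positive
traditional_index = global_bias_index(traditional_child)  # negative
```

Group means of such an index, by population, gender, and age band, would then feed the between-group and within-group comparisons described above.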
Procedia PDF Downloads 139
1190 Neural Synchronization - The Brain’s Transfer of Sensory Data
Authors: David Edgar
Abstract:
To understand how the brain’s subconscious and conscious processes function, we must conquer the physics of Unity, which leads to duality’s algorithm. Where the subconscious (bottom-up) and conscious (top-down) processes function together to produce and consume intelligence, we use terms like ‘time is relative,’ but do we really understand the meaning? In the brain, there are different processes and, therefore, different observers. These different processes experience time at different rates. A sensory system such as the eyes cycles its measurements at around 33 milliseconds, the conscious process of the frontal lobe cycles at 300 milliseconds, and the subconscious process of the thalamus cycles at 5 milliseconds. Three different observers experience time differently. To bridge observers, the thalamus, which is the fastest of the processes, maintains a synchronous state and entangles the different components of the brain’s physical process. The entanglements form a synchronous cohesion between the brain components, allowing them to share the same state and execute in the same measurement cycle. The thalamus uses the shared state to control the firing sequence of the brain’s linear subconscious process. Sharing state also allows the brain to cheat on the amount of sensory data that must be exchanged between components. Only unpredictable motion is transferred through the synchronous state because predictable motion already exists in the shared framework. The brain’s synchronous subconscious process is entirely based on energy conservation, where prediction regulates energy usage. So, every 33 milliseconds, the eyes dump their sensory data into the thalamus. The thalamus then performs a motion measurement to identify the unpredictable motion in the sensory data. Here is the trick. The thalamus conducts its measurement based on the original observation time of the sensory system (33 ms), not its own process time (5 ms).
This creates a data payload of synchronous motion that preserves the original sensory observation. Basically, a frozen moment in time (Flat 4D). The single moment in time can then be processed through the single state maintained by the synchronous process. Other processes, such as consciousness (300 ms), can interface with the synchronous state to generate awareness of that moment. Now, synchronous data traveling through a separate, faster synchronous process creates a theoretical time tunnel where observation time is tunneled through the synchronous process and is reproduced on the other side in the original time-relativity. The synchronous process eliminates time dilation by simply removing itself from the equation so that its own process time does not alter the experience. To the original observer, the measurement appears to be instantaneous, but in the thalamus, a linear subconscious process generating sensory perception and thought production is being executed. It all occurs in the time available because other observation times are slower than thalamic measurement time. For life to exist in the physical universe requires a linear measurement process; it just hides by operating at a faster time relativity. What’s interesting is that time dilation is not the problem; it’s the solution. Einstein said there was no universal time. Keywords: neural synchronization, natural intelligence, 99.95% IoT data transmission savings, artificial subconscious intelligence (ASI)
Procedia PDF Downloads 126
1189 Gender Specific Differences in Clinical Outcomes of Knee Osteoarthritis Treated with Micro-Fragmented Adipose Tissue
Authors: Tiffanie-Marie Borg, Yasmin Zeinolabediny, Nima Heidari, Ali Noorani, Mark Slevin, Angel Cullen, Stefano Olgiati, Alberto Zerbi, Alessandro Danovi, Adrian Wilson
Abstract:
Knee osteoarthritis (OA) is a critical cause of disability globally. In recent years, there has been growing interest in non-invasive treatments, such as intra-articular injection of micro-fragmented fat (MFAT), showing great potential in treating OA. Mesenchymal stem cells (MSCs), originating from pericytes of micro-vessels in MFAT, can differentiate into mesenchymal lineage cells such as chondrocytes, osteocytes, adipocytes, and osteoblasts. Secretion of growth factors and cytokines from MSCs can inhibit T-cell growth, reduce pain and inflammation, and create a micro-environment that, through paracrine signaling, can promote joint repair and cartilage regeneration. Here we have shown, for the first time, data supporting the hypothesis that women respond better than men to MFAT injection in terms of improvements in pain and function. Historically, women have been underrepresented in studies, and studies with both sexes regularly fail to analyse the results by sex. To mitigate and quantify this bias, we describe a technique using reproducible statistical analysis and replicable results with the open-source statistical software R to calculate the magnitude of this difference. Genetic, hormonal, environmental, and age factors play a role in our observed difference between the sexes. This observational, intention-to-treat study included the complete sample of 456 patients who agreed to be scored for pain (visual analogue scale (VAS)) and function (Oxford knee score (OKS)) at baseline, regardless of subsequent changes to adherence or status during follow-up. We report that a significantly larger proportion of women responded to treatment than men [90% vs. 60% change in VAS scores, with 87% vs. 65% change in OKS scores, respectively]. Women overall had a stronger positive response to treatment, with reduced pain and improved mobility and function.
Pre-injection, our cohort of women was in more pain, with worse joint function, which is quite common in orthopaedics. However, during the 2-year follow-up, they consistently maintained a lower incidence of discomfort with superior joint function. These data identify a clear need for further studies to determine the cellular, molecular, and other bases for these differences, and to utilize this information for stratification in order to improve outcomes for both women and men. Keywords: gender differences, micro-fragmented adipose tissue, knee osteoarthritis, stem cells
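The reported responder rates (90% of women vs. 60% of men on VAS) can be compared with a standard two-proportion z-test. The study used the statistical software R; the sketch below is an equivalent computation in Python, and the 250/206 split of the 456 patients into women and men is an invented illustration, not a figure from the paper:

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided z-test for a difference between two proportions."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Normal-approximation p-value via the error function
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical split of the 456 patients: 250 women (90% responders),
# 206 men (60% responders, i.e., 124 of 206)
z_stat, p_value = two_proportion_z(225, 250, 124, 206)
```

With a gap this large, the test rejects equality of responder rates decisively, which is consistent with the abstract's claim of a significant sex difference.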
Procedia PDF Downloads 181
1188 Toxicity Evaluation of Reduced Graphene Oxide on First Larval Stages of Artemia sp.
Authors: Roberta Pecoraro
Abstract:
The focus of this work was to investigate the potential toxic effect of titanium dioxide-reduced graphene oxide (TiO₂-rGO) nanocomposites on nauplii of the microcrustacean Artemia sp. In order to assess the nanocomposite’s toxicity, a short-term test was performed by exposing nauplii to solutions containing TiO₂-rGO. To prepare the titanium dioxide-reduced graphene oxide (TiO₂-rGO) nanocomposites, a green procedure based on solar photoreduction was proposed; it allows the photocatalysts to be obtained by exploiting the photocatalytic properties of titania activated by solar irradiation, thereby avoiding the high temperatures and pressures required for standard hydrothermal synthesis. Powders of TiO₂-rGO supplied by the Department of Chemical Sciences (University of Catania) are indicated as TiO₂-rGO at 1% and TiO₂-rGO at 2%. Starting from a stock solution (1 mg rGO-TiO₂/10 ml ASPM water) of each type, we tested four different concentrations (serial dilutions ranging from 10⁻¹ to 10⁻⁴ mg/ml). All solutions were sonicated for 12 min prior to use. Artificial seawater (called ASPM water) was prepared to guarantee the hatching of the cysts and to maintain the nauplii; the durable cysts used in this study, marketed by JBL (JBL GmbH & Co. KG, Germany), were hydrated with ASPM water to obtain nauplii (instar II-III larvae). The hatching of the cysts was carried out in the laboratory by immersing them in ASPM water inside a 500 ml beaker and keeping them constantly oxygenated with an aerator insufflating micro-bubbled air: after 24-48 hours, the cysts hatched and the nauplii appeared. The nauplii in the second and third stages of development were collected one by one, using stereomicroscopes, and transferred into 96-well microplates, one nauplius per well. The wells were quickly filled with 300 µl of the specific concentration of the solution used, and control samples were incubated only with ASPM water.
Replication was performed for each concentration. Finally, the microplates were placed on an orbital shaker, and the tests were read 24 and 48 hours after inoculating the solutions to assess the endpoint (immobility/death) for the larvae. Nauplii that appeared motionless were counted as dead, and the percentages of mortality were calculated for each treatment. The results showed a low percentage of immobilization for both TiO₂-rGO at 1% and TiO₂-rGO at 2% at all concentrations tested: for TiO₂-rGO at 1%, it was below 12% after 24 h and below 15% after 48 h; for TiO₂-rGO at 2%, it was below 8% after 24 h and below 12% after 48 h. In agreement with other studies in the literature, the results showed neither mortality nor toxic effects on the development of larvae after exposure to rGO. Finally, it is important to highlight that the TiO₂-rGO catalysts were tested in the solar photodegradation of a toxic herbicide (2,4-Dichlorophenoxyacetic acid, 2,4-D), obtaining a high percentage of degradation; therefore, this alternative approach could be considered a good strategy for obtaining performant photocatalysts. Keywords: nauplii, photocatalytic properties, reduced GO, short-term toxicity test, titanium dioxide
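The endpoint calculation is a simple immobilization percentage per tested concentration. A sketch with hypothetical well counts (the counts below are invented; only the 10⁻¹-10⁻⁴ mg/ml dilution series and the "below 12%" order of magnitude follow the abstract):

```python
def immobilization_pct(immobile, total):
    """Percentage of motionless (dead) nauplii among exposed wells."""
    return 100 * immobile / total

# Hypothetical counts after 24 h: (immobile, total wells) per dilution,
# one nauplius per well on a 96-well plate
dilutions = {
    "1e-1 mg/ml": (3, 32),
    "1e-2 mg/ml": (2, 32),
    "1e-3 mg/ml": (1, 32),
    "1e-4 mg/ml": (1, 32),
}
results = {c: immobilization_pct(d, n) for c, (d, n) in dilutions.items()}
```

Each percentage would then be compared against the ASPM-water controls to decide whether the nanocomposite shows acute toxicity at that concentration.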
Procedia PDF Downloads 183
1187 Curriculum Transformation: Multidisciplinary Perspectives on ‘Decolonisation’ and ‘Africanisation’ of the Curriculum in South Africa’s Higher Education
Authors: Andre Bechuke
Abstract:
The years 2015-2017 witnessed a huge campaign, and in some instances violent protests, in South Africa by students and some groups of academics advocating the decolonisation of university curricula. These protests have created high expectations for universities to teach a curriculum relevant to the country and the continent, while also enabling South Africa to participate in the globalised world. To realise this purpose, most universities are currently undertaking steps to transform and decolonise their curricula. However, the transformation process is challenged and delayed by the lack of a collective understanding of the concepts ‘decolonisation’ and ‘Africanisation’ that should guide its application. Even more challenging is the lack of a contextual understanding of these concepts across different university disciplines. Against this background, and underpinned by a qualitative research paradigm, the perspectives on these concepts held by different university disciplines were examined in order to understand and establish their implementation in the curriculum transformation agenda. Data were collected by reviewing the teaching and learning plans of 8 faculties of an institution of higher learning in South Africa and analysed through content and textual analysis. The findings revealed varied understanding and use of these concepts in the transformation of the curriculum across faculties. Decolonisation, according to the faculties of Law and Humanities, is perceived as the eradication of the Eurocentric positioning in curriculum content and the constitutive rules and norms that control thinking. This is not done by ignoring other knowledge traditions, but it does call for an affirmation and validation of African views of the world and systems of thought, mixing them with current knowledge.
For the Faculty of Natural and Agricultural Sciences, decolonisation is seen as making the content of the curriculum relevant to students, fulfilling the needs of industry, and equipping students for job opportunities. This means the use of teaching strategies and methods that are inclusive of students from diverse cultures, and structuring the learning experience in ways that are not alien to the cultures of the students. For the Health Sciences, decolonisation of the curriculum refers to the need for a shift in Western thinking towards being more sensitive to all cultural beliefs and thoughts. Collectively, decolonisation of education thus entails that a nation must become independent with regard to the acquisition of knowledge, skills, values, beliefs, and habits. Based on the findings, for universities to successfully transform their curriculum and integrate the concepts of decolonisation and Africanisation, there is a need to contextually determine the meaning of the concepts generally and narrow them down to what they should mean to specific disciplines. Universities should refrain from taking an umbrella approach to these concepts. Decolonisation should be seen as a means and not an end. A decolonised curriculum should equally be developed based on the finest knowledge, skills, values, beliefs, and habits from around the world, not limited to one country or continent. Keywords: Africanisation, curriculum, transformation, decolonisation, multidisciplinary perspectives, South Africa’s higher education
Procedia PDF Downloads 160
1186 Accounting for Downtime Effects in Resilience-Based Highway Network Restoration Scheduling
Authors: Zhenyu Zhang, Hsi-Hsien Wei
Abstract:
Highway networks play a vital role in post-disaster recovery for disaster-damaged areas. Damaged bridges in such networks can disrupt recovery activities by impeding the transportation of people, cargo, and reconstruction resources. Therefore, rapid restoration of damaged bridges is of paramount importance to long-term disaster recovery. In the post-disaster recovery phase, the key to restoration scheduling for a highway network is prioritization of bridge-repair tasks. Resilience is widely used as a measure of the ability of a network to recover and return to its pre-disaster level of functionality. In practice, highways are temporarily blocked during the downtime of bridge restoration, reducing highway-network functionality. Failure to take downtime effects into account can therefore lead to overestimation of network resilience. Additionally, post-disaster recovery of highway networks is generally divided into emergency bridge repair (EBR) in the response phase and long-term bridge repair (LBR) in the recovery phase, and EBR and LBR differ in terms of restoration objectives, restoration duration, budget, etc. Distinguishing these two phases is important to precisely quantify highway network resilience and to generate suitable restoration schedules for highway networks in the recovery phase. To address the above issues, this study proposes a novel resilience quantification method for the optimization of long-term bridge repair schedules (LBRS), taking into account the impact of EBR activities and restoration downtime on a highway network’s functionality. A time-dependent integer program with recursive functions is formulated for optimally scheduling LBR activities. Moreover, since uncertainty always exists in the LBRS problem, this paper extends the optimization model from the deterministic case to the stochastic case.
A hybrid genetic algorithm that integrates a heuristic approach into a traditional genetic algorithm to accelerate the evolution process is developed. The proposed methods are tested using data from the 2008 Wenchuan earthquake, based on a regional highway network in Sichuan, China, consisting of 168 highway bridges on 36 highways connecting 25 cities/towns. The results show that, in this case, neglecting the bridge restoration downtime can lead to approximately 15% overestimation of highway network resilience. Moreover, accounting for the impact of EBR on network functionality can help to generate a more specific and reasonable LBRS. The theoretical and practical values are as follows. First, the proposed network recovery curve contributes to comprehensive quantification of highway network resilience by accounting for the impact of both restoration downtime and EBR activities on the recovery curves. Moreover, this study can improve highway network resilience from the organizational dimension by providing bridge managers with optimal LBR strategies. Keywords: disaster management, highway network, long-term bridge repair schedule, resilience, restoration downtime
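Resilience of this kind is commonly quantified as the normalized area under the network functionality curve Q(t), so ignoring repair downtime (the temporary dips in Q) inflates the area. A toy sketch of that effect, with invented recovery curves (the size of the bias here is illustrative only and smaller than the ~15% reported for the Wenchuan case):

```python
def resilience(functionality, horizon):
    """Normalized area under a piecewise-linear functionality curve Q(t).

    functionality: list of (time, Q) points, Q in [0, 1], times ascending.
    Downtime during bridge repairs appears as temporary dips in Q(t).
    """
    area = 0.0
    for (t0, q0), (t1, q1) in zip(functionality, functionality[1:]):
        area += 0.5 * (q0 + q1) * (t1 - t0)  # trapezoid rule
    return area / horizon

# Hypothetical 100-day recovery, with and without a repair-downtime dip
no_downtime = [(0, 0.4), (30, 0.7), (40, 0.8), (100, 1.0)]
with_downtime = [(0, 0.4), (30, 0.7), (31, 0.5), (40, 0.6), (41, 0.8), (100, 1.0)]

r_optimistic = resilience(no_downtime, 100)   # downtime ignored
r_realistic = resilience(with_downtime, 100)  # downtime modelled
```

The downtime-aware curve always yields the lower (more honest) resilience value, which is exactly the overestimation the study corrects for.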
Procedia PDF Downloads 150
1185 Examining the Critical Factors for Success and Failure of Common Ticketing Systems
Authors: Tam Viet Hoang
Abstract:
With a plethora of new mobility services and payment systems found in our cities and across modern public transportation systems, several cities globally have turned to common ticketing systems to help navigate this complexity. Helping to create time- and space-differentiated fare structures and tariff schemes, common ticketing systems can optimize transport utilization rates, achieve cost efficiencies, and provide key incentives to specific target groups. However, not all cities and transportation systems have enjoyed a smooth journey towards the adoption, roll-out, and servicing of common ticketing systems, with the experiences of both success and failure being attributed to a wide variety of critical factors. Using case study research as the methodology and cities as the main unit of analysis, this research seeks to address the fundamental question: “what are the critical factors for the success and failure of common ticketing systems?” Using rail/train systems as the entry point, this study will start by providing a background to the evolution of transport ticketing and justifying the improvements in operational efficiency that can be achieved through common ticketing systems. Examining the socio-economic benefits of common ticketing, the research will also help to articulate the value derived for the different key identified stakeholder groups. By reviewing case studies of the implementation of common ticketing systems in different cities, the research will explore lessons learned, with the aim of eliciting the factors that ensure seamlessly integrated e-ticketing platforms. In an increasingly digital age in which cities are now coming online, this paper seeks to unpack these critical factors, undertaking case study research drawing from literature and lived experiences.
To build a better understanding of the enabling environment and the ideal mixture of ingredients for the successful roll-out of a common ticketing system, interviews will be conducted with transport operators from several selected cities to better appreciate the challenges, and the strategies employed to overcome them, in relation to common ticketing systems. Meanwhile, as we begin to see the introduction of new mobile applications and user interfaces to facilitate ticketing and payment as part of the transport journey, we take stock of the numerous policy challenges ahead and their implications for city-wide and system-wide urban planning. It is hoped that this study will help to identify the critical factors for the success and failure of common ticketing systems for cities set to embark on their implementation, while serving to fine-tune processes in those cities where common ticketing systems are already in place. Outcomes from the study will help to facilitate an improved understanding of common pitfalls and essential milestones in the roll-out of a common ticketing system for railway systems, especially for emerging countries where mass rapid transit systems are being considered or are in the process of construction. Keywords: common ticketing, public transport, urban strategies, Bangkok, Fukuoka, Sydney
Procedia PDF Downloads 88
1184 Impact of Financial Performance Indicators on Share Price of Listed Pharmaceutical Companies in India
Authors: Amit Das
Abstract:
Background and significance of the study: Generally, investors and market forecasters use financial statements for investigation before committing to investments. Mainstream financial accounting and reporting practice recommends a few basic financial performance indicators, namely return on capital employed, return on assets, and earnings per share, which are associated considerably with share prices. This is particularly true for Indian pharmaceutical companies. Share investing involves financial risk, so investors look for those financial indicators which have a noteworthy impact on share price. A crucial intention of financial statement analysis and reporting is to offer information that is helpful, predominantly to external users, in making credit as well as investment choices. Sound financial performance attracts investors automatically and increases the share price of the respective companies. Keeping this in view, this research work investigates the impact of financial performance indicators on the share prices of pharmaceutical companies in India listed on the Bombay Stock Exchange. Methodology: This research work is based on secondary data for the top 101 pharmaceutical companies in India, collected from the moneycontrol database on September 28, 2015. This study selects four financial performance indicators, chosen purposively and by availability in the database, namely earnings per share, return on capital employed, return on assets, and net profit, as independent variables, and one dependent variable, the share price of the 101 pharmaceutical companies. While analysing the data, correlation statistics, multiple regression techniques, and appropriate tests of significance have been used.
Major findings: Correlation statistics show that the four financial performance indicators of the 101 pharmaceutical companies are associated positively or negatively with their share prices, and notably, more than 80 companies’ financial performances are related positively. Multiple correlation test results indicate that the financial performance indicators are highly related to the share prices of the selected pharmaceutical companies. Furthermore, multiple regression results illustrate that when financial performance is good, share prices increase steadily on the Bombay Stock Exchange, and all results are statistically significant. It is also important to note that sensitivity indices changed slightly with the financial performance indicators of the selected pharmaceutical companies in India. Concluding statements: The share prices of pharmaceutical companies depend on sound financial performance. It is very clear that share prices change with the movement of two important financial performance indicators, namely earnings per share and return on assets. Since these 101 pharmaceutical companies are listed on the Bombay Stock Exchange and the Sensex moves with them, it is important that the Government of India take decisions regarding production and exports of pharmaceutical products so that the financial performance of all the pharmaceutical companies improves and their share prices increase. Keywords: financial performance indicators, share prices, pharmaceutical companies, India
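The multiple regression step can be sketched as ordinary least squares via the normal equations. The figures below are invented, and only two of the study's four indicators (EPS and ROA) are used for brevity; this is a minimal illustration, not the study's actual model or data:

```python
def ols(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y."""
    rows = [[1.0] + list(x) for x in X]  # prepend an intercept column
    k = len(rows[0])
    XtX = [[sum(r[i] * r[j] for r in rows) for j in range(k)] for i in range(k)]
    Xty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(k)]
    # Solve by Gaussian elimination with partial pivoting
    A = [row[:] + [b] for row, b in zip(XtX, Xty)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k + 1):
                A[r][c] -= f * A[col][c]
    beta = [0.0] * k
    for r in reversed(range(k)):
        beta[r] = (A[r][k] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

# Hypothetical (EPS, ROA) pairs and share prices for six companies
X = [(10, 8), (12, 9), (15, 11), (9, 7), (20, 14), (18, 12)]
prices = [210, 250, 320, 190, 430, 380]
b0, b_eps, b_roa = ols(X, prices)
```

With an intercept in the design matrix, the fitted residuals sum to zero, and on data this close to linear the fit explains nearly all of the price variance, mirroring the "highly related" finding above.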
Procedia PDF Downloads 306
1183 Predictive Pathogen Biology: Genome-Based Prediction of Pathogenic Potential and Countermeasures Targets
Authors: Debjit Ray
Abstract:
Horizontal gene transfer (HGT) and recombination lead to the emergence of bacterial antibiotic resistance and pathogenic traits. HGT events can be identified by comparing a large number of fully sequenced genomes across a species or genus, defining the phylogenetic range of HGT, and finding potential sources of new resistance genes. In-depth comparative phylogenomics can also identify subtle genome or plasmid structural changes or mutations associated with phenotypic changes. Comparative phylogenomics requires accurately sequenced, complete, and properly annotated genomes of the organism. Assembling closed genomes requires additional mate-pair reads or “long read” sequencing data to accompany short-read paired-end data. To bring down the cost and time required to produce assembled genomes and annotate the genome features that inform drug resistance and pathogenicity, we are analyzing genome-assembly performance for data from the Illumina NextSeq, which has faster turnaround than the Illumina HiSeq (~1-2 days versus ~1 week) and, compared to the Illumina MiSeq, shorter reads (150 bp paired-end versus 300 bp paired-end) but higher capacity (150-400M reads per run versus ~5-15M). Bioinformatics improvements are also needed to make rapid, routine production of complete genomes a reality. Modern assemblers such as SPAdes 3.6.0 running on a standard Linux blade are capable, in a few hours, of converting mixes of reads from different library preps into high-quality assemblies with only a few gaps. Remaining breaks in scaffolds are generally due to repeats (e.g., rRNA genes) and are addressed by our gap-closure software, which avoids custom PCR or targeted sequencing. Our goal is to improve the understanding of the emergence of pathogenesis using sequencing, comparative genomics, and machine learning analysis of ~1000 pathogen genomes.
Machine learning algorithms will be used to digest the diverse features (changes in virulence genes, recombination, horizontal gene transfer, patient diagnostics). Temporal data and evolutionary models can thus determine whether the origin of a particular isolate is likely to have been environmental (i.e., whether it could have evolved from previous isolates). This can be useful for comparing differences in virulence along or across the tree. More intriguingly, it can test whether there is a direction to virulence strength. This would open new avenues in the prediction of uncharacterized clinical bugs, multidrug resistance evolution, and pathogen emergence. Keywords: genomics, pathogens, genome assembly, superbugs
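A machine-learning pass over genome-derived features can be sketched with something as simple as a nearest-centroid classifier; the feature set and training points below are invented illustrations, not the project's actual model:

```python
def centroid(vectors):
    """Mean vector of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(x, centroids):
    """Assign x to the class with the nearest centroid (squared Euclidean)."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(centroids, key=lambda name: dist2(x, centroids[name]))

# Hypothetical features per isolate: (virulence genes, HGT events, resistance genes)
train = {
    "pathogenic": [(12, 5, 4), (10, 6, 5), (14, 4, 6)],
    "commensal": [(2, 1, 0), (3, 2, 1), (1, 1, 1)],
}
cents = {name: centroid(vs) for name, vs in train.items()}
label = classify((11, 5, 3), cents)  # an uncharacterized isolate
```

In practice, the study's feature space would be far richer (recombination signals, patient diagnostics, temporal data) and the classifier correspondingly more sophisticated, but the pipeline shape is the same: extract features per genome, train on characterized isolates, and predict on new ones.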
Procedia PDF Downloads 197
1182 Modelling of Meandering River Dynamics in Colombia: A Case Study of the Magdalena River
Authors: Laura Isabel Guarin, Juliana Vargas, Philippe Chang
Abstract:
The analysis and study of open-channel flow dynamics for river applications has been based on flow modelling using discrete numerical models built on the hydrodynamic equations. The overall spatial characteristics of rivers, i.e., the length-to-depth-to-width ratio, generally allow one to disregard processes occurring in the vertical or transverse dimensions, thus imposing hydrostatic pressure conditions and considering solely a 1D flow model along the river length. Through a calibration process, an accurate flow model may thus be developed, allowing for channel study and extrapolation of various scenarios. The Magdalena River in Colombia drains a large basin from south to north over 1,550 km, with an average slope of 0.0024 and an average width of 275 m. The river displays high water-level fluctuation and is characterized by a series of meanders. The city of La Dorada has been affected over the years by serious flooding in the rainy and dry seasons. As the meander is evolving at a steady pace, repeated flooding has endangered a number of neighborhoods. This study has been undertaken to correctly model the flow characteristics of the river in this region in order to evaluate various scenarios and provide decision makers with erosion control options and a forecasting tool. Two field campaigns have been completed over the dry and rainy seasons, including extensive topographical and channel surveys using a Topcon GR5 DGPS and a River Surveyor ADCP. Also, in order to characterize the erosion process occurring through the meander, extensive suspended-sediment and riverbed samples were retrieved, as well as soil perforations over the banks. Hence, based on a DEM from the digital ground mapping survey and the field data, a 2DH flow model was prepared using the Iber freeware, based on the finite volume method in an unstructured mesh environment. The calibration process was carried out by comparison with available historical data from a nearby hydrologic gauging station.
Although the model was able to effectively predict overall flow processes in the region, its spatial characteristics and the limitations related to the pressure conditions did not allow for an accurate representation of the erosion processes occurring over specific bank areas and dwellings. Notably, a significant helical flow has been observed through the meander. Furthermore, the rapidly changing channel cross-section, a consequence of severe erosion, has hindered the model’s ability to provide decision makers with a valid, up-to-date planning tool. Keywords: erosion, finite volume method, flow dynamics, flow modelling, meander
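Model calibration against a gauging station is typically summarized with error metrics such as RMSE and the Nash-Sutcliffe efficiency. A sketch with hypothetical water levels (the observed/modelled series below are invented, not the Magdalena calibration data):

```python
import math

def rmse(obs, sim):
    """Root-mean-square error between observed and simulated series."""
    return math.sqrt(sum((o - s) ** 2 for o, s in zip(obs, sim)) / len(obs))

def nse(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, below 0 is worse than the mean."""
    mean = sum(obs) / len(obs)
    ss_res = sum((o - s) ** 2 for o, s in zip(obs, sim))
    ss_tot = sum((o - mean) ** 2 for o in obs)
    return 1 - ss_res / ss_tot

# Hypothetical gauge vs. modelled water levels (m) during calibration
observed = [2.10, 2.35, 2.80, 3.40, 3.10, 2.60]
modelled = [2.05, 2.40, 2.70, 3.55, 3.00, 2.65]
```

A calibration run would be accepted when such metrics fall within agreed tolerances at the gauging station, after which the model can be used for scenario extrapolation.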
Procedia PDF Downloads 319
1181 A Qualitative Study Identifying the Complexities of Early Childhood Professionals' Use and Production of Data
Authors: Sara Bonetti
Abstract:
The use of quantitative data to support policies and justify investments has become imperative in many fields, including the field of education. However, the topic of data literacy has only marginally touched the early care and education (ECE) field. In California, within the ECE workforce, there is a group of professionals working in policy and advocacy who use quantitative data regularly and whose educational and professional experiences have been neglected by existing research. This study aimed to analyze these experiences in accessing, using, and producing quantitative data. The study utilized semi-structured interviews to capture the differences in educational and professional backgrounds, policy contexts, and power relations. The participants were three key professionals from county-level organizations and one working at a State Department, to allow for a broader perspective at the systems level. The study followed Núñez’s multilevel model of intersectionality. The key in Núñez’s model is the intersection of multiple levels of analysis and influence, from the individual to the system level, and the identification of institutional power dynamics that perpetuate the marginalization of certain groups within society. In a similar manner, this study looked at the dynamic interaction of different influences at the individual, organizational, and system levels that might intersect and affect ECE professionals’ experiences with quantitative data. At the individual level, an important element identified was the participants’ educational background, as it was possible to observe a relationship between that and their positionality, both with respect to working with data and with respect to their power within an organization and at the policy table.
For example, those with a background in child development were aware of how their formal education had failed to train them in the skills necessary to work in policy and advocacy, and especially to work with quantitative data, compared to those with a background in administration and/or business. At the organizational level, the interviews showed a connection between the participants’ position within the organization, their organization’s position with respect to others, and their degree of access to quantitative data. This in turn affected their sense of empowerment and agency in dealing with data, such as shaping what data is collected and available. These differences were reflected in the interviewees’ perceptions and expectations for the ECE workforce. For example, one of the interviewees pointed out that many ECE professionals happen to use data out of the necessity of the moment. This lack of intentionality is a cause of, and at the same time translates into, missed training opportunities. Another interviewee pointed out issues related to the professionalism of the ECE workforce by remarking on the inadequacy of ECE students’ training in working with data. In conclusion, Núñez’s model helped in understanding the different elements that affect ECE professionals’ experiences with quantitative data. In particular, what was clear is that these professionals are not being provided with the necessary support and that we are not being intentional in creating data literacy skills for them, despite what is asked of them and their work.
Keywords: data literacy, early childhood professionals, intersectionality, quantitative data
Procedia PDF Downloads 252
1180 The Subtle Influence of Hindu Doctrines on Film Industry: A Case Study of Movie Avatar
Authors: Cemil Kutlutürk
Abstract:
Hindu culture and religious doctrines such as caste, reincarnation, yoga, and nirvana have always proved a popular theme for the film industry. Analyzing these motifs in movies with a scientific approach enables individuals both to comprehend the messages and deep meanings of films and to understand others’ religious belief systems and daily lives properly. The primary aim of this study is to examine the subtle influence of Hindu doctrines on the cinema industry by focusing on James Cameron’s film, Avatar, and its relationship with the Hindu concept of avatara, by referring to the original Hindu sacred texts where this doctrine is basically clarified. The Sanskrit word avatara means to come down or to descend. Although an avatara is commonly considered an appearance of any deity on earth, the term usually refers to Vishnu’s descending to earth. When the movie Avatar and the avatara doctrine are compared, various noteworthy similarities are revealed. Firstly, in the movie, Jake is chosen by Eywa to protect Pandora from evils. Similarly, in the movie, the avatar is born when there is a rise of jealousy and unrighteousness. The same concept is found in the avatara doctrine. According to this belief, whenever righteousness (dharma) wanes and unrighteousness (adharma) increases, God incarnates himself as an avatara. In Hindu tradition, the ten avataras of Vishnu are the most popular. This standard list of ten avataras includes the Fish, the Tortoise, the Boar, the Man-Lion (Narasimha), the Dwarf, Parasurama, Rama, Krishna, the Buddha, and Kalki. In the movie, the avatar has a tail, eyes, nose, and ears similar to the Narasimha (half man-half lion) avatara. On the other hand, the use of bow and arrow by the Na’vi in the film evokes the Rama avatara, whose basic weapon is the same. The Na’vi fly on a dragon-like bird called an ikran and ride a horse-like quadruped animal.
The vehicle for the transformation of the avatar in the movie also resembles the idea of Garuda, the great mythical bird used by Vishnu in Hindu mythology. In addition, the last avatara, Kalki, will be seen on a white horse according to the Puranas. The basic difference is that in Hinduism avatara means the descent of a God, yet in the movie a human being named Jake Sully is manifested as a humanoid of another planet, and this is called an avatar. While in the movie the avatar manifests himself on another planet, Pandora, in Hinduism avataras descend into this world. On the other hand, in the Hindu scriptures there are many avataras, and they are categorized according to their functions and attributes. These aspects of the avatara doctrine cannot be seen clearly in the film. Even though there are some differences between the two, the main hypothesis of this study is that the general character of the movie is similar to the avatara doctrine. In the movie, instead of emphasizing a specific avatara, qualities of different Vishnu avataras have been properly used.
Keywords: film industry, Hinduism, incarnation, James Cameron, movie avatar
Procedia PDF Downloads 401
1179 Improving Teaching in English-Medium Instruction Classes at Japanese Universities through Needs-Based Professional Development Workshops
Authors: Todd Enslen
Abstract:
In order to attract more international students to study for undergraduate degrees in Japan, many universities have been developing English-Medium Instruction degree programs. This means that many faculty members must now teach their courses in English, which raises a number of concerns. A common misconception of English-Medium Instruction (EMI) is that teaching in English is simply a matter of translating materials. Since much of the teaching in Japan still relies on a more traditional, teacher-centered approach, continuing with this style in an EMI environment that targets international students can cause a clash between what is happening and what students expect in the classroom, not to mention what the Scholarship of Teaching and Learning (SoTL) has shown is effective teaching. A variety of considerations need to be taken into account in EMI classrooms, such as the varying English abilities of the students, modifying input material, and assuring comprehension through interactional checks. This paper analyzes the effectiveness of the EMI undergraduate degree programs in engineering, agriculture, and science at a large research university in Japan by presenting the results from student surveys regarding the areas where perceived improvements need to be made. The students were most dissatisfied with communication with their teachers in English, communication with Japanese students in English, adherence to only English being used in the classes, and the quality of the education they received. In addition, the results of a needs analysis survey of Japanese teachers having to teach in English showed that they believed they were most in need of English vocabulary and expressions to use in the classroom and teaching methods for teaching in English. The results from the student survey and the faculty survey show similar concerns between the two groups.
By helping teachers to understand student-centered teaching and the benefits for learning that it provides, teachers may begin to incorporate more student-centered approaches that in turn help to alleviate the dissatisfaction students are currently experiencing. Through analyzing the current environment in Japanese higher education against established best practices in teaching and EMI, three areas that need to be addressed in professional development workshops were identified. These were “culture” as it relates to the English language, “classroom management techniques” and ways to incorporate them into classes, and “language” issues. Materials used to help faculty better understand best practices as they relate to these specific areas will be provided to help practitioners begin the process of building EMI faculty’s awareness of better teaching practices. Finally, the results from surveys of faculty development workshop participants will show the impact that these workshops can have. Almost all of the participants indicated that they learned something new and would like to incorporate the ideas from the workshop into their teaching. In addition, the vast majority of the participants felt the workshop provided them with new information, and they would like more workshops like these.
Keywords: English-medium instruction, materials development, professional development, teaching effectiveness
Procedia PDF Downloads 89
1178 Weapon-Being: Weaponized Design and Object-Oriented Ontology in Hypermodern Times
Authors: John Dimopoulos
Abstract:
This proposal attempts a refabrication of Heidegger’s classic thing-being and object-being analysis in order to provide better ontological tools for understanding contemporary culture, technology, and society. In his work, Heidegger sought to understand and comment on the problem of technology in an era of rampant innovation and increased perils for society and the planet. Today we seem to be at another crossroads in this course, coming after postmodernity, during which the dreams and dangers of modernity, augmented with the critical speculations of the post-war era, took shape. The new era in which we are now living, referred to as hypermodernity by researchers in various fields such as architecture and cultural theory, is defined by the horizontal implementation of digital technologies, cybernetic networks, and mixed reality. Technology today is rapidly approaching a turning point, namely the point of no return for humanity’s supervision over its creations. The techno-scientific civilization of the 21st century creates a series of problems, progressively more difficult and complex to solve and impossible to ignore: climate change, data safety, cyber depression, and digital stress being some of the most prevalent. Humans often have no other option than to address technology-induced problems with even more technology, as in the case of neural networks, machine learning, and AI, thus widening the gap between creating technological artifacts and understanding their broad impact and possible future development. As all technical disciplines, and particularly design, become enmeshed in a matrix of digital hyper-objects, a conceptual toolbox that allows us to handle the new reality becomes more and more necessary. Weaponized design, prevalent in many fields, such as social and traditional media, urban planning, industrial design, advertising, and the internet in general, hints towards an increase in conflicts.
These conflicts between tech companies, stakeholders, and users, with implications in politics, work, education, and production, as apparent in the cases of the Amazon workers’ strikes, Donald Trump’s 2016 campaign, the Facebook and Microsoft data scandals, and more, are often non-transparent to the wider public’s eye, thus consolidating new elites and technocratic classes and making the public scene less and less democratic. The new category proposed, weapon-being, is outlined with respect to the basic function of reducing complexity, subtracting materials, actants, and parameters, not strictly in favor of a humanistic re-orientation but within a more inclusive ontology of objects and subjects. Utilizing insights of Object-Oriented Ontology (OOO) and its schematization of technological objects, an outline for a radical ontology of technology is approached.
Keywords: design, hypermodernity, object-oriented ontology, weapon-being
Procedia PDF Downloads 152
1177 Efficiency of Different Types of Addition onto the Hydration Kinetics of Portland Cement
Authors: Marine Regnier, Pascal Bost, Matthieu Horgnies
Abstract:
Some of the problems to be solved for the concrete industry are linked to the use of low-reactivity cement, the hardening of concrete in cold weather, and the manufacture of pre-cast concrete without a costly heating step. The development of these applications requires accelerating the hydration kinetics, in order to decrease the setting time and to obtain significant compressive strengths as soon as possible. The mechanisms enhancing the hydration kinetics of alite or Portland cement (e.g., the creation of nucleation sites) have already been studied in the literature (e.g., by using distinct additions such as titanium dioxide nanoparticles, calcium carbonate fillers, water-soluble polymers, C-S-H, etc.). However, the goal of this study was to establish a clear ranking of the efficiency of several types of additions by using a robust and reproducible methodology based on isothermal calorimetry (performed at 20°C). The cement was a CEM I 52.5N PM-ES (Blaine fineness of 455 m²/kg). To ensure the reproducibility of the experiments and avoid any decrease of the reactivity before use, the cement was stored in waterproof, sealed bags to avoid any contact with moisture and carbon dioxide. The experiments were performed on Portland cement pastes by using a water-to-cement ratio of 0.45 and incorporating different compounds (industrially available or laboratory-synthesized) that were selected according to their main composition and their specific surface area (SSA, calculated using the Brunauer-Emmett-Teller (BET) model and nitrogen adsorption isotherms performed at 77 K). The intrinsic effects of (i) dry powders (e.g., fumed silica, activated charcoal, nano-precipitates of calcium carbonate, afwillite germs, nanoparticles of iron and iron oxides, etc.) and (ii) aqueous solutions (e.g., containing calcium chloride, hydrated Portland cement or Master X-SEED 100, etc.) were investigated.
The influence of the amount of addition, calculated relative to the dry extract of each addition compared to cement (while conserving the same water-to-cement ratio), was also studied. The results demonstrated that the X-SEED®, the hydrated calcium nitrate, and the calcium chloride (and, at a minor level, a solution of hydrated Portland cement) were able to accelerate the hydration kinetics of Portland cement, even at low concentration (e.g., 1 wt.% of dry extract compared to cement). At higher rates of addition, the fumed silica, the precipitated calcium carbonate, and the titanium dioxide can also accelerate the hydration. In the case of the nano-precipitates of calcium carbonate, a correlation was established between the SSA and the accelerating effect. On the contrary, the nanoparticles of iron or iron oxides, the activated charcoal, and the dried crystallised hydrates did not show any accelerating effect. Future experiments will be scheduled to establish the ranking of these additions, in terms of accelerating effect, by using low-reactivity cements and other water-to-cement ratios.
Keywords: acceleration, hydration kinetics, isothermal calorimetry, Portland cement
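The specific surface areas above are derived from nitrogen adsorption isotherms at 77 K via the BET model. As a generic illustration of that calculation (not the authors' data or instrument software), the sketch below fits the linearized BET equation over the usual relative-pressure range and converts the monolayer capacity into an SSA; the synthetic isotherm and all parameter values are illustrative.

```python
import numpy as np

# Linearized BET equation: p/(v*(p0-p)) = 1/(vm*c) + ((c-1)/(vm*c)) * (p/p0),
# fitted over the customary relative-pressure window 0.05-0.35.

N_A = 6.022e23          # Avogadro's number, 1/mol
SIGMA_N2 = 0.162e-18    # cross-sectional area of an adsorbed N2 molecule, m^2
V_STP = 22414.0         # molar volume of an ideal gas at STP, cm^3/mol

def bet_ssa(p_rel, v_ads):
    """Specific surface area (m^2/g) from an N2 adsorption isotherm.
    p_rel: relative pressures p/p0; v_ads: adsorbed volume, cm^3(STP)/g."""
    y = p_rel / (v_ads * (1.0 - p_rel))
    slope, intercept = np.polyfit(p_rel, y, 1)   # linear BET fit
    vm = 1.0 / (slope + intercept)               # monolayer capacity, cm^3(STP)/g
    return vm * N_A * SIGMA_N2 / V_STP           # m^2 per gram of sample

# Synthetic isotherm generated from the BET model itself (vm = 10, c = 100)
p = np.linspace(0.05, 0.35, 7)
vm_true, c = 10.0, 100.0
v = vm_true * c * p / ((1 - p) * (1 + (c - 1) * p))
ssa = bet_ssa(p, v)   # recovers vm = 10 cm^3/g, i.e. about 43.5 m^2/g
```

In practice the fit window and the molecular cross-section are taken from the measurement standard in use; only the recovered monolayer capacity `vm` feeds the SSA.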
Procedia PDF Downloads 256
1176 Assessment of Water Pollution in the River Nile (Egypt) by Applying Blood Biomarkers in Two Excellent Model Species Oreochromis niloticus niloticus and Clarias gariepinus
Authors: Alaa G. M. Osman, Abd-El-Baset M. Abd El Reheem, Khaled Y. Abouelfadl, Usama M. Mahmoud, Mohsen A. Moustafa
Abstract:
This study aimed to explore new sites of biomarker research and to establish the use of blood parameters in wild fish populations. Four hundred and twenty fish samples were collected from six sites along the whole course of the river Nile, Egypt. The mean values of erythrocytes, thrombocytes, hemoglobin concentration, hematocrit value, and mean corpuscular volume were significantly lower in the blood of Nile tilapia and African catfish collected from downstream (contaminated) sites compared to upstream sites. In contrast, mean corpuscular hemoglobin and mean corpuscular hemoglobin concentration in the peripheral blood of both fish species significantly increased from upstream to downstream along the river Nile. The leukocyte count was significantly decreased at contaminated sites compared to the upstream area. Hematological variables in the peripheral blood of Oreochromis niloticus niloticus and Clarias gariepinus exhibited significant (p<0.05) correlations with nearly all the detected chemical and physical parameters along the Nile course. In the present study, lower cellular and nuclear areas and lower cellular and nuclear shape factors were recorded in the erythrocytes of fish collected from downstream sites compared to those caught from upstream sites. This was confirmed by higher ratios of immature red cells in the blood of fish sampled from the downstream river Nile. Karyorrhetic and enucleated erythrocytes were significantly correlated with physiochemical parameters in water samples collected from the same sites, being more numerous in the blood of fish collected from downstream sites. To see if there was any correlation between altered physiological fitness of fish and environmental stress, we measured serum biochemical variables, namely: total protein, cholesterol, triglycerides, calcium, chlorides, alkaline phosphatase (ALP) activity, aspartate aminotransferase (AST), alanine aminotransferase (ALT), uric acid, creatinine, and serum glucose.
The levels of all the selected biochemical variables in the blood of O. niloticus niloticus and C. gariepinus were recorded to be significantly higher (p<0.05) at downstream sites. According to the present results, nearly all the detected haematological and blood biochemical variables are suitable indicators of contaminant exposure in O. niloticus niloticus and C. gariepinus. The detected erythrocyte malformations in blood collected from Nile tilapia and African catfish also proved to be suitable for bio-monitoring aquatic pollution. The results revealed species-specific differences in sensitivities, suggesting that Nile tilapia may serve as a more sensitive test species compared to African catfish.
Keywords: biomarkers, water pollution, blood parameters, River Nile, African catfish, Nile tilapia
Procedia PDF Downloads 291
1175 Temperature Contour Detection of Salt Ice Using Color Thermal Image Segmentation Method
Authors: Azam Fazelpour, Saeed Reza Dehghani, Vlastimil Masek, Yuri S. Muzychka
Abstract:
The study uses a novel image analysis based on thermal imaging to detect temperature contours created on a salt ice surface during transient phenomena. Thermal cameras detect objects by using their emissivities and IR radiance. The ice surface temperature is not uniform during transient processes: the temperature starts to increase from the boundary of the ice towards its center. Thermal cameras are able to report temperature changes on the ice surface at every individual moment. Various contours, which show different temperature areas, appear in the picture of the ice surface captured by a thermal camera. Identifying the exact boundary of these contours is valuable to facilitate ice surface temperature analysis. Image processing techniques are used to extract each contour area precisely. In this study, several pictures are recorded while the temperature is increasing throughout the ice surface. Some pictures are selected to be processed at specific time intervals. An image segmentation method is applied to the images to determine the contour areas. Color thermal images are used to exploit the main information. The red, green, and blue elements of the color images are investigated to find the best contour boundaries. Image enhancement and noise removal algorithms are applied to the images to obtain high-contrast, clear images. A novel edge detection algorithm based on differences in the color of the pixels is established to determine contour boundaries. In this method, the edges of the contours are obtained according to the properties of the red, blue, and green image elements. The color image elements are assessed considering the information they carry; useful elements proceed to processing, and useless elements are removed to reduce the computation time. Neighboring pixels with close intensities are assigned to one contour, and differences in intensities determine boundaries. The results are then verified by conducting experimental tests.
An experimental setup is prepared using ice samples and a thermal camera. To observe the created ice contours with the thermal camera, the samples, which are initially at -20°C, are placed in contact with a warmer surface. Pictures are captured for 20 seconds. The method is applied to five images, which are captured at time intervals of 5 seconds. The study shows that the green image element carries no useful information; therefore, the boundary detection method is applied to the red and blue image elements. In this case study, the results indicate that the proposed algorithm shows the boundaries more effectively than other edge detection methods such as Sobel and Canny. A comparison between the contour detection in this method and the temperature analysis, which indicates the real boundaries, shows good agreement. This color image edge detection method is applicable to other similar cases according to their image properties.
Keywords: color image processing, edge detection, ice contour boundary, salt ice, thermal image
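The edge detection idea described above (comparing neighboring pixel intensities channel by channel, using the red and blue elements while discarding the uninformative green element) can be sketched as follows. This is a minimal illustration of the principle, not the authors' algorithm; the threshold value and the synthetic test image are assumptions.

```python
import numpy as np

def contour_edges(img, threshold=30, channels=(0, 2)):
    """Mark edge pixels where the intensity difference to a horizontal or
    vertical neighbor exceeds `threshold` in any selected channel (red and
    blue by default; green is skipped). img: H x W x 3 uint8 color image."""
    f = img.astype(np.int32)                # avoid uint8 wrap-around
    edges = np.zeros(img.shape[:2], dtype=bool)
    for ch in channels:
        band = f[:, :, ch]
        dx = np.abs(np.diff(band, axis=1))  # horizontal neighbor differences
        dy = np.abs(np.diff(band, axis=0))  # vertical neighbor differences
        edges[:, :-1] |= dx > threshold     # close intensities stay in one
        edges[:-1, :] |= dy > threshold     # contour; large jumps mark edges
    return edges

# Synthetic "temperature contour": a warm red square in a cool blue field
img = np.zeros((9, 9, 3), dtype=np.uint8)
img[..., 2] = 40            # uniform cool background, blue-dominant
img[2:7, 2:7, 0] = 200      # warmer inner region, red-dominant
mask = contour_edges(img)   # True only along the square's perimeter
```

Unlike gradient-magnitude operators such as Sobel or Canny, this per-channel neighbor comparison groups pixels with close intensities into one contour and flags only the jumps between contour regions.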
Procedia PDF Downloads 314
1174 Multiple Intelligences to Improve Pronunciation
Authors: Jean Pierre Ribeiro Daquila
Abstract:
This paper aims to analyze the use of the Theory of Multiple Intelligences as a tool to facilitate students’ learning. This theory, proposed by the American psychologist and educator Howard Gardner, was first established in 1983 and advocates that human beings possess eight intelligences and not only one, as defended by psychologists prior to his theory. These intelligences are bodily-kinesthetic, musical, linguistic, logical-mathematical, spatial, interpersonal, intrapersonal, and naturalist. This paper will focus on bodily-kinesthetic intelligence. Spatial and bodily-kinesthetic intelligences are exhibited by athletes, dancers, and others who use their bodies in ways that exceed normal abilities. These are intelligences that are closely related: a quarterback or a ballet dancer needs to have both an awareness of body motions and abilities as well as a sense of the space involved in the action. Nevertheless, there are many reasons which make classical ballet more integrated with other intelligences. Ballet dancers make it look effortless as they move across the stage, from the lifts to the toe points; therefore, there is acting both in the performance of the repertoire and in hiding the pain or physical stress. The ballet dancer has to have great mathematical intelligence to perform a fast allegro, for instance, as each movement has to be executed in a specific millisecond. Flamenco dancers need to rely on their mathematical abilities as well, as the footwork requires the ability to make a half, two, three, four, or even six movements in just one beat. However, the precision of the arm movements is freer than in ballet; for this reason, ballet dancers need to be more holistically aware of their movements. Our experiment will therefore test whether this greater attention required of ballet dancers makes them acquire better results in the training sessions when compared to flamenco dancers.
An experiment will be carried out in this study by training, through dance, a group of ballet dancers (a minimum of four years of dancing experience – experimental group 1) and a group of flamenco dancers (a minimum of four years of dancing experience – experimental group 2). Both experimental groups will be trained in two different domains – phonetics and chemistry – to examine whether there is a significant improvement in these areas compared to the control group (a group of regular students who will receive the same training through a traditional method). However, this paper will focus on phonetic training. Experimental group 1 will be trained with the aid of classical music plus bodily work. Experimental group 2 will be trained with flamenco rhythm and kinesthetic work. We would like to highlight that this study takes dance as an example of a possible area of strength; nonetheless, other types of arts can and should be used to support students, such as drama, creative writing, music, and others. The main aim of this work is to suggest that other intelligences, in the case of this study bodily-kinesthetic, can be used to help improve pronunciation.
Keywords: multiple intelligences, pronunciation, effective pronunciation trainings, short drills, musical intelligence, bodily-kinesthetic intelligence
Procedia PDF Downloads 96
1173 Goal-Setting in a Peer Leader HIV Prevention Intervention to Improve Preexposure Prophylaxis Access among Black Men Who Have Sex with Men
Authors: Tim J. Walsh, Lindsay E. Young, John A. Schneider
Abstract:
Background: The disproportionate rate of HIV infection among Black men who have sex with men (BMSM) in the United States suggests the importance of Preexposure Prophylaxis (PrEP) interventions for this population. As such, there is an urgent need for innovative outreach strategies that extend beyond the traditional patient-provider relationship to reach at-risk populations. Training members of the BMSM community as peer change agents (PCAs) is one such strategy. An important piece of this training is goal-setting. Goal-setting not only encourages PCAs to define the parameters of the intervention according to their lived experience, but it also helps them plan courses of action. Therefore, the aims of this mixed methods study are: (1) to characterize the goals that BMSM set at the end of their PrEP training, and (2) to assess the relationship between goal types and PCA engagement. Methods: Between March 2016 and July 2016, preliminary data were collected from 68 BMSM, ages 18-33, in Chicago as part of an ongoing PrEP intervention. Once enrolled, PCAs participate in a half-day training in which they learn about PrEP, practice initiating conversations about PrEP, and identify strategies for supporting at-risk peers through the PrEP adoption process. Training culminates with a goal-setting exercise, whereby participants establish a goal related to their role as a PCA. Goals were coded for features that either emerged from the data itself or existed in the extant goal-setting literature. The main outcomes were (1) the number of PrEP conversations PCAs self-report during booster conversations two weeks following the intervention and (2) the number of peers PCAs recruit into the study who completed the PrEP workshop. Results: PCA goals (N=68) were characterized in terms of four features: specificity, target population, personalization, and purpose defined. To date, PCAs report a collective 52 PrEP conversations.
Of these conversations, 56%, 25%, and 6% occurred with friends, family, and sexual partners, respectively. PCAs with specific goals had more PrEP conversations with at-risk peers compared to those with vague goals (58% vs. 42%); PCAs with personalized goals had more PrEP conversations compared to those with de-personalized goals (60% vs. 53%); and PCAs with goals that defined a purpose had more PrEP conversations compared to those who did not define a purpose (75% vs. 52%). All of the PCAs with goals that defined a purpose recruited peers into the study, compared to 45% of PCAs with goals that did not define a purpose. Conclusion: Our preliminary analysis demonstrates that BMSM are motivated to set and work toward a diverse set of goals to support peers in PrEP adoption. PCAs with goals involving a clearly defined purpose had more PrEP conversations and greater peer recruitment than those with goals lacking a defined purpose. This may indicate that PCAs who define their purpose at the outset of their participation will be more engaged in the study than those who do not. Goal-setting may be considered as a component of future HIV prevention interventions, both to advance intervention goals and as an indicator of PCAs’ understanding of the intervention.
Keywords: HIV prevention, MSM, peer change agent, preexposure prophylaxis
Procedia PDF Downloads 196
1172 Exploring the Gap between Coverage, Access, Utilization of Long Lasting Insecticidal Nets (LLINs) among the People of Malaria Endemic Districts in Bangladesh
Authors: Fouzia Khanam, Tridib Chowdhury, Belal Hossain, Sajedur Rahman, Mahfuzar Rahman
Abstract:
Introduction: Over the last decades, the world has achieved noticeable success in preventing malaria. Nevertheless, malaria, a vector-borne infectious disease, remains a major public health burden globally as well as in Bangladesh. To achieve the goal of eliminating malaria, BRAC, a leading organization in Bangladesh, in collaboration with the government, is distributing free LLINs to the 13 endemic districts of the country. The study was conducted with the aim of assessing the gap between coverage, access, and utilization of LLINs among the people of the 13 malaria-endemic districts of Bangladesh. Methods: This baseline study employed a community cross-sectional design triangulated with qualitative methods to measure households’ ownership, access, and use of LLINs in the 13 endemic districts. Multistage cluster random sampling was employed for the quantitative part, and a purposive sampling strategy was used for the qualitative part. The present analysis thus included 2640 households encompassing a total of 14475 people. Data were collected using a pre-tested structured questionnaire through one-on-one, face-to-face interviews with respondents. All analyses were performed using STATA (Version 13.0). For the qualitative part, participant observation, in-depth interviews, focus group discussions, key informant interviews, and informal interviews were conducted to gather the contextual data. Findings: According to our study, 99.8% of households possessed at least one bed net in both study areas. 77.4% of households possessed at least two LLINs, and 43.2% of households had access to an LLIN for all their members, so the gap between coverage and access is 34%. 91.8% of people in the 13 districts and 95.1% in the Chittagong Hill Tracts areas reported having slept under a bed net the night before being interviewed. And despite the relatively low access, in 77.8% of households all the members had used the LLIN the previous night.
This higher utilization compared to access might be due to increased awareness regarding LLIN use among the community people. However, 6% of those people with sufficient access to LLINs still did not use them, which reflects a behavioral failure that needs to be addressed. The major reasons for not using LLINs, identified by both qualitative and quantitative findings, were insufficient access, sleeping or living outside the home, migration, perceived low efficacy of LLINs, and fear of physical side effects or feeling uncomfortable. Conclusion: Given that LLIN access and use fell somewhat short of the targets, this conveys important messages to the malaria control program. Targeting specific population segments and groups for achieving the expected LLIN coverage is crucial. Addressing behavioral failure through well-designed behavioral change interventions is also mandatory.
Keywords: long lasting insecticide net, malaria, malaria control programme, World Health Organisation
Procedia PDF Downloads 187