Search results for: small islands
671 Medial Temporal Tau Predicts Memory Decline in Cognitively Unimpaired Elderly
Authors: Angela T. H. Kwan, Saman Arfaie, Joseph Therriault, Zahra Azizi, Firoza Z. Lussier, Cecile Tissot, Mira Chamoun, Gleb Bezgin, Stijn Servaes, Jenna Stevenson, Nesrine Rahmouni, Vanessa Pallen, Serge Gauthier, Pedro Rosa-Neto
Abstract:
Alzheimer’s disease (AD) can be detected in living people using in vivo biomarkers of amyloid-β (Aβ) and tau, even in the absence of cognitive impairment during the preclinical phase. [¹⁸F]-MK-6240 is a high-affinity positron emission tomography (PET) tracer that quantifies tau neurofibrillary tangles, but its ability to predict cognitive changes associated with early AD symptoms, such as memory decline, is unclear. Here, we assess the prognostic accuracy of baseline [¹⁸F]-MK-6240 tau PET for predicting longitudinal memory decline in asymptomatic elderly individuals. In a longitudinal observational study, we evaluated a cohort of cognitively normal elderly participants (n = 111) from the Translational Biomarkers in Aging and Dementia (TRIAD) study (data collected between October 2017 and July 2020, with a follow-up period of 12 months). All participants underwent tau PET with [¹⁸F]-MK-6240 and Aβ PET with [¹⁸F]-AZD-4694. The exclusion criteria included the presence of head trauma, stroke, or other neurological disorders. The 111 eligible participants were chosen based on the availability of Aβ PET, tau PET, magnetic resonance imaging (MRI), and APOEε4 genotyping. Among these participants, the mean (SD) age was 70.1 (8.6) years; 20 (18%) were tau PET positive, and 71 of 111 (63.9%) were women. A significant association between baseline Braak I-II [¹⁸F]-MK-6240 SUVR positivity and change in composite memory score was observed at the 12-month follow-up, after correcting for age, sex, and years of education (Logical Memory and RAVLT, standardized beta = -0.52 (-0.82 to -0.21), p < 0.001, for dichotomized tau PET and -1.22 (-1.84 to -0.61), p < 0.0001, for continuous tau PET).
Moderate cognitive decline was observed for the A+T+ group over the follow-up period, whereas no significant change was observed for A-T+, A+T-, and A-T-, though it should be noted that the A-T+ group was small. Our results indicate that baseline tau neurofibrillary tangle pathology is associated with longitudinal changes in memory function, supporting the use of [¹⁸F]-MK-6240 PET to predict the likelihood of future memory decline in asymptomatic elderly individuals. Overall, [¹⁸F]-MK-6240 PET is a promising tool for predicting memory decline in older adults without cognitive impairment at baseline. This is of critical relevance as the field shifts towards a biological model of AD defined by the aggregation of pathologic tau. Early detection of tau pathology using [¹⁸F]-MK-6240 PET thus offers hope that patients with AD may be diagnosed during the preclinical phase, before it is too late.
Keywords: Alzheimer’s disease, Braak I-II, in vivo biomarkers, memory, PET, tau
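For readers unfamiliar with the reported effect sizes: a standardized beta is the regression slope obtained after z-scoring the predictor and the outcome, adjusting for covariates. A minimal sketch of how such a coefficient is computed, on synthetic stand-in data (the variables and effect sizes here are illustrative, not the study's dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 111  # cohort size reported in the abstract

# Synthetic stand-ins for tau SUVR, covariates, and 12-month memory change
tau = rng.normal(1.0, 0.2, n)
age = rng.normal(70.1, 8.6, n)
sex = rng.integers(0, 2, n)
edu = rng.normal(14.0, 3.0, n)
memory_change = -1.2 * tau + 0.01 * age + rng.normal(0, 0.5, n)

def standardized_beta(y, x, covariates):
    """OLS slope of z-scored y on z-scored x, adjusting for covariates."""
    z = lambda v: (v - v.mean()) / v.std()
    X = np.column_stack([np.ones(len(y)), z(x)] + [z(c) for c in covariates])
    coef, *_ = np.linalg.lstsq(X, z(y), rcond=None)
    return coef[1]  # coefficient on the predictor of interest

beta = standardized_beta(memory_change, tau, [age, sex, edu])
# beta is negative here because the synthetic data encode higher tau -> steeper decline
```
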
Procedia PDF Downloads 766
670 Investigations of Effective Marketing Metric Strategies: The Case of St. George Brewery Factory, Ethiopia
Authors: Mekdes Getu Chekol, Biniam Tedros Kahsay, Rahwa Berihu Haile
Abstract:
The main objective of this study is to investigate marketing strategy practice in the case of the St. George Brewery Factory in Addis Ababa. A well-developed marketing strategy is one of the core requirements for a business company to stay in business. The study assessed how marketing strategies were practiced in the company to achieve its goals, aligned with segmentation, target market, positioning, and the marketing mix elements, to satisfy customer requirements. Using primary and secondary data, the study was conducted with both qualitative and quantitative approaches. The primary data were collected through open- and closed-ended questionnaires. Because the population was small, the respondents were selected by census. The findings show that the company used all 4 Ps of the marketing mix in its marketing strategies and provided quality products at affordable prices, promoting its products through effective, high-reach advertising. Product availability and accessibility are admirable, with both direct and indirect distribution channels in practice. The company has identified its target customers, and its market segmentation practice is by geographical location. Communication between the marketing department and other departments is very good. The adjusted R² model explains 61.6% of the variance in marketing strategy practice by product, price, promotion, and place; the remaining 38.4% of variation in the dependent variable is explained by other factors not included in this study. The results reveal that all four independent variables (product, price, promotion, and place) have positive beta coefficients, indicating that the predictor variables have a positive effect on the dependent variable, marketing strategy practice.
Even though the marketing strategies of the company are effectively practiced, there are some problems that the company faces while implementing them. These are infrastructure problems, economic problems, intense competition in the market, shortage of raw materials, seasonality of consumption, socio-cultural problems, and the time and cost of creating awareness among customers. Finally, the authors suggest that the company develop a long-range view and implement a more structured approach to obtaining information about potential customers, competitors' actions, and market intelligence within the industry. In addition, we recommend extending the study with a larger sample size and additional marketing factors.
Keywords: marketing strategy, market segmentation, target marketing, market positioning, marketing mix
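The adjusted R² reported above follows the standard formula R²_adj = 1 − (1 − R²)(n − 1)/(n − p − 1), with p = 4 predictors (product, price, promotion, place). A quick sketch; the respondent count used here is hypothetical, since the abstract does not state it:

```python
def adjusted_r2(r2, n, p):
    """Adjusted R-squared for n observations and p predictors."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# Illustration: a plain R2 of 0.632 with an assumed 100 respondents and the
# 4 marketing-mix predictors shrinks to roughly the 0.616 figure reported.
adj = adjusted_r2(0.632, 100, 4)
```

Note that the penalty grows with the number of predictors, which is why reporting the adjusted rather than the plain R² is the conservative choice here.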
Procedia PDF Downloads 55
669 Field Synergy Analysis of Combustion Characteristics in the Afterburner of Solid Oxide Fuel Cell System
Authors: Shing-Cheng Chang, Cheng-Hao Yang, Wen-Sheng Chang, Chih-Chia Lin, Chun-Han Li
Abstract:
The solid oxide fuel cell (SOFC) is a promising green technology which can achieve high electrical efficiency. Due to the high operating temperature of the SOFC stack, the high-temperature off-gases from the anode and cathode outlets are introduced into an afterburner, where combustion converts their chemical energy into thermal energy. The heat is recovered to preheat the fresh air and fuel gases before they pass through the stack during operation of the SOFC power generation system. For an afterburner of an SOFC system, temperature control with good thermal uniformity is important, and a burner with a well-designed geometry can usually achieve satisfactory performance. Computational fluid dynamics (CFD) simulation is a suitable tool for designing such an afterburner. In this paper, the hydrogen combustion characteristics in an afterburner with simple geometry are studied using CFD. The burner is a cylindrical chamber with a fuel gas inlet, an air inlet, and an exhaust outlet. The flow field and temperature distributions inside the afterburner under different fuel and air flow rates are analyzed. To improve the temperature uniformity of the afterburner during SOFC system operation, the flow paths of the anode/cathode off-gases are varied by changing the positions of the fuel and air inlet channels, improving the synergy between the heat and flow fields in the burner furnace. Because the air flow rate is much larger than that of the fuel gas, the flow structure and heat transfer in the afterburner are dominated by the air flow path. The present work studied the effects of fluid flow structures on the combustion characteristics of an SOFC afterburner using three simulation models with a cylindrical combustion chamber and a tapered outlet. All walls in the afterburner are assumed to be no-slip and adiabatic. In each case, two sets of parameters are simulated to study the transport phenomena of hydrogen combustion.
The equivalence ratios are in the range of 0.08 to 0.1. Finally, the pattern factor for the simulation cases is calculated to investigate the effect of gas inlet locations on the temperature uniformity of the SOFC afterburner. The results show that the temperature uniformity of the exhaust gas can be improved by simply adjusting the position of the gas inlet. The field synergy analysis indicates that the fluid flow paths should be designed so that they contribute significantly to heat transfer, i.e., the field synergy angle should be as small as possible. In the study cases, the averaged synergy angle of the burner is about 85°, 84°, and 81°, respectively.
Keywords: afterburner, combustion, field synergy, solid oxide fuel cell
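The field synergy angle referenced above is, by the standard definition of the field synergy principle, the local angle between the velocity vector and the temperature gradient; smaller angles mean convection contributes more to heat transfer. A minimal sketch of the pointwise calculation (a CFD study such as this one would average it over the whole domain):

```python
import numpy as np

def synergy_angle_deg(u, grad_t):
    """Local field synergy angle: the angle between the velocity vector u
    and the temperature gradient grad_t, in degrees."""
    u, grad_t = np.asarray(u, float), np.asarray(grad_t, float)
    cos_beta = np.dot(u, grad_t) / (np.linalg.norm(u) * np.linalg.norm(grad_t))
    return float(np.degrees(np.arccos(np.clip(cos_beta, -1.0, 1.0))))

# Flow parallel to the gradient -> 0 deg (best synergy);
# flow perpendicular to the gradient -> 90 deg (no convective contribution),
# which is why averaged angles of 81-85 deg indicate weak synergy.
```
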
Procedia PDF Downloads 132
668 An Analysis of LoRa Networks for Rainforest Monitoring
Authors: Rafael Castilho Carvalho, Edjair de Souza Mota
Abstract:
As the largest contributor to the biogeochemical functioning of the Earth system, the Amazon Rainforest has the greatest biodiversity on the planet, harboring about 15% of the world's flora. Its recognition and preservation are the focus of research that seeks to mitigate drastic changes, especially anthropic ones, which irreversibly affect this biome. Functional and low-cost monitoring alternatives to reduce these impacts are a priority, such as those based on Low Power Wide Area Networks (LPWAN). Promising, reliable, secure, and low in energy consumption, LPWAN can connect thousands of IoT devices; in particular, LoRa is considered one of the most successful solutions for forest monitoring applications. Despite this, the forest environment, and the Amazon Rainforest in particular, is challenging for these technologies, requiring work to identify and validate their use in a real environment. To investigate the feasibility of deploying LPWAN for remote water quality monitoring of rivers in the Amazon region, a LoRa-based test bed consisting of a LoRa transmitter and a LoRa receiver was set up; both were implemented with Arduino boards and the SX1276 LoRa chip. The experiment was carried out at the Federal University of Amazonas, which contains one of the largest urban forests in Brazil. There are several springs inside the forest, and the main goal is to collect water quality parameters and transmit the data through the forest in real time to the gateway at the university. In all, there are nine water quality parameters of interest. Even with a high collection frequency, the amount of information that must be sent to the gateway is small. However, for this application, the battery of the transmitter device is a concern, since in the real application the device must run without maintenance for long periods of time.
With these constraints in mind, parameters such as Spreading Factor (SF) and Coding Rate (CR), as well as different antenna heights and distances, were tuned to improve connectivity quality, measured by RSSI and loss rate. A handheld RF Explorer spectrum analyzer was used to obtain the RSSI values. At distances exceeding 200 m, communication soon proved difficult to establish due to the dense foliage and high humidity. The optimal SF-CR combinations were 8-5 and 9-5, showing the lowest packet loss rates, 5% and 17%, respectively, with a signal strength of approximately -120 dBm; these are the best settings for this study so far. Rain and climate conditions imposed limitations on the equipment, and more tests are already being conducted. Subsequently, the range of the LoRa configuration must be extended using a mesh topology, especially because at least three different collection points in the same water body are required.
Keywords: IoT, LPWAN, LoRa, coverage, loss rate, forest
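A practical constraint behind the SF/CR tuning above is time-on-air, which drives the transmitter's battery drain: each step up in SF roughly doubles airtime. The sketch below estimates airtime with the SX1276 datasheet formula for the study's two best settings, SF8 and SF9 at CR 4/5; the 20-byte payload and 125 kHz bandwidth are assumptions for illustration, not values stated in the abstract:

```python
from math import ceil

def lora_airtime_ms(payload_bytes, sf, cr, bw_hz=125_000,
                    preamble=8, crc=True, explicit_header=True, ldro=False):
    """Approximate LoRa time-on-air per the SX1276 datasheet formula.
    cr is the coding-rate index: 1 -> 4/5, ... 4 -> 4/8."""
    t_sym = (2 ** sf) / bw_hz * 1000.0  # symbol duration in ms
    de, ih = int(ldro), int(not explicit_header)
    num = 8 * payload_bytes - 4 * sf + 28 + 16 * int(crc) - 20 * ih
    payload_syms = 8 + max(ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    return (preamble + 4.25 + payload_syms) * t_sym

# The study's two best-performing settings, SF8 CR4/5 and SF9 CR4/5,
# for a hypothetical 20-byte water-quality payload:
t_sf8 = lora_airtime_ms(20, 8, 1)  # ~103 ms on air
t_sf9 = lora_airtime_ms(20, 9, 1)  # ~185 ms on air
```

The SF9 setting's lower loss rate in dense foliage therefore trades off against nearly double the energy spent per transmission.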
Procedia PDF Downloads 84
667 Optimization of Heat Source Assisted Combustion on Solid Rocket Motors
Authors: Minal Jain, Vinayak Malhotra
Abstract:
Solid propellant ignition consists of rapid and complex events comprising heat generation and heat transfer, with flames spreading over the entire burning surface area. Proper combustion, and thus propulsion, depends heavily on the modes of heat transfer and the cavity volume. Fire safety is an integral component of a successful rocket flight, failing which the whole mission may be lost, with an enormous forfeiture of resources: money, time, and labor. When the propellant is ignited, thrust is generated and the casing heats up. This heat adds to the propellant heat, and the casing, if not properly oriented, starts burning as well, leading to the complete destruction of the rocket. This has necessitated active research emphasizing a comprehensive study of the inter-energy relations involved, for effective utilization of solid rocket motors in better space missions. The present work focuses on one major influential aspect of this detrimental burning: the presence of an external heat source in addition to a potential heat source which is already ignited. The study is motivated by the need to ensure better combustion and fire safety, presented experimentally as a simplified small-scale model of a rocket carrying a solid propellant inside a cavity. The experimental setup comprises a paraffin wax candle as the pilot fuel and an incense stick as the external heat source. The candle is fixed, and the incense stick's position and location are varied to investigate the influence of the external heat source. Different configurations of external heat source placement and separation distance are tested. Regression rates of the pilot thin solid fuel are recorded to fundamentally understand the non-linear heat and mass transfer, which is the governing phenomenon, and the mechanism behind it.
Results so far indicate non-linear heat transfer accompanied by the occurrence of flaming transition at selected critical distances. With an increase in separation distance, the effect drops in a non-monotonic trend. The parametric study results are likely to provide useful physical insight into the governing physics and to aid proper testing, validation, material selection, and design of solid rocket motors with enhanced safety.
Keywords: combustion, propellant, regression, safety
Procedia PDF Downloads 161
666 Prediction of Live Birth in a Matched Cohort of Elective Single Embryo Transfers
Authors: Mohsen Bahrami, Banafsheh Nikmehr, Yueqiang Song, Anuradha Koduru, Ayse K. Vuruskan, Hongkun Lu, Tamer M. Yalcinkaya
Abstract:
In recent years, we have witnessed an explosion of studies aimed at using a combination of artificial intelligence (AI) and time-lapse imaging data on embryos to improve IVF outcomes. However, despite promising results, no study has used a matched cohort of transferred embryos which differ only in pregnancy outcome, i.e., embryos from a single clinic which are similar in parameters such as morphokinetic condition, patient age, and overall clinic and lab performance. Here, we used time-lapse data on embryos with known pregnancy outcomes to see if the rich spatiotemporal information embedded in this data would allow prediction of the pregnancy outcome regardless of such critical parameters. Methodology—We performed a retrospective analysis of time-lapse data from our IVF clinic, which uses the EmbryoScope exclusively for embryo culture to the blastocyst stage, with known clinical outcomes of live birth vs. nonpregnant (embryos with spontaneous abortion outcomes were excluded). We used time-lapse data from 200 elective single-transfer embryos randomly selected from January 2019 to June 2021. Our sample included 100 embryos in each group, with no significant difference in patient age (P=0.9550) or morphokinetic scores (P=0.4032). Data from all patients were combined into a fourth-order tensor, and feature extraction was subsequently carried out by a tensor decomposition methodology. The features were then used in a machine learning classifier to classify the two groups. Major Findings—The performance of the model was evaluated using 100 random subsampling cross-validation iterations (80% train, 20% test). The prediction accuracy, averaged across the 100 permutations, exceeded 80%. We also performed a random grouping analysis, in which labels (live birth, nonpregnant) were randomly assigned to embryos, which yielded 50% accuracy.
Conclusion—The high accuracy in the main analysis and the low accuracy in the random grouping analysis suggest a consistent spatiotemporal pattern associated with pregnancy outcomes, regardless of patient age and embryo morphokinetic condition, and beyond already known parameters such as early cleavage or early blastulation. Despite the small sample size, this ongoing analysis is the first to show the potential of AI methods in capturing the complex morphokinetic changes embedded in embryo time-lapse data which contribute to successful pregnancy outcomes, regardless of already known parameters. Results on a larger sample size, with complementary analysis on the prediction of other key outcomes such as embryo euploidy and aneuploidy, will be presented at the meeting.
Keywords: IVF, embryo, machine learning, time-lapse imaging data
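The validation scheme described above, repeated random 80/20 subsampling plus a shuffled-label baseline, can be sketched as follows. The nearest-centroid classifier and the synthetic two-group features are stand-ins for the actual tensor-decomposition features and classifier, which the abstract does not specify:

```python
import numpy as np

def nearest_centroid_acc(X, y, n_splits=100, test_frac=0.2, seed=0):
    """Mean test accuracy of a nearest-centroid classifier over
    repeated random 80/20 subsampling splits (as in the abstract)."""
    rng = np.random.default_rng(seed)
    n, accs = len(y), []
    for _ in range(n_splits):
        idx = rng.permutation(n)
        test, train = idx[:int(n * test_frac)], idx[int(n * test_frac):]
        centroids = {c: X[train][y[train] == c].mean(axis=0)
                     for c in np.unique(y[train])}
        preds = [min(centroids, key=lambda c: np.linalg.norm(x - centroids[c]))
                 for x in X[test]]
        accs.append(np.mean(preds == y[test]))
    return float(np.mean(accs))

# Synthetic stand-in for 200 embryos' extracted features: two separable groups
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (100, 5)), rng.normal(1.5, 1.0, (100, 5))])
y = np.array([0] * 100 + [1] * 100)

acc_true = nearest_centroid_acc(X, y)                   # informative labels
acc_null = nearest_centroid_acc(X, rng.permutation(y))  # shuffled labels, near chance
```

The gap between `acc_true` and the near-chance `acc_null` is exactly the contrast the abstract uses to argue the pattern is real rather than an artifact of the pipeline.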
Procedia PDF Downloads 91
665 Inertial Spreading of Drop on Porous Surfaces
Authors: Shilpa Sahoo, Michel Louge, Anthony Reeves, Olivier Desjardins, Susan Daniel, Sadik Omowunmi
Abstract:
The microgravity environment of the International Space Station (ISS) was exploited to study the imbibition of water into a network of hydrophilic cylindrical capillaries on time and length scales long enough to observe details hitherto inaccessible under Earth gravity. When a drop touches a porous medium, it spreads as if laid on a composite surface. The surface first behaves as a hydrophobic material, as the liquid must penetrate pores filled with air. When contact is established, some of the liquid is drawn into the pores by a capillarity that is resisted by viscous forces growing with the length of the imbibed region. This process always begins with an inertial regime that is complicated by possible contact pinning. To study imbibition on Earth, time and distance must be shrunk to mitigate gravity-induced distortion, and these small scales make it impossible to observe the inertial and pinning processes in detail. Instead, in the International Space Station (ISS), astronaut Luca Parmitano slowly extruded water spheres until they touched any of nine capillary plates. The 12 mm diameter droplets were large enough for high-speed GX1050C video cameras on top and side to visualize details near individual capillaries, and the recordings were long enough to capture the dynamics of the entire imbibition process. To investigate the role of contact pinning, a test matrix of nine kinds of porous capillary plates was produced, made of gold-coated brass treated with Self-Assembled Monolayers (SAM) that fixed advancing and receding contact angles to known values. In the ISS, long-term microgravity allowed unambiguous observations of the role of contact line pinning during the inertial phase of imbibition. The high-speed videos of spreading and imbibition on the porous plates were analyzed using computer vision software to calculate the radius of the droplet's contact patch with the plate and the height of the droplet over time.
These observations are compared with numerical simulations and with data we obtained at the ESA ZARM free-fall tower in Bremen with a unique mechanism producing relatively large water spheres; the results were similar. The data obtained from the ISS can be used as a benchmark for further numerical simulations in the field.
Keywords: droplet imbibition, hydrophilic surface, inertial phase, porous medium
Procedia PDF Downloads 136
664 Shameful Heroes of Queer Cinema: A Critique of Mumbai Police (2013) and My Life Partner (2014)
Authors: Payal Sudhan
Abstract:
In India, Bollywood and other local industries make a range of commercial films that attract vast viewership. Love, heroism, action, adventure, and revenge are among the favorite themes chosen by filmmakers of popular film industries across the world. Sexuality, however, has become an issue to address within cinema, and such films appear in small numbers compared to other themes. One can easily assume that homosexuality is unlikely to be a favorite theme of Indian popular cinema. This does not mean that absolutely no films have been made on the issues of homosexuality; there have been several attempts. Earlier, some movies depicted homosexual (gay) characters as comedians, a practice that continued until the beginning of the 21st century. This study aims to explore how modern homophobia and stereotype are represented in film and how this affects homosexuality in recent Malayalam cinema. The study focuses primarily on Mumbai Police (2013) and My Life Partner (2014) and tries to explain social space, the idea of a cure, and criminality. Mumbai Police (2013), the first film selected for analysis, is a crime thriller. Its nonlinear narration reveals, towards the end, the murderer of ACP Aryan IPS, who was shot dead in a public meeting. The culprit turns out to be the investigating officer, ACP Antony Moses, himself a close friend and colleague of the victim, and, much to one’s curiosity, the primary cause turns out to be the sexual relationship Antony has. My Life Partner can generically be classified as a drama. The movie puts forth male bonding and visibly riddles the notions of love and sex between Kiran and his roommate Richard. Running along the same track, the film deals with a different ‘event’: the exclusive celebration of male bonding. The socio-cultural background of the cinema is heterosexual.
The elements of heterosexual social setup meet the ends of the diplomacy of Malayalam queer visual culture. The film reveals the life of two gay men who are humiliated by the larger heterosexual society; in the end, Kiran dies because of extreme humiliation. The paper is a comparative and cultural analysis of the two movies, My Life Partner and Mumbai Police. I bring the points of comparison together and explain the similarities and differences between the two films. Thus, my attempt here is to explain how stereotypes and homophobia, along with other related issues, are represented in these two movies.
Keywords: queer cinema, homophobia, Malayalam cinema, queer films
Procedia PDF Downloads 233
663 From Servicescape to Servicespace: Qualitative Research in a Post-Cartesian Retail Context
Authors: Chris Houliez
Abstract:
This study addresses the complex dynamics of the modern retail environment, focusing on how the ubiquitous nature of mobile communication technologies has reshaped the shopper experience and tested the limits of the conventional "servicescape" concept commonly used to describe retail experiences. The objective is to redefine the conceptualization of retail space by introducing an approach to space that aligns with a retail context where physical and digital interactions are increasingly intertwined. To offer a more shopper-centric understanding of the retail experience, this study draws from phenomenology, particularly Henri Lefebvre’s work on the production of space. The presented protocol differs from traditional methodologies by not making assumptions about what constitutes a retail space. Instead, it adopts a perspective based on Lefebvre’s seminal work, which posits that space is not a three-dimensional container commonly referred to as “servicescape” but is actively produced through shoppers’ spatial practices. This approach allows for an in-depth exploration of the retail experience by capturing the everyday spatial practices of shoppers without preconceived notions of what constitutes a retail space. The designed protocol was tested with eight participants during 209 hours of day-long field trips, immersing the researcher in the shopper's lived experience by combining multiple data collection methods, including participant observation, videography, photography, and both pre-fieldwork and post-fieldwork interviews. By giving equal importance to both locations and connections, this study unpacked various spatial practices that contribute to the production of retail space. These findings highlight the relative inadequacy of some traditional retail space conceptualizations, which often fail to capture the fluid nature of contemporary shopping experiences.
The study's emphasis on the customization process, through which shoppers optimize their retail experience by producing a “fully lived retail space,” offers a more comprehensive understanding of consumer shopping behavior in the digital age. In conclusion, this research presents a significant shift in the conceptualization of retail space. By employing a phenomenological approach rooted in Lefebvre’s theory, the study provides a more effective framework for understanding the retail experience in the age of mobile communication technologies. Although this research is limited by its small sample size and the demographic profile of participants, it offers valuable insights into the spatial practices of modern shoppers and their implications for retail researchers and retailers alike.
Keywords: shopper behavior, mobile telecommunication technologies, qualitative research, servicescape, servicespace
Procedia PDF Downloads 19
662 The Democracy of Love and Suffering in the Erotic Epigrams of Meleager
Authors: Carlos A. Martins de Jesus
Abstract:
The Greek Anthology, first put together in the tenth century AD, gathers in two separate books a large number of epigrams devoted to love and its consequences, of both heterosexual (Book V) and homosexual (Book XII) nature. While some poets wrote epigrams of only one genre – as did Strato (II cent. BC), the organizer of a widespread garland of homosexual epigrams – several others composed in both categories, often using the same topics of love and suffering. Following Plato’s theorization of two different kinds of Eros (Symp. 180d-182a), the popular (pandemos) and the celestial (ouranios), homoerotic epigrammatic love is more often associated with the first, while heterosexual poetry tends to be connected to the higher form of love. This paper focuses on the epigrammatic production of a single first-century BC poet, Meleager, aiming to identify the similarities and differences in his singing of both kinds of love. From Meleager, the Greek Anthology – a garland whose origins have been traced back to the poet’s own garland – preserves more than 60 heterosexual and 48 homosexual epigrams, an important and unprecedented body of poems, enough to trace a complete profile of his way of singing love. Meleager’s poetry deals with personal experience and emotions, frequently with love and the unhappiness that usually comes from it. Most often he describes himself not as an active and engaged lover but as one struck by the beauty of a woman or boy, i.e., at a stage prior to erotic consummation. His epigrams represent the unreal and fantastic (literally speaking) world of the lover, in which imagery and wordplay are used to convey emotion in the epigrams of both genres. Elsewhere Meleager surprises the reader by offering a surrealist or dreamlike landscape where everyday adventures are transcribed into elaborate metaphors for erotic feeling.
For instance, in 12.81, the lovers are shipwrecked, and as soon as they have disembarked, they are promptly kidnapped by a figure who is both Eros and a beautiful boy. Notably, and it is worth asking why, mythology also plays an important role in the homosexual poems collected in Book XII, namely in the figure and scene of Ganymede’s abduction by Zeus to his royal court (12.70, 94). While mostly refusing the Hellenistic model of the dramatic love epigram, in which a small everyday scene is portrayed – 5.182 being a clear exception to this near-rule – Meleager focuses on the tumultuous inner life of his (poetic) lovers, in the realm of a subject that feels love and pain far beyond his or her erotic preferences. In relation to loving and suffering – mostly suffering, it has to be said – Meleager’s love is therefore completely democratic. There is no real place in his epigrams for the traditional association, mentioned before, between homoeroticism and a carnal-erotic-pornographic love, with heterosexual love as the more even and pure, so to speak.
Keywords: epigram, erotic epigram, Greek Anthology, Meleager
Procedia PDF Downloads 252
661 Assessment of Water Reuse Potential in a Metal Finishing Factory
Authors: Efe Gumuslu, Guclu Insel, Gülten Yuksek, Nilay Sayi Ucar, Emine Ubay Cokgor, Tuğba Olmez Hanci, Didem Okutman Tas, Fatoş Germirli Babuna, Derya Firat Ertem, Ökmen Yildirim, Özge Erturan, Betül Kirci
Abstract:
Although water reclamation and reuse are inseparable parts of the sustainable production concept all around the world, current levels of reuse constitute only a small fraction of the total volume of industrial effluents. Today, in the face of serious climate change, wastewater reclamation and reuse practices should be considered a requirement. The industrial sector is one of the largest users of water sources. The OECD Environmental Outlook to 2050 predicts that global water demand for manufacturing will increase by 400% from 2000 to 2050, a much larger increase than in any other sector. The metal finishing industry is one of the industries that requires a high amount of water during manufacturing. Therefore, actions to improve wastewater treatment and reuse should be undertaken on both economic and environmental sustainability grounds. Process wastewater can be reused for more purposes if appropriate treatment systems are installed to treat the wastewater to the required quality level. Recent studies have shown that membrane separation techniques may help attain a water quality suitable for recycling back into the process. The metal finishing factory where this study was conducted is one of the biggest white-goods manufacturers in Turkey. The sheet metal parts used in cooker production are exposed to consecutive surface pre-treatment processes: degreasing, rinsing, nanoceramic coating, and deionization rinsing. The wastewater-generating processes in the factory are the enamel coating, painting, and styrofoam processes. In the factory, the main source of water is well water. While part of the well water is used directly in the processes after passing through resin treatment, another portion is directed to reverse osmosis treatment to obtain the water quality required for the enamel coating and painting processes.
In addition to these processes, rainwater (3660 tons/year) is another important potential water source. In this study, process profiles as well as pollution profiles were assessed through a detailed quantitative and qualitative characterization of the wastewater sources generated in the factory. Based on the preliminary results, the main wastewater sources that can be considered for reuse were determined to be those of the painting and styrofoam processes.
Keywords: enamel coating, painting, reuse, wastewater
Procedia PDF Downloads 377
660 A Longitudinal Study of Social Engagement in Classroom in Children with Autism Spectrum Disorder
Authors: Cecile Garry, Katia Rovira, Julie Brisson
Abstract:
Autism Spectrum Disorder (ASD) is defined by a qualitative and quantitative impairment of social interaction. Early intervention programs, such as the Early Start Denver Model (ESDM), therefore aim at encouraging the development of social skills. In the classroom, children need to be socially engaged to learn, so early intervention programs can be implemented in kindergarten schools. In these schools, children with ASD have more opportunities to interact with their peers or adults than in elementary schools. However, preschool children with ASD are less socially engaged in the classroom than their typically developing peers: they initiate, respond to, and maintain social interactions less, and they produce more responses than initiations. When they do interact, non-verbal communication is used more than verbal or symbolic forms, and they are more often engaged with adults than with peers. Nevertheless, communicative patterns may vary according to the clinical profiles of children with ASD. Children with ASD who have better cognitive skills interact more with their peers and use more symbolic communication than those with a low cognitive level, and children with less severe symptoms use verbal communication more than those with more severe symptoms. Small groups and structured activities encourage coordinated joint engagement episodes in children with ASD. Our goal is to evaluate the development of social engagement in class among children with ASD, with their peers or adults, during dyadic or group activities. Participants were 19 preschool children with ASD, aged 3 to 6 years old, who benefited from early intervention in special kindergarten schools. Severity of ASD symptoms was measured with the CARS at the beginning of the follow-up. Classroom situations of interaction were recorded for 10 minutes (5 minutes of dyadic interaction and 5 minutes of a group activity) every 2 months over 10 months.
Social engagement behaviors of the children, including initiations, responses, and imitation, directed toward a peer or an adult, were then coded using The Observer software (Noldus), which allows behaviors to be annotated. Double coding was conducted and revealed good inter-rater reliability. Results show that the children with ASD were socially engaged more often, and for longer, in dyadic than in group situations. They were also more engaged with adults than with peers. Children with less severe ASD symptoms were more socially engaged in group situations, and more engaged with their peers, than children with more severe symptoms. Engagement frequency nevertheless increased over the 10 months of follow-up, but only for the children with the more severe symptoms at the beginning. To conclude, these results highlight the necessity of individualizing early intervention programs according to the clinical profile of the child.
Keywords: autism spectrum disorder, preschool children, developmental psychology, early interventions, social interactions
Procedia PDF Downloads 157
659 Rumen Metabolites and Microbial Load in Fattening Yankasa Rams Fed Urea and Lime Treated Groundnut (Arachis hypogaea) Shell in a Complete Diet
Authors: Bello Muhammad Dogon Kade
Abstract:
The study was conducted to determine the effect of treated groundnut (Arachis hypogaea) shell in a complete diet on rumen metabolites and microbial load in fattening Yankasa rams. The study was conducted at the Teaching and Research Farm (Small Ruminants Unit) of the Animal Science Department, Faculty of Agriculture, Ahmadu Bello University, Zaria. Each kilogram of groundnut shell was treated with 5% urea and 5% lime for treatments 2 (UTGNS) and 3 (LTGNS), respectively. For treatment 4 (ULTGNS), 1 kg of groundnut shell was treated with 2.5% urea and 2.5% lime, while the shell in treatment 1 was left untreated (UNTGNS). Sixteen Yankasa rams were randomly assigned to the four treatment diets, with four animals per treatment, in a completely randomized design (CRD). The diet was formulated to have a 14% crude protein (CP) content. Rumen fluid was collected from each ram at the end of the experiment at 0 and 4 hours post-feeding. The samples were put in 30 ml bottles and acidified with 5 drops of sulphuric acid (0.1 N H₂SO₄) to trap ammonia. The results showed that the mean values of NH₃-N differed significantly (P<0.05) among the treatment groups, with rams on the ULTGNS diet having the highest value (31.96 mg/L). TVFAs were significantly (P<0.05) higher in rams fed the UNTGNS diet, which was also higher in total nitrogen. The effect of sampling period revealed that NH₃-N, TVFAs and TP were significantly (P<0.05) higher in rumen fluid collected 4 hours post-feeding across the treatment groups, whereas rumen fluid pH was significantly (P<0.05) higher at 0 hours post-feeding in all treatment diets. In the treatment-by-sampling-period interaction, animals on the ULTGNS diet had the highest mean NH₃-N values at both 0 and 4 hours post-feeding, significantly (P<0.05) higher than rams on the other treatment diets.
Rams on the UTGNS diet had the highest bacterial load of 4.96×10⁵/ml, significantly (P<0.05) higher than the microbial load of animals fed the UNTGNS, LTGNS and ULTGNS diets. Protozoa counts were also significantly (P<0.05) higher in rams fed the UTGNS diet, followed by the ULTGNS diet. There was no significant difference (P>0.05) in the bacterial count of the animals between 0 and 4 hours post-feeding, but rumen fungi and protozoa loads at 0 hours were significantly (P<0.05) higher than at 4 hours post-feeding. The use of untreated ground groundnut shells in the diet of fattening Yankasa rams is therefore recommended.
Keywords: blood metabolites, microbial load, volatile fatty acid, ammonia, total protein
Procedia PDF Downloads 65
658 Improving Predictions of Coastal Benthic Invertebrate Occurrence and Density Using a Multi-Scalar Approach
Authors: Stephanie Watson, Fabrice Stephenson, Conrad Pilditch, Carolyn Lundquist
Abstract:
Spatial data detailing both the distribution and density of functionally important marine species are needed to inform management decisions. Species distribution models (SDMs) have proven helpful in this regard; however, models often focus only on species occurrences derived from spatially expansive datasets and lack the resolution and detail required to inform regional management decisions. Boosted regression trees (BRTs) were used to produce high-resolution SDMs (250 m) at two spatial scales, predicting probability of occurrence, abundance (count per sample unit), density (count per km²) and uncertainty for seven coastal seafloor taxa that vary in habitat usage and distribution, in order to examine prediction differences and their implications for coastal management. We investigated whether small-scale, regionally focused models (82,000 km²) can provide improved predictions compared to data-rich national-scale models (4.2 million km²), and explored the variability in predictions across model type (occurrence vs. abundance) and model scale to determine whether specific taxa models or model types are more robust to geographical variability. National-scale occurrence models correlated well with broad-scale environmental predictors, resulting in higher AUC (area under the receiver operating curve) and deviance explained scores; however, they tended to overpredict in the coastal environment and lacked spatially differentiated detail for some taxa. Regional models had lower overall performance, but for some taxa their spatial predictions were more differentiated at a localised ecological scale. National density models were often spatially refined and highlighted areas of ecological relevance, producing more useful outputs than regional-scale models. The utility of a two-scale approach aids the selection of the most optimal combination of models to create a spatially informative density model, as results contrasted for specific taxa between model type and scale.
However, it is vital that robust predictions of occurrence and abundance are generated as inputs for the combined density model, as areas that do not spatially align between models can be discarded. This study demonstrates the variability in SDM outputs created over different geographical scales and highlights implications and opportunities for managers utilising these tools for regional conservation, particularly in data-limited environments.
Keywords: benthic ecology, spatial modelling, multi-scalar modelling, marine conservation
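The boosting idea behind BRTs can be sketched in a few lines: trees are fitted sequentially to the residuals of the current model, and their shrunken predictions are summed. The toy below uses depth-one regression stumps on a single made-up predictor; it is an illustrative sketch only, not the study's actual model, which used dedicated BRT software over real 250 m environmental predictor layers.

```python
# Minimal gradient-boosted regression stumps: a toy sketch of the BRT idea.
# Data, predictor, and hyperparameters below are hypothetical.

def fit_stump(x, residuals):
    """Find the split on a single predictor minimising squared error."""
    best = None
    for split in sorted(set(x)):
        left = [r for xi, r in zip(x, residuals) if xi <= split]
        right = [r for xi, r in zip(x, residuals) if xi > split]
        if not left or not right:
            continue
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = sum((r - lm) ** 2 for r in left) + sum((r - rm) ** 2 for r in right)
        if best is None or sse < best[0]:
            best = (sse, split, lm, rm)
    _, split, lm, rm = best
    return lambda xi, s=split, a=lm, b=rm: a if xi <= s else b

def predict(xi, f0, stumps, learning_rate=0.1):
    """Boosted prediction: base value plus shrunken stump contributions."""
    return f0 + learning_rate * sum(s(xi) for s in stumps)

def fit_brt(x, y, n_trees=50, learning_rate=0.1):
    """Boost stumps on the residuals, as in boosted regression trees."""
    f0 = sum(y) / len(y)
    stumps = []
    for _ in range(n_trees):
        pred = [predict(xi, f0, stumps, learning_rate) for xi in x]
        residuals = [yi - pi for yi, pi in zip(y, pred)]
        stumps.append(fit_stump(x, residuals))
    return f0, stumps

# Hypothetical example: density increasing along an environmental gradient
x = [1, 2, 3, 4, 5, 6, 7, 8]
y = [0.5, 0.7, 1.0, 1.8, 2.2, 4.0, 4.5, 5.1]
f0, stumps = fit_brt(x, y)
fitted = [predict(xi, f0, stumps) for xi in x]
```

Real BRT fitting adds stochastic subsampling, interaction depth, and cross-validated tree counts, but the residual-fitting loop above is the core mechanism.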
Procedia PDF Downloads 75
657 Iranian English as Foreign Language Teachers' Psychological Well-Being across Gender: During the Pandemic
Authors: Fatemeh Asadi Farsad, Sima Modirkhameneh
Abstract:
The purpose of this study was to explore the pattern of psychological well-being (PWB) of Iranian male and female EFL teachers during the pandemic, and to see whether such a drastic change in the context and mode of teaching affects teachers' PWB. The possible differences between the six elements of PWB of Iranian male vs. female EFL teachers during the pandemic were also investigated, along with the teachers' perceptions of any modifications, and of the factors leading to such modifications, in their PWB during the pandemic. For the purpose of this investigation, a total of 81 EFL teachers (59 female, 22 male) with an age range of 25 to 35 were conveniently sampled from different cities in Iran. Ryff's PWB questionnaire was sent to the participant teachers through online platforms to elicit data on their PWB. As for their perceptions of possible modifications and the factors involved in PWB during the pandemic, a set of semi-structured interviews was run with both sample groups. The findings revealed that male EFL teachers had the highest mean on personal growth, followed by purpose in life and self-acceptance, and the lowest mean on environmental mastery. With a slightly similar pattern, female EFL teachers had the highest mean on personal growth, followed by purpose in life and positive relations with others, with the lowest mean on environmental mastery. However, no significant difference was observed between the male and female groups' overall means on the elements of PWB. Additionally, participants perceived that their anxiety level in online classes was altered by factors such as (1) computer literacy skills, (2) lack of social communication and interaction with colleagues and students, (3) online class management, (4) overwhelming workloads, and (5) time management.
The study ends with suggestions regarding effective preparation for online teaching that takes teachers' PWB into account, especially in severe situations such as the COVID-19 pandemic. The findings can inform reforms of educational policies aimed at enhancing EFL teachers' PWB through computer literacy and stress management courses. To proactively support teachers' mental health, it is also suggested that they be provided with advisors and psychologists, if possible free of charge. Limitations: one limitation is the small number of participants (81), suggesting that future replications should include more participants for more reliable findings. Another is the gender imbalance, which future studies should address. Furthermore, the limited range of data-gathering tools suggests using observations, diaries, and narratives for more insight in future studies. The study focused on one model of PWB, calling for further research on other models in the literature. Considering the wide effect of the COVID-19 pandemic, future studies should consider additional variables (e.g., teaching experience, age, income) to better understand Iranian EFL teachers' vulnerabilities and strengths.
Keywords: online teaching, psychological well-being, female and male EFL teachers, pandemic
Procedia PDF Downloads 45
656 Investigation of the Function of Chemotaxonomy of White Tea on the Regulatory Function of Genes in Pathway of Colon Cancer
Authors: Fereydoon Bondarian, Samira Shaygan
Abstract:
Today, many nutritionists recommend the consumption of plants, fruits, and vegetables to provide the antioxidants needed by the body, because plant antioxidants usually cause fewer side effects and give better treatment outcomes. Natural antioxidants increase plasma antioxidant capacity and reduce the incidence of some diseases, such as cancer, while poor lifestyles and environmental factors play an important role in increasing cancer incidence. In this study, different extracts of white tea prepared from two types of tea available in Iran (clone 100 and a Chinese hybrid), whose catechins carry hydroxyl functional groups that inhibit free radicals and confer anticancer properties, were obtained using three methods: aqueous, methanolic, and aqueous-methanolic. The total polyphenolic content was determined using the Folin-Ciocalteu method, and the percentage of inhibition and trapping of free radicals in each of the extracts was determined using the DPPH method. With the help of high-performance liquid chromatography, the amount of each individual catechin in the tea samples was quantified. Clone 100 white tea was found to be the best sample in terms of all the examined attributes (total polyphenol content, antioxidant properties, and the individual amounts of catechins). The results showed that the aqueous and aqueous-methanolic extracts of clone 100 white tea had the highest total polyphenol content, with 27.59±0.08 and 36.67±0.54 (mg gallic acid equivalents per gram dry weight of leaves), respectively. Owing to their high levels of the different groups of catechin compounds, these extracts showed the strongest inhibition and trapping of free radicals, with 66.61±0.27 and 71.74±0.27% inhibition, respectively (relative to ascorbic acid).
Using the MTT assay, the inhibitory effect of clone 100 white tea extract on the growth of HCT-116 colon cancer cells was investigated, and the best concentration and time treatments were 500, 150, and 1000 micrograms at 8, 16, and 24 hours, respectively. To investigate gene expression changes, selected genes, including tumorigenic genes, proto-oncogenes, tumor suppressors, and genes involved in apoptosis, were analyzed using the real-time PCR method in the presence of the concentrations obtained for white tea. White tea extract at a concentration of 1000 μg/ml at the three time points of 16, 8, and 24 hours showed the highest growth inhibition in cancer cells, with 53.27, 55.8, and 86.06%, respectively. A concentration of 1000 μg/ml of aqueous white tea extract under 24-hour treatment increased the expression of tumor suppressor genes compared to the normal sample.
Keywords: catechin, gene expression, suppressor genes, colon cell line
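The DPPH radical-scavenging percentage reported above follows the standard absorbance formula, % inhibition = (A_control − A_sample) / A_control × 100, measured at 517 nm. A minimal sketch with hypothetical absorbance readings (not the study's raw data):

```python
def dpph_inhibition(a_control, a_sample):
    """Percent DPPH radical scavenging from absorbance readings (517 nm).
    a_control: absorbance of DPPH solution without extract;
    a_sample: absorbance with the extract added."""
    return (a_control - a_sample) / a_control * 100.0

# Hypothetical absorbances, for illustration only
a_control = 0.80
a_sample = 0.24
inhibition = dpph_inhibition(a_control, a_sample)  # 70.0 %
```

The same arithmetic is applied per extract and then compared against a reference antioxidant such as ascorbic acid.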
Procedia PDF Downloads 58
655 Development of a Bus Information Web System
Authors: Chiyoung Kim, Jaegeol Yim
Abstract:
Bus service is often either the main or the only public transportation available in cities. In metropolitan areas both subways and buses are available, whereas in medium-sized cities buses are usually the only type of public transportation. Bus Information Systems (BIS) provide users with the current locations of running buses, efficient routes from one place to another, points of interest around a given bus stop, the series of bus stops that make up a given bus route, and so on. Thanks to BIS, people do not have to waste time waiting at a bus stop, because BIS provides exact information on bus arrival times at a given stop. BIS therefore does a lot to promote the use of buses, contributing to pollution reduction and the saving of natural resources. However, BIS implementation requires a huge budget, as it calls for a lot of special equipment such as roadside equipment, automatic vehicle identification and location systems, trunked radio systems, and so on. Consequently, medium and small sized cities with a low budget cannot afford to install a BIS, even though people in these cities need BIS services more desperately than people in metropolitan areas. It is, however, possible to provide BIS service at virtually no cost under the assumption that everybody carries a smartphone and that in every running bus there is at least one person with a smartphone who is willing to reveal his/her location details while sitting in the bus. This assumption is usually true in the real world: the smartphone penetration rate is greater than 100% in developed countries, and there is no reason for a bus driver to refuse to reveal his/her location details while driving. We have developed a mobile app that periodically reads sensor values, including GPS, and sends GPS data to the server when the bus stops or when the elapsed time since the last send attempt exceeds a threshold. The app detects the bus-stop state by examining the sensor values.
The server that receives GPS data from this app has also been developed. Under the assumption that the current locations of all running buses collected by the mobile app are recorded in a database, we have also developed a web site that provides, through the Internet, all kinds of information that most BISs provide to users. The development environment is: OS: Windows 7 64-bit; IDE: Eclipse Luna 4.4.1, Spring IDE 3.7.0; database: MySQL 5.1.7; web server: Apache Tomcat 7.0; programming language: Java 1.7.0_79. Given a start and a destination bus stop, the system finds a shortest path from the start to the destination using the Dijkstra algorithm, and then finds a convenient route considering the number of transfers. For the user interface, we use Google Maps. Template classes used by the Controller, DAO, Service and Utils classes include BUS, BusStop, BusListInfo, BusStopOrder, RouteResult, WalkingDist, Location, and so on. We are now integrating the mobile app system and the web app system.
Keywords: bus information system, GPS, mobile app, web site
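The route search described above rests on Dijkstra's algorithm over a bus-stop graph. The sketch below (in Python for brevity; the actual system is implemented in Java with Spring) uses hypothetical stop names and travel times to show the priority-queue formulation:

```python
import heapq

def dijkstra(graph, start, dest):
    """Shortest travel time between bus stops.
    graph: stop -> list of (neighbor stop, travel minutes)."""
    dist = {start: 0}
    prev = {}
    pq = [(0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dest:
            break
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                prev[v] = u
                heapq.heappush(pq, (nd, v))
    # Reconstruct the sequence of stops
    path, node = [dest], dest
    while node != start:
        node = prev[node]
        path.append(node)
    return dist[dest], path[::-1]

# Hypothetical stop graph (edge weights = travel minutes)
graph = {
    "A": [("B", 4), ("C", 2)],
    "B": [("D", 5)],
    "C": [("B", 1), ("D", 8)],
    "D": [],
}
minutes, route = dijkstra(graph, "A", "D")  # 8 minutes via A-C-B-D
```

A transfer-aware search, as mentioned in the abstract, can be layered on top by penalising edges that switch bus lines.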
Procedia PDF Downloads 216
654 Miniaturization of Germanium Photo-Detectors by Using Micro-Disk Resonator
Authors: Haifeng Zhou, Tsungyang Liow, Xiaoguang Tu, Eujin Lim, Chao Li, Junfeng Song, Xianshu Luo, Ying Huang, Lianxi Jia, Lianwee Luo, Kim Dowon, Qing Fang, Mingbin Yu, Guoqiang Lo
Abstract:
Several germanium photodetectors (PDs) built on silicon micro-disks were fabricated on standard Si photonics multiple project wafers (MPW) and demonstrated to exhibit very low dark current, satisfactory operation bandwidth and moderate responsivity. Among them, a vertical p-i-n Ge PD based on a 2.0 µm-radius micro-disk has a dark current as low as 35 nA, compared to a dark current of 1 µA for a conventional PD with an area of 100 µm². The operation bandwidth is around 15 GHz at a reverse bias of 1 V, and the responsivity is about 0.6 A/W. The microdisk is a striking planar structure in integrated optics for enhancing light-matter interaction and constructing various photonic devices. The disk geometry strongly and circularly confines light into an ultra-small volume in the form of whispering gallery modes. A laser may benefit from a microdisk in which a single mode overlaps the gain materials both spatially and spectrally. Compared to microrings, the micro-disk removes the inner boundaries to enable even better compactness, which also makes it very suitable for scenarios where electrical connections are needed. For example, an ultra-low power (≈ fJ) athermal Si modulator has been demonstrated with a bit rate of 25 Gbit/s by confining both photons and electrically driven carriers into a microscale volume. In this work, we study Si-based PDs with Ge selectively grown on a microdisk with a radius of a few microns. The unique feature of using a microdisk for a Ge photodetector is that mode selection is not important. In laser or other passive optical component applications, the microdisk must be designed very carefully to excite the fundamental mode, because the microdisk essentially supports many higher-order modes in the radial direction. For detector applications, however, this is not an issue, because the local light absorption is mode insensitive: the light power carried by all modes is expected to be converted into photo-current.
Another benefit of using a microdisk is that the power circulation inside avoids the need to introduce any reflector. A complete simulation model taking all involved materials into account is established to study the promise of microdisk structures for photodetectors, using the finite difference time domain (FDTD) method. Based on the current preliminary data, directions for further improving the device performance are also discussed.
Keywords: integrated optical devices, silicon photonics, micro-resonator, photodetectors
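The whispering-gallery resonances mentioned above satisfy the round-trip phase condition m·λ = 2πR·n_eff for integer azimuthal order m. A minimal sketch enumerating the resonant wavelengths of a 2.0 µm-radius disk in the C-band; the effective index n_eff = 2.6 is an assumed illustrative value, not a figure from the paper:

```python
import math

def resonance_wavelengths(radius_um, n_eff, band=(1.5, 1.6)):
    """Wavelengths (um) satisfying the whispering-gallery condition
    m * lambda = 2 * pi * R * n_eff within the given band."""
    opl = 2 * math.pi * radius_um * n_eff   # optical path length per round trip
    m_max = int(opl / band[0])              # largest order still inside the band
    m_min = int(opl / band[1]) + 1          # smallest order inside the band
    return {m: opl / m for m in range(m_min, m_max + 1)}

# 2.0 um-radius disk as in the paper; n_eff = 2.6 is an assumed value
modes = resonance_wavelengths(2.0, 2.6)
```

For such a small disk the free spectral range is large, so only a single azimuthal order falls within the 1.5 to 1.6 µm band; in practice each order also splits into several radial modes, which is why mode insensitivity matters for the detector.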
Procedia PDF Downloads 405
653 Application of the Material Point Method as a New Fast Simulation Technique for Textile Composites Forming and Material Handling
Authors: Amir Nazemi, Milad Ramezankhani, Marian Kӧrber, Abbas S. Milani
Abstract:
The excellent strength-to-weight ratio of woven fabric composites, along with their high formability, is one of the primary design parameters defining their increased use in modern manufacturing processes, including those in aerospace and automotive. However, for emerging automated preform processes under the smart manufacturing paradigm, the complex geometries of finished components continue to bring several challenges to designers coping with manufacturing defects on site. Wrinkling, for example, is a common defect occurring during the forming process and the handling of semi-finished textile composites. One of the main reasons for this defect is the weak bending stiffness of fibers in the unconsolidated state, causing excessive relative motion between them. Further challenges are presented by the automated handling of large-area fiber blanks with specialized gripper systems. For fabric composite forming simulations, the finite element (FE) method is a longstanding tool used for the prediction and mitigation of manufacturing defects. Such simulations are predominantly meant not only to predict the onset, growth, and shape of wrinkles but also to determine the processing conditions that yield optimized positioning of the fibers upon forming (or upon robot handling, in the case of automated processes). However, the need for small time steps in explicit FE codes, numerical instabilities, and long computational times are notable drawbacks of current FE tools, hindering their extensive use as fast yet efficient digital twins in industry. This paper presents a novel woven fabric simulation technique based on the material point method (MPM), which enables the use of much larger time steps with fewer numerical instabilities, and hence the ability to run significantly faster and more efficient simulations of fabric material handling and forming processes.
This method can therefore enhance the development of automated fiber handling and preform processes by calculating the physical interactions between the MPM fiber models and rigid tool components, enabling designers to virtually develop, test, and optimize their processes using either algorithmic or machine learning applications. As a preliminary case study, the forming of a hemispherical plain weave is shown, and the results are compared to FE simulations as well as experiments.
Keywords: material point method, woven fabric composites, forming, material handling
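The core MPM step, transferring particle mass and momentum to a background grid through shape functions, can be sketched in 1D with linear (hat) functions. This is an illustrative toy of the general method only, not the authors' fabric model, and all numbers are hypothetical:

```python
def p2g(particles, n_nodes, dx):
    """Particle-to-grid transfer with linear (hat) shape functions in 1D.
    particles: list of (position, mass, velocity) material points."""
    mass = [0.0] * n_nodes
    momentum = [0.0] * n_nodes
    for x, m, v in particles:
        i = int(x / dx)        # left grid node of the particle's cell
        frac = x / dx - i      # fractional position inside the cell
        # Each particle contributes to the two surrounding nodes
        for node, w in ((i, 1.0 - frac), (i + 1, frac)):
            mass[node] += w * m
            momentum[node] += w * m * v
    # Grid velocities for the subsequent grid-based momentum update
    velocity = [p / m if m > 0 else 0.0 for p, m in zip(momentum, mass)]
    return mass, momentum, velocity

# Three material points on a 5-node grid with spacing dx = 1.0
particles = [(0.25, 2.0, 1.0), (1.5, 1.0, -0.5), (2.75, 1.5, 0.0)]
mass, momentum, velocity = p2g(particles, n_nodes=5, dx=1.0)
```

In a full MPM cycle this transfer is followed by a momentum update on the grid (where large, stable time steps are taken) and a grid-to-particle transfer back; because the grid is reset every step, mesh distortion, a key FE limitation for large fabric deformations, never accumulates.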
Procedia PDF Downloads 180
652 Finite Element Analysis of Glass Facades Supported by Pre-Tensioned Cable Trusses
Authors: Khair Al-Deen Bsisu, Osama Mahmoud Abuzeid
Abstract:
Significant technological advances have been achieved in the design and construction of steel and glass buildings in the last two decades. The metal frame supporting the glass has been replaced by more sophisticated technological solutions, for example, point-fixed glazing systems. The minimization of visual mass has reached extensive possibilities through the evolution of glass production technology and a better understanding of the structural potential of glass itself, the technological development of bolted fixings, the introduction of glazing support attachments for glass suspension systems, and the use of cables for structural stabilization, which reduces the amount of metal used to a minimum. The variability of tension-structure solutions, allied to the difficulties related to geometric and material nonlinear behavior, usually overrules the use of analytical solutions, leaving numerical analysis as the only general approach to the design and analysis of tension structures. With their characteristic low stiffness, light weight, and small damping, tension structures are markedly geometrically nonlinear. Indeed, the analysis of cable trusses is not only among the most difficult nonlinear analyses, because the analysis path may contain rigid-body modes, but is also a time-consuming procedure. Nonlinear theory allowing for large deflections is used. The flexibility of supporting members was observed to influence the stresses in the glass pane considerably in some cases. No other class of architectural structural systems is as dependent upon the use of digital computers as tensile structures. Besides complexity, the process of design and analysis of tension structures presents a series of specificities, which usually lead to the use of special-purpose programs instead of general-purpose programs (GPPs), such as ANSYS. In a special-purpose program, part of the design know-how is embedded in program routines.
It is very probable that this type of program will be the choice of the end user in design offices. GPPs, on the other hand, offer a range of analysis types and modeling options; moreover, traditional GPPs are constantly being tested by a large number of users and are updated according to their actual demands. This work discusses the use of ANSYS for the analysis and design of tension structures, such as cable truss structures under wind and gravity loadings. A model describing the glass panels working in coordination with the cable truss was proposed, and an FEM model based on it was established.
Keywords: glass construction material, facades, finite element, pre-tensioned cable truss
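The geometric nonlinearity that makes cable trusses difficult can be illustrated on the smallest possible example: a single-degree-of-freedom shallow two-bar truss whose stiffness depends on the deformed geometry, solved by Newton-Raphson iteration, which is also the strategy FE packages such as ANSYS apply at full scale. All parameter values below are hypothetical and chosen only so the sketch converges on the stable branch:

```python
import math

def residual(u, EA=1000.0, L=10.0, h=1.0, P=0.2):
    """Out-of-balance vertical force for a shallow two-bar truss with
    supports at (+/-L, 0), apex at (0, h) displaced downward by u.
    Large-deflection kinematics: bar force follows the deformed length."""
    l0 = math.hypot(L, h)            # undeformed bar length
    l = math.hypot(L, h - u)         # deformed bar length
    N = EA * (l - l0) / l0           # axial force (negative = compression)
    return -2.0 * N * (h - u) / l - P  # internal vertical force minus load

def newton(f, u0=0.0, tol=1e-10, max_iter=100):
    """Newton-Raphson with a finite-difference tangent stiffness."""
    u = u0
    for _ in range(max_iter):
        r = f(u)
        if abs(r) < tol:
            return u
        du = 1e-7
        k = (f(u + du) - r) / du     # tangent stiffness
        u -= r / k
    raise RuntimeError("no convergence")

u = newton(residual)  # equilibrium apex displacement
```

Because the tangent stiffness changes with the displaced shape, the load-displacement path is curved, and for larger loads this system even exhibits snap-through, which is why path-following solvers are standard for tension structures.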
Procedia PDF Downloads 280
651 Application of Micro-Tunneling Technique to Rectify Tilted Structures Constructed on Cohesive Soil
Authors: Yasser R. Tawfic, Mohamed A. Eid
Abstract:
Foundation differential settlement and the resulting tilting of the supported structure is an occasionally encountered engineering problem. It may be caused by overloading, changes in ground soil properties, or unsupported nearby excavations. Engineering thinking points directly toward the logical solution of uplifting the settled side. This can be achieved with deep foundation elements such as micro-piles and macro-piles™, jacked piers and helical piers, jet-grouted soil-crete columns, compaction grout columns, cement or chemical grouting, or traditional pit underpinning with concrete and mortar. Although some of these techniques offer economic, fast and low-noise solutions, many of them are quite the contrary. For tilted structures with limited inclination, it may be much easier to cause a balancing settlement on the less-settled side, which must be done carefully and at a proper rate. This principle was applied in the stabilization of the Leaning Tower of Pisa by extracting soil from the ground surface. In this research, the authors attempt to introduce a new solution from a different point of view: micro-tunneling is presented here as an intentional cause of ground deformation. In general, micro-tunneling is expected to induce limited ground deformations; the researchers therefore propose to apply the technique to form small unsupported holes in the ground to produce the target deformations. This shall be done in four phases:
• Application of one or more micro-tunnels, depending on the existing differential settlement value, under the raised side of the tilted structure.
• For each individual tunnel, pulling the lining out slowly from both sides (from the jacking and receiving shafts).
• If required, according to calculations and site records, application of an additional surface load on the raised foundation side.
• Finally, strengthening grouting of the soil for stabilization after adjustment.
A finite element based numerical model is presented to simulate the proposed construction phases for different tunneling positions and tunnel groups. For each case, the surface settlements are calculated and the induced plasticity points are checked. These results show the impact of the suggested procedure on the tilted structure and its feasibility. The compared results also show the importance of position selection and of the gradual effect of tunnel groups. A new engineering solution is thus presented to one of the challenges of structural and geotechnical engineering.
Keywords: differential settlement, micro-tunneling, soil-structure interaction, tilted structures
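The target deformation for the phases above follows from simple geometry: for a foundation of width B tilted by angle θ, the raised edge must settle about Δ = B·tan θ to restore level. A minimal sketch with hypothetical numbers (not from the study's analyses):

```python
import math

def balancing_settlement(width_m, tilt_deg):
    """Differential settlement to induce on the raised side to level a
    foundation of the given width tilted by tilt_deg (small-angle geometry)."""
    return width_m * math.tan(math.radians(tilt_deg))

# Hypothetical case: a 12 m-wide raft tilted by 0.5 degrees
delta = balancing_settlement(12.0, 0.5)  # ~= 0.105 m to recover
```

In practice this target would be divided among the tunnels and induced gradually, with monitoring against the calculated settlement troughs at each lining pull-out step.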
Procedia PDF Downloads 206
650 Heat Vulnerability Index (HVI) Mapping in Extreme Heat Days Coupled with Air Pollution Using Principal Component Analysis (PCA) Technique: A Case Study of Amiens, France
Authors: Aiman Mazhar Qureshi, Ahmed Rachid
Abstract:
Extreme heat events are an emerging human-environmental health concern in dense urban areas due to anthropogenic activities. Heat maps of high spatial and temporal resolution are important for urban heat adaptation and mitigation, helping to indicate the hotspots that require the attention of city planners. The Heat Vulnerability Index (HVI) is an important approach used by decision-makers and urban planners to identify heat-vulnerable communities and areas that require heat stress mitigation strategies. Amiens is a medium-sized French city whose average temperature has increased by 1°C since the year 2000. Extreme heat events were recorded in the month of July in three consecutive years, 2018, 2019 and 2020, and poor air quality, especially ground-level ozone, was observed mainly during the same hot periods. In this study, we evaluated the HVI in Amiens during the extreme heat days recorded in these three years (2018, 2019, 2020). The Principal Component Analysis (PCA) technique was used for fine-scale vulnerability mapping. The main data we considered to develop the HVI model are (a) socio-economic and demographic data; (b) air pollution; (c) land use and cover; (d) elderly heat illness; (e) social vulnerability; (f) remote sensing data (land surface temperature (LST), mean elevation, NDVI and NDWI). The output maps identified the hot zones through comprehensive GIS analysis. The resultant map shows that high HVI exists in three typical areas: (1) areas where the population density is quite high and the vegetation cover is small; (2) artificial surfaces (built-up areas); (3) industrial zones that release thermal energy and ground-level ozone. Areas with low HVI are located in natural landscapes such as rivers and grasslands. The study also illustrates the system theory with a causal diagram, in which, after data analysis, anthropogenic activities and air pollution appear in correspondence with extreme heat events in the city.
Our suggested index can be a useful tool to guide urban planners, municipalities, decision-makers and public health professionals in targeting areas at high risk of extreme heat and air pollution for future adaptation and mitigation interventions.
Keywords: heat vulnerability index, heat mapping, heat health-illness, remote sensing, urban heat mitigation
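PCA-based HVI construction amounts to projecting each spatial unit's standardized indicators onto the leading principal component of their covariance matrix, so that correlated vulnerability indicators are weighted automatically. The sketch below does this in closed form for two indicators on five hypothetical districts; the actual study combined many indicator layers in GIS:

```python
import math

def pca_first_component(data):
    """First principal component of a 2-variable dataset, via the closed-form
    eigen-decomposition of the 2x2 covariance matrix."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    sxx = sum((x - mx) ** 2 for x, _ in data) / (n - 1)
    syy = sum((y - my) ** 2 for _, y in data) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in data) / (n - 1)
    # Largest eigenvalue of [[sxx, sxy], [sxy, syy]]
    lam = (sxx + syy) / 2 + math.sqrt(((sxx - syy) / 2) ** 2 + sxy ** 2)
    # Corresponding eigenvector, normalised to unit length
    vx, vy = lam - syy, sxy
    norm = math.hypot(vx, vy)
    return (vx / norm, vy / norm), (mx, my)

def hvi_scores(data):
    """Project each district onto the first component -> relative HVI score."""
    (vx, vy), (mx, my) = pca_first_component(data)
    return [(x - mx) * vx + (y - my) * vy for x, y in data]

# Hypothetical district indicators: (population density, land surface temp.)
districts = [(2.0, 30.0), (8.0, 34.0), (5.0, 32.0), (9.0, 35.0), (1.0, 29.0)]
scores = hvi_scores(districts)
```

Districts scoring highest on the component (here the densest, hottest one) are the hotspots the output maps flag; with more indicators the same projection is done with a numerical eigen-solver and often several retained components.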
Procedia PDF Downloads 147
649 High Performance Computing Enhancement of Agent-Based Economic Models
Authors: Amit Gill, Lalith Wijerathne, Sebastian Poledna
Abstract:
This research presents the details of a high performance computing (HPC) extension of agent-based economic models (ABEMs) to simulate hundreds of millions of heterogeneous agents. ABEMs offer an alternative approach to studying the economy as a dynamic system of interacting heterogeneous agents, and are gaining popularity as an alternative to standard economic models. Over the last decade, ABEMs have been increasingly applied to study various problems related to monetary policy, bank regulations, etc. When it comes to predicting the effects of local economic disruptions, such as major disasters, changes in policies, or exogenous shocks, on the economy of a country or region, it is pertinent to study how the disruptions cascade through every single economic entity, affecting its decisions and interactions, and eventually affect the macroeconomic parameters. However, such simulations with hundreds of millions of agents are hindered by the lack of HPC-enhanced ABEMs. In order to address this, a scalable distributed memory parallel (DMP) implementation of ABEMs has been developed using the message passing interface (MPI). A balanced distribution of computational load among the MPI processes (i.e., CPU cores) of computer clusters, while taking all the interactions among agents into account, is a major challenge for scalable DMP implementations. Economic agents interact on several random graphs, some of which are centralized (e.g., credit networks), whereas others are dense with random links (e.g., consumption markets). The agents are partitioned into mutually exclusive subsets based on a representative employer-employee interaction graph, while the remaining graphs are made available at a minimum communication cost. To minimize the number of communications among MPI processes, real-life solutions, such as the introduction of recruitment agencies, sales outlets, local banks, and local branches of government in each MPI process, are adopted.
Efficient communication among MPI processes is achieved by combining MPI derived data types with newer features of the MPI standard. Most communications are overlapped with computations, thereby significantly reducing the communication overhead. The current implementation is capable of simulating a small open economy. As an example, a single time step of a 1:1 scale model of Austria (i.e., about 9 million inhabitants and 600,000 businesses) can be simulated in 15 seconds. The implementation is being further enhanced to simulate a 1:1 model of the Euro zone (i.e., 322 million agents).
Keywords: agent-based economic model, high performance computing, MPI-communication, MPI-process
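The load-balancing idea described above can be sketched in a few lines: assign each firm, together with its workers, to the least-loaded rank, so that employer-employee interactions stay within one MPI process. This is only an illustrative Python sketch, not the authors' MPI implementation; the firm names and sizes are hypothetical.

```python
# Illustrative sketch: greedy partitioning of firms (with their workers)
# across MPI ranks so that firm-worker interactions remain rank-local.
# All names and figures below are hypothetical.

def partition_by_employer(employers, n_ranks):
    """Assign each firm and its workers to the currently least-loaded
    rank, processing the largest firms first to keep the balance tight."""
    loads = [0] * n_ranks
    assignment = {}
    for firm, n_workers in sorted(employers.items(), key=lambda kv: -kv[1]):
        rank = loads.index(min(loads))   # least-loaded rank so far
        assignment[firm] = rank
        loads[rank] += 1 + n_workers     # the firm agent plus its workers
    return assignment, loads

firms = {"A": 120, "B": 80, "C": 75, "D": 40, "E": 10}
assignment, loads = partition_by_employer(firms, n_ranks=2)
print(assignment, loads)
```

In a full DMP implementation, cross-rank market interactions would then be routed through the per-rank proxies (recruitment agencies, sales outlets, local banks) mentioned above, rather than through direct agent-to-agent messages.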
648 Labor Welfare and Social Security
Authors: Shoaib Alvi
Abstract:
Mahatma Gandhi said, "Man becomes great exactly in the degree in which he works for the welfare of his fellow-men." Labor welfare is an important facet of industrial relations. With the growth of industrialization, mechanization, and computerization, labor welfare measures have gained momentum. The author believes that labor welfare includes the provision of various facilities and amenities in and around the workplace for the better life of the workers. Labor welfare is thus one of the major determinants of industrial relations; it comprises all human efforts at the workplace directed at a better life for the worker. The social and economic aspects of workers' lives directly influence the social and economic development of the nation. The author holds that a labor welfare programme can pursue multiple objectives: concern for improving the lot of the workers, a philosophy of humanitarianism or internal social responsibility, and a feeling of concern and caring expressed by providing some of life's basic amenities beyond the basic pay packet. Such caring is supposed to build a sense of loyalty on the part of the employee towards the organization. The author regards social security as the security that the State furnishes against risks which an individual of small means cannot, even in private combination with his fellows, stand up to by himself. Social security is one of the pillars on which the structure of a welfare state rests, and it constitutes the hard core of social policy in most countries. It is through social security measures that the state attempts to maintain every citizen at a certain prescribed level below which no one is allowed to fall. According to the author, social assistance is a method by which benefits are given by the government, out of its own resources, to needy persons who fulfil prescribed conditions.
The author has analyzed the relationship between labor welfare and social security and studied various international conventions on the provision of social security by international authorities such as the United Nations, the International Labour Organization, and the European Union. The author has also examined the labor welfare and social security schemes of many countries around the globe, e.g., social security in Australia, Switzerland, and the United States, the Mexican Social Security Institute, welfare in Germany, and India's social security schemes for labor welfare in both the organized and unorganized sectors. This research paper studies the conceptual framework of labor welfare; according to the author, labor is highly perishable and needs constant welfare measures for its upgradation and performance. Finally, the author examines the role of trade unions, labor welfare unions, and other institutions working for labor welfare, identifies the problems these bodies face, attempts to find solutions to those problems, and analyzes the steps taken by governments around the globe.
Keywords: labor welfare, internal social responsibility, social security, international conventions
647 Payload Bay Berthing of an Underwater Vehicle With Vertically Actuated Thrusters
Authors: Zachary Cooper-Baldock, Paulo E. Santos, Russell S. A. Brinkworth, Karl Sammut
Abstract:
In recent years, large unmanned underwater vehicles such as the Boeing Voyager and Anduril Ghost Shark have been developed. These vessels can be structured to contain onboard internal payload bays, which can serve a variety of purposes, including the launch and recovery (LAR) of smaller underwater vehicles. The LAR of smaller vessels is extremely important, as it enables transportation over greater distances, increased time on station, data transmission, and operational safety. The larger vessel and its payload bay structure complicate the LAR of UUVs in contrast to static docks affixed to the seafloor, as they actively alter the local flow field. These flow field impacts require analysis to determine whether UUVs can be safely launched and recovered inside the motherships. This research seeks to determine the hydrodynamic forces exerted on a vertically over-actuated, small, unmanned underwater vehicle (OUUV) during an internal LAR manoeuvre and to compare these to an under-actuated vessel (UUUV). In this manoeuvre, the OUUV is navigated through the stern wake region of the larger vessel to a set point within the internal payload bay. The manoeuvre is simulated using ANSYS Fluent computational fluid dynamics models, covering the entire recovery of the OUUV and UUUV. The analysis of the OUUV is compared against the UUUV to determine the differences in the exerted forces. Of particular interest are the drag, pressure, turbulence, and flow field effects exerted as the OUUV is driven inside the payload bay of the larger vessel. The hydrodynamic forces and flow field disturbances are used to determine the feasibility of making such an approach. From the simulations, it was determined that there were no significantly detrimental physical forces, particularly with regard to turbulence. The flow field effects exerted by the OUUV are, however, significant.
The vertical thrusters generate significant wake structures, but their orientation ensures the wake effects are exerted below the UUV, minimising their impact. The OUUV also experiences higher drag forces than the UUUV, which corresponds to an increased energy expenditure. This investigation found no key indicators that recovery via a mothership payload bay is not feasible. The turbulence, drag, and pressure phenomena were of a similar magnitude to those of existing static and towed dock structures.
Keywords: underwater vehicles, submarine, autonomous underwater vehicles, AUV, computational fluid dynamics, flow fields, pressure, turbulence, drag
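The link between the higher drag measured for the OUUV and increased energy expenditure can be illustrated with the standard drag equation, F = ½ρC_dAv², and propulsion power P = Fv. The drag coefficients, reference area, and speed below are placeholder values for illustration, not figures from this study.

```python
# Back-of-the-envelope sketch (not study data): why higher drag implies
# higher energy use. Uses the standard drag equation and P = F * v.

RHO_SEAWATER = 1025.0  # kg/m^3

def drag_force(cd, area_m2, speed_ms):
    """Standard drag equation: F = 0.5 * rho * Cd * A * v^2."""
    return 0.5 * RHO_SEAWATER * cd * area_m2 * speed_ms ** 2

def propulsion_power(cd, area_m2, speed_ms):
    """Power needed to overcome drag at constant speed: P = F * v."""
    return drag_force(cd, area_m2, speed_ms) * speed_ms

# Hypothetical: the over-actuated hull carries a slightly higher Cd.
p_uuuv = propulsion_power(cd=0.30, area_m2=0.07, speed_ms=1.5)
p_ouuv = propulsion_power(cd=0.38, area_m2=0.07, speed_ms=1.5)
print(f"UUUV: {p_uuuv:.1f} W, OUUV: {p_ouuv:.1f} W")
```

Since power scales linearly with C_d at fixed speed, even a modest drag penalty for the over-actuated vehicle translates directly into extra energy drawn per unit distance.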
646 Stochastic Pi Calculus in Financial Markets: An Alternate Approach to High Frequency Trading
Authors: Jerome Joshi
Abstract:
The paper presents the modelling of financial markets using the Stochastic Pi Calculus model. The Stochastic Pi Calculus model is mainly used for biological applications; however, its features promote its use in financial markets, most prominently in high frequency trading. The trading system can be broadly classified into the exchange, market makers or intermediary traders, and fundamental traders. The exchange is where the trade is executed, and the two types of traders act as market participants in the exchange. High frequency trading, with its complex networks and numerous market participants (intermediary and fundamental traders), poses a difficulty for modelling. It requires participants to exploit complex trading algorithms and high execution speeds to carry out large volumes of trades. To earn profits from each trade, the trader must be at the top of the order book quite frequently, executing or processing multiple trades simultaneously. This requires highly automated systems as well as the right sentiment to outperform other traders. However, always being at the top of the book is not best for the trader either, since it was the cause of the 'Hot-Potato Effect,' which in turn demands a better and more efficient model. The model should be flexible and have diverse applications; therefore, a model proven in a similar field characterized by such difficulty should be chosen. It should also be flexible in its simulation so that it can be further extended and adapted for future research, and equipped with tools suited to the field of finance. In this case, the Stochastic Pi Calculus model is an ideal fit for financial applications, owing to its proven record in biology.
It is an extension of the original Pi Calculus model and acts as a solution and an alternative to the previously flawed algorithm, provided the application of this model is further extended. This model focuses on solving the problem that led to the 'Flash Crash,' namely the 'Hot-Potato Effect.' The model consists of small sub-systems, which can be integrated to form a large system. It is designed in such a way that the behavior of 'noise traders' is treated as a random process, or noise, in the system. While modelling, to better understand the problem, a broader picture is considered, encompassing the trader, the system, and the market participants. The paper goes on to explain trading in exchanges, types of traders, high frequency trading, the 'Flash Crash,' the 'Hot-Potato Effect,' the evaluation of orders, and time delay in further detail. Future work will focus on calibrating the modules so that they interact perfectly with one another. This model, with its application extended, would provide a basis for further research in the fields of finance and computing.
Keywords: concurrent computing, high frequency trading, financial markets, stochastic pi calculus
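The stochastic semantics underlying stochastic pi calculus can be conveyed with a Gillespie-style race between communication channels, each firing after an exponentially distributed delay governed by its rate. The sketch below is our illustration with two hypothetical channels (an intermediary posting quotes and a fundamental trader hitting them), not the paper's actual model.

```python
# Minimal Gillespie-style sketch of stochastic-pi-calculus semantics:
# channels race, the next firing is chosen with probability proportional
# to its rate, and elapsed time is exponentially distributed. The two
# channels and their rates are hypothetical.
import random

def gillespie_step(rates, rng):
    """Pick the next channel to fire and the elapsed time until it does."""
    total = sum(rates.values())
    dt = rng.expovariate(total)          # time to the next event
    r = rng.uniform(0, total)
    acc = 0.0
    for channel, rate in rates.items():  # roulette-wheel selection
        acc += rate
        if r <= acc:
            return channel, dt
    return channel, dt                   # floating-point edge case

rng = random.Random(42)
rates = {"quote": 50.0, "trade": 5.0}    # intermediaries quote 10x more often
counts = {"quote": 0, "trade": 0}
t = 0.0
for _ in range(2000):
    channel, dt = gillespie_step(rates, rng)
    counts[channel] += 1
    t += dt
print(counts, round(t, 3))
```

Composing many such channels into sub-systems, and letting noise traders contribute additional random channels, is the sense in which small modules integrate into a large concurrent model.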
645 Recycling of Sintered NdFeB Magnet Waste Via Oxidative Roasting and Selective Leaching
Authors: W. Kritsarikan, T. Patcharawit, T. Yingnakorn, S. Khumkoa
Abstract:
Neodymium-iron-boron (NdFeB) magnets, classified as high-power magnets, are widely used in applications such as electrical and medical devices and account for 13.5% of the permanent magnet market. Their typical composition of 29 - 32% Nd, 64.2 - 68.5% Fe, and 1 - 1.2% B contains a significant amount of rare earth metals, which will be subject to shortages in the future. Domestic NdFeB magnet waste recycling should therefore be developed in order to reduce social and environmental impacts and move toward a circular economy. Most research works focus on recycling magnet wastes from both the manufacturing process and end of life. Each type of waste has different characteristics and compositions, which directly affect recycling efficiency as well as the types and purity of the recyclable products. This research therefore focused on recycling manufacturing NdFeB magnet waste obtained from the sintering stage of magnet production; the waste contained 23.6% Nd, 60.3% Fe, and 0.261% B. The aim was to recover high-purity neodymium oxide (Nd₂O₃) using a hybrid metallurgical process via oxidative roasting and selective leaching. The sintered NdFeB waste was first ground to under 70 mesh prior to oxidative roasting at 550 - 800 °C, to enable selective leaching of neodymium in the subsequent leaching step using 2.5 M H₂SO₄ over 24 h. The leachate was then dried and roasted at 700 - 800 °C prior to precipitation with oxalic acid and calcination to obtain neodymium oxide as the recycled product. According to XRD analyses, increasing the oxidative roasting temperature led to an increasing amount of hematite (Fe₂O₃) as the main phase, with a smaller amount of magnetite (Fe₃O₄). Peaks of neodymium oxide (Nd₂O₃) were also observed in lesser amounts. Furthermore, neodymium iron oxide (NdFeO₃) was present, and its XRD peaks were more pronounced at higher oxidative roasting temperatures.
After acid leaching and drying, iron sulfate and neodymium sulfate were mainly obtained. After the roasting step prior to water leaching, iron sulfate was converted to hematite as the main compound, while neodymium sulfate remained in the mixture; however, a small amount of magnetite was still detected by XRD. The higher roasting temperature of 800 °C resulted in a greater Fe₂O₃ to Nd₂(SO₄)₃ ratio, indicating a more effective roasting temperature. Iron oxides were subsequently removed by water leaching and filtration, while the solution contained mainly neodymium sulfate. Therefore, a low oxidative roasting temperature not exceeding 600 °C, followed by acid leaching and roasting at 800 °C, gave the optimum condition for the subsequent precipitation and calcination steps to finally achieve neodymium oxide.
Keywords: NdFeB magnet waste, oxidative roasting, recycling, selective leaching
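As a rough check on the recoverable product, a simple mass balance converts the 23.6 wt% Nd content of the waste into a theoretical Nd₂O₃ yield, assuming complete recovery. This is our back-of-the-envelope estimate using standard atomic weights, not a measured yield from the study.

```python
# Hedged mass-balance sketch: theoretical Nd2O3 obtainable from 100 g of
# sintered waste at 23.6 wt% Nd, assuming 100% recovery. Molar masses
# are standard atomic weights.

M_ND = 144.242    # g/mol, neodymium
M_O = 15.999      # g/mol, oxygen
M_ND2O3 = 2 * M_ND + 3 * M_O   # ~336.5 g/mol

def nd2o3_yield(waste_g, nd_fraction=0.236):
    mol_nd = waste_g * nd_fraction / M_ND
    return (mol_nd / 2) * M_ND2O3   # 2 Nd atoms per formula unit

print(f"{nd2o3_yield(100):.1f} g Nd2O3 per 100 g waste")
```

Comparing actual calcined Nd₂O₃ mass against this theoretical ceiling (about 27.5 g per 100 g of waste) gives a quick recovery-efficiency figure for any chosen roasting and leaching condition.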
644 Covid Medical Imaging Trial: Utilising Artificial Intelligence to Identify Changes on Chest X-Ray of COVID
Authors: Leonard Tiong, Sonit Singh, Kevin Ho Shon, Sarah Lewis
Abstract:
Investigation into the use of artificial intelligence in radiology continues to develop at a rapid rate. During the coronavirus pandemic, the combination of an exponential increase in chest x-rays and unpredictable staff shortages placed a huge strain on the department's workload. The World Health Organisation estimates that two-thirds of the global population does not have access to diagnostic radiology. There could therefore be demand for a program that detects acute imaging changes compatible with infection to assist with screening. We generated a convolutional neural network and tested its efficacy in recognizing changes compatible with coronavirus infection. Following ethics approval, a deidentified set of 77 normal and 77 abnormal chest x-rays from patients with confirmed coronavirus infection was used to generate an algorithm that could train, validate, and then test itself. DICOM and PNG image formats were selected because they are lossless. The model was trained with 100 images (50 positive, 50 negative), validated against 28 samples (14 positive, 14 negative), and tested against 26 samples (13 positive, 13 negative). The initial training involved teaching the convolutional neural network what constitutes a normal study and which changes on x-rays are compatible with coronavirus infection. The weightings were then modified, and the model was executed again. The training samples were processed in batches of 8 and underwent 25 epochs of training. The results trended towards an 85.71% true positive/true negative detection rate and an area under the curve trending towards 0.95, indicating approximately 95% accuracy in detecting changes on chest x-rays compatible with coronavirus infection. Study limitations include access to only a small dataset and no specificity in the diagnosis.
Following discussion with our programmer, there are areas where modifications to the weighting of the algorithm can be made to improve detection rates. Given the program's high detection rate and the potential ease of implementation, it would be effective in assisting staff who are not trained in radiology to detect otherwise subtle changes that might not be appreciated on imaging. Limitations include the lack of a differential diagnosis and of the appropriate clinical history, although this may be less of a problem in day-to-day clinical practice. It is nonetheless our belief that implementing this program and widening its scope to detect multiple pathologies, such as lung masses, will greatly assist both the radiology department and our colleagues by increasing workflow and detection rates.
Keywords: artificial intelligence, COVID, neural network, machine learning
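The reported test metrics can be computed from model scores in a few lines: accuracy at a fixed threshold, and AUC via pairwise ranking (equivalent to the Mann-Whitney U statistic). The score lists below are made-up stand-ins for the 13 positive and 13 negative test samples, not the study's actual outputs.

```python
# Sketch of the evaluation metrics (accuracy, AUC) computed from raw
# model scores. Scores are hypothetical placeholders, not study data.

def accuracy(pos_scores, neg_scores, threshold=0.5):
    """Fraction of samples classified correctly at a fixed threshold."""
    tp = sum(s >= threshold for s in pos_scores)
    tn = sum(s < threshold for s in neg_scores)
    return (tp + tn) / (len(pos_scores) + len(neg_scores))

def auc(pos_scores, neg_scores):
    """AUC as the fraction of (positive, negative) pairs ranked
    correctly, with ties counted as half."""
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

pos = [0.9, 0.8, 0.85, 0.7, 0.6, 0.95, 0.75, 0.55, 0.65, 0.8, 0.9, 0.45, 0.85]
neg = [0.1, 0.2, 0.3, 0.15, 0.4, 0.35, 0.25, 0.05, 0.55, 0.2, 0.3, 0.1, 0.45]
print(accuracy(pos, neg), auc(pos, neg))
```

With only 26 test samples, each misclassification moves accuracy by about 3.8 percentage points, which is one reason small datasets give coarse estimates of the true detection rate.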
643 A Proposal for an Excessivist Social Welfare Ordering
Authors: V. De Sandi
Abstract:
In this paper, we characterize a class of rank-weighted social welfare orderings that we call "Excessivist." The Excessivist Social Welfare Ordering (eSWO) judges incomes above a fixed threshold θ as detrimental to society. This requires identifying a richness or affluence line; we employ a fixed, exogenous line of excess. We define an eSWF as a weighted sum of individuals' incomes. This requires introducing n+1 vectors of weights, one for each possible number of individuals below the threshold. To do this, the paper introduces a slight modification of the rank-weighted class of social welfare functions: in our excessivist social welfare ordering, the weights may be both positive (for individuals below the line) and negative (for individuals above it). We then introduce ethical concerns through an axiomatic approach. The following axioms are required: continuity above and below the threshold (Ca, Cb), anonymity (A), absolute aversion to excessive richness (AER), Pigou-Dalton positive-weights-preserving transfer (PDwpT), sign-rank-preserving full comparability (SwpFC), and strong Pareto below the threshold (SPb). Ca and Cb require that small changes in two income distributions above and below θ do not change their ordering. AER states that if two distributions are identical in every respect except for one individual above the threshold, who is richer in the first, then society should prefer the second. This means we do not care about the waste of resources above the threshold; the priority is the reduction of excessive income. According to PDwpT, a transfer from a better-off individual to a worse-off individual, regardless of their positions relative to the threshold and without reversing their ranks, leads to an improved distribution, provided the number of individuals below the threshold is unchanged or has increased after the transfer.
SPb holds only for individuals below the threshold. The weakening of strong Pareto and our ethics need justification; we support them through the notion of comparative egalitarianism and of income as a source of power. SwpFC is necessary to ensure that, following a positive affine transformation, an individual does not become excessively rich in only one distribution, thereby reversing the ordering of the distributions. Given the axioms above, we characterize the class of eSWOs, obtaining the following result through proofs by contradiction and exhaustion: Theorem 1. A social welfare ordering satisfies the axioms of continuity above and below the threshold, anonymity, sign-rank-preserving full comparability, aversion to excessive richness, Pigou-Dalton positive-weights-preserving transfer, and strong Pareto below the threshold if and only if it is an Excessivist social welfare ordering. A discussion of the implementation of different threshold lines, reviewing the primary contributions in this field, follows. What commonly implemented social welfare functions have overlooked is a concern for extreme richness at the top. The characterization of the Excessivist Social Welfare Ordering, given the axioms above, aims to fill this gap.
Keywords: comparative egalitarianism, excess income, inequality aversion, social welfare ordering
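The flavour of an excessivist evaluation can be conveyed with a toy rank-weighted sum in which below-threshold incomes receive positive rank weights and income in excess of θ counts negatively. The particular weights below (Gini-style rank weights and a linear excess penalty) are our illustration, not the paper's characterized class.

```python
# Toy excessivist-style evaluation (illustration only): incomes are
# ranked, those at or below theta get positive rank weights favouring
# the worst-off, and income in excess of theta is penalized.

def eswf(incomes, theta, penalty=1.0):
    """Rank-weighted sum with negative weight on excess above theta."""
    ranked = sorted(incomes)
    n = len(ranked)
    value = 0.0
    for i, y in enumerate(ranked):
        if y <= theta:
            value += (2 * (n - i) - 1) * y   # Gini-style positive weights
        else:
            value -= penalty * (y - theta)   # excess is detrimental
    return value

# A mean-preserving concentration at the top lowers the evaluation.
equal = [40, 50, 60, 70]
concentrated = [30, 40, 50, 100]   # same total, one income above theta = 80
print(eswf(equal, theta=80), eswf(concentrated, theta=80))
```

Note how the second distribution is ranked lower both because the worst-off individual is poorer and because the top income crosses the excess line, the two channels the axioms AER and PDwpT formalize.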
642 Degradation Kinetics of Cardiovascular Implants Employing Full Blood and Extra-Corporeal Circulation Principles: Mimicking the Human Circulation In vitro
Authors: Sara R. Knigge, Sugat R. Tuladhar, Hans-Klaus Höffler, Tobias Schilling, Tim Kaufeld, Axel Haverich
Abstract:
Tissue engineered (TE) heart valves based on degradable electrospun fiber scaffolds represent a promising approach to overcoming the known limitations of mechanical or biological prostheses. However, the mechanical stress in the high-pressure system of the human circulation is a severe challenge for these delicate materials. Hence, the prediction of the scaffolds' in vivo degradation kinetics must be as accurate as possible to prevent fatal events in future animal or even clinical trials. This study therefore investigates whether long-term testing in full blood provides more meaningful results regarding degradation behavior than conventional tests in simulated body fluid (SBF) or phosphate buffered saline (PBS). Fiber mats were produced from a polycaprolactone (PCL)/tetrafluoroethylene solution by electrospinning. The morphology of the fiber mats was characterized via scanning electron microscopy (SEM). A maximally physiological degradation environment was established using a test set-up with porcine full blood, consisting of a reaction vessel, an oxygenator unit, and a roller pump. The blood parameters (pO₂, pCO₂, temperature, and pH) were monitored with an online test system. All tests were also carried out in the test circuit with SBF and PBS to compare conventional degradation media with the novel full blood setting. The polymer's degradation is quantified by SEM image analysis, differential scanning calorimetry (DSC), and Raman spectroscopy. Tensile and cyclic loading tests were performed to evaluate the mechanical integrity of the scaffold. Preliminary results indicate that PCL degraded more slowly in full blood than in SBF and PBS. The uptake of water was more pronounced in the full blood group, and PCL preserved its mechanical integrity longer when degraded in full blood. Protein absorption increased during the degradation process, and red blood cells, platelets, and their aggregates adhered to the PCL.
Presumably, degradation produced a more hydrophilic polymer surface, which promoted protein adsorption and blood cell adhesion. Testing degradable implants in full blood allows more reliable scaffold materials to be developed in the future; material tests in small and large animal trials can then focus on candidates that have proven to function well in an in-vivo-like setting.
Keywords: electrospun scaffold, full blood degradation test, long-term polymer degradation, tissue engineered aortic heart valve