Search results for: multiple connectivity
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5105

125 Big Data Applications for Transportation Planning

Authors: Antonella Falanga, Armando Cartenì

Abstract:

"Big data" refers to extremely vast and complex sets of data, encompassing extraordinarily large and intricate datasets that require specific tools for meaningful analysis and processing. These datasets can stem from diverse origins like sensors, mobile devices, online transactions, social media platforms, and more. The utilization of big data is pivotal, offering the chance to leverage vast information for substantial advantages across diverse fields, thereby enhancing comprehension, decision-making, efficiency, and fostering innovation in various domains. Big data, distinguished by its remarkable attributes of enormous volume, high velocity, diverse variety, and significant value, represent a transformative force reshaping the industry worldwide. Their pervasive impact continues to unlock new possibilities, driving innovation and advancements in technology, decision-making processes, and societal progress in an increasingly data-centric world. The use of these technologies is becoming more widespread, facilitating and accelerating operations that were once much more complicated. In particular, big data impacts across multiple sectors such as business and commerce, healthcare and science, finance, education, geography, agriculture, media and entertainment and also mobility and logistics. Within the transportation sector, which is the focus of this study, big data applications encompass a wide variety, spanning across optimization in vehicle routing, real-time traffic management and monitoring, logistics efficiency, reduction of travel times and congestion, enhancement of the overall transportation systems, but also mitigation of pollutant emissions contributing to environmental sustainability. Meanwhile, in public administration and the development of smart cities, big data aids in improving public services, urban planning, and decision-making processes, leading to more efficient and sustainable urban environments. 
Access to vast data reservoirs enables deeper insights, revealing hidden patterns and facilitating more precise and timely decision-making. Additionally, advancements in cloud computing and artificial intelligence (AI) have further amplified the potential of big data, enabling more sophisticated and comprehensive analyses. Certainly, utilizing big data presents various advantages but also entails several challenges regarding data privacy and security, ensuring data quality, managing and storing large volumes of data effectively, integrating data from diverse sources, the need for specialized skills to interpret analysis results, ethical considerations in data use, and evaluating costs against benefits. Addressing these difficulties requires well-structured strategies and policies to balance the benefits of big data with privacy, security, and efficient data management concerns. Building upon these premises, the current research investigates the efficacy and influence of big data by conducting an overview of the primary and recent implementations of big data in transportation systems. Overall, this research allows us to conclude that big data can substantially enhance rational decision-making for mobility choices and are imperative for adeptly planning and allocating investments in transportation infrastructures and services.

Keywords: big data, public transport, sustainable mobility, transport demand, transportation planning

124 Differential Expression Profile Analysis of DNA Repair Genes in Mycobacterium leprae by qPCR

Authors: Mukul Sharma, Madhusmita Das, Sundeep Chaitanya Vedithi

Abstract:

Leprosy is a chronic human disease caused by Mycobacterium leprae, which cannot be cultured in vitro. Though the disease is treatable with multidrug therapy (MDT), the bacterium has recently been reported to be resistant to multiple antibiotics. Targeting DNA replication and repair pathways can serve as the foundation for developing new anti-leprosy drugs. Due to the absence of an axenic culture medium for the propagation of M. leprae, studying cellular processes, especially those belonging to DNA repair pathways, is challenging. The genome of M. leprae harbors several protein-coding genes with no previously assigned function, known as 'hypothetical proteins'. Here, we report the identification and expression of known and hypothetical DNA repair genes from a human skin biopsy and mouse footpads that are involved in base excision repair (BER), direct reversal repair (DR), and the SOS response. Initially, a bioinformatics approach based on sequence similarity and identification of known protein domains was employed to screen the hypothetical proteins in the genome of M. leprae that are potentially related to DNA repair mechanisms. Before testing on clinical samples, pure stocks of bacterial reference DNA of M. leprae (NHDP63 strain) were used to construct standard curves to validate the qPCR experiments and identify their lower detection limit. Primers were designed to amplify the respective transcripts, and PCR products of the predicted size were obtained. Later, excisional skin biopsies of newly diagnosed untreated, treated, and drug-resistant leprosy cases from SIHR & LC hospital, Vellore, India, were taken for the extraction of RNA. To determine the presence of the predicted transcripts, cDNA was generated from M. leprae mRNA isolated from clinically confirmed leprosy skin biopsy specimens across all the study groups. Melting curve analysis was performed to determine the integrity of the amplification and to rule out primer-dimer formation. 
The Ct values obtained from qPCR were fitted to the standard curve to determine transcript copy numbers. The same procedure was applied to M. leprae extracted from footpads of nude mice infected with drug-sensitive and drug-resistant strains. 16S rRNA was used as a positive control. Of the 16 genes involved in BER, DR, and SOS, a differential expression pattern was observed in terms of Ct values when compared to human samples; this likely reflects the different host and its immune response. However, no drastic variation in gene expression levels was observed in human samples except for the nth gene. The higher expression of nth could be due to mutations associated with sequence diversity and drug resistance, which suggests an important role in the repair mechanism that remains to be explored. In both human and mouse samples, the SOS genes lexA and recA and the BER genes alkB and ogt were efficiently expressed to deal with possible DNA damage. Together, the results of the present study suggest that DNA repair genes are constitutively expressed and may provide a reference for molecular diagnosis, therapeutic target selection, determination of treatment, and prognostic judgment in M. leprae pathogenesis.
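The copy-number determination described above follows the standard qPCR relationship between Ct and the log10 of starting quantity. A minimal sketch of that interpolation is below; the dilution series and Ct values are hypothetical placeholders, not data from this study.

```python
import numpy as np

# Hypothetical standard-curve data: serial 10-fold dilutions of reference
# DNA (copies per reaction) and the Ct values they produced.
standard_copies = np.array([1e6, 1e5, 1e4, 1e3, 1e2])
standard_ct = np.array([15.1, 18.4, 21.8, 25.1, 28.5])

# Fit the standard curve: Ct = slope * log10(copies) + intercept.
slope, intercept = np.polyfit(np.log10(standard_copies), standard_ct, 1)

# Amplification efficiency; a slope near -3.32 corresponds to ~100%.
efficiency = 10.0 ** (-1.0 / slope) - 1.0

def copies_from_ct(ct):
    """Interpolate transcript copy number from an observed Ct value."""
    return 10.0 ** ((ct - intercept) / slope)

print(f"slope = {slope:.2f}, efficiency = {efficiency:.1%}")
print(f"Ct 23.0 corresponds to ~{copies_from_ct(23.0):.0f} copies")
```

In practice the observed Ct for each target gene would be converted to a copy number with `copies_from_ct`, and values outside the range of the dilution series would be flagged as beyond the validated detection limits.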

Keywords: DNA repair, human biopsy, hypothetical proteins, mouse footpads, Mycobacterium leprae, qPCR

123 Factors Affecting Treatment Resilience in Patients with Oesophago-Gastric Cancers Undergoing Palliative Chemotherapy: A Literature Review

Authors: Kiran Datta, Daniella Holland-Hart, Anthony Byrne

Abstract:

Introduction: Oesophago-gastric (OG) cancers are the fifth commonest in the UK, accounting for over 12,000 deaths each year. Most patients present at later stages of the disease, with only 21% of patients with stage 4 disease surviving longer than a year. As a result, many patients are unsuitable for curative surgery and instead receive palliative treatment to improve prognosis and reduce symptom burden. However, palliative chemotherapy can result in significant toxicity: almost half of patients are unable to complete their chemotherapy regimen, with this proportion rising significantly in older and frailer patients. In addition, clinical trials often exclude older and frailer patients due to strict inclusion criteria, meaning there is limited evidence to guide which patients are most likely to benefit from palliative chemotherapy. Inappropriate chemotherapy administration is at odds with the goals of palliative treatment and care, which are to improve quality of life, and it also represents a significant resource expenditure. This literature review aimed to examine and appraise evidence regarding treatment resilience in order to guide clinicians in identifying the most suitable candidates for palliative chemotherapy. Factors influencing treatment resilience were assessed, as measured by completion rates, dose reductions, and toxicities. Methods: This literature review was conducted using rapid review methodology with modified systematic methods. A literature search was performed across the MEDLINE, EMBASE, and Cochrane Library databases, with results limited to papers from the last 15 years available in English. Key inclusion criteria were: 1) participants with oesophageal, gastro-oesophageal junction, or gastric cancers; 2) patients treated with palliative chemotherapy; 3) available data evaluating the association between baseline participant characteristics and treatment resilience. 
Results: Of the 2326 papers returned, 11 reports of 10 studies were included in this review after excluding duplicates and irrelevant papers. Treatment resilience factors assessed included age, performance status, frailty, inflammatory markers, and sarcopenia. Age was generally a poor predictor of how well patients would tolerate chemotherapy, while poor performance status was a better indicator of the need for dose reduction and of treatment non-completion. Frailty was assessed in one cohort using multiple screening tools and was an effective marker of the risk of toxicity and the requirement for dose reduction. Inflammatory markers included lymphopenia and the Glasgow Prognostic Score, which assesses inflammation and hypoalbuminaemia. Although quick to obtain and interpret, these findings appeared less reliable due to the inclusion of patients treated with palliative radiotherapy. Sarcopenia and body composition were often associated with chemotherapy toxicity but not with the rate of regimen completion. Conclusion: This review demonstrates that there are numerous measures that can estimate the ability of patients with oesophago-gastric cancer to tolerate palliative chemotherapy, and these should be incorporated into clinical assessments to promote personalised decision-making around treatment. Age should not be a barrier to receiving chemotherapy, and older and frailer patients should be included in future clinical trials to better represent typical patients with oesophago-gastric cancers. Decisions regarding palliative treatment should be guided by the factors identified as well as by patient preference.

Keywords: frailty, oesophago-gastric cancer, palliative chemotherapy, treatment resilience

122 Impact of Transgenic Adipose Derived Stem Cells in the Healing of Spinal Cord Injury of Dogs

Authors: Imdad Ullah Khan, Yongseok Yoon, Kyeung Uk Choi, Kwang Rae Jo, Namyul Kim, Eunbee Lee, Wan Hee Kim, Oh-Kyeong Kweon

Abstract:

Primary spinal cord injury (SCI) causes mechanical damage to neurons and blood vessels. It leads to secondary SCI, which activates multiple pathological pathways that expand neuronal damage at the injury site. Secondary SCI is characterized by vascular disruption, ischemia, excitotoxicity, oxidation, inflammation, and apoptotic cell death. It causes nerve demyelination and disruption of axons, which perpetuate a loss of impulse conduction through the injured spinal cord. It also leads to the production of myelin inhibitory molecules which, together with the concomitant formation of an astroglial scar, impede axonal regeneration. Oxidation and inflammation play a pivotal role in neuronal necrosis. During the early stage of spinal cord injury, reactive oxygen species (ROS) are abundantly expressed due to defective mitochondrial metabolism and abundant migration of phagocytes (macrophages, neutrophils). ROS cause lipid peroxidation of the cell membrane and cell death. Migrating neutrophils, macrophages, and lymphocytes collectively produce pro-inflammatory mediators such as tumor necrosis factor-alpha (TNF-α), interleukin-6 (IL-6), interleukin-1beta (IL-1β), matrix metalloproteinases, superoxide dismutase, and myeloperoxidases, which act synergistically to promote neuronal apoptosis. It is therefore crucial to control inflammation and oxidative injury to minimize nerve cell death during secondary spinal cord injury. In response to oxidation and inflammation, heme oxygenase-1 (HO-1) is induced by resident cells to ameliorate the milieu, while neurotrophic factors are induced to promote neuroregeneration. However, the anti-stress enzyme HO-1 and the neurotrophic factor brain-derived neurotrophic factor (BDNF) do not appear to significantly counteract the pathological events of secondary spinal cord injury on their own. Optimum healing may therefore be induced if anti-inflammatory and neurotrophic factors are administered in higher amounts from an exogenous source. 
In the first experiment, inflammation and neuroregeneration were selectively targeted. HO-1-expressing MSCs (HO-1 MSCs) and BDNF-expressing MSCs (BDNF MSCs) were co-transplanted in one group (the combination group) of dogs with subacute spinal cord injury to selectively control the expression of inflammatory cytokines through HO-1 and to induce neuroregeneration through BDNF. We compared the combination group with the HO-1 MSC, BDNF MSC, and GFP MSC groups. The combination group showed significant improvement in functional recovery. It showed higher expression of neural markers and growth-associated protein (GAP-43) than the other groups, indicating enhanced neuroregeneration/neural sparing due to reduced expression of pro-inflammatory cytokines such as TNF-α, IL-6, and COX-2 and increased expression of anti-inflammatory markers such as IL-10 and HO-1. Histopathological study revealed less intra-parenchymal fibrosis in the injured spinal cord segment in the combination group than in the other groups. It was thus concluded that selectively targeting inflammation and neuronal growth with the combined use of HO-1 MSCs and BDNF MSCs more favorably promotes healing of the SCI. HO-1 MSCs play a role in controlling inflammation, which favors BDNF-induced neuroregeneration at the injured spinal cord segment of dogs.

Keywords: HO-1 MSCs, BDNF MSCs, neuroregeneration, inflammation, anti-inflammation, spinal cord injury, dogs

121 An Analysis of Economic Drivers and Technical Challenges for Large-Scale Biohydrogen Deployment

Authors: Rouzbeh Jafari, Joe Nava

Abstract:

This study includes learnings from engineering practice normally performed on large-scale biohydrogen processes. If scale-up is done properly, biohydrogen can be a reliable pathway for biowaste valorization. Most studies on biohydrogen process development have used model feedstocks to investigate process key performance indicators (KPIs). This study does not intend to compare different technologies with model feedstocks; rather, it reports economic drivers and technical challenges, which helps in developing a road map for expanding biohydrogen deployment in Canada. BBA is a consulting firm responsible for the design of hydrogen production projects. Through executing these projects, work has been performed to identify, register, and mitigate the technical drawbacks of large-scale hydrogen production. In this study, those learnings have been applied to the biohydrogen process. Using data collected through a comprehensive literature review, a base case was considered as a reference, and several case studies were performed. Critical parameters of the process were identified, and through common engineering practice (process design, simulation, cost estimation, and life cycle assessment), the impact of these parameters on the commercialization risk matrix and class 5 cost estimates was reported. The process considered in this study is dark fermentation of food waste and woody biomass. To propose a reliable road map for developing a sustainable biohydrogen production process, the impact of critical parameters was studied on the end-to-end process. These parameters were 1) feedstock composition, 2) feedstock pre-treatment, 3) unit operation selection, and 4) the multi-product concept. A couple of emerging technologies were also assessed, such as photo-fermentation, integrated dark fermentation, and the use of ultrasound and microwaves to break down the feedstock's complex matrix and increase overall hydrogen yield. 
To properly report the impact of each parameter, KPIs were identified as 1) hydrogen yield, 2) energy consumption, 3) secondary waste generated, 4) CO2 footprint, 5) product profile, 6) $/kg-H2, and 7) environmental impact. The feedstock is the main parameter defining the economic viability of biohydrogen production. Through parametric studies, it was found that biohydrogen production favors feedstocks with higher carbohydrate content. The feedstock composition was varied by increasing one critical element (such as carbohydrates) and monitoring the evolution of the KPIs. Different cases were studied with diverse feedstocks, such as energy crops, wastewater sludge, and lignocellulosic waste. The base case process was applied to obtain reference KPI values, and modifications such as pre-treatment and feedstock mix-and-match were implemented to investigate KPI changes. The complexity of the feedstock is the main bottleneck in the successful commercial deployment of the biohydrogen process as a reliable pathway for waste valorization. Hydrogen yield, reaction kinetics, and the performance of key unit operations are highly impacted as feedstock composition fluctuates during the lifetime of the process or from one case to another. In this context, the multi-product concept becomes more reliable: the process is not designed to produce only one target product, such as biohydrogen, but two or more products (biohydrogen and biomethane, or biochemicals). This new approach is being investigated by the BBA team, and the results will be shared in another scientific contribution.
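The parametric KPI screening described above can be illustrated with a toy calculation. Every number below (carbohydrate fractions, yields, costs) is an assumed placeholder for illustration only, not a value from the study.

```python
# Hypothetical feedstocks: (carbohydrate mass fraction, pre-treatment cost $/t).
FEEDSTOCKS = {
    "food waste":            (0.55, 20.0),
    "woody biomass":         (0.40, 45.0),
    "lignocellulosic waste": (0.30, 60.0),
}

YIELD_PER_CARB = 60.0   # kg H2 per tonne of carbohydrate (assumed)
BASE_OPEX = 300.0       # $/t feedstock for fermentation + downstream (assumed)

def kpis(carb_fraction, pretreat_cost):
    """Return (hydrogen yield in kg-H2/t feedstock, cost in $/kg-H2)."""
    h2_yield = YIELD_PER_CARB * carb_fraction
    cost_per_kg = (BASE_OPEX + pretreat_cost) / h2_yield
    return h2_yield, cost_per_kg

for name, (carb, pre) in FEEDSTOCKS.items():
    y, c = kpis(carb, pre)
    print(f"{name:22s} yield {y:5.1f} kg-H2/t   cost {c:6.2f} $/kg-H2")
```

Even with these crude assumptions, the sketch reproduces the qualitative finding reported above: carbohydrate-rich feedstocks dominate both the yield and the $/kg-H2 KPIs, so fluctuations in feedstock composition propagate directly into project economics.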

Keywords: biohydrogen, process scale-up, economic evaluation, commercialization uncertainties, hydrogen economy

120 Innovation Eco-Systems and Cities: Sustainable Innovation and Urban Form

Authors: Claudia Trillo

Abstract:

Regional innovation eco-systems are composed of a variety of interconnected urban innovation eco-systems, mutually reinforcing each other and making the whole territorial system successful. Combining principles drawn from new economic growth theory and from the socio-constructivist approach to economic growth with the new geography of innovation emerging from the networked nature of innovation districts, this paper explores the spatial configuration of urban innovation districts, with the aim of unveiling replicable spatial patterns and transferable portfolios of urban policies. While some authors suggest that cities should be considered ideal natural clusters, supporting cross-fertilization and innovation thanks to the physical setting they provide for the construction of collective knowledge, a considerable distance still persists between regional development strategies and urban policies. Moreover, while public and private policies supporting entrepreneurship normally consider innovation the cornerstone of any action aimed at lifting the competitiveness and economic success of a certain area, a growing body of literature suggests that innovation is non-neutral and hence should be constantly assessed against equity and social inclusion. This paper draws on a robust qualitative empirical dataset gathered through four years of research conducted in Boston to provide readers with an evidence-based set of recommendations drawn from the lessons learned through the investigation of the chosen innovation districts in the Boston area. The evaluative framework used for assessing the overall performance of the chosen case studies stems from the rationale of the Habitat III Sustainable Development Goals. The concept of inclusive growth has been considered essential to assess the social innovation domain in each of the chosen cases. 
The key success factors for the development of the Boston innovation ecosystem can be generalized as follows: 1) a quadruple helix model embedded in the physical structure of the two cities (Boston and Cambridge), in which anchor Higher Education (HE) institutions continuously nurture the entrepreneurial environment; 2) an entrepreneurial approach on the part of the local governments, eliciting risk-taking and bottom-up civic participation in tackling key issues in the city; 3) a networking structure of intermediary actors supporting entrepreneurial collaboration, cross-fertilization, and co-creation, who collaborate at multiple scales, thus enabling positive spillovers from the stronger to the weaker contexts; 4) awareness of the socio-economic value of the built environment as an enabler of cognitive networks allowing activation of collective intelligence; 5) creation of civic-led spaces enabling grassroots collaboration and cooperation. Evidence shows that there is no single magic recipe for the successful implementation of place-based and social-innovation-driven strategies. On the contrary, the variety of place-grounded combinations of micro and macro initiatives, embedded in the social and spatial fine grain of places and encompassing a diversity of actors, can create the conditions enabling places to thrive and local economic activities to grow in a sustainable way.

Keywords: innovation-driven sustainable eco-systems, place-based sustainable urban development, sustainable innovation districts, social innovation, urban policies

119 Multi-Model Super Ensemble Based Advanced Approaches for Monsoon Rainfall Prediction

Authors: Swati Bhomia, C. M. Kishtawal, Neeru Jaiswal

Abstract:

Traditionally, monsoon forecasts have encountered many difficulties stemming from issues such as a lack of adequate upper-air observations, the mesoscale nature of convection, proper resolution, radiative interactions, planetary boundary layer physics, mesoscale air-sea fluxes, representation of orography, etc. Uncertainties in any of these areas lead to large systematic errors. Global circulation models (GCMs), which are developed independently at different institutes and each of which carries a somewhat different representation of the above processes, can be combined to reduce the collective local biases in space, in time, and across different variables from different models. This is the basic concept behind the multi-model superensemble, which comprises a training phase and a forecast phase. The training phase learns from the recent past performance of the models and is used to determine statistical weights via a least-squares minimization using simple multiple regression. These weights are then used in the forecast phase. Superensemble forecasts carry the highest skill compared to the simple ensemble mean, the bias-corrected ensemble mean, and the best of the participating member models. This approach is a powerful post-processing method for the estimation of weather forecast parameters that reduces direct model output errors. Although it can be applied successfully to continuous parameters like temperature, humidity, wind speed, mean sea level pressure, etc., in this paper the approach is applied to rainfall, a parameter quite difficult to handle with standard post-processing methods due to its high temporal and spatial variability. 
The present study aims at the development of advanced superensemble schemes comprising 1-5 day daily precipitation forecasts from five state-of-the-art global circulation models (GCMs), i.e., the European Centre for Medium-Range Weather Forecasts (Europe), the National Center for Environmental Prediction (USA), the China Meteorological Administration (China), the Canadian Meteorological Centre (Canada), and the U.K. Meteorological Office (U.K.), obtained from the THORPEX Interactive Grand Global Ensemble (TIGGE), one of the most complete data sets available. The novel approaches include a dynamical model selection approach, in which the superior models are selected from the participating member models at each grid point and for each forecast step in the training period. A multi-model superensemble trained on similar conditions is also discussed in the present study, based on the assumption that training on similar conditions may provide better forecasts than the sequential training used in conventional multi-model ensemble (MME) approaches. Further, a variety of methods from the literature that incorporate a 'neighborhood' around each grid point, to allow for spatial error or uncertainty, have also been tested in combination with the above-mentioned approaches. Comparison of these schemes against observations verifies that the newly developed approaches provide a more unified and skillful prediction of the summer monsoon (June to September) rainfall than the conventional multi-model approach and the member models.
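The training/forecast split described above can be sketched as a simple multiple regression at a single grid point. The data below are synthetic stand-ins for member-model forecasts and observations (not TIGGE data), and the per-model biases are assumed values chosen to make the bias-removal effect visible.

```python
import numpy as np

# Synthetic "observed" daily rainfall (mm) for a 120-day training period.
rng = np.random.default_rng(0)
n_days, n_models = 120, 5
truth = rng.gamma(2.0, 3.0, size=n_days)
biases = np.array([3.0, 1.5, 2.0, -0.5, 4.0])          # per-model systematic bias
forecasts = truth[:, None] + biases + rng.normal(0.0, 1.5, (n_days, n_models))

# Training phase: least-squares weights from a multiple regression of the
# observations on the member forecasts (the intercept absorbs collective bias).
X = np.column_stack([np.ones(n_days), forecasts])
weights, *_ = np.linalg.lstsq(X, truth, rcond=None)

# Forecast phase: apply the trained weights to new member forecasts.
def superensemble(member_forecasts):
    return weights[0] + member_forecasts @ weights[1:]

new_truth = rng.gamma(2.0, 3.0, size=10)
new = new_truth[:, None] + biases + rng.normal(0.0, 1.5, (10, n_models))
rmse = lambda pred, obs: float(np.sqrt(np.mean((pred - obs) ** 2)))
print("ensemble-mean RMSE:", round(rmse(new.mean(axis=1), new_truth), 2))
print("superensemble RMSE:", round(rmse(superensemble(new), new_truth), 2))
```

In the full scheme this regression is repeated independently at every grid point and forecast step; the dynamical model selection and similarity-based training discussed above change which days and which members enter the training sample, not the regression itself.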

Keywords: multi-model superensemble, dynamical model selection, similarity criteria, neighborhood technique, rainfall prediction

118 Educational Knowledge Transfer in Indigenous Mexican Areas Using Cloud Computing

Authors: L. R. Valencia Pérez, J. M. Peña Aguilar, A. Lamadrid Álvarez, A. Pastrana Palma, H. F. Valencia Pérez, M. Vivanco Vargas

Abstract:

This work proposes a cooperative-competitive ('coopetitive') approach that allows coordinated work among the Secretary of Public Education (SEP), the Autonomous University of Querétaro (UAQ), and government funds from the National Council for Science and Technology (CONACYT) or other international organizations on an overall knowledge transfer strategy using e-learning over the cloud. Experts in junior high and high school education, working in multidisciplinary teams, perform analysis, evaluation, design, production, validation, and knowledge transfer at large scale using a cloud computing platform, allowing teachers and students to have all the information required to ensure nationally standardized knowledge of topics such as mathematics, statistics, chemistry, history, ethics, civics, etc. This work will start with a pilot test in Spanish and, initially, in two indigenous languages: Otomí and Náhuatl. Otomí has more than 285,000 speakers in Querétaro and Mexico's central region. Náhuatl is the most widely spoken indigenous language in Mexico, with more than 1,550,000 speakers. Phase one of the project takes into account negotiations with indigenous communities from different regions, as well as the information and communication technologies needed to deliver the knowledge to the indigenous schools in their native language. 
The methodology includes the following main milestones: identification of the indigenous areas where Otomí and Náhuatl are spoken; researching with the SEP the location of current indigenous schools; analysis and inventory of current school conditions; negotiation with community chiefs; analysis of the technological communication requirements to reach the indigenous communities; identification and inventory of local teachers' technology knowledge; selection of a pilot topic; analysis of current student competence under the traditional education system; identification of local translators; design of the e-learning platform; design of the multimedia resources and storage strategy for cloud computing; translation of the topic into both languages; indigenous teachers' training; pilot test; course release; project follow-up; analysis of student requirements for the new technological platform; and definition of a new and improved proposal with greater reach in topics and regions. Phase one of the project is important in multiple ways: it proposes a working technological scheme and focuses on the cultural impact in Mexico, so that indigenous communities can improve their knowledge about new forms of crop improvement, home storage technologies, proven home remedies for common diseases, and ways of preparing foods containing major nutrients; disclose the strengths and weaknesses of each region; communicate through cloud computing platforms offering regional products; and open communication spaces for inter-indigenous cultural exchange.

Keywords: Mexican indigenous tribes, education, knowledge transfer, cloud computing, Otomí, Náhuatl, language

117 Medical Workforce Knowledge of Adrenaline (Epinephrine) Administration in Anaphylaxis in Adults Considerably Improved with Training in a UK Hospital from 2010 to 2017

Authors: Jan C. Droste, Justine Burns, Nithin Narayan

Abstract:

Introduction: The life-threatening detrimental effects of inappropriate adrenaline (epinephrine) administration, e.g., giving the wrong dose, in the context of anaphylaxis management are well documented in the medical literature. Half of the fatal anaphylactic reactions in the UK are iatrogenic, and the median time to cardio-respiratory arrest can be as short as 5 minutes. It is therefore imperative that hospital doctors of all grades have active and accurate knowledge of the correct route, site, and dosage of adrenaline administration. Given this time constraint and the potentially fatal outcome of inappropriately managed anaphylaxis, it is alarming that surveys over the last 15 years have repeatedly shown only a minority of doctors to have accurate knowledge of adrenaline administration as recommended by the UK Resuscitation Council guidelines (2008, updated 2012). This comparison of survey results of the medical workforce over several years in a small NHS District General Hospital was conducted in order to establish the effect of employing multiple educational methods regarding adrenaline administration in anaphylaxis in adults. Methods: Between 2010 and 2017, several education methods and tools were used to repeatedly inform the medical workforce (doctors and advanced clinical practitioners) in a single district general hospital about the treatment of anaphylaxis in adults. Whilst the senior staff remained largely the same cohort, the junior staff had changed fully by every survey. 
Examples included: (i) formal teaching in Grand Rounds, during the junior doctors' induction process, and on advanced life support courses; (ii) in-situ simulation training performed by the clinical skills simulation team, comprising several ad hoc sessions and one 3-day event in 2017 visiting 16 separate clinical areas and performing an acute anaphylaxis scenario using actors, involving around 100 individuals from multi-disciplinary teams; (iii) hospital-wide distribution of the simulation event via the Trust's Simulation Newsletter; (iv) laminated algorithms attached to the 'crash trolleys'; (v) a short email 'alert' sent to all medical staff 3 weeks prior to the survey detailing the emergency treatment of anaphylaxis; (vi) in addition, the performance of the surveys themselves represented a teaching opportunity when gaps in knowledge could be addressed. Face-to-face surveys were carried out in 2010 (pre-intervention), 2015, and 2017, on the latter two occasions including advanced clinical practitioners (ACPs). All surveys consisted of convenience samples. If verbal consent to conduct the survey was obtained, the medical practitioners' answers were recorded immediately on a data collection sheet. Results: There was a sustained improvement in the knowledge of the medical workforce from 2010 to 2017. Correct answers improved regarding the correct drug by 11 percentage points (84%, 95%, and 95%); the correct route by 20 points (76%, 90%, and 96%); the correct site by 40 points (43%, 83%, and 83%); and the correct dose by 45 points (27%, 54%, and 72%). Overall, knowledge of all components (correct drug, route, site, and dose) improved from 13% in 2010 to 62% in 2017. Conclusion: This survey comparison shows that the knowledge of the medical workforce regarding adrenaline administration for the treatment of anaphylaxis in adults can be considerably improved by employing a variety of educational methods.

Keywords: adrenaline, anaphylaxis, epinephrine, medical education, patient safety

Procedia PDF Downloads 129
116 National Core Indicators - Aging and Disabilities: A Person-Centered Approach to Understanding Quality of Long-Term Services and Supports

Authors: Stephanie Giordano, Rosa Plasencia

Abstract:

In 2013, public service systems in the USA - Medicaid, aging, and disability systems - undertook an effort to measure the quality of service delivery by examining the experiences and outcomes of those receiving public services. The goal was to develop a survey measuring those experiences and outcomes so that system performance could be assessed for quality improvement. The performance indicators were developed with input from directors of state aging and disability service systems, along with experts and stakeholders in the field across the United States. This effort, National Core Indicators - Aging and Disabilities (NCI-AD), grew out of National Core Indicators - Intellectual and Developmental Disabilities, an effort to measure developmental disability (DD) systems across the states. The survey tool and administration protocol underwent multiple rounds of testing and revision between 2013 and 2015. The measures in the final tool - called the Adult Consumer Survey (ACS) - cover not just important indicators of healthcare access and personal safety but also indicators of system quality based on person-centered outcomes. These measures indicate whether service systems support older adults and people with disabilities to live where they want, maintain relationships, engage in their communities, and have choice and control in their everyday lives. Launched in 2015, the NCI-AD Adult Consumer Survey is now used in 23 US states. Surveys are conducted by NCI-AD trained surveyors via direct conversation with a person receiving public long-term services and supports (LTSS). Until 2020, surveys were conducted only in person; however, after a pilot testing the reliability of videoconference and telephone survey modes, these modes were adopted as acceptable practice.
The survey is administered as a 'guided conversation,' which allows the surveyor to use wording and terminology best understood by the person surveyed. The survey includes a subset of questions that may be answered by a proxy respondent who knows the person well, if the person receiving services is unable to provide valid responses on their own. Surveyors undergo standardized training on survey administration to ensure its fidelity. In addition to the main survey section, a Background Information section collects data on personal and service-related characteristics of the person receiving services; these data are typically collected from state administrative records. This information helps provide greater context around the characteristics of people receiving services. It has also been used in conjunction with outcome measures to examine disparities (including by race and ethnicity, gender, disability, and living arrangements). These measures of quality are critical for public service delivery systems seeking to understand the unique needs of older adults and people with disabilities and to improve their lives. Participating states may use these data to identify areas for quality improvement within their service delivery systems, to advocate for specific policy change, and to better understand the experiences of specific populations of people served.

Keywords: quality of life, long term services and supports, person-centered practices, aging and disability research, survey methodology

Procedia PDF Downloads 121
115 Towards an Effective Approach for Modelling near Surface Air Temperature Combining Weather and Satellite Data

Authors: Nicola Colaninno, Eugenio Morello

Abstract:

The urban environment affects local-to-global climate and, in turn, suffers from global warming phenomena, with worrying impacts on human well-being, health, and social and economic activities. Physical-morphological features of the built-up space affect urban air temperature locally, causing the urban environment to be warmer than the surrounding rural areas. This occurrence, typically known as the Urban Heat Island (UHI), is normally assessed by means of air temperature from fixed weather stations and/or traverse observations, or based on remotely sensed Land Surface Temperatures (LST). The information provided by ground weather stations is key for assessing local air temperature. However, spatial coverage is normally limited due to the low density and uneven distribution of the stations. Although interpolation techniques such as Inverse Distance Weighting (IDW), Ordinary Kriging (OK), or Multiple Linear Regression (MLR) are used to estimate air temperature from observed points, such an approach may not effectively reflect the real climatic conditions at an interpolated point. Quantifying local UHI for extensive areas based on weather stations' observations alone is not practicable. Alternatively, the use of thermal remote sensing has been widely investigated based on LST; data from Landsat, ASTER, or MODIS have been extensively used. Indeed, LST has an indirect but significant influence on air temperature. However, high-resolution near-surface air temperature (NSAT) is currently difficult to retrieve. Here we have experimented with Geographically Weighted Regression (GWR) as an effective approach to NSAT estimation that accounts for the spatial non-stationarity of the phenomenon. The model combines on-site measurements of air temperature from fixed weather stations with satellite-derived LST. The approach is structured in two main steps.
First, a GWR model was set up to estimate NSAT at low resolution by combining air temperature from discrete observations retrieved by weather stations (dependent variable) with LST from satellite observations (predictor). At this step, MODIS data from the Terra satellite, at 1 kilometer of spatial resolution, were employed. Two time periods were considered according to the satellite revisit schedule, i.e., 10:30 am and 9:30 pm. Afterward, the results were downscaled to 30 meters of spatial resolution by setting up a GWR model between the previously retrieved near-surface air temperature (dependent variable) and, as predictors, the multispectral information provided by the Landsat mission (in particular, the albedo) and the Digital Elevation Model (DEM) from the Shuttle Radar Topography Mission (SRTM), both at 30 meters. The area under investigation is the Metropolitan City of Milan, which covers approximately 1,575 km2 and encompasses a population of over 3 million inhabitants. Both models, low-resolution (1 km) and high-resolution (30 meters), were validated by cross-validation using indicators such as R2, Root Mean Squared Error (RMSE), and Mean Absolute Error (MAE). All the employed indicators give evidence of highly efficient models. In addition, an alternative network of weather stations, available for the City of Milan only, was employed for testing the accuracy of the predicted temperatures, giving an RMSE of 0.6 and 0.7 for daytime and night-time, respectively.
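The core of a GWR step like the one described above is an ordinary least-squares fit re-estimated at every prediction point, with observations weighted by a distance kernel. The sketch below is a minimal illustration with a Gaussian kernel and synthetic inputs; the function name, bandwidth, and data are assumptions, not the study's calibrated model:

```python
import numpy as np

def gwr_predict(x_obs, y_obs, coords_obs, x_new, coords_new, bandwidth):
    """Locally weighted regression: at each new location, fit an
    intercept + slope model (e.g. air temperature vs. LST) using
    Gaussian distance weights, then predict with the local fit."""
    preds = np.empty(len(x_new))
    X = np.column_stack([np.ones_like(x_obs), x_obs])  # design matrix
    for i, (x, c) in enumerate(zip(x_new, coords_new)):
        d = np.linalg.norm(coords_obs - c, axis=1)   # distances to stations
        w = np.exp(-0.5 * (d / bandwidth) ** 2)      # Gaussian kernel weights
        W = np.diag(w)
        beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y_obs)
        preds[i] = beta[0] + beta[1] * x
    return preds
```

Because the coefficients vary over space, the same LST value can map to different air temperatures in different parts of the study area, which is what distinguishes GWR from a single global regression.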

Keywords: urban climate, urban heat island, geographically weighted regression, remote sensing

Procedia PDF Downloads 195
114 Design Thinking and Project-Based Learning: Opportunities, Challenges, and Possibilities

Authors: Shoba Rathilal

Abstract:

High unemployment rates and a shortage of experienced and qualified employees appear to be a paradox that currently plagues most countries worldwide. In a developing country like South Africa, the rate of unemployment is reported to be approximately 35%, the highest recorded globally. At the same time, a countrywide deficit in experienced and qualified potential employees is reported in South Africa, which is causing fierce rivalry among firms. Employers have reported that graduates are very rarely able to meet the demands of the job, as there are gaps in their knowledge and conceptual understanding as well as in the other 21st-century competencies, attributes, and dispositions required to successfully negotiate the multiple responsibilities of employees in organizations. In addition, the rates of unemployment and suitability of graduates appear to be skewed by race and social class, the continued effects of a legacy of inequitable educational access. Higher education in the current technologically advanced and dynamic world needs to serve as an agent of transformation, aspiring to develop graduates who are creative, flexible, critical, and possessed of entrepreneurial acumen. This requires a re-envisioning of the selection, sequencing, and pacing of learning, teaching, and assessment in higher education curricula and pedagogy. At a particular higher education institution in South Africa, Design Thinking (DT) and Project-Based Learning (PBL) are being adopted as two approaches that aim to enhance the student experience through the provision of a "distinctive education" that brings together disciplinary knowledge, professional engagement, technology, innovation, and entrepreneurship. Using these methodologies requires students to solve real-time applied problems using various forms of knowledge and to find innovative solutions that can result in new products and services.
The intention is to promote the development of skills for self-directed learning, facilitate the development of self-awareness, and contribute to students being active partners in the application and production of knowledge. These approaches emphasize active and collaborative learning, teamwork, conflict resolution, and problem-solving through the effective integration of theory and practice. In principle, both of these approaches are extremely impactful. However, at the institution in this study, the implementation of PBL and DT was not as "smooth" as anticipated. This presentation reports on an analysis of the implementation of these two approaches within higher education curricula at a particular university in South Africa. The study adopts a qualitative case study design. Data were generated through surveys, evaluation feedback at workshops, and content analysis of project reports, and were analyzed using document, content, and thematic analysis. Initial analysis shows that the forces constraining the implementation of PBL and DT include the capacity of both staff and students to engage with DT and PBL, the contextual realities of higher education institutions, administrative processes, and resources. At the same time, the implementation of DT and PBL was enabled through the allocation of strategic funding and capacity development workshops. These factors, however, could not achieve maximum impact. In addition, the presentation will include recommendations on how DT and PBL could be adapted for differing contexts.

Keywords: design thinking, project based learning, innovative higher education pedagogy, student and staff capacity development

Procedia PDF Downloads 77
113 A Triple Win: Linking Students, Academics, and External Organisations to Provide Real-World Learning Experiences with Real-World Benefits

Authors: Anne E. Goodenough

Abstract:

Students often learn best 'on the job' through holistic real-world projects. They need real-world experiences to make classroom learning applicable and to increase their employability. Academics typically value working on projects where new knowledge is created and have a genuine desire to help students engage with learning and develop new skills. They might also face institutional pressure to enhance student engagement, retention, and satisfaction. External organizations - especially non-governmental bodies, charities, and small enterprises - often have fundamental and pressing questions, but lack the manpower and academic expertise to answer them effectively. They might also be on the lookout for talented potential employees. This study examines ways in which these diverse requirements can be met simultaneously by creating three-way projects that provide excellent academic and real-world outcomes for all involved. It studied a range of innovative projects across the natural sciences (biology, ecology, physical geography) and social sciences (human geography, sociology, criminology, and community engagement) to establish how best to harness the potential of this powerful approach. Focal collaborations included: (1) development of practitioner-linked modules; (2) frameworks where students collected and analyzed data for link organizations in research methods modules; (3) placement-based internships and dissertations; and (4) immersive fieldwork projects in novel locations to allow students to engage first-hand with contemporary issues as diverse as rhino poaching in South Africa, segregation in Ireland, and gun crime in Florida. Although there was no 'magic formula' for success, the approach was found to work best when small projects were developed that were achievable in a short time-frame, both to tie into modular curricula and to meet the immediacy expectations of many link organizations.
Bigger projects were found to work well in some cases, especially when they were essentially a series of linked smaller projects, either running concurrently or successively, with each building on previous work. Opportunities were maximized when there were tangible benefits to the link organization, as this generally increased the organization's investment in the project and motivated students too. Finding the right approach for a given project was key: it was vital to ensure that something that could work effectively as an independent research project for one student, for example, was not shoehorned into a project for multiple students within a taught module. In general, students were very positive about collaboration projects. They identified benefits to confidence, time-keeping, and communication, as well as conveying their enthusiasm when their work was of benefit to the wider community. Several students have gone on to do further work with the link organization in a voluntary capacity or as paid staff, or have used the experiences to help them break into the ever-more competitive job market in other ways. Although this approach involves a substantial time investment, especially from academics, the benefits can be profound. The approach has strong potential to engage students, aid retention, improve student satisfaction, and teach new skills; keep the knowledge of academics fresh and current; and provide valuable tangible benefits for link organizations: a real triple win.

Keywords: authentic learning, curriculum development, effective education, employability, higher education, innovative pedagogy, link organizations, student experience

Procedia PDF Downloads 219
112 Ethnic Tourism and Real Estate Development: A Case of Yiren Ancient Town, China

Authors: Li Yang

Abstract:

Tourism is employed by many countries to facilitate socioeconomic development and to assist in heritage preservation. An "ethnic culture boom" is currently driving the tourism industry in China. Ethnic minorities, commonly portrayed as primitive, colorful, and exotic, have become a big tourist draw. Many cultural attractions have been built throughout China to meet the demands of domestic tourists, and sacred cultural heritage sites have been rehabilitated as a major component of ethnic tourism. The purpose of this study is to examine the interconnected consequences of tourism development and tourism-related leisure property development, and to discuss, in a broader context, issues and considerations that are pertinent to the management and development of ethnic attractions. The role of real estate in tourism development and its sociocultural consequences are explored. Empirical research was conducted in Yiren Ancient Town (literally, "Ancient Town of the Yi People") in Chuxiong City, Yunnan Province, China. Multiple research methods, including in-depth interviews, informal discussions, on-site observations, and secondary data review, were employed to measure residents' and tourism decision-makers' perceptions of ethnic tourism and to explore the impacts of tourism on the local community. Key informants from among government officials, tourism developers, and local communities were interviewed individually to gather what they think about the benefits and costs of tourism, and what their concerns about and hopes for tourism development are. Yiren Ancient Town was constructed in the classical Yi architectural style, featuring tranquil garden scenery. Commercial streets, entertainment complexes, and accommodation facilities occupy the center of the town, creating culturally distinctive and visually stimulating places for tourists.
A variety of activities are presented to visitors, including walking tours of the town, staged dance shows, musical performances, ethnic festivals and ceremonies, minority food tastings, and wedding shows. This study reveals that tourism real estate has transformed the town from a traditional neighborhood into diverse real estate landscapes. Ethnic architecture, costumes, festivals, and folk culture have been represented, altered, and reinvented through the tourist gaze and mechanisms of cultural production. Tourism is now a new economic driver of the community, providing opportunities for the creation of small businesses. There was a general appreciation in the community that tourism has created many employment opportunities, especially for self-employment. However, profit-seeking is a primary motivation for the government, developers, businesses, and other actors involved in the tourism development process. As the town has attracted an increasing number of visitors, commercialization and business competition have become intense. Many residents complained about elevated land prices, which have made the town and its surroundings comparatively high-value locales. The local community is also concerned about the decline of traditional ethnic culture and an erosion of the sense of identity and place. A balance between protection and development is difficult to maintain. The preservation of ethnic culture and heritage should be enhanced if long-term sustainable development of tourism is to occur and the loss of ethnic identities is to be avoided.

Keywords: ancient town, ethnic tourism, local community, real estate, China

Procedia PDF Downloads 279
111 Ensemble Methods in Machine Learning: An Algorithmic Approach to Derive Distinctive Behaviors of Criminal Activity Applied to the Poaching Domain

Authors: Zachary Blanks, Solomon Sonya

Abstract:

Poaching presents a serious threat to endangered animal species, environmental conservation, and human life. Additionally, some poaching activity has even been linked to supplying funds to terrorist networks elsewhere around the world. Consequently, agencies dedicated to protecting wildlife habitats face a near-intractable task: adequately patrolling an entire area (spanning several thousand kilometers) with the limited resources, funds, and personnel at their disposal. These agencies therefore need predictive tools that are both high-performing and easily implementable by the user, to help in learning how the significant features (e.g., animal population densities, topography, behavior patterns of the criminals within the area, etc.) interact with each other, in the hope of abating poaching. This research develops a classification model, using machine learning algorithms, that is both easy to train and performs well compared to other models, to aid in forecasting future attacks. We demonstrate how data imputation methods (specifically predictive mean matching, gradient boosting, and random forest multiple imputation) can be applied to analyze data and create significant predictions across a varied data set. Specifically, we apply these methods to improve the accuracy of adopted prediction models (logistic regression, support vector machines, etc.). Finally, we assess the performance of the model and the accuracy of our data imputation methods by learning on a real-world data set constituting four years of imputed data and testing on one year of non-imputed data. This paper provides three main contributions. First, we extend work done by the Teamcore and CREATE (Center for Risk and Economic Analysis of Terrorism Events) research groups at the University of Southern California (USC), working in conjunction with the Department of Homeland Security, applying game theory and machine learning algorithms to develop more efficient ways of reducing poaching.
This research introduces ensemble methods (random forests and stochastic gradient boosting) and applies them to real-world poaching data gathered from the Ugandan rain forest park rangers. Next, we consider the effect of data imputation on both the performance of various algorithms and the general accuracy of the method itself when applied to a dependent variable in which a large number of observations are missing. Third, we provide an alternate approach to predicting the probability of observing poaching, both by season and by month. The results from this research are very promising. We conclude that by using stochastic gradient boosting to predict observations of non-commercial poaching by season, we are able to produce statistically equivalent results while being orders of magnitude faster in computation time and complexity. Additionally, when predicting potential poaching incidents by individual month rather than by entire seasons, boosting techniques produce a mean area-under-the-curve increase of approximately 3% relative to previous prediction schedules over entire seasons.
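As a concrete illustration of one of the imputation methods named above, predictive mean matching fits a model on complete cases and then fills each missing value with an observed value borrowed from "donors" whose predictions are closest. The toy single-predictor sketch below is a simplification under stated assumptions (function name, linear donor model, and data are illustrative, not the paper's implementation):

```python
import numpy as np

def pmm_impute(x, y, n_donors=3, seed=0):
    """Predictive mean matching: fit a linear model on complete cases,
    then fill each missing y with an observed value drawn from the
    donors whose predictions are closest to the missing case's prediction."""
    rng = np.random.default_rng(seed)
    obs = ~np.isnan(y)                                 # complete cases
    X = np.column_stack([np.ones_like(x), x])          # intercept + predictor
    beta, *_ = np.linalg.lstsq(X[obs], y[obs], rcond=None)
    yhat = X @ beta                                    # predicted means for all rows
    y_filled = y.copy()
    for i in np.where(~obs)[0]:
        dist = np.abs(yhat[obs] - yhat[i])             # distance in predicted-mean space
        donors = np.argsort(dist)[:n_donors]           # nearest observed donors
        y_filled[i] = rng.choice(y[obs][donors])       # borrow a real observed value
    return y_filled
```

Because imputed values are drawn from actually observed responses, PMM avoids producing implausible values (e.g., negative counts) that a purely model-based imputation could generate.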

Keywords: ensemble methods, imputation, machine learning, random forests, statistical analysis, stochastic gradient boosting, wildlife protection

Procedia PDF Downloads 292
110 A Foucauldian Analysis of Postcolonial Hybridity in a Kuwaiti Novel

Authors: Annette Louise Dupont

Abstract:

Background and Introduction: Broadly defined, hybridity is a condition of racial and cultural ‘cross-pollination’ which arises as a result of contact between colonized and colonizer. It remains a highly contested concept in postcolonial studies as it is implicitly underpinned by colonial notions of ‘racial purity.’ While some postcolonial scholars argue that individuals exercise significant agency in the construction of their hybrid subjectivities, others underscore associated experiences of exclusion, marginalization, and alienation. Kuwait and the Philippines are among the most disparate of contemporary postcolonial states. While oil resources transformed the former British Mandate of Kuwait into one of the world’s richest countries, enduring poverty in the former US colony of the Philippines drives a global diaspora which produces multiple Filipino hybridities. Although more Filipinos work in the Arabian Gulf than in any other region of the world, scholarly and literary accounts of their experiences of hybridization in this region are relatively scarce when compared to those set in North America, Australia, Asia, and Europe. Study Aims and Significance: This paper aims to address this existing lacuna by investigating hybridity and other postcolonial themes in a novel by a Kuwaiti author which vividly portrays the lives of immigrants and citizens in Kuwait and which gives a rare voice and insight into the struggles of an Arab-Filipino and European-Filipina. Specifically, this paper explores the relationships between colonial discourses of ‘black’ and ‘white’ and postcolonial discourses pertaining to ‘brown’ Filipinos and ‘brown’ Arabs, in order to assess their impacts on the protagonists’ hybrid subjectivities. 
Methodology: Foucault's notions of discourse provide a conceptual basis for analyzing the colonial ideology of Orientalism, and his theories of the social exclusion of the 'mad' also elucidate the mechanisms by which power can operate to marginalize, alienate, and subjectify the Other. A Foucauldian lens is therefore applied to the analysis of the postcolonial themes and hybrid subjectivities portrayed in the novel. Findings: The study finds that Kuwaiti and Filipino discursive practices mirror those of former white colonialists and colonized black laborers, and that these discursive practices combine with a former British colonial system of foreign labor sponsorship to create a form of governmentality in Kuwait based on exclusion and control. The novel's rich social description and the reflections of the key protagonist and narrator suggest that such fiction has a significant role to play in highlighting the historical and cultural specificities of experiences of postcolonial hybridity in under-researched geographic, economic, social, and political settings. Whereas hybridity can appear abstract in scholarly accounts, the significance of literary accounts in which the lived experiences of hybrid protagonists are anchored to specific historical periods, places, and discourses is that contextual particularities are neither obscured nor dehistoricized. Conclusions: The application of Foucauldian theorizations of discourse, disciplinary power, and biopower to the analysis of this Kuwaiti literary text serves to extend an understanding of the effects of contextually specific discourses on hybrid Filipino subjectivities, as well as knowledge of prevailing social dynamics in a little-researched postcolonial Arabian Gulf state.

Keywords: Filipino, Foucault, hybridity, Kuwait

Procedia PDF Downloads 128
109 The Impacts of New Digital Technology Transformation on Singapore Healthcare Sector: Case Study of a Public Hospital in Singapore from a Management Accounting Perspective

Authors: Junqi Zou

Abstract:

As one of the world's most tech-ready countries, Singapore has initiated the Smart Nation plan to harness the full power and potential of digital technologies to transform the way people live and work, through more efficient government and business processes, to make the economy more productive. The key evolutions of digital technology transformation in healthcare, and the increasing deployment over the most recent decade of the Internet of Things (IoT), Big Data, AI/cognitive computing, Robotic Process Automation (RPA), Electronic Health Record systems (EHR), Electronic Medical Record systems (EMR), and Warehouse Management Systems (WMS), have significantly stepped up the move towards an information-driven healthcare ecosystem. The advances in information technology not only bring benefits to patients but also act as a key force in changing management accounting in the healthcare sector. The aim of this study is to investigate the impacts of digital technology transformation on Singapore's healthcare sector from a management accounting perspective. Adopting a Balanced Scorecard (BSC) analysis approach, this paper conducted an exploratory case study of a newly launched Singapore public hospital, which has been recognized as among the most digitally advanced healthcare facilities in the Asia-Pacific region. Specifically, this study gains insights into how the new technology is changing healthcare organizations' management accounting from four perspectives under the Balanced Scorecard approach: 1) the Financial Perspective, 2) the Customer (Patient) Perspective, 3) the Internal Processes Perspective, and 4) the Learning and Growth Perspective.
Based on a thorough review of archival records from the government and the public, and interview reports with the hospital's CIO, this study finds improvements from all four perspectives under the Balanced Scorecard framework, as follows: 1) Learning and Growth Perspective: The Government (Ministry of Health) works with the hospital to open up multiple training pathways for health professionals that upgrade existing IT skills and develop new ones among the healthcare workforce to support the transformation of healthcare services. 2) Internal Process Perspective: The hospital achieved digital transformation through Project OneCare, integrating clinical, operational, and administrative information systems (e.g., EHR, EMR, WMS, EPIB, RTLS) to enable the seamless flow of data, and implementing a JIT system to help the hospital operate more effectively and efficiently. 3) Customer Perspective: The fully integrated EMR suite enhances the patient experience by achieving the 5 Rights (Right Patient, Right Data, Right Device, Right Entry, and Right Time). 4) Financial Perspective: Cost savings are achieved through improved inventory management and effective supply chain management, and the use of process automation also reduces manpower and logistics costs. To summarize, these improvements identified under the Balanced Scorecard framework confirm the success of integrating advanced ICT to enhance a healthcare organization's customer service, productivity, efficiency, and cost savings. Moreover, the Big Data generated by this integrated EMR system can be particularly useful in aiding the management control system to optimize decision making and strategic planning. To conclude, the new digital technology transformation has raised the usefulness of management accounting, across both financial and non-financial dimensions, to new heights in the area of healthcare management.

Keywords: balanced scorecard, digital technology transformation, healthcare ecosystem, integrated information system

Procedia PDF Downloads 161
108 CT Images Based Dense Facial Soft Tissue Thickness Measurement by Open-source Tools in Chinese Population

Authors: Ye Xue, Zhenhua Deng

Abstract:

Objectives: Facial soft tissue thickness (FSTT) data can be obtained from CT scans by measuring face-to-skull distances at sparsely distributed anatomical landmarks manually located on the face and skull. However, automated measurement at dense points using 3D facial and skull models in open-source software has become a viable option due to the development of computer-assisted imaging technologies. By utilizing dense FSTT information, it becomes feasible to generate plausible automated facial approximations. Therefore, establishing a comprehensive, detailed, and densely calculated FSTT database is crucial to enhancing the accuracy of facial approximation. Materials and methods: This study utilized head CT scans from 250 Chinese adults of Han ethnicity, with 170 participants originally born and residing in northern China and 80 participants in southern China. The age of the participants ranged from 14 to 82 years, and all samples were divided into five non-overlapping age groups. Additionally, samples were divided into three categories based on BMI information. The 3D Slicer software was utilized to segment bone and soft tissue based on different Hounsfield Unit (HU) thresholds, and surface models of the face and skull were reconstructed for all samples from the CT data. The following procedures were performed using MeshLab: converting the face models into hollowed, cropped surface models, and automatically measuring the Hausdorff distance (referred to as FSTT) between the skull and face models. Hausdorff point clouds were colorized based on depth value and exported as PLY files. A histogram of the depth distributions could be viewed and subdivided into smaller increments. All PLY files were visualized with the Hausdorff distance value at each vertex. Basic descriptive statistics (i.e., mean, maximum, minimum, standard deviation, etc.) and the distribution of FSTT were analyzed considering sex, age, BMI, and birthplace.
Statistical methods employed included multiple regression analysis, ANOVA, and principal component analysis (PCA). Results: The distribution of FSTT is mainly influenced by BMI and sex, as further supported by the results of the PCA. Additionally, FSTT values exceeding 30 mm were found to be more sensitive to sex. Birthplace-related differences were observed in areas such as the forehead and the orbital, mandibular, and zygomatic regions. Specifically, there are distribution variances in the depth range of 20-30 mm, particularly in the mandibular region. Northern males exhibit thinner FSTT in the frontal region of the forehead compared to southern males, while females show fewer distribution differences between north and south, except for the zygomatic region. The observed distribution variance in the orbital region could be attributed to differences in orbital size and shape. Discussion: This study provides a database of the distribution of FSTT in Chinese individuals and suggests that open-source tools function well for FSTT measurement. By incorporating birthplace as an influential factor in the distribution of FSTT, a greater level of detail can be achieved in facial approximation.
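The per-vertex measurement performed in MeshLab amounts to finding, for every face vertex, the distance to the nearest point on the skull model. A brute-force point-cloud sketch of that idea is shown below; it is illustrative only (MeshLab samples the mesh surface and uses spatial indexing rather than this O(n×m) comparison, and the function name and arrays are assumptions):

```python
import numpy as np

def face_to_skull_distances(face_pts, skull_pts):
    """One-sided Hausdorff-style measurement: for each face vertex,
    the Euclidean distance to its nearest skull vertex (read as FSTT)."""
    diffs = face_pts[:, None, :] - skull_pts[None, :, :]  # pairwise offsets
    return np.linalg.norm(diffs, axis=2).min(axis=1)      # nearest per face vertex
```

Summary statistics and depth histograms like those described above can then be computed directly from the returned per-vertex array.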

Keywords: forensic anthropology, forensic imaging, cranial facial reconstruction, facial soft tissue thickness, CT, open-source tool

Procedia PDF Downloads 58
107 The Legal and Regulatory Gaps of Blockchain-Enabled Energy Prosumerism

Authors: Karisma Karisma, Pardis Moslemzadeh Tehrani

Abstract:

This study aims to conduct a high-level strategic dialogue on the lack of consensus, consistency, and legal certainty regarding blockchain-based energy prosumerism so that appropriate institutional and governance structures can be put in place to address the inadequacies and gaps in the legal and regulatory framework. The drive to achieve national and global decarbonization targets is a driving force behind climate goals and policies under the Paris Agreement. In recent years, efforts to ‘demonopolize’ and ‘decentralize’ energy generation and distribution have driven the energy transition toward decentralized systems, invoking concepts such as ownership, sovereignty, and autonomy of RE sources. The emergence of individual and collective forms of prosumerism and the rapid diffusion of blockchain is expected to play a critical role in the decarbonization and democratization of energy systems. However, there is a ‘regulatory void’ relating to individual and collective forms of prosumerism that could prevent the rapid deployment of blockchain systems and potentially stagnate the operationalization of blockchain-enabled energy sharing and trading activities. The application of broad and facile regulatory fixes may be insufficient to address the major regulatory gaps. First, to the authors’ best knowledge, the concepts and elements circumjacent to individual and collective forms of prosumerism have not been adequately described in the legal frameworks of many countries. Second, there is a lack of legal certainty regarding the creation and adaptation of business models in a highly regulated and centralized energy system, which inhibits the emergence of prosumer-driven niche markets. There are also current and prospective challenges relating to the legal status of blockchain-based platforms for facilitating energy transactions, anticipated with the diffusion of blockchain technology. 
With the rise of prosumerism in the energy sector, the areas of (a) network charges, (b) energy market access, (c) incentive schemes, (d) taxes and levies, and (e) licensing requirements are still uncharted territories in many countries. The uncertainties emanating from this area pose a significant hurdle to the widespread adoption of blockchain technology, a complementary technology that offers added value and competitive advantages for energy systems. The authors undertake a conceptual and theoretical investigation to elucidate the lack of consensus, consistency, and legal certainty in the study of blockchain-based prosumerism. In addition, the authors set an exploratory tone to the discussion by taking an analytically eclectic approach that builds on multiple sources and theories to delve deeper into this topic. As an interdisciplinary study, this research accounts for the convergence of regulation, technology, and the energy sector. The study primarily adopts desk research, which examines regulatory frameworks and conceptual models for crucial policies at the international level to foster an all-inclusive discussion. With their reflections and insights into the interaction of blockchain and prosumerism in the energy sector, the authors do not aim to develop definitive regulatory models or instrument designs, but to contribute to the theoretical dialogue to navigate seminal issues and explore different nuances and pathways. Given the emergence of blockchain-based energy prosumerism, identifying the challenges, gaps and fragmentation of governance regimes is key to facilitating global regulatory transitions.

Keywords: blockchain technology, energy sector, prosumer, legal and regulatory

Procedia PDF Downloads 181
106 The Effects of the Interaction between Prenatal Stress and Diet on Maternal Insulin Resistance and Inflammatory Profile

Authors: Karen L. Lindsay, Sonja Entringer, Claudia Buss, Pathik D. Wadhwa

Abstract:

Maternal nutrition and stress are independently recognized as among the most important factors that influence prenatal biology, with implications for fetal development and poor pregnancy outcomes. While there is substantial evidence from non-pregnancy human and animal studies that a complex, bi-directional relationship exists between nutrition and stress, to the author’s best knowledge, their interaction in the context of pregnancy has been significantly understudied. The aim of this study is to assess the interaction between maternal psychological stress and diet quality across pregnancy and its effects on biomarkers of prenatal insulin resistance and inflammation. This is a prospective longitudinal study of N=235 women carrying a healthy, singleton pregnancy, recruited from prenatal clinics of the University of California, Irvine Medical Center. Participants completed a 4-day ambulatory assessment in early, middle and late pregnancy, which included multiple daily electronic diary entries using Ecological Momentary Assessment (EMA) technology on a dedicated study smartphone. The EMA diaries gathered moment-level data on maternal perceived stress, negative mood, positive mood and quality of social interactions. The numerical scores for these variables were averaged across each study time-point and converted to Z-scores. A single composite variable for 'STRESS' was computed as follows: (Negative mood+Perceived stress)–(Positive mood+Social interaction quality). Dietary intakes were assessed by three 24-hour dietary recalls conducted within two weeks of each 4-day assessment. Daily nutrient and food group intakes were averaged across each study time-point. The Alternative Healthy Eating Index adapted for pregnancy (AHEI-P) was computed for early, middle and late pregnancy as a validated summary measure of diet quality. 
At the end of each 4-day ambulatory assessment, women provided a fasting blood sample, which was assayed for levels of glucose, insulin, Interleukin (IL)-6 and Tumor Necrosis Factor (TNF)-α. Homeostasis Model Assessment of Insulin Resistance (HOMA-IR) was computed. Pearson’s correlation was used to explore the relationship between maternal STRESS and AHEI-P within and between each study time-point. Linear regression was employed to test the association of the stress-diet interaction (STRESS*AHEI-P) with the biological markers HOMA-IR, IL-6 and TNF-α at each study time-point, adjusting for key covariates (pre-pregnancy body mass index, maternal education level, race/ethnicity). Maternal STRESS and AHEI-P were significantly inversely correlated in early (r=-0.164, p=0.018) and mid-pregnancy (r=-0.160, p=0.019), and AHEI-P from earlier gestational time-points correlated with later STRESS (early AHEI-P x mid STRESS: r=-0.168, p=0.017; mid AHEI-P x late STRESS: r=-0.142, p=0.041). In regression models, the interaction term was not associated with HOMA-IR or IL-6 at any gestational time-point. The stress-diet interaction term was significantly associated with TNF-α according to the following patterns: early AHEI-P*early STRESS vs early TNF-α (p=0.005); early AHEI-P*early STRESS vs mid TNF-α (p=0.002); early AHEI-P*mid STRESS vs mid TNF-α (p=0.005); mid AHEI-P*mid STRESS vs mid TNF-α (p=0.070); mid AHEI-P*late STRESS vs late TNF-α (p=0.011). Poor diet quality is significantly related to higher psychosocial stress levels in pregnant women across gestation, which may promote inflammation via TNF-α. Future prenatal studies should consider the combined effects of maternal stress and diet when evaluating either of these factors on pregnancy or infant outcomes.
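The composite described above (z-score each component, then subtract the positive components from the negative ones) can be sketched in a few lines; the diary scores below are purely illustrative, not study data:

```python
import statistics

def zscores(values):
    """Standardize a list of averaged diary scores to mean 0, SD 1."""
    mu = statistics.mean(values)
    sd = statistics.stdev(values)
    return [(v - mu) / sd for v in values]

# hypothetical averaged diary scores for four participants at one time-point
neg_mood  = zscores([2.1, 3.4, 1.8, 2.9])
perceived = zscores([3.0, 3.8, 2.2, 3.1])
pos_mood  = zscores([4.0, 2.9, 4.4, 3.2])
social_q  = zscores([3.8, 3.0, 4.1, 3.5])

# STRESS = (Negative mood + Perceived stress) - (Positive mood + Social quality)
stress = [(n + p) - (m + s)
          for n, p, m, s in zip(neg_mood, perceived, pos_mood, social_q)]
```

A higher composite reflects more negative mood and perceived stress relative to positive mood and social-interaction quality, matching the formula given in the abstract.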

Keywords: diet quality, inflammation, insulin resistance, nutrition, pregnancy, stress, tumor necrosis factor-alpha

Procedia PDF Downloads 200
105 Examining the Behavioral, Hygienic and Expectational Changes in Adolescents and Young Women during COVID-19 Quarantine in Colombia

Authors: Rocio Murad, Marcela Sanchez, Mariana Calderon Jaramillo, Danny Rivera, Angela Cifuentes, Daniela Roldán, Juan Carlos Rivillas

Abstract:

Women and girls have specific health needs, but during health pandemics such as COVID-19 they are less likely to have access to quality essential health information, commodities and services, or insurance coverage for routine and catastrophic health expenses, especially in rural and marginalized communities. This is compounded by multiple or intersecting inequalities, such as ethnicity, socioeconomic status, disability, age, geographic location, and sexual orientation, among others. Despite concerted collective action, there is a lack of information on the situation of women, adolescents and youth, including the gender inequalities exacerbated by the pandemic. Much more needs to be done to amplify the lived realities of women and adolescents in global and national advocacy and policy responses. The COVID-19 pandemic reflects the need for systematic advocacy policies based on the lived experiences of women and adolescents, underpinned by human rights. This research is part of an initiative of the Profamilia Association (the Solidarity Study), and its objective is twofold: i) to analyze the behavioral changes and immediate expectations of Colombians during the relaxation stage of the confinement measures decreed by the national government; and ii) to identify the needs, experiences and resilient practices of adolescents and young women during the COVID-19 crisis in Colombia. We performed a descriptive analysis of data collected by Profamilia through the Solidaridad study, an exploratory cross-sectional descriptive study that used subnational-level data from a non-probabilistic sample survey of 1,735 adults conducted between September 1 and 11, 2020. Interviews were conducted with key stakeholders about their experiences during COVID-19, along three key axes: i) main challenges for adolescents and young women; ii) examples of what has worked well in responding to the challenge; and iii) how and what services are or should be provided during COVID-19 (and beyond) to address the challenge.
Interviewees were selected based on a prior mapping of social groups of interest. In total, 23 adolescents and young women participated in the interviews. The results show that people adopted behavioral changes such as wearing masks, avoiding people with symptoms, and reducing mobility, but concerns also multiplied for many reasons, from effects on mental health, sexual health, and unattended reproductive health needs to the burden of care work and working from home. The favorable perception that people had at the beginning of the quarantine of the national and local government's response and actions to control COVID-19 decreased over the course of the quarantine. The challenges and needs of adolescents and young women were most evident during the most restrictive measures to contain the COVID-19 pandemic, which disrupted daily activities, education and work, and restricted mobility and social interaction. Concerns raised by participants included: impact on mental health and wellbeing due to the disruption of daily life; limitations in access to formal and informal education; food insecurity; migration; loss of livelihoods; lack of access to health information and services; limitations to sexual and reproductive health and rights; insecurity problems; and problems in communication and treatment among household members.

Keywords: COVID-19, changes in behavior, adolescents, women

Procedia PDF Downloads 108
104 Erectile Dysfunction in A Middle Aged Man 6 Years After Bariatric Surgery: A Case Report

Authors: Thaminda Liyanage, Chamila Shamika Kurukulasuriya

Abstract:

Introduction: Morbid obesity has been successfully treated with bariatric surgery for over 60 years. Although operative procedures have improved and associated complications have been reduced substantially, surgery still carries the risk of post-operative malabsorption, malnutrition and a range of gastrointestinal disorders. Excess weight by itself can impair libido in both sexes and cause erectile dysfunction in males by inducing a state of hypogonadotropic hypogonadism proportional to the degree of obesity. The impact of weight reduction on libido and sexual activity remains controversial; however, it is broadly accepted that weight loss improves sexual drive. Zinc deficiency secondary to malabsorption may lead to impaired testosterone synthesis in men, while excessive and/or rapid weight loss in females may result in reversible amenorrhoea leading to sub-fertility. Methods: We describe a 37-year-old male, 6 years post Roux-en-Y gastric bypass surgery, who presented with erectile dysfunction, loss of libido, worsening fatigue and generalized weakness of 4 months' duration. He also complained of constipation and frequent muscle cramps but denied having headache, vomiting or visual disturbances. The patient had lost 38 kg of body weight over the four years after gastric bypass surgery, from 135 kg (BMI 42.6 kg/m2) to 97 kg (BMI 30.6 kg/m2), and his weight had been stable for the past two years. He had no recognised co-morbidities at the time of the surgery and noted marked improvement in general wellbeing, physical fitness and psychological confidence post surgery, up until four months before presentation. Clinical examination revealed dry pale skin with normal body hair distribution, no thyroid nodules or goitre, normal-sized testicles and a normal neurological examination with no visual field defects or diplopia.
He had low serum testosterone, follicular stimulating hormone (FSH), luteinizing hormone (LH), T3, T4, thyroid stimulating hormone (TSH), insulin-like growth factor 1 (IGF-1) and 24-hour urine cortisol levels. Serum cortisol demonstrated an appropriate rise on an ACTH stimulation test, but growth hormone (GH) failed to increase on an insulin tolerance test. Other biochemical and haematological studies were normal, except for low zinc and folate with minimally raised liver enzymes. An MRI scan of the head confirmed a solid pituitary mass with no mass effect on the optic chiasm. Results: In this patient, the clinical, biochemical and radiological findings were consistent with anterior pituitary dysfunction. However, there were no features of raised intracranial pressure or neurological compromise. He was commenced on appropriate hormone replacement therapy and referred for neurosurgical evaluation. The patient reported marked improvement in his symptoms, especially libido and erectile dysfunction, on subsequent follow-up visits. Conclusion: Sexual dysfunction coupled with non-specific constitutional symptoms has multiple aetiologies. Clinical symptoms out of proportion to nutritional deficiencies post bariatric surgery should be thoroughly investigated. Close long-term follow-up is crucial for overall success.

Keywords: obesity, bariatric surgery, erectile dysfunction, loss of libido

Procedia PDF Downloads 283
103 Geographic Information System Based Multi-Criteria Subsea Pipeline Route Optimisation

Authors: James Brown, Stella Kortekaas, Ian Finnie, George Zhang, Christine Devine, Neil Healy

Abstract:

The use of GIS as an analysis tool for engineering decision making is now best practice in the offshore industry. GIS enables multidisciplinary data integration, analysis and visualisation which allows the presentation of large and intricate datasets in a simple map-interface accessible to all project stakeholders. Presenting integrated geoscience and geotechnical data in GIS enables decision makers to be well-informed. This paper is a successful case study of how GIS spatial analysis techniques were applied to help select the most favourable pipeline route. Routing a pipeline through any natural environment has numerous obstacles, whether they be topographical, geological, engineering or financial. Where the pipeline is subjected to external hydrostatic water pressure and is carrying pressurised hydrocarbons, the requirement to safely route the pipeline through hazardous terrain becomes absolutely paramount. This study illustrates how the application of modern, GIS-based pipeline routing techniques enabled the identification of a single most-favourable pipeline route crossing of a challenging seabed terrain. Conventional approaches to pipeline route determination focus on manual avoidance of primary constraints whilst endeavouring to minimise route length. Such an approach is qualitative, subjective and is liable to bias towards the discipline and expertise that is involved in the routing process. For very short routes traversing benign seabed topography in shallow water this approach may be sufficient, but for deepwater geohazardous sites, the need for an automated, multi-criteria, and quantitative approach is essential. This study combined multiple routing constraints using modern least-cost-routing algorithms deployed in GIS, hitherto unachievable with conventional approaches. The least-cost-routing procedure begins with the assignment of geocost across the study area. 
Geocost is defined as a numerical penalty score representing the hazard posed to the pipeline by each routing constraint (e.g. slope angle, rugosity, vulnerability to debris flows). All geocosted routing constraints are combined to generate a composite geocost map that is used to compute the least-geocost route between two defined terminals. The analyses were applied to select the most favourable pipeline route for a potential gas development in deep water. The study area is geologically complex, with a series of incised, potentially active canyons carved into a steep escarpment and evidence of extensive debris flows. A similar debris flow in the future could cause significant damage to a poorly-placed pipeline. Protruding inter-canyon spurs offer lower-gradient options for ascending the escarpment, but the vulnerability of these spurs to periodic failure is not well understood. Close collaboration between geoscientists, pipeline engineers, geotechnical engineers and, of course, the gas export pipeline operator guided the analyses and the assignment of geocosts. Shorter route length, less severe slope angles, and geohazard avoidance were the primary drivers in identifying the most favourable route.
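The least-cost-routing step described above can be illustrated with a plain Dijkstra search over a small composite geocost grid; the grid values and the 4-connected movement model are illustrative simplifications of what a GIS least-cost-path tool does, not the study's actual data:

```python
import heapq

def least_geocost_route(grid, start, end):
    """Dijkstra over a 2D geocost grid; the cost of a step is the geocost
    of the cell entered (start cell is free). Returns (total, path)."""
    rows, cols = len(grid), len(grid[0])
    best = {start: 0}
    prev = {}
    pq = [(0, start)]
    while pq:
        cost, cell = heapq.heappop(pq)
        if cell == end:
            break
        if cost > best.get(cell, float("inf")):
            continue
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nxt = cost + grid[nr][nc]
                if nxt < best.get((nr, nc), float("inf")):
                    best[(nr, nc)] = nxt
                    prev[(nr, nc)] = cell
                    heapq.heappush(pq, (nxt, (nr, nc)))
    path, cell = [end], end
    while cell != start:
        cell = prev[cell]
        path.append(cell)
    return best[end], path[::-1]

# composite geocost map: high values = steep slope / debris-flow hazard
geocost = [
    [1, 9, 1, 1],
    [1, 9, 1, 9],
    [1, 1, 1, 9],
    [9, 9, 1, 1],
]
total, route = least_geocost_route(geocost, (0, 0), (3, 3))
```

The returned route threads the low-geocost corridor between the two terminals, which is the same behaviour that steers the pipeline away from steep or hazard-prone cells in the composite geocost map.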

Keywords: geocost, geohazard, pipeline route determination, pipeline route optimisation, spatial analysis

Procedia PDF Downloads 406
102 Methotrexate Associated Skin Cancer: A Signal Review of Pharmacovigilance Center

Authors: Abdulaziz Alakeel, Abdulrahman Alomair, Mohammed Fouda

Abstract:

Introduction: Methotrexate (MTX) is an antimetabolite used to treat multiple conditions, including neoplastic diseases, severe psoriasis, and rheumatoid arthritis. Skin cancer is the out-of-control growth of abnormal cells in the epidermis, the outermost skin layer, caused by unrepaired DNA damage that triggers mutations; these mutations lead the skin cells to multiply rapidly and form malignant tumors. The aim of this review is to evaluate the risk of skin cancer associated with the use of methotrexate and to suggest regulatory recommendations if required. Methodology: The Signal Detection team at the Saudi Food and Drug Authority (SFDA) performed a safety review using the National Pharmacovigilance Center (NPC) database as well as the World Health Organization (WHO) VigiBase, along with literature screening, to retrieve related information for assessing the causality between skin cancer and methotrexate. The search was conducted in July 2020. Results: Four published articles support the association seen in the literature search: a randomized controlled trial published in 2020 revealed a statistically significant increase in skin cancer among MTX users; another study reported that methotrexate increases the risk of non-melanoma skin cancer when used in combination with immunosuppressant and biologic agents; in addition, the incidence of melanoma among methotrexate users was 3-fold that of the general population in a cohort study of rheumatoid arthritis patients; and the last article, a cohort study estimating the risk of cutaneous malignant melanoma (CMM), observed a statistically significant risk increase for CMM in MTX-exposed patients. The WHO database (VigiBase) was searched for individual case safety reports (ICSRs) reported for 'skin cancer' and 'methotrexate' use, which yielded 121 ICSRs. The initial review revealed that 106 cases were insufficiently documented for proper medical assessment.
However, the remaining fifteen cases were evaluated extensively by applying the WHO criteria of causality assessment. As a result, 30 percent of the cases showed that MTX could possibly cause skin cancer; five cases showed an unlikely association, and five were unassessable due to lack of information. The Saudi NPC database was searched to retrieve any reported cases for the combined terms methotrexate/skin cancer; however, no local cases have been reported to date. The ratio of the observed to the expected reporting rate for the drug/adverse-drug-reaction pair was estimated using the information component (IC), a measure developed by the WHO Uppsala Monitoring Centre. A positive IC reflects a stronger statistical association, while negative values indicate a weaker one, with zero as the null value. Results showed that the combination of 'methotrexate' and 'skin cancer' was observed more often than expected when compared to other medications in the WHO database (IC value of 1.2). Conclusion: The weighted cumulative evidence identified from global cases, data mining, and the published literature is sufficient to support a causal association between the risk of skin cancer and methotrexate. Therefore, health care professionals should be aware of this possible risk and may consider monitoring any signs or symptoms of skin cancer in patients treated with methotrexate.
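The information component mentioned above is a disproportionality statistic: roughly, the base-2 log of observed versus expected co-reporting counts. A simplified version of the commonly used shrinkage form, with hypothetical counts (not SFDA or VigiBase figures):

```python
import math

def information_component(n_combination, n_drug, n_reaction, n_total):
    """Simplified information component: log2 of observed vs expected
    co-reporting, with +0.5 shrinkage to stabilize small counts.
    (The UMC methodology additionally computes a credibility interval.)"""
    expected = n_drug * n_reaction / n_total
    return math.log2((n_combination + 0.5) / (expected + 0.5))

# hypothetical counts: reports with both drug and reaction, with the drug,
# with the reaction, and total reports in the database
ic = information_component(n_combination=121, n_drug=50_000,
                           n_reaction=4_000, n_total=10_000_000)
```

A positive value indicates the pair is reported more often than expected under independence; a negative value, less often.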

Keywords: methotrexate, skin cancer, signal detection, pharmacovigilance

Procedia PDF Downloads 114
101 i2kit: A Tool for Immutable Infrastructure Deployments

Authors: Pablo Chico De Guzman, Cesar Sanchez

Abstract:

Microservice architectures are increasingly common in distributed cloud applications due to their advantages in software composition, development speed, release-cycle frequency, and business-logic time to market. On the other hand, these architectures also introduce challenges in the testing and release phases of applications. Container technology solves some of these issues by providing reproducible environments, ease of software distribution, and isolation of processes. However, other issues remain unsolved in current container technology when dealing with multiple machines, such as networking for multi-host communication, service discovery, load balancing, or data persistency (even though some of these challenges are already solved by traditional cloud vendors in a very mature and widespread manner). Container cluster management tools, such as Kubernetes, Mesos, or Docker Swarm, attempt to solve these problems by introducing a new control layer where the unit of deployment is the container (or the pod, a set of strongly related containers that must be deployed on the same machine). These tools are complex to configure and manage, and they do not follow a pure immutable infrastructure approach, since servers are reused between deployments. Indeed, these tools introduce dependencies at execution time for solving networking or service discovery problems. If an error occurs in the control layer, which would affect running applications, specific expertise is required to perform ad hoc troubleshooting. As a consequence, it is not surprising that container cluster support is becoming a source of revenue for consulting services. This paper presents i2kit, a deployment tool based on the immutable infrastructure pattern, where the virtual machine is the unit of deployment. The input for i2kit is a declarative definition of a set of microservices, where each microservice is defined as a pod of containers.
Microservices are built into machine images using linuxkit, a tool for creating minimal Linux distributions specialized in running containers. These machine images are then deployed to one or more virtual machines, which are exposed through a cloud vendor load balancer. Finally, the load balancer endpoint is set in other microservices through an environment variable, providing service discovery. The toolkit i2kit reuses the best ideas from container technology to solve problems like reproducible environments, process isolation, and software distribution, and at the same time relies on mature, proven cloud vendor technology for networking, load balancing, and persistency. The result is a more robust system with no learning curve for troubleshooting running applications. We have implemented an open-source prototype that transforms i2kit definitions into AWS CloudFormation templates, where each microservice AMI (Amazon Machine Image) is created on the fly using linuxkit. Even though container cluster management tools have more flexibility for resource-allocation optimization, we argue that adding a new control layer carries more significant disadvantages. Resource allocation is greatly improved by using linuxkit, which has a very small footprint (around 35 MB). Also, the system is more secure, since linuxkit installs only the minimum set of dependencies needed to run containers. The toolkit i2kit is currently under development at the IMDEA Software Institute.
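As a purely illustrative sketch of the kind of declarative input described above (this is NOT i2kit's actual syntax, which the abstract does not show; all names and fields here are hypothetical), such a definition might look like:

```yaml
# Hypothetical declarative microservice definition -- illustrative only,
# not i2kit's real input format.
services:
  api:
    replicas: 2               # two VMs behind the cloud vendor load balancer
    containers:
      - name: web
        image: example/web:1.4
        ports: [8080]
      - name: logger           # co-located container: same pod, same VM image
        image: example/logshipper:latest
    environment:
      DB_ENDPOINT: ${db.loadbalancer}   # service discovery via env variable
```

In the scheme the paper describes, each such service would be compiled into a linuxkit-built machine image and deployed as one or more virtual machines, with the load-balancer endpoint injected into dependent services as an environment variable.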

Keywords: container, deployment, immutable infrastructure, microservice

Procedia PDF Downloads 179
100 Benzenepropanamine Analogues as Non-detergent Microbicidal Spermicide for Effective Pre-exposure Prophylaxis

Authors: Veenu Bala, Yashpal S. Chhonker, Bhavana Kushwaha, Rabi S. Bhatta, Gopal Gupta, Vishnu L. Sharma

Abstract:

According to the UNAIDS 2013 estimate, nearly 52% of all individuals living with HIV are now women of reproductive age (15–44 years). Seventy-five percent of HIV acquisitions occur through heterosexual contact and sexually transmitted infections (STIs), attributable to unsafe sexual behaviour. Each year, an estimated 500 million people acquire at least one of four STIs: chlamydia, gonorrhoea, syphilis and trichomoniasis. Trichomonas vaginalis (TV) is exclusively sexually transmitted in adults, accounting for 30% of STI cases and associated with pelvic inflammatory disease (PID), vaginitis and pregnancy complications in women. TV infection results in an impaired vaginal milieu, eventually favoring HIV transmission. In the absence of an effective prophylactic HIV vaccine, prevention of new infections has become a priority. It was thought worthwhile to integrate HIV prevention and reproductive health services, including protection against unintended pregnancy, for women, as both are related to unprotected sex. Initially, nonoxynol-9 (N-9) had been proposed as a spermicidal agent with microbicidal activity, but on the contrary, it increased HIV susceptibility due to its surfactant action. Thus, to meet the urgent need for novel, woman-controlled, non-detergent microbicidal spermicides, benzenepropanamine analogues have been synthesized. At first, five benzenepropanamine-dithiocarbamate hybrids were synthesized and evaluated for their spermicidal, anti-Trichomonas and antifungal activities, along with safety profiling against cervicovaginal cells. To further broaden the scope of the study, benzenepropanamine was hybridized with thiourea to introduce anti-HIV potential. The synthesized hybrid molecules were evaluated for their reverse transcriptase (RT) inhibition, spermicidal, anti-Trichomonas and antimicrobial activities, as well as their safety against vaginal flora and cervical cells.
Simulated vaginal fluid (SVF) stability and the pharmacokinetics of the most potent compound versus N-9 were examined in female New Zealand (NZ) rabbits to observe its absorption through the vaginal wall into the systemic circulation and subsequent exposure in blood plasma. The study yielded a most promising compound, N-butyl-4-(3-oxo-3-phenylpropyl)piperazin-1-carbothioamide (29), exhibiting a better activity profile than N-9: RT inhibition (72.30%), anti-Trichomonas activity (MIC, 46.72 µM against the MTZ-susceptible strain and 187.68 µM against the resistant strain), spermicidal activity (MEC, 0.01%) and antifungal activity (MIC, 3.12–50 µg/mL) against four fungal strains. Its high safety against vaginal epithelium (HeLa cells), compatibility with vaginal flora (lactobacillus), SVF stability and minimal vaginal absorption supported its suitability for topical vaginal application. A docking study was performed to gain insight into the binding mode and interactions of compound (29) with HIV-1 reverse transcriptase; it revealed that compound (29) interacted with HIV-1 RT similarly to the standard drug Nevirapine. It may be concluded that hybridization of the benzenepropanamine and thiourea moieties resulted in a novel lead with multiple activities, including RT inhibition. Further lead optimization may yield effective vaginal microbicides combining spermicidal, anti-Trichomonas, antifungal and anti-HIV potential with enhanced safety to cervicovaginal cells in comparison to nonoxynol-9.

Keywords: microbicidal, nonoxynol-9, reverse transcriptase, spermicide

Procedia PDF Downloads 344
99 Construction of an Assessment Tool for Early Childhood Development in the World of DiscoveryTM Curriculum

Authors: Divya Palaniappan

Abstract:

Early childhood assessment tools must measure the quality and appropriateness of a curriculum with respect to the culture and age of the children. Preschool assessment tools lack psychometric properties and were developed to measure only a few areas of development, such as specific skills in music, art and adaptive behavior. Existing preschool assessment tools in India are predominantly informal and are fraught with the judgmental bias of observers. The World of Discovery TM curriculum focuses on accelerating the physical, cognitive, language, social and emotional development of pre-schoolers in India through various activities. The curriculum caters to every child irrespective of their dominant intelligence, as per Gardner's theory of multiple intelligences, which concluded that "even students as young as four years old present quite distinctive sets and configurations of intelligences". The curriculum introduces a new theme every week, where concepts are explained through various activities so that children with different dominant intelligences can understand them. For example, the ‘Insects’ theme is explained through rhymes, craft and a counting corner, so children with one of these dominant intelligences (musical, bodily-kinesthetic or logical-mathematical) can grasp the concept. The child’s progress is evaluated using an assessment tool that measures a cluster of inter-dependent developmental areas (physical, cognitive, language, social and emotional development), which for the first time renders a multi-domain approach. The assessment tool is a 5-point rating scale that measures these developmental aspects: cognitive, language, physical, social and emotional. Each activity strengthens one or more of the developmental aspects. In the cognitive corner, the child’s perceptual reasoning, pre-math abilities, hand-eye co-ordination and fine motor skills can be observed and evaluated.
The tool differs from traditional assessment methodologies by providing a framework that allows teachers to objectively assess a child’s continuous development with respect to specific activities in real time. A pilot study of the tool was conducted with a sample of 100 children aged 2.5 to 3.5 years. The data were collected over a period of 3 months across 10 centers in Chennai, India, scored by the class teacher once a week. The teachers were trained by psychologists on age-appropriate developmental milestones to minimize observer bias. The norms were calculated from the mean and standard deviation of the observed data. The results indicated high internal consistency among parameters and that cognitive development improved with physical development; a significant positive relationship between physical and cognitive development has also been observed among children in a study conducted by Sibley and Etnier. In children, comprehension ability was found to be greater than reasoning and pre-math abilities, consistent with the preoperational stage of Piaget’s theory of cognitive development. The average scores of various parameters obtained through the tool corroborate psychological theories of child development, offering strong face validity. The study provides a comprehensive mechanism to assess a child’s development and differentiate high performers from the rest. Based on the average scores, the difficulty level of activities can be increased or decreased to nurture the development of pre-schoolers, and appropriate teaching methodologies can be devised.
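The norming step described above (norms derived from the mean and standard deviation of observed scores) can be sketched as follows; the ratings and the ±1 SD cut-offs are illustrative assumptions, not the study's actual norms:

```python
import statistics

def norm_bands(scores):
    """Derive simple norm cut-offs from observed mean and SD:
    below mean - 1 SD, within ±1 SD, above mean + 1 SD."""
    mu = statistics.mean(scores)
    sd = statistics.stdev(scores)
    return mu - sd, mu + sd

def classify(score, low, high):
    """Place one rating relative to the derived norm bands."""
    if score < low:
        return "needs support"
    if score > high:
        return "high performer"
    return "typical"

# hypothetical weekly 5-point ratings for one developmental parameter
observed = [3.2, 3.8, 2.9, 4.1, 3.5, 3.0, 3.6, 4.4, 2.7, 3.8]
low, high = norm_bands(observed)
label = classify(4.4, low, high)
```

Scores more than one standard deviation above the observed mean mark out the high performers the abstract refers to; the same bands can guide adjusting activity difficulty up or down.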

Keywords: child development, early childhood assessment, early childhood curriculum, quantitative assessment of preschool curriculum

Procedia PDF Downloads 362
98 Multiphysics Coupling Between Hypersonic Reactive Flow and Thermal Structural Analysis with Ablation for TPS of Space Launchers

Authors: Margarita Dufresne

Abstract:

This study is devoted to the development of a TPS for small reusable space launchers. We have used the SIRIUS design for the S1 prototype. Multiphysics coupling between the hypersonic reactive flow and the thermo-structural analysis, with and without ablation, is provided by STAR-CCM+ and COMSOL Multiphysics and by FASTRAN and ACE+. The flow around hypersonic flight vehicles is characterized by the interaction of multiple shocks and the interaction of shocks with boundary layers. These interactions can have a very strong impact on the aeroheating experienced by the flight vehicle. A real gas implies the existence of a gas in equilibrium or non-equilibrium. The Mach number ranges from 5 to 10 for the first-stage flight. The goals of this effort are to validate the iterative coupling of the hypersonic physics models in STAR-CCM+ and FASTRAN with COMSOL Multiphysics and ACE+. COMSOL Multiphysics and ACE+ are used for the thermal structural analysis to simulate conjugate heat transfer, with conduction, free convection, and radiation, driven by the heat flux from the hypersonic flow. The reactive simulations involve an air chemistry model of five species: N, N2, NO, O, and O2. Seventeen chemical reactions, involving dissociation and recombination, are included in the Dunn/Kang mechanism. Forward reaction rate coefficients based on a modified Arrhenius equation are computed for each reaction. The algorithm employed to solve the reactive equations uses a second-order numerical scheme obtained by a MUSCL (Monotone Upstream-centered Schemes for Conservation Laws) extrapolation process in the structured case, with AUSM+ flux-vector splitting for the coupled inviscid flux. The MUSCL third-order scheme in STAR-CCM+ provides third-order spatial accuracy, except in the vicinity of strong shocks, where, due to limiting, the spatial accuracy is reduced to second order; it provides reduced dissipation compared to the second-order discretization scheme.
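The modified Arrhenius form mentioned above can be written as k_f = A T^n exp(-Ta/T). A minimal sketch follows; the coefficients A, n, and Ta below are placeholders for a dissociation-like reaction, not the actual Dunn/Kang values.

```python
# Sketch of the modified Arrhenius forward rate coefficient,
# k_f = A * T**n * exp(-Ta / T). Coefficient values are illustrative only.
import math

def arrhenius_forward_rate(T, A, n, Ta):
    """Pre-exponential A, temperature exponent n, activation temperature Ta (K)."""
    return A * T**n * math.exp(-Ta / T)

# Illustrative dissociation-like reaction evaluated at two flow temperatures
k_cold = arrhenius_forward_rate(2000.0, A=2.0e18, n=-1.0, Ta=59500.0)
k_hot = arrhenius_forward_rate(10000.0, A=2.0e18, n=-1.0, Ta=59500.0)
```

As expected for a dissociation reaction, the rate rises steeply with temperature across the 300 K to 30,000 K range considered in the study.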
The initial unstructured mesh is refined using an initial pressure-gradient technique for the shock/shock interaction test case. The turbulence model suggested by NASA is k-omega SST with a1 = 0.355 and the QCR (quadratic) constitutive option. k and omega are specified explicitly in the initial conditions and in regions: k = 1e-6 * Uinf^2 and omega = 5 * Uinf / L, where L is the mean aerodynamic chord or characteristic length. We put into practice modelling tips for hypersonic flow: an automatic coupled solver, adaptive mesh refinement to capture and refine the shock front, and the Advancing Layer Mesher with a larger prism layer thickness to capture the shock front on blunt surfaces. The temperature ranges from 300 K to 30,000 K and the pressure between 1e-4 and 100 atm. FASTRAN and ACE+ are coupled to provide a high-fidelity solution for the hot hypersonic reactive flow and conjugate heat transfer. The results of both approaches match the CIRCA wind tunnel results.
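The freestream k-omega initialization quoted above is a simple recipe that can be sketched directly; the freestream velocity and reference length in the usage line are assumed values for illustration, not parameters from the study.

```python
# Sketch of the quoted SST freestream initialization:
# k = 1e-6 * Uinf**2 and omega = 5 * Uinf / L,
# with L the mean aerodynamic chord or another characteristic length.
def sst_freestream(U_inf, L_ref):
    """Return (k, omega) initial/region values for the k-omega SST model."""
    k = 1e-6 * U_inf**2          # turbulent kinetic energy, m^2/s^2
    omega = 5.0 * U_inf / L_ref  # specific dissipation rate, 1/s
    return k, omega

# Assumed example: ~Mach 5 freestream speed and a 0.5 m characteristic length
k0, w0 = sst_freestream(U_inf=1700.0, L_ref=0.5)
```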

Keywords: hypersonic, first stage, high speed compressible flow, shock wave, aerodynamic heating, conjugate heat transfer, conduction, free convection, radiation, fastran, ace+, comsol multiphysics, star-ccm+, thermal protection system (tps), space launcher, wind tunnel

Procedia PDF Downloads 72
97 Developing a Sustainable Transit Planning Index Using Analytical Hierarchy Process Method for ZEB Implementation in Canada

Authors: Mona Ghafouri-Azar, Sara Diamond, Jeremy Bowes, Grace Yuan, Aimee Burnett, Michelle Wyndham-West, Sara Wagner, Anand Pariyarath

Abstract:

Transportation is the fastest-growing source of greenhouse gas emissions worldwide. In Canada, it is responsible for 23% of total CO2 emissions from fuel combustion, and emissions from the transportation sector are the second-largest source after the oil and gas sector. Currently, most Canadian public transportation systems rely on buses that operate on fossil fuels. Canada is investing billions of dollars to replace diesel buses with electric buses, as this is perceived to have a significant impact on climate mitigation. This paper focuses on the possible impacts of zero emission buses (ZEB) on sustainable development, considering three dimensions of sustainability: environmental quality, economic growth, and social development. A sustainable transportation system is one that is safe, affordable, accessible, efficient, and resilient, and that contributes minimal emissions of carbon and other pollutants. To enable implementation of these goals, relevant indicators that measure progress towards a sustainable transportation system were selected and defined, drawn from Canadian and international examples. Studies compare different European cities in terms of development, sustainability, and infrastructure by using transport performance indicators, and a Normalized Transport Sustainability index measures and compares policies in different urban areas and allows fine-tuning of policies. Analysts use a number of methods for sustainability analysis, such as cost-benefit analysis (CBA) to assess economic benefit, life-cycle assessment (LCA) to assess social, economic, and environmental factors and goals, and multi-criteria decision making (MCDM) analysis, which can compare differing stakeholder preferences. A multi-criteria decision making approach is an appropriate methodology to plan and evaluate sustainable transit development and to provide insights and meaningful information for decision makers and transit agencies.
It is essential to develop a system that aggregates specific discrete indices to assess the sustainability of transportation systems. These prioritize indicators appropriate for the different Canadian transit system agencies and their preferences and requirements. This study will develop an integrating index that allies existing discrete indexes to support a reliable comparison between the current transportation system (diesel buses) and the new ZEB system emerging in Canada. As a first step, the indexes for each category are selected, and the index matrix is constructed. Second, the selected indicators are normalized to remove any inconsistency between them. Next, the normalized matrix is weighted based on the relative importance of each index to the main domains of sustainability using the analytical hierarchy process (AHP) method. This is accomplished through expert judgement of the relative importance of different attributes with respect to the goals, expressed via a pairwise comparison matrix. The consideration of multiple environmental, economic, and social factors (including equity and health) is integrated into a sustainable transit planning index (STPI) which supports realistic ZEB implementation in Canada and beyond and is useful to different stakeholders, agencies, and ministries.
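The AHP weighting step above can be sketched as follows. This is a minimal illustration: the pairwise judgements over the three sustainability domains are invented, not taken from the study, and the geometric-mean method is used as the standard approximation of the principal-eigenvector priorities.

```python
# Sketch of AHP weighting: expert pairwise comparisons (Saaty's 1-9 scale)
# are converted into priority weights via the geometric-mean approximation
# of the principal eigenvector. The judgements below are illustrative.
import math

pairwise = [
    [1.0, 3.0, 5.0],   # environment vs (environment, economy, social)
    [1/3, 1.0, 2.0],   # economy
    [1/5, 1/2, 1.0],   # social
]

def ahp_weights(matrix):
    """Geometric-mean approximation of AHP priority weights (sums to 1)."""
    n = len(matrix)
    gmeans = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gmeans)
    return [g / total for g in gmeans]

weights = ahp_weights(pairwise)
```

With these illustrative judgements, the environment domain receives the largest weight; the normalized indicator matrix would then be multiplied by these weights to produce the composite STPI score.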

Keywords: zero emission buses, sustainability, sustainable transit, transportation, analytical hierarchy process, environment, economy, social

Procedia PDF Downloads 128
96 Gene Expression Meta-Analysis of Potential Shared and Unique Pathways Between Autoimmune Diseases Under anti-TNFα Therapy

Authors: Charalabos Antonatos, Mariza Panoutsopoulou, Georgios K. Georgakilas, Evangelos Evangelou, Yiannis Vasilopoulos

Abstract:

The extended tissue damage and severe clinical outcomes of autoimmune diseases, accompanied by high annual costs to the overall health care system, highlight the need for efficient therapy. Increasing knowledge of the pathophysiology of specific chronic inflammatory diseases, namely Psoriasis (PsO), the Inflammatory Bowel Diseases (IBD) consisting of Crohn's disease (CD) and Ulcerative Colitis (UC), and Rheumatoid Arthritis (RA), has provided insights into the underlying mechanisms that maintain the inflammation, such as Tumor Necrosis Factor alpha (TNF-α). Hence, anti-TNFα biological agents pose an ideal therapeutic approach. Despite the efficacy of anti-TNFα agents, several clinical trials have shown that 20-40% of patients do not respond to treatment. Nowadays, high-throughput technologies have been recruited to elucidate the complex interactions in multifactorial phenotypes, the most ubiquitous of which are transcriptome quantification analyses. In this context, a random-effects meta-analysis of available gene expression cDNA microarray datasets was performed between responders and non-responders to anti-TNFα therapy in patients with IBD, PsO, and RA. Publicly available datasets were systematically searched from inception to the 10th of November 2020 and selected for further analysis if they assessed the response to anti-TNFα therapy with clinical score indexes from inflamed biopsies. Specifically, 4 IBD (79 responders/72 non-responders), 3 PsO (40 responders/11 non-responders), and 2 RA (16 responders/6 non-responders) datasets were selected. After the separate pre-processing of each dataset, 4 separate meta-analyses were conducted: three disease-specific and a single combined meta-analysis on the disease-specific results. The MetaVolcano R package (v.1.8.0) was utilized for a random-effects meta-analysis through the Restricted Maximum Likelihood (REML) method.
The top 1% of the most consistently perturbed genes in the included datasets was highlighted through the TopConfects approach while maintaining a 5% False Discovery Rate (FDR). Genes were considered Differentially Expressed (DEGs) if P ≤ 0.05, |log2(FC)| ≥ log2(1.25), and they were perturbed in at least 75% of the included datasets. Over-representation analysis was performed using Gene Ontology and Reactome Pathways for both up- and down-regulated genes in all 4 performed meta-analyses. Protein-protein interaction networks were also incorporated in the subsequent analyses with STRING v11.5 and Cytoscape v3.9. The disease-specific meta-analyses detected multiple distinct pro-inflammatory and immune-related down-regulated genes for each disease, such as NFKBIA, IL36, and IRAK1, respectively. Pathway analyses revealed unique and shared pathways between the diseases, such as Neutrophil Degranulation and Signaling by Interleukins. The combined meta-analysis unveiled 436 DEGs, 86 of which were up-regulated and 350 down-regulated, confirming the aforementioned shared pathways and genes, as well as uncovering genes that participate in anti-inflammatory pathways, namely IL-10 signaling. The identification of key biological pathways and regulatory elements is imperative for the accurate prediction of a patient's response to biological drugs. Meta-analysis of such gene expression data can aid the challenging effort to unravel the complex interactions implicated in the response to anti-TNFα therapy in patients with PsO, IBD, and RA, as well as distinguish gene clusters and pathways that are altered across this heterogeneous phenotype.
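The three DEG criteria stated above translate directly into a filter. The sketch below assumes hypothetical gene records (meta p-value, meta log2 fold change, and the count of datasets in which the gene was perturbed); only the thresholds come from the abstract.

```python
# Sketch of the DEG call: a gene passes if P <= 0.05, |log2(FC)| >= log2(1.25),
# and it was perturbed in at least 75% of the included datasets.
import math

THRESH_P = 0.05
THRESH_LFC = math.log2(1.25)
THRESH_FRAC = 0.75

def is_deg(p_value, log2_fc, n_perturbed, n_datasets):
    """Apply the three meta-analysis DEG criteria."""
    return (p_value <= THRESH_P
            and abs(log2_fc) >= THRESH_LFC
            and n_perturbed / n_datasets >= THRESH_FRAC)

# Hypothetical records: (meta p-value, meta log2 FC, perturbed in, total datasets)
genes = {
    "GENE_A": (0.01, -0.50, 8, 9),  # passes all three criteria
    "GENE_B": (0.20, -0.80, 9, 9),  # fails on p-value
    "GENE_C": (0.01, 0.10, 9, 9),   # fails on fold change
}
degs = [g for g, rec in genes.items() if is_deg(*rec)]
```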

Keywords: anti-TNFα, autoimmune, meta-analysis, microarrays

Procedia PDF Downloads 182