Search results for: school based
3725 A Machine Learning Model for Dynamic Prediction of Chronic Kidney Disease Risk Using Laboratory Data, Non-Laboratory Data, and Metabolic Indices
Authors: Amadou Wurry Jallow, Adama N. S. Bah, Karamo Bah, Shih-Ye Wang, Kuo-Chung Chu, Chien-Yeh Hsu
Abstract:
Chronic kidney disease (CKD) is a major public health challenge with high prevalence, rising incidence, and serious adverse consequences. Developing effective risk prediction models is a cost-effective approach to predicting and preventing complications of CKD. This study aimed to develop an accurate machine learning model that can dynamically identify individuals at risk of CKD using various kinds of diagnostic data, with or without laboratory data, at different follow-up points. Creatinine is a key component used to predict CKD. These models will enable affordable and effective screening for CKD even with incomplete patient data, such as the absence of creatinine testing. This retrospective cohort study included data on 19,429 adults provided by a private research institute and screening laboratory in Taiwan, gathered between 2001 and 2015. Univariate Cox proportional hazard regression analyses were performed to determine the variables with high prognostic value for predicting CKD. We then identified interacting variables and grouped them according to diagnostic data categories. Our models used three types of data gathered at three points in time: non-laboratory, laboratory, and metabolic indices data. Next, we used subgroups of variables within each category to train two machine learning models (Random Forest and XGBoost). Our machine learning models can dynamically discriminate individuals at risk of developing CKD. All the models performed well using all three kinds of data, with or without laboratory data. Using only non-laboratory data (such as age, sex, body mass index (BMI), and waist circumference), both models predict chronic kidney disease as accurately as models using laboratory and metabolic indices data. Our machine learning models have demonstrated the use of different categories of diagnostic data for CKD prediction, with or without laboratory data. The machine learning models are simple to use and flexible because they work even with incomplete data and can be applied in any clinical setting, including settings where laboratory data are difficult to obtain.
Keywords: chronic kidney disease, glomerular filtration rate, creatinine, novel metabolic indices, machine learning, risk prediction
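As a hedged illustration only (not code from the study), the sketch below trains two tree-ensemble classifiers on synthetic non-laboratory features such as age, sex, BMI, and waist circumference; the column names and data are assumptions, and scikit-learn's GradientBoostingClassifier stands in for XGBoost to keep the example dependency-light.

```python
# Illustrative sketch only: synthetic data, hypothetical column names.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
df = pd.DataFrame({
    "age": rng.integers(20, 80, n),
    "sex": rng.integers(0, 2, n),                 # 0 = female, 1 = male
    "bmi": rng.normal(25, 4, n),
    "waist_cm": rng.normal(85, 12, n),
})
# Synthetic outcome loosely tied to the predictors (for demonstration only).
risk = 0.03 * (df["age"] - 50) + 0.08 * (df["bmi"] - 25) + 0.02 * (df["waist_cm"] - 85)
df["ckd"] = (risk + rng.normal(0, 1, n) > 1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    df.drop(columns="ckd"), df["ckd"], test_size=0.3, random_state=0)

# Random Forest on non-laboratory features only.
rf = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_train, y_train)
# Gradient boosting stands in here for XGBoost.
gb = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

for name, model in [("Random Forest", rf), ("Gradient Boosting", gb)]:
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```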
Procedia PDF Downloads 105
3724 Multi-Objective Discrete Optimization of External Thermal Insulation Composite Systems in Terms of Thermal and Embodied Energy Performance
Authors: Berfin Yildiz
Abstract:
These days, increasing global warming effects, the limited amount of energy resources, and similar pressures necessitate awareness in every professional group. The architecture and construction sectors are responsible for both the embodied and operational energy of the materials. This responsibility has led designers to seek alternative solutions for energy-efficient material selection. The choice of energy-efficient material requires consideration of the entire life cycle, including the building's production, use, and disposal energy. The aim of this study is to investigate the method of material selection of external thermal insulation composite systems (ETICS). Embodied and in-use energy values of material alternatives were used for the evaluation in this study. The operational energy is calculated according to the U-value calculation method defined in the TS 825 (Thermal Insulation Requirements) standard for Turkey, and the embodied energy is calculated based on the manufacturer's Energy Performance Declaration (EPD). ETICS consists of a wall, adhesive, insulation, lining, mechanical, mesh, and exterior finishing materials. In this study, lining, mechanical, and mesh materials were ignored because EPD documents could not be obtained. The material selection problem is designed as a hypothetical volume (5x5x3 m) and defined as a multi-objective discrete optimization problem for external thermal insulation composite systems. Defining the problem as a discrete optimization problem is important in order to choose between materials of various thicknesses and sizes. Since production and use energy values, which are determined as optimization objectives in the study, are often conflicting values, material selection is defined as a multi-objective optimization problem, and it is aimed to obtain many solution alternatives by using the Hypervolume (HypE) algorithm. The optimization started with a population of 100 individuals and continued for 50 generations. According to the obtained results, autoclaved aerated concrete and Ponce block as wall materials and glass wool as insulation material gave better results.
Keywords: embodied energy, multi-objective discrete optimization, performative design, thermal insulation
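The following is a minimal sketch, under assumed material properties and climate figures, of how a discrete ETICS selection with two conflicting objectives (operational heating energy from a U-value calculation versus embodied energy) can be evaluated; a plain non-dominated filter replaces the HypE algorithm used in the study, and none of the numbers come from TS 825 or actual EPDs.

```python
# Illustrative sketch: discrete ETICS material selection with two conflicting
# objectives. Material properties and climate numbers are made up, and a plain
# non-dominated filter replaces the HypE algorithm used in the study.
from itertools import product

R_SI, R_SE = 0.13, 0.04          # surface resistances (m2K/W), typical values
AREA = 5 * 3.0                   # one hypothetical wall of the 5x5x3 m volume (m2)
DEGREE_HOURS = 60_000            # assumed annual heating degree-hours (Kh)

# name: (thickness m, conductivity W/mK, embodied energy MJ/m3) -- illustrative only
walls = {"AAC_200": (0.20, 0.15, 1200), "Ponce_190": (0.19, 0.22, 900)}
insulations = {"glass_wool_50": (0.05, 0.035, 1400),
               "glass_wool_80": (0.08, 0.035, 1400),
               "EPS_50": (0.05, 0.040, 3500)}

def evaluate(wall, ins):
    dw, kw, ew = walls[wall]
    di, ki, ei = insulations[ins]
    u = 1.0 / (R_SI + dw / kw + di / ki + R_SE)          # U-value (W/m2K)
    operational = u * AREA * DEGREE_HOURS / 1000 * 3.6    # annual heating (MJ)
    embodied = (dw * ew + di * ei) * AREA                 # embodied energy (MJ)
    return operational, embodied

options = [((w, i), evaluate(w, i)) for w, i in product(walls, insulations)]

def dominated(a, b):   # b dominates a if b is no worse in both and better in one
    return all(y <= x for x, y in zip(a, b)) and any(y < x for x, y in zip(a, b))

pareto = [(combo, obj) for combo, obj in options
          if not any(dominated(obj, other) for _, other in options)]
for combo, (op, em) in pareto:
    print(combo, f"operational={op:.0f} MJ/yr, embodied={em:.0f} MJ")
```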
Procedia PDF Downloads 141
3723 Migrants and Non Migrants: Class Level Distinctions from a Village Level Analysis of Mahabubnagar District
Authors: T. P. Muhammed Jamsheer
Abstract:
This paper tries to explain some of the differences between migrant and non-migrant households by taking ten indicators: land ownership, land distribution, lease in of land, lease out of land, demand for labour, supply of labour, land operational potential, holding of agricultural implements and livestock, irrigation potential of households, and credit holding by the households of a highly dry, drought-affected, poverty-stricken, multi-caste village with pluralistic sub-castes in the very backward Mahabubnagar district of Andhra Pradesh. The paper is based purely on fieldwork and a census survey of 298 households in a highly dry village called Keppatta in Bhoothpur mandal. One of the main objectives of the paper is to find out the factors which differentiate migrant and non-migrant households and the distress elements which forced poor peasants to migrate out of the village. It concludes that there are differences between migrant and non-migrant households and between their categories: except for two indicators, lease in and lease out, all other indicators (land holding pattern, demand and supply of labour, land operation, irrigation potential, implements and livestock, and credit facilities) show that non-migrant households have a higher share than migrant households. The paper also shows that landed households migrate more: among the BC and FC households, landed households are the migrants, while among SC households the landless migrate more, which contradicts the conclusion of the existing literature that the landless migrate more than the landed. It also shows that as landholding in acres increases, the share of SC households declines while the share of FC households increases among both migrant and non-migrant households. Class-wise, SC households are in greater distress than any other class, which might lead to the highest share of migrants from the village. In the logistic econometric model relating migration to the ten variables, the results show that supply of labour, lease in of land, and family size are statistically significantly related to migration, while all other variables show no significant relation to migration, although the theoretical explanation suggests different results.
Keywords: class, migrants, non migrants, economic indicators, distress factors
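As an illustrative sketch only, the snippet below fits a logistic model of household migration status on a few of the indicators named above; the data and variable names are hypothetical stand-ins for the Keppatta survey data.

```python
# Illustrative sketch: logistic model of migration status on household indicators.
# The data and variable names are hypothetical, not the survey data from Keppatta.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 298  # same order of magnitude as the census survey
df = pd.DataFrame({
    "labour_supply": rng.poisson(2, n),       # household members supplying labour
    "lease_in_acres": rng.exponential(0.5, n),
    "family_size": rng.integers(2, 9, n),
    "land_owned_acres": rng.exponential(1.5, n),
})
logit = (0.5 * df["labour_supply"] + 0.6 * df["lease_in_acres"]
         + 0.3 * df["family_size"] - 0.2 * df["land_owned_acres"] - 2.5)
df["migrant"] = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = sm.add_constant(df[["labour_supply", "lease_in_acres",
                        "family_size", "land_owned_acres"]])
model = sm.Logit(df["migrant"], X).fit(disp=0)
print(model.summary())   # coefficients and p-values; odds ratios via np.exp(params)
```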
Procedia PDF Downloads 333
3722 Maximizing Giant Prawn Resource Utilization in Banjar Regency, Indonesia: A CPUE and MSY Analysis
Authors: Ahmadi, Iriansyah, Raihana Yahman
Abstract:
The giant freshwater prawn (Macrobrachium rosenbergii de Man, 1879) is a valuable species for fisheries and aquaculture, especially in Southeast Asia, including Indonesia, due to its high market demand and potential for export. The growing demand for prawns is straining the sustainability of the Banjar Regency fishery. To ensure the long-term sustainability and economic viability of prawn fishing in this region, it is imperative to implement evidence-based management practices. This requires comprehensive data on the Catch per Unit Effort (CPUE), Maximum Sustainable Yield (MSY) and the current rate of prawn resource exploitation. This study analyzed five years of prawn catch data (2019-2023) obtained from the South Kalimantan Marine and Fisheries Services. Fishing gears (e.g., hook & line and cast net) were first standardized with the Fishing Power Index, and then the standardized effort and MSY were calculated. The intercept (a) and the slope (b) values of the regression curve were used to estimate the catch-maximum sustainable yield (CMSY) and optimal fishing effort (Fopt) levels within the framework of the Surplus Production Model. The estimated rates of resource utilization were then compared to the criteria of The National Commission of Marine Fish Stock Assessment. The findings showed that the CPUE value peaked in 2019 at 33.48 kg/trip, while the lowest value was observed in 2022 at 5.12 kg/trip. The CMSY value was estimated to be 17,396 kg/year, corresponding to the Fopt level of 1,636 trips/year. The highest utilization rate, 56.90%, was recorded in 2020, while the lowest, 46.16%, was observed in 2021. The annual utilization rates were classified as “medium”, suggesting that increasing fishing effort by 45% could potentially maximize prawn catches at an optimum level. These findings provide a baseline for sustainable fisheries management in the region.
Keywords: giant prawns, CPUE, fishing power index, sustainable potential, utilization rate
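A minimal sketch of the Schaefer-type surplus production calculation described above follows: CPUE is regressed on standardized effort, and CMSY and Fopt follow from the intercept and slope; the catch and effort figures are invented placeholders, not the 2019-2023 South Kalimantan data.

```python
# Illustrative sketch of the Schaefer surplus-production calculation.
# Catch/effort values below are invented placeholders, not the 2019-2023 data.
import numpy as np

effort = np.array([900, 1100, 1400, 1900, 1700])      # standardized trips/year
catch_kg = np.array([15000, 16500, 17000, 14500, 15500])
cpue = catch_kg / effort                               # kg per trip

b, a = np.polyfit(effort, cpue, 1)                     # CPUE = a + b * E (b < 0 expected)
cmsy = -a**2 / (4 * b)                                 # maximum sustainable yield (kg/yr)
f_opt = -a / (2 * b)                                   # optimal effort (trips/yr)
utilization = catch_kg / cmsy * 100                    # utilization rate (%)

print(f"a={a:.3f}, b={b:.6f}")
print(f"CMSY ~ {cmsy:.0f} kg/yr, Fopt ~ {f_opt:.0f} trips/yr")
print("annual utilization (%):", np.round(utilization, 1))
```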
Procedia PDF Downloads 16
3721 The Influence of Wildlife Watching Experience on Tourists’ Connection to Wildlife Conservation Caring and Awareness
Authors: Fiffy Hanisdah Saikim, Bruce Prideaux
Abstract:
One of the aims of wildlife tourism is to educate visitors about the threats facing wildlife, in general, and the actions needed to protect the environment and maintain biodiversity. Annually, millions of tourists visit natural areas and zoos primarily to view flagship species such as rhinos and elephants. Venues rely on the inherent charisma of these species to increase visitation and anchor conservation efforts. Expected visitor outcomes from the use of flagships include raised levels of awareness and pro-conservation behaviors. However, the role of flagships in wildlife tourism has been criticized for not delivering conservation benefits for species of interest or biodiversity and producing negative site impacts. Furthermore, little is known about how the connection to a species influences conservation behaviors. This paper addresses this gap in knowledge by extending previous work exploring wildlife tourism to include the emotional connection formed with wildlife species and pro-conservation behaviors for individual species and biodiversity. This paper represents a substantial contribution to the field because (a) it incorporates the role of the experience in understanding how tourists connect with a species and how this connection influences pro-conservation behaviors; and (b) is the first attempt to operationalize Conservation Caring as a measure of tourists’ connection with a species. Existing studies have investigated how specific elements, such as interpretation or species’ morphology may influence programmatic goals or awareness. However, awareness is a poor measure of an emotional connection with an animal. Furthermore, there has not been work done to address the holistic nature of the wildlife viewing experience, and its subsequent influence on behaviors. Results based on the structural equation modelling, support the validity of Conservation Caring as a factor; the ability of wildlife tourism to influence Conservation Caring; and that this connection is a strong predictor of conservation awareness behaviors. These findings suggest wildlife tourism can deliver conservation outcomes. The studies in this paper also provide a valuable framework for structuring wildlife tourism experiences to align with flagship related conservation outcomes, and exploring a wider assemblage of species as potential flagships.Keywords: wildlife tourism, conservation caring, conservation awareness, structural equation modelling
Procedia PDF Downloads 291
3720 A Literature Study on IoT Based Monitoring System for Smart Agriculture
Authors: Sonu Rana, Jyoti Verma, A. K. Gautam
Abstract:
In most developing countries like India, the majority of the population relies heavily on agriculture for their livelihood. Agricultural yield is heavily dependent on uncertain weather conditions such as the monsoon, soil fertility, the availability of irrigation facilities and fertilizers, as well as support from the government. The yield is quite low compared to the effort put in, due to inefficient agricultural facilities and obsolete farming practices on the one hand and lack of knowledge on the other, and ultimately the agricultural community does not prosper. It is therefore essential for farmers to improve their harvest yield by acquiring related data such as soil condition, temperature, humidity, availability of irrigation facilities, availability of manure, etc., and by adopting smart farming techniques using modern agricultural equipment. Nowadays, using IoT technology in agriculture is the best solution to improve the yield with less effort and lower economic cost. The primary focus of this work is IoT technology in the agriculture field. Using IoT, all the parameters can be monitored by mounting sensors at different places in an agricultural field; the sensors collect real-time data, which can be transmitted by a transmitting device such as an antenna. To improve the system, IoT will interact with other useful systems like Wireless Sensor Networks. IoT is expanding into every domain, so the radio frequency spectrum is getting crowded due to the increasing demand for wireless applications. Therefore, the Federal Communications Commission is reallocating the spectrum for various wireless applications. An antenna is also an integral part of newly designed IoT devices. The main aim is to propose a new antenna structure for IoT agricultural applications that is compatible with this new unlicensed frequency band. The main focus of this paper is to present work related to these technologies in the agriculture field, along with their challenges and benefits. It can help in understanding the role of data acquired through IoT and communication technologies in the agriculture sector. This will help to motivate and educate unskilled farmers to grasp the insights provided by big data analytics using smart technology.
Keywords: smart agriculture, IoT, agriculture technology, data analytics, smart technology
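As a hedged sketch of the kind of field node such a monitoring system would use, the snippet below simulates sensor readings and posts them as JSON to a server; the endpoint URL, field identifiers, and reading names are assumptions, since the paper is a literature study and specifies no implementation.

```python
# Minimal sketch of an IoT field node: simulated sensor readings posted as JSON.
# The endpoint URL, field identifiers, and reading names are hypothetical.
import json
import random
import time
import urllib.request

ENDPOINT = "http://example.invalid/api/field-readings"   # placeholder server

def read_sensors():
    # In a real node these would come from soil-moisture, temperature and
    # humidity sensors; here they are simulated values.
    return {
        "field_id": "plot-A1",
        "soil_moisture_pct": round(random.uniform(10, 45), 1),
        "temperature_c": round(random.uniform(18, 40), 1),
        "humidity_pct": round(random.uniform(30, 90), 1),
        "timestamp": time.time(),
    }

def publish(reading):
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(reading).encode(),
        headers={"Content-Type": "application/json"},
    )
    try:
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.status
    except OSError as exc:        # no server exists in this sketch, so just report
        return f"send failed: {exc}"

if __name__ == "__main__":
    sample = read_sensors()
    print("reading:", sample)
    print("publish:", publish(sample))
```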
Procedia PDF Downloads 116
3719 Exploring the Development of Inter-State Relations under the Mechanism of the Hirschman Effect: A Case Study of Malaysia-China Relations in a Political Crisis (2020-2022)
Authors: Zhao Xinlei
Abstract:
In general, interstate relations are diverse and include economic, political, military, and diplomatic dimensions. Therefore, by analyzing the development of the relationship between Malaysia and China, we can better verify how the Hirschman effect works between small countries and great powers. This paper mainly adopts qualitative research methods and uses Malaysia's 2020-2022 political crisis as a specific case to verify the practice of the Hirschman effect between small and large countries. In particular, although the interest groups in small countries that are closely tied to trade and have extraordinary capabilities, as the primary beneficiaries of trade development between the two countries, may use their resources to a certain extent to influence the decisions of small countries towards great powers, they do not fundamentally determine the small countries' response to large countries. In this process, the relative power asymmetry between states plays a dominant role, as small states harbour mistrust and suspicion in political diplomacy towards large states based on the perception of threat arising from the relative power asymmetry. When developing bilateral relations with large countries, small states seek practical cooperation to promote economic and trade development but become more cautious in their political ties to avoid being caught in power struggles between large states or being controlled by them. The case of Malaysia-China relations also illustrates that despite the ongoing political crisis in Malaysia, which saw the country go through the transition from Perikatan Nasional (PN) to Barisan Nasional (BN), different governments have maintained a pragmatic and proactive economic policy towards China to reduce suspicion and mistrust between the two countries in political and diplomatic affairs, thereby enhancing cooperation and interactions between the two countries. At the same time, the Malaysian government is developing multi-dimensional foreign relations and actively participating in multilateral, regional organizations and platforms, such as those organized by the United States, to maintain a relative balance in the influence of the US and China on Malaysia.
Keywords: Hirschman effect, interest groups, Malaysia, China, bilateral relations
Procedia PDF Downloads 68
3718 Ligandless Extraction and Determination of Trace Amounts of Lead in Pomegranate, Zucchini and Lettuce Samples after Dispersive Liquid-Liquid Microextraction with Ultrasonic Bath and Optimization of Extraction Condition with RSM Design
Authors: Fariba Tadayon, Elmira Hassanlou, Hasan Bagheri, Mostafa Jafarian
Abstract:
Heavy metals are released into water, plants, soil, and food by natural and human activities. Lead has toxic roles in the human body and may cause serious problems even in low concentrations, since it may have several adverse effects on human. Therefore, determination of lead in different samples is an important procedure in the studies of environmental pollution. In this work, an ultrasonic assisted-ionic liquid based-liquid-liquid microextraction (UA-IL-DLLME) procedure for the determination of lead in zucchini, pomegranate, and lettuce has been established and developed by using flame atomic absorption spectrometer (FAAS). For UA-IL-DLLME procedure, 10 mL of the sample solution containing Pb2+ was adjusted to pH=5 in a glass test tube with a conical bottom; then, 120 μL of 1-Hexyl-3-methylimidazolium hexafluoro phosphate (CMIM)(PF6) was rapidly injected into the sample solution with a microsyringe. After that, the resulting cloudy mixture was treated by ultrasonic for 5 min, then the separation of two phases was obtained by centrifugation for 5 min at 3000 rpm and IL-phase diluted with 1 cc ethanol, and the analytes were determined by FAAS. The effect of different experimental parameters in the extraction step including: ionic liquid volume, sonication time and pH was studied and optimized simultaneously by using Response Surface Methodology (RSM) employing a central composite design (CCD). The optimal conditions were determined to be an ionic liquid volume of 120 μL, sonication time of 5 min, and pH=5. The linear ranges of the calibration curve for the determination by FAAS of lead were 0.1-4 ppm with R2=0.992. Under optimized conditions, the limit of detection (LOD) for lead was 0.062 μg.mL-1, the enrichment factor (EF) was 93, and the relative standard deviation (RSD) for lead was calculated as 2.29%. The levels of lead for pomegranate, zucchini, and lettuce were calculated as 2.88 μg.g-1, 1.54 μg.g-1, 2.18 μg.g-1, respectively. Therefore, this method has been successfully applied for the analysis of the content of lead in different food samples by FAAS.Keywords: Dispersive liquid-liquid microextraction, Central composite design, Food samples, Flame atomic absorption spectrometry.
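For illustration only, the snippet below computes the usual calibration-based figures of merit for a FAAS determination; it assumes LOD = 3·σ(blank)/slope and an enrichment factor equal to the ratio of calibration slopes with and without preconcentration, and all numbers are invented rather than taken from the study.

```python
# Illustrative figures-of-merit calculation for a FAAS calibration.
# All numbers are invented; LOD is assumed here as 3*sigma_blank/slope and the
# enrichment factor as the ratio of slopes with/without the DLLME step.
import numpy as np

conc = np.array([0.1, 0.5, 1.0, 2.0, 4.0])           # ppm Pb
absorbance = np.array([0.012, 0.055, 0.108, 0.213, 0.421])
slope, intercept = np.polyfit(conc, absorbance, 1)

blank = np.array([0.0010, 0.0012, 0.0008, 0.0011, 0.0009, 0.0010])
lod = 3 * blank.std(ddof=1) / slope                   # ug/mL

slope_no_preconc = 0.0011                             # assumed slope without DLLME
enrichment_factor = slope / slope_no_preconc

replicates = np.array([1.95, 2.02, 1.98, 2.05, 2.00])  # ppm, repeated analyses
rsd = replicates.std(ddof=1) / replicates.mean() * 100

print(f"slope={slope:.4f}, intercept={intercept:.4f}")
print(f"LOD ~ {lod:.3f} ug/mL, EF ~ {enrichment_factor:.0f}, RSD ~ {rsd:.2f}%")
```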
Procedia PDF Downloads 283
3717 A Comparative Study of Burnout and Coping Strategies between HIV Counselors: Face to Face and Online Counseling Services in Addis Ababa
Authors: Yemisrach Mihertu Amsale
Abstract:
The purpose of this study was to compare burnout and coping strategies between HIV counselors in face-to-face and online counseling settings in Addis Ababa. The study used a mixed-methods design, both quantitative and qualitative. For the quantitative data, the participants included 64 face-to-face and 47 online HIV counselors. In addition, 23 participants from both counseling settings provided qualitative data. The instruments used to gather the quantitative data were a demographic questionnaire, the Maslach Burnout Inventory, and the COPE questionnaire. Qualitative data were gathered using an FGD guide and an interview guide. The study revealed that HIV counselors in the online counseling setting scored higher on the emotional exhaustion and depersonalization dimensions of burnout and lower on the personal accomplishment dimension compared to HIV counselors in the face-to-face setting; the differences were statistically significant for emotional exhaustion and personal accomplishment, but there was no significant difference in the depersonalization dimension between the two groups. In addition, the study revealed a statistically significant difference in problem-focused coping strategy between the two groups, whereas the difference in emotion-focused coping strategy was not statistically significant. Statistically significant negative correlations were observed between some demographic variables and burnout: age with the emotional exhaustion and depersonalization dimensions, and years of experience with the personal accomplishment dimension. A statistically significant positive correlation was also observed between the average number of clients served per day and emotional exhaustion, and sex had a statistically significant positive correlation with coping strategy. Lastly, a significant positive correlation was observed between the emotional exhaustion dimension of burnout and the emotion-focused coping strategy. Generally, this study has shown that HIV counselors suffer from moderate to high levels of burnout. Based on the findings, conclusions were drawn and recommendations were forwarded.
Keywords: counseling, burnout management, psychological, behavioral sciences
Procedia PDF Downloads 305
3716 Developing Manufacturing Process for the Graphene Sensors
Authors: Abdullah Faqihi, John Hedley
Abstract:
Biosensors play a significant role in the healthcare sectors, scientific and technological progress. Developing electrodes that are easy to manufacture and deliver better electrochemical performance is advantageous for diagnostics and biosensing. They can be implemented extensively in various analytical tasks such as drug discovery, food safety, medical diagnostics, process controls, security and defence, in addition to environmental monitoring. Development of biosensors aims to create high-performance electrochemical electrodes for diagnostics and biosensing. A biosensor is a device that inspects the biological and chemical reactions generated by the biological sample. A biosensor carries out biological detection via a linked transducer and transmits the biological response into an electrical signal; stability, selectivity, and sensitivity are the dynamic and static characteristics that affect and dictate the quality and performance of biosensors. In this research, a developed experimental study for laser scribing technique for graphene oxide inside a vacuum chamber for processing of graphene oxide is presented. The processing of graphene oxide (GO) was achieved using the laser scribing technique. The effect of the laser scribing on the reduction of GO was investigated under two conditions: atmosphere and vacuum. GO solvent was coated onto a LightScribe DVD. The laser scribing technique was applied to reduce GO layers to generate rGO. The micro-details for the morphological structures of rGO and GO were visualised using scanning electron microscopy (SEM) and Raman spectroscopy so that they could be examined. The first electrode was a traditional graphene-based electrode model, made under normal atmospheric conditions, whereas the second model was a developed graphene electrode fabricated under a vacuum state using a vacuum chamber. The purpose was to control the vacuum conditions, such as the air pressure and the temperature during the fabrication process. The parameters to be assessed include the layer thickness and the continuous environment. Results presented show high accuracy and repeatability achieving low cost productivity.Keywords: laser scribing, lightscribe DVD, graphene oxide, scanning electron microscopy
Procedia PDF Downloads 122
3715 Epidemiology of Congenital Heart Defects in Kazakhstan: Data from Unified National Electronic Healthcare System 2014-2020
Authors: Dmitriy Syssoyev, Aslan Seitkamzin, Natalya Lim, Kamilla Mussina, Abduzhappar Gaipov, Dimitri Poddighe, Dinara Galiyeva
Abstract:
Background: Data on the epidemiology of congenital heart defects (CHD) in Kazakhstan is scarce. Therefore, the aim of this study was to describe the incidence, prevalence and all-cause mortality of patients with CHD in Kazakhstan, using national large-scale registry data from the Unified National Electronic Healthcare System (UNEHS) for the period of 2014-2020. Methods: In this retrospective cohort study, the included data pertained to all patients diagnosed with CHD in Kazakhstan and registered in UNEHS between January 2014 and December 2020. CHD was defined based on International Classification of Diseases 10th Revision (ICD-10) codes Q20-Q26. Incidence, prevalence, and all-cause mortality rates were calculated per 100,000 population. Survival analysis was performed using Cox proportional hazards regression modeling and the Kaplan-Meier method. Results: In total, 66,512 patients were identified. Among them, 59,534 (89.5%) were diagnosed with a single CHD, while 6,978 (10.5%) had more than two CHDs. The median age at diagnosis was 0.08 years (interquartile range (IQR) 0.01 – 0.66) for people with multiple CHD types and 0.39 years (IQR 0.04 – 8.38) for those with a single CHD type. The most common CHD types were atrial septal defect (ASD) and ventricular septal defect (VSD), accounting for 25.8% and 21.2% of single CHD cases, respectively. The most common multiple types of CHD were ASD with VSD (23.4%), ASD with patent ductus arteriosus (PDA) (19.5%), and VSD with PDA (17.7%). The incidence rate of CHD decreased from 64.6 to 47.1 cases per 100,000 population among men and from 68.7 to 42.4 among women. The prevalence rose from 66.1 to 334.1 cases per 100,000 population among men and from 70.8 to 328.7 among women. Mortality rates showed a slight increase from 3.5 to 4.7 deaths per 100,000 in men and from 2.9 to 3.7 in women. Median follow-up was 5.21 years (IQR 2.47 – 11.69). Male sex (HR 1.60, 95% CI 1.45 - 1.77), having multiple CHDs (HR 2.45, 95% CI 2.01 - 2.97), and living in a rural area (HR 1.32, 95% CI 1.19 - 1.47) were associated with a higher risk of all-cause mortality. Conclusion: The incidence of CHD in Kazakhstan has shown a moderate decrease between 2014 and 2020, while prevalence and mortality have increased. Male sex, multiple CHD types, and rural residence were significantly associated with a higher risk of all-cause mortality.Keywords: congenital heart defects (CHD), epidemiology, incidence, Kazakhstan, mortality, prevalence
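As a hedged illustration of the reported workflow, the sketch below computes a crude rate per 100,000 and fits Cox proportional hazards and Kaplan-Meier models with the lifelines package on a synthetic cohort; the covariate coding and all values are assumptions, not the UNEHS registry data.

```python
# Illustrative sketch of the epidemiological calculations on a synthetic cohort.
# Covariate coding and rates are assumptions, not the UNEHS registry data.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter, KaplanMeierFitter

rng = np.random.default_rng(7)
n = 5000
df = pd.DataFrame({
    "male": rng.integers(0, 2, n),
    "multiple_chd": rng.binomial(1, 0.10, n),
    "rural": rng.binomial(1, 0.40, n),
})
# Synthetic follow-up times (years) and death indicator, loosely tied to covariates.
hazard = 0.02 * np.exp(0.47 * df["male"] + 0.90 * df["multiple_chd"] + 0.28 * df["rural"])
time_to_event = rng.exponential(1 / hazard)
censor_time = rng.uniform(1, 7, n)
df["duration"] = np.minimum(time_to_event, censor_time)
df["event"] = (time_to_event <= censor_time).astype(int)

# Crude rate per 100,000: new cases (or deaths) divided by the population at risk.
population = 19_000_000                    # assumed population denominator
new_cases = 9000
print("incidence per 100,000:", round(new_cases / population * 100_000, 1))

cph = CoxPHFitter().fit(df, duration_col="duration", event_col="event")
cph.print_summary()                        # hazard ratios for male, multiple CHD, rural

km = KaplanMeierFitter().fit(df["duration"], df["event"])
print("median survival (yrs):", km.median_survival_time_)
```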
Procedia PDF Downloads 96
3714 Ownership and Shareholder Schemes Effects on Airport Corporate Strategy in Europe
Authors: Dimitrios Dimitriou, Maria Sartzetaki
Abstract:
In the early days of civil aviation, airports were totally state-owned companies under the control of national authorities or regional governmental bodies. Since then the picture has totally changed, and airport privatisation and airport business commercialisation have become key success factors to stimulate air transport demand, generate revenues and attract investors, linked to the reliability and resilience of the air transport system. Nowadays, airport corporate strategy deals with policies and actions that essentially affect the business plans, the financial targets and the economic footprint in the regional economy they serve. Therefore, exploring airport corporate strategy is essential to support decisions on business planning, management efficiency, sustainable development and investment attractiveness on the one hand, and to define policies towards traffic development, revenue generation, capacity expansion, cost efficiency and corporate social responsibility on the other. This paper explores key outputs in airport corporate strategy for different ownership schemes. The airport corporations are grouped in three major schemes: (a) Public, in which the public airport operator acts as part of the government administration or as a corporatised public operator; (b) Mixed, in which the majority of the shares and the corporate strategy is driven by the private or the public sector; and (c) Private, in which the airport strategy is driven by the key aspects of globalisation and liberalisation of the aviation sector. By a systemic approach, the key drivers in corporate strategy for modern airport business structures are defined. Key objectives are to define the key strategic opportunities and challenges and assess the corporate goals and risks towards sustainable business development for each scheme. The analysis is based on an extensive cross-sectional dataset for a sample of busy European airports, providing results on corporate strategy key priorities, risks and business models. The conclusions highlight key messages to authorities, institutes and professionals on airport corporate strategy trends and directions.
Keywords: airport corporate strategy, airport ownership, airports business models, corporate risks
Procedia PDF Downloads 304
3713 Fire Safety Assessment of At-Risk Groups
Authors: Naser Kazemi Eilaki, Carolyn Ahmer, Ilona Heldal, Bjarne Christian Hagen
Abstract:
Older people and people with disabilities are recognized as at-risk groups when it comes to egress and travel from a hazard zone to safe places. One's disability can negatively influence her or his escape time, and this becomes even more important when people from this target group live alone. This research deals with the fire safety of buildings occupied by these people by means of probabilistic methods. For this purpose, fire safety is addressed by modeling the egress of our target group from a hazardous zone to a safe zone. A common type of detached house with a prevalent plan has been chosen for safety analysis, and a limit state function has been developed according to the time-line evacuation model, which is based on a two-zone and smoke development model. An analytical computer model (B-Risk) is used to consider smoke development. Since most of the parameters involved in the fire development model pose uncertainty, an appropriate probability distribution function has been considered for each of the non-deterministic variables. To achieve safety and reliability for the at-risk groups, the fire safety index method has been chosen to define the probability of failure (casualties) and the safety index (beta index). An improved harmony search meta-heuristic optimization algorithm has been used to determine the beta index. Sensitivity analysis has been done to identify the most important and effective parameters for the fire safety of the at-risk group. Results showed that the area of openings and the distances to egress exits are the more important parameters in buildings, and the safety of people improves with increasing dimensions of the occupant space (building). Fire growth is more critical compared to other parameters in a home without a detector and fire extinguishing system, but in a home equipped with these facilities, it is less important. The type of disability has a great effect on the safety level of people living in the same home layout, and people with visual impairment face a higher risk of being trapped compared to people with other types of disabilities.
Keywords: fire safety, at-risk groups, zone model, egress time, uncertainty
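A minimal sketch of the probabilistic idea is shown below: Monte Carlo sampling of a limit state g = ASET - RSET gives a failure probability that is converted to a safety (beta) index; the distributions are assumptions, and plain Monte Carlo stands in for the B-Risk-based model and improved harmony search used in the study.

```python
# Illustrative reliability sketch for an egress limit state g = ASET - RSET.
# The distributions below are assumptions; plain Monte Carlo stands in for the
# improved harmony search procedure used in the study.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(42)
n = 200_000

# Available safe egress time (s): from fire/smoke development, assumed lognormal.
aset = rng.lognormal(mean=np.log(180), sigma=0.25, size=n)

# Required safe egress time (s): detection + pre-movement + travel, slower for
# occupants with disabilities (assumed distributions).
detection = rng.normal(30, 10, n).clip(min=5)
pre_movement = rng.lognormal(np.log(60), 0.4, n)
travel = rng.normal(45, 15, n).clip(min=10)
rset = detection + pre_movement + travel

g = aset - rset                      # failure when g < 0
pf = np.mean(g < 0)                  # probability of failure
beta = -norm.ppf(pf)                 # equivalent safety (reliability) index

print(f"P(failure) ~ {pf:.4f}, beta ~ {beta:.2f}")
```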
Procedia PDF Downloads 103
3712 An In-Depth Comparison Study of Canadian and Danish's Entrepreneurship and Education System
Authors: Amna Khaliq
Abstract:
In this research paper, a comparison study has been undertaken between Canada and Denmark to analyze the education systems of the two countries with respect to entrepreneurship. Denmark, a land of high wages and high taxes, and Canada, a land of immigrants and opportunities, have both seen positive growth in entrepreneurship. Both are considered among the top ten countries globally in which to start a business and to receive government support. However, education, including university degrees, is entirely free for Danish students, unlike for Canadians, which can be a further hurdle for Canadian millennials trying to grow in the business world; businesses experience more growth with educated entrepreneurs, including new immigrants with international backgrounds. Denmark has seen a gradual increase in female entrepreneurs over the decade, but the share is still lower than in other OECD countries. Compassionate management and work-life balance are prioritized in Denmark, unlike in Canada. The Danish are early adopters of technology and have excellent infrastructure to support the technology industry, whereas Canada is still a service-oriented and manufacturing-based country. 2018 saw the highest number of business openings in both Canada and Denmark. Some companies offered high wages, hiring bonuses, flexible working hours, wellness, and mental health benefits during the pandemic to keep the companies running and their workers' morale high. The pandemic has taught consumers new patterns of shopping online. It is essential now to use technology and automation to increase productivity in businesses; only the companies that apply this strategy will survive. The pandemic has ultimately changed entrepreneurs' and employees' behavior in the business world. Along with Ph.D. professors, entrepreneurs should be allowed to teach at learning institutions. Millennials turn out to be the most entrepreneurial generation in both countries. Entrepreneurship education will only be beneficial when students create businesses and learn from real-life experiences. Managing physical, mental, emotional, and psychological health while dealing with high pressure in entrepreneurship are soft skills learned through practical work.
Keywords: entrepreneurship education, millennials, pandemic, Denmark, Canada
Procedia PDF Downloads 105
3711 Soil Liquefaction Hazard Evaluation for Infrastructure in the New Bejaia Quai, Algeria
Authors: Mohamed Khiatine, Amal Medjnoun, Ramdane Bahar
Abstract:
Northern Algeria is a highly seismic zone, as evidenced by the historical seismicity. During the past two decades, it has experienced several moderate to strong earthquakes. Therefore, geotechnical engineering problems that involve dynamic loading of soils and soil-structure interaction require, in the presence of saturated loose sand formations, liquefaction studies. Bejaia city, located north-east of Algiers, Algeria, is part of an alluvial plain which covers an area of approximately 750 hectares. According to the Algerian seismic code, it is classified as a moderate seismicity zone. In the past, this area had not experienced urban development because of the different hazards identified by hydraulic and geotechnical studies conducted in the region. The low bearing capacity of the soil, its high compressibility and the risk of liquefaction and flooding are among these risks and are a constraint on urbanization. In this area, several cases of structures founded on shallow foundations have suffered damage. Hence, the soils need treatment to reduce the risk. Many field and laboratory investigations, including core drilling, pressuremeter tests, standard penetration tests (SPT), cone penetration tests (CPT) and geophysical downhole tests, were performed at different locations in the area. The major part of the area consists of silty fine sand, sometimes heterogeneous, which has not yet reached a sufficient degree of consolidation. The groundwater depth varies between 1.5 and 4 m. These investigations show that the liquefaction phenomenon is one of the critical problems for geotechnical engineers and one of the obstacles found in the design phase of projects. This paper presents an analysis to evaluate the liquefaction potential using empirical methods based on the standard penetration test (SPT), the cone penetration test (CPT) and shear wave velocity, as well as numerical analysis. These liquefaction assessment procedures indicate that liquefaction can occur to considerable depths in the silty sand of the harbor zone of Bejaia.
Keywords: earthquake, modeling, liquefaction potential, laboratory investigations
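As a hedged sketch of a stress-based simplified liquefaction check of the kind referred to above, the snippet below computes the cyclic stress ratio with depth and compares it with placeholder cyclic resistance values; the soil profile and CRR numbers are invented, and in practice CRR would come from published SPT, CPT, or shear-wave-velocity correlations.

```python
# Minimal sketch of a stress-based simplified liquefaction check.
# The soil profile and CRR values are invented placeholders; in practice CRR
# comes from published SPT/CPT/shear-wave-velocity correlations.
A_MAX_G = 0.25          # assumed peak ground acceleration / g
GAMMA = 18.0            # unit weight (kN/m3), single-layer simplification
GAMMA_W = 9.81          # unit weight of water (kN/m3)
WATER_TABLE = 2.0       # depth to water table (m)

def stress_reduction(z):
    # Commonly used approximate depth-dependent reduction factor (z <= 23 m).
    return 1.0 - 0.00765 * z if z <= 9.15 else 1.174 - 0.0267 * z

def csr(z):
    sigma_v = GAMMA * z                               # total vertical stress (kPa)
    u = GAMMA_W * max(z - WATER_TABLE, 0.0)           # pore pressure (kPa)
    sigma_v_eff = sigma_v - u
    return 0.65 * A_MAX_G * (sigma_v / sigma_v_eff) * stress_reduction(z)

# depth (m) -> placeholder CRR values for this sketch
profile = {3.0: 0.12, 5.0: 0.15, 8.0: 0.20, 12.0: 0.28}

for depth, crr in profile.items():
    fs = crr / csr(depth)
    status = "liquefiable" if fs < 1.0 else "non-liquefiable"
    print(f"z={depth:4.1f} m  CSR={csr(depth):.3f}  CRR={crr:.2f}  FS={fs:.2f}  {status}")
```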
Procedia PDF Downloads 353
3710 One-Step Synthesis and Characterization of Biodegradable ‘Click-Able’ Polyester Polymer for Biomedical Applications
Authors: Wadha Alqahtani
Abstract:
In recent times, polymers have seen a great surge in interest in the field of medicine, particularly chemotherapeutics. One recent innovation is the conversion of polymeric materials into “polymeric nanoparticles”. These nanoparticles can be designed and modified to encapsulate and transport drugs selectively to cancer cells, minimizing collateral damage to surrounding healthy tissues, and improve patient quality of life. In this study, we have synthesized pseudo-branched polyester polymers from bio-based small molecules, including sorbitol, glutaric acid and a propargylic acid derivative that further modifies the polymer to make it “click-able” with an azide-modified target ligand. The melt polymerization technique was used for this polymerization reaction, using the lipase enzyme catalyst NOVO 435. The reaction was conducted at 90-95 °C for 72 hours. The polymer samples were collected in 24-hour increments for characterization and to monitor reaction progress. The resulting polymer was purified by dissolving in methanol and filtering with filter paper, then characterized via NMR, GPC, FTIR, DSC, TGA and MALDI-TOF. Following characterization, these polymers were converted to a polymeric nanoparticle drug delivery system using the solvent diffusion method, wherein DiI optical dye and the chemotherapeutic drug Taxol can be encapsulated simultaneously. The efficacy of the nanoparticles' apoptotic effects was analyzed in vitro by incubation with prostate cancer (LNCaP) and healthy (CHO) cells. MTT assays and fluorescence microscopy were used to assess the cellular uptake and viability of the cells after 24 hours at 37 °C and a 5% CO2 atmosphere. Results of the assays and fluorescence imaging confirmed that the nanoparticles were successful in both selectively targeting and inducing apoptosis in 80% of the LNCaP cells within 24 hours without affecting the viability of the CHO cells. These results show the potential of using biodegradable polymers as a vehicle for receptor-specific drug delivery and a potential alternative to traditional systemic chemotherapy. Detailed experimental results will be discussed in the e-poster.
Keywords: chemotherapeutic drug, click chemistry, nanoparticle, prostate cancer
Procedia PDF Downloads 115
3709 Malpractice, Even in Conditions of Compliance With the Rules of Dental Ethics
Authors: Saimir Heta, Kers Kapaj, Rialda Xhizdari, Ilma Robo
Abstract:
Despite the existence of different dental specialties, the dentist-patient relationship is unique in the very fact that the treatment is performed by one doctor, and the patient identifies any malpractice as part of that doctor's practice; this is in complete contrast to medical treatments, where the patient can be presented to a team of doctors to treat a specific pathology. The rules of dental ethics are almost the same as the rules of medical ethics. The appearance of dental malpractice affects exactly this two-party relationship, created on the basis of professionalism, between the dentist and the patient, but with very narrow individual boundaries compared to cases of medical malpractice. Main text: Malpractice can have different causes, ranging from professional negligence to a lack of professional knowledge on the part of the dentist who undertakes the treatment. It should always be kept in perspective that we are not talking about a dentist who goes to work with the intention of harming their patients. Malpractice can also be a consequence of the impossibility, for anatomical or physiological reasons related to the tooth under treatment, of realizing the predetermined treatment plan. On the other hand, the dentist is an individual who can be affected by health conditions or vices that affect his or her systemic health, which under these conditions can cause malpractice. So, depending on the reason that led to the malpractice, the way it is handled from a legal point of view also varies for the dentist who committed it, evaluating whether the malpractice occurred under conditions in which the rules of dental ethics were applied. Conclusions: Deviation from the predetermined dental plan is the minimal sign of malpractice, and malpractice should not be definitively related only to cases of difficult dental treatments. Identifying the reason for the malpractice is the initial element that makes the difference in how it is treated from a legal point of view, and the involvement of the dentist in the assessment of the malpractice committed must be based on the legislation in force, which, it must be said, differs from state to state. Malpractice should be referred to, or included in, lectures and the continuing education of professionals, because it serves as a way of sharing professional experience so that the same mistakes are not repeated by different professionals.
Keywords: dental ethics, malpractice, negligence, legal basis, continuing education, dental treatments
Procedia PDF Downloads 61
3708 Dosimetry in Interventional Radiology Examinations for Occupational Exposure Monitoring
Authors: Ava Zarif Sanayei, Sedigheh Sina
Abstract:
Interventional radiology (IR) uses imaging guidance, including X-rays and CT scans, to deliver therapy precisely. Most IR procedures are performed under local anesthesia and start with a small needle being inserted through the skin, which may be called pinhole surgery or image-guided surgery. There is increasing concern about radiation exposure during interventional radiology procedures due to procedure complexity. The basic aim of optimizing radiation protection as outlined in ICRP 139, is to strike a balance between image quality and radiation dose while maximizing benefits, ensuring that diagnostic interpretation is satisfactory. This study aims to estimate the equivalent doses to the main trunk of the body for the Interventional radiologist and Superintendent using LiF: Mg, Ti (TLD-100) chips at the IR department of a hospital in Shiraz, Iran. In the initial stage, the dosimeters were calibrated with the use of various phantoms. Afterward, a group of dosimeters was prepared, following which they were used for three months. To measure the personal equivalent dose to the body, three TLD chips were put in a tissue-equivalent batch and used under a protective lead apron. After the completion of the duration, TLDs were read out by a TLD reader. The results revealed that these individuals received equivalent doses of 387.39 and 145.11 µSv, respectively. The findings of this investigation revealed that the total radiation exposure to the staff was less than the annual limit of occupational exposure. However, it's imperative to implement appropriate radiation protection measures. Although the dose received by the interventional radiologist is a bit noticeable, it may be due to the reason for using conventional equipment with over-couch x-ray tubes for interventional procedures. It is therefore important to use dedicated equipment and protective means such as glasses and screens whenever compatible with the intervention when they are available or have them fitted to equipment if they are not present. Based on the results, the placement of staff in an appropriate location led to increasing the dose to the radiologist. Manufacturing and installation of moveable lead curtains with a thickness of 0.25 millimeters can effectively minimize the radiation dose to the body. Providing adequate training on radiation safety principles, particularly for technologists, can be an optimal approach to further decreasing exposure.Keywords: interventional radiology, personal monitoring, radiation protection, thermoluminescence dosimetry
Procedia PDF Downloads 62
3707 Distribution of Antioxidants between Sour Cherry Juice and Pomace
Authors: Sonja Djilas, Gordana Ćetković, Jasna Čanadanović-Brunet, Vesna Tumbas Šaponjac, Slađana Stajčić, Jelena Vulić, Milica Vinčić
Abstract:
In recent years, interest in foods rich in bioactive compounds, such as polyphenols, has increased, highlighting the advantages of functional food products. Bioactive components help to maintain health and to prevent diseases such as cancer, cardiovascular disease and many other degenerative diseases. Recent research has shown that fruit pomace, a byproduct of juice production, can be a potential source of valuable bioactive compounds. The use of fruit industrial waste in the processing of functional foods represents an important new step for the food industry. Sour cherries have considerable nutritional, medicinal, dietetic and technological value. According to the production volume of cherries, Serbia ranks seventh in the world, with a share of 7% of the total production. The use of sour cherry pomace has so far been limited to animal feed, even though it can potentially be a good source of polyphenols. For this study, the local sour cherry variety cv. ‘Feketićka’ was chosen for its more intensive taste and deeper red color, indicating high anthocyanin content. The contents of total polyphenols, flavonoids and anthocyanins, as well as the radical scavenging activity on DPPH radicals and the reducing power of sour cherry juice and pomace, were compared using spectrophotometric assays. According to the results obtained, 66.91% of total polyphenols, 46.77% of flavonoids, 46.77% of total anthocyanins and 47.88% of anthocyanin monomers from sour cherry fruits were transferred to the juice. On the other hand, 29.85% of total polyphenols, 33.09% of flavonoids, 53.23% of total anthocyanins and 52.12% of anthocyanin monomers remained in the pomace. Regarding radical scavenging activity, 65.51% of Trolox equivalents from sour cherries were exported to the juice, while 34.49% was left in the pomace. However, the reducing power of sour cherry juice was much stronger than that of the pomace (91.28% and 8.72% of Trolox equivalents from sour cherry fruits, respectively). Based on our results it can be concluded that sour cherry pomace is still a rich source of natural antioxidants, especially anthocyanins with coloring capacity; therefore, it can be used for dietary supplement development and food fortification.
Keywords: antioxidants, polyphenols, pomace, sour cherry
Procedia PDF Downloads 325
3706 Application of Groundwater Level Data Mining in Aquifer Identification
Authors: Liang Cheng Chang, Wei Ju Huang, You Cheng Chen
Abstract:
Investigation and research are keys for conjunctive use of surface and groundwater resources. The hydrogeological structure is an important base for groundwater analysis and simulation. Traditionally, the hydrogeological structure is artificially determined based on geological drill logs, the structure of wells, groundwater levels, and so on. In Taiwan, groundwater observation network has been built and a large amount of groundwater-level observation data are available. The groundwater level is the state variable of the groundwater system, which reflects the system response combining hydrogeological structure, groundwater injection, and extraction. This study applies analytical tools to the observation database to develop a methodology for the identification of confined and unconfined aquifers. These tools include frequency analysis, cross-correlation analysis between rainfall and groundwater level, groundwater regression curve analysis, and decision tree. The developed methodology is then applied to groundwater layer identification of two groundwater systems: Zhuoshui River alluvial fan and Pingtung Plain. The abovementioned frequency analysis uses Fourier Transform processing time-series groundwater level observation data and analyzing daily frequency amplitude of groundwater level caused by artificial groundwater extraction. The cross-correlation analysis between rainfall and groundwater level is used to obtain the groundwater replenishment time between infiltration and the peak groundwater level during wet seasons. The groundwater regression curve, the average rate of groundwater regression, is used to analyze the internal flux in the groundwater system and the flux caused by artificial behaviors. The decision tree uses the information obtained from the above mentioned analytical tools and optimizes the best estimation of the hydrogeological structure. The developed method reaches training accuracy of 92.31% and verification accuracy 93.75% on Zhuoshui River alluvial fan and training accuracy 95.55%, and verification accuracy 100% on Pingtung Plain. This extraordinary accuracy indicates that the developed methodology is a great tool for identifying hydrogeological structures.Keywords: aquifer identification, decision tree, groundwater, Fourier transform
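The sketch below illustrates the three analytical tools on synthetic hourly series: the FFT amplitude at the one-cycle-per-day frequency, the rainfall-to-groundwater-level lag from cross-correlation, and a small decision tree combining the two features; all data, labels, and thresholds are invented for demonstration.

```python
# Illustrative sketch of the analysis tools on synthetic hourly series.
# All data, thresholds and labels are invented; the real study uses observation wells.
import numpy as np
from scipy.signal import correlate
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
hours = np.arange(24 * 365)

# (1) Daily pumping signal in the groundwater level, picked up by the FFT.
level = 10 + 0.3 * np.sin(2 * np.pi * hours / 24) + rng.normal(0, 0.05, hours.size)
spectrum = np.fft.rfft(level - level.mean())
freqs = np.fft.rfftfreq(hours.size, d=1.0)                 # cycles per hour
daily_amp = 2 * np.abs(spectrum[np.argmin(np.abs(freqs - 1 / 24))]) / hours.size
print(f"daily pumping amplitude ~ {daily_amp:.3f} m")

# (2) Rainfall-to-level lag from cross-correlation (synthetic response peaks ~36 h).
rain = rng.gamma(0.2, 2.0, hours.size)
kernel = np.exp(-0.5 * ((np.arange(240) - 36) / 12.0) ** 2)
response = np.convolve(rain, kernel, mode="full")[:hours.size]
xcorr = correlate(response - response.mean(), rain - rain.mean(), mode="full")
lag_hours = int(np.argmax(xcorr)) - (hours.size - 1)
print(f"replenishment lag ~ {lag_hours} hours")

# (3) Decision tree combining the two features to label the aquifer type.
X = [[0.30, 12], [0.25, 10], [0.02, 60], [0.01, 72]]        # [daily amp (m), lag (h)]
y = ["unconfined", "unconfined", "confined", "confined"]    # illustrative labels
tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print("predicted aquifer type:", tree.predict([[daily_amp, lag_hours]])[0])
```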
Procedia PDF Downloads 157
3705 Laser - Ultrasonic Method for the Measurement of Residual Stresses in Metals
Authors: Alexander A. Karabutov, Natalia B. Podymova, Elena B. Cherepetskaya
Abstract:
A theoretical analysis is carried out to obtain the relation between the ultrasonic wave velocity and the value of residual stresses. The laser-ultrasonic method is developed to evaluate the residual stresses and subsurface defects in metals. The method is based on the laser thermooptical excitation of longitudinal ultrasonic waves and their detection by a broadband piezoelectric detector. A laser pulse with a duration of 8 ns (full width at half maximum) and an energy of 300 µJ is absorbed in a thin layer of a special generator that is inclined relative to the object under study. The non-uniform heating of the generator causes the formation of a powerful broadband pulse of longitudinal ultrasonic waves. It is shown that the temporal profile of this pulse is the convolution of the temporal envelope of the laser pulse and the profile of the in-depth distribution of the heat sources. The ultrasonic waves reach the surface of the object through the prism that serves as an acoustic duct. At the 'laser-ultrasonic transducer-object' interface, most of the longitudinal wave energy is converted into shear, subsurface longitudinal and Rayleigh waves. They propagate within the subsurface layer of the studied object and are detected by the piezoelectric detector. The electrical signal that corresponds to the detected acoustic signal is acquired by an analog-to-digital converter and then mathematically processed and visualized on a personal computer. The distance between the generator and the piezodetector, as well as the propagation times of acoustic waves in the acoustic ducts, are the characteristic parameters of the laser-ultrasonic transducer and are determined using calibration samples. The relative precision of the measurement of the longitudinal ultrasonic wave velocity is 0.05%, which corresponds to approximately ±3 m/s for steels of conventional quality. This precision allows one to determine the mechanical stress in the steel samples with a minimal detection threshold of approximately 22.7 MPa. The results are presented for the measured dependencies of the velocity of longitudinal ultrasonic waves in the samples on the values of the applied compression stress in the range of 20-100 MPa.
Keywords: laser-ultrasonic method, longitudinal ultrasonic waves, metals, residual stresses
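As an illustration under assumed calibration constants, the snippet below turns measured arrival times into a longitudinal velocity (after subtracting the transducer's internal delay) and then into a stress estimate through a linear acoustoelastic relation v = v0 + K·σ; the path length, v0, and K are placeholders, not values from the paper.

```python
# Illustrative sketch: time-of-flight -> wave velocity -> stress estimate via an
# assumed linear acoustoelastic relation. All calibration constants are placeholders.
import numpy as np

PATH_LENGTH_M = 0.020    # acoustic path in the sample (m), assumed
T_DUCT_S = 2.0e-6        # propagation time inside the acoustic ducts (s), from calibration
V0 = 5925.0              # stress-free longitudinal velocity (m/s), placeholder
K = 0.1                  # acoustoelastic coefficient (m/s per MPa), placeholder

def velocity(total_time_s):
    """Velocity in the sample after subtracting the transducer's internal delay."""
    return PATH_LENGTH_M / (total_time_s - T_DUCT_S)

def stress_mpa(v):
    """Invert the assumed linear relation v = V0 + K * sigma."""
    return (v - V0) / K

# A few hypothetical arrival times (s) read from digitized A-scans.
for t in np.array([5.372e-6, 5.373e-6, 5.374e-6]):
    v = velocity(t)
    print(f"t={t*1e6:.3f} us  v={v:.1f} m/s  sigma~{stress_mpa(v):.0f} MPa")
```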
Procedia PDF Downloads 325
3704 Engineering Microstructural Evolution during Arc Wire Directed Energy Deposition of Magnesium Alloy (AZ31)
Authors: Nivatha Elangovan, Lakshman Neelakantan, Murugaiyan Amirthalingam
Abstract:
Magnesium and its alloys are widely used for various lightweight engineering and biomedical applications as they render a high strength to weight ratio and excellent corrosion resistance. These alloys possess good bio-compatibility and mechanical properties similar to natural bone. However, manufacturing magnesium alloy components by conventional formative and subtractive methods is challenging due to their poor castability, oxidation potential, and machinability. Therefore, efforts are being made to produce magnesium alloy components with complex designs by additive manufacturing (AM). Arc-wire directed energy deposition (AW-DED), also known as wire arc additive manufacturing (WAAM), is more attractive than any other AM technique for producing large-volume components with increased productivity. In this research work, efforts were made to optimise the deposition parameters to build thick-walled (about 10 mm) AZ31 magnesium alloy components by a gas metal arc (GMA) based AW-DED process. By using controlled dip short-circuiting metal transfer in a GMA process, depositions were carried out without defects or spatter formation. Current and voltage waveforms were suitably modified to achieve stable metal transfer. Moreover, the droplet transfer behaviour was analysed using high-speed image analysis and correlated with arc energy. Optical and scanning electron microscopy analyses were carried out to correlate the influence of deposition parameters with the microstructural evolution during deposition. The investigation reveals that by carefully controlling the current-voltage waveform and droplet transfer behaviour, it is possible to stabilise equiaxed grain microstructures in the deposited AZ31 components. The printed component exhibited improved mechanical properties, as equiaxed grains improve ductility and enhance toughness. The equiaxed grains also improved the corrosion resistance of the component compared with conventionally manufactured components.
Keywords: arc wire directed energy deposition, AZ31 magnesium alloy, equiaxed grain, corrosion
Procedia PDF Downloads 124
3703 Temporality in Architecture and Related Knowledge
Authors: Gonca Z. Tuncbilek
Abstract:
Architectural research tends to define architecture in terms of its permanence. In this study, the term 'temporality' and its use in architectural discourse are revisited. The definition, proposition, and efficacy of temporality occur both in architecture and in its related knowledge. Temporary architecture not only fulfills the requirements of architectural programs but also plays a significant role in generating an environment of architectural discourse. In recent decades, there has been great interest in temporary architectural practices such as installations, exhibition spaces, pavilions, and expositions, inviting architects to experience and think about architecture. Temporary architecture plays a significant role among architecture, the architect, and architectural discourse. By experimenting with contemporary materials, methods and techniques, these works have proposed possibilities for future architecture. These structures give architects a wide-ranging variety of freedoms to experience the 'new' in architecture. In addition to this experimentation, they can be considered agents that redefine and reform the boundaries of the architectural discipline itself. Although the definition of architecture is re-analyzed in terms of its temporality rather than its permanence, architecture, in reality, still relies on historically codified types and principles of formation. The concept of type can be considered for several different sciences, and there is a tendency to organize and understand the world in terms of classification in many different cultures and places. 'Type' is used as a classification tool with or without the scope of critical invention. This study considers theories of type, putting forward epistemological and discursive arguments related to the form of architecture and to historical and formal disciplinary knowledge in architecture. The aim of this study is to emphasize the importance of temporality in architecture as a creative tool for revealing its position within architectural discourse. Temporary architecture offers 'new' opportunities to be analyzed in the architectural field. In brief, temporary structures allow the architect the freedom to experiment in architecture. While architecture is redefined in terms of temporality, it still relies on historically codified types (pavilions, exhibitions, expositions, and installations). The notion of architectural type and its varying interpretations are analyzed based on the texts of architectural theorists since the Age of Enlightenment. To investigate the classification of type in architecture, particularly temporary architecture, it is necessary to return to the discussion of the origin of knowledge and its classification.
Keywords: classification of architecture, exhibition design, pavilion design, temporary architecture
Procedia PDF Downloads 365
3702 Application of Biomimetic Approach in Optimizing Buildings Heat Regulating System Using Parametric Design Tools to Achieve Thermal Comfort in Indoor Spaces in Hot Arid Regions
Authors: Aya M. H. Eissa, Ayman H. A. Mahmoud
Abstract:
When it comes to energy-efficient thermal regulation systems, natural systems offer not only an inspirational source of innovative strategies but also sustainable, and even regenerative, ones. Using biomimetic design, an energy-efficient thermal regulation system can be developed. Although conventional design process methods have achieved fairly efficient systems, they still have limitations that can be overcome with parametric design software. Accordingly, the main objective of this study is to apply, and assess the efficiency of, heat regulation strategies inspired by termite mounds in residential buildings' thermal regulation systems. Parametric design software is used to pave the way for further and more complex biomimetic design studies and implementations. A hot arid region is selected owing to the scarcity of research for this climate. First, in the analysis phase, the affecting stimuli and the parameters to be optimized are set, mimicking the natural system. Then, based on climatic data and using the parametric design software Grasshopper, the building form and the heights and areas of the openings are altered until an optimized solution is reached. Finally, the efficiency of the optimized system is assessed against a conventional system, first in terms of indoor airflow and indoor temperature, simulated with Ansys Fluent (CFD), and second in terms of the total solar radiation falling on the building envelope, calculated with Ladybug, a Grasshopper plugin. The results show an increase in the average indoor airflow speed from 0.5 m/s to 1.5 m/s, a slight decrease in temperature, and a 4% reduction in total radiation. In conclusion, although applying a single bio-inspired heat regulation strategy might not be enough to achieve an optimum system, the resulting system is more energy efficient than conventional ones, as it helps achieve indoor comfort through passive techniques, demonstrating the potential of parametric design software in biomimetic design.Keywords: biomimicry, heat regulation systems, hot arid regions, parametric design, thermal comfort
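The optimization loop described above (altering building form and opening dimensions until a better-performing candidate is found) can be illustrated with a minimal, hedged sketch. The objective function below is a hypothetical stand-in for the simulation-based metrics in the abstract; the actual study used Grasshopper, Ansys Fluent, and Ladybug rather than this toy grid search.

```python
import itertools

def toy_performance(opening_height_m, opening_area_m2):
    """Hypothetical objective (lower is better).

    Stands in for simulated solar gain minus an airflow benefit; the real study
    evaluated candidates with CFD and Ladybug, not with this formula.
    """
    solar_gain = 120.0 * opening_area_m2        # more glazing -> more solar gain
    airflow_benefit = 35.0 * opening_height_m   # higher openings -> better stack effect
    return solar_gain - airflow_benefit

# Discrete candidate values, as in a discrete parametric design search
heights = [0.5, 1.0, 1.5, 2.0]   # m
areas = [1.0, 2.0, 3.0, 4.0]     # m^2

best = min(itertools.product(heights, areas),
           key=lambda ha: toy_performance(*ha))
print(f"Best candidate: height={best[0]} m, area={best[1]} m^2, "
      f"score={toy_performance(*best):.1f}")
```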
Procedia PDF Downloads 294
3701 Study on Preparation and Storage of Composite Vegetable Squash of Tomato, Pumpkin and Ginger
Authors: K. Premakumar, R. G. Lakmali, S. M. A. C. U. Senarathna
Abstract:
Production and consumption of fruit and vegetable beverages have increased worldwide owing to increasingly health-conscious lifestyles. A study was therefore conducted to develop a composite vegetable squash combining the nutritional, medicinal, and organoleptic properties of tomato, pumpkin, and ginger. Based on the findings of several preliminary studies, five formulations with different tomato-to-pumpkin ratios were prepared, and their physico-chemical parameters (pH, total soluble solids (TSS), titratable acidity, ascorbic acid content, and total sugar) and organoleptic parameters (color, aroma, taste, nature, and overall acceptability) were analyzed. The best sample was then improved by adding 1% ginger (50% tomato + 50% pumpkin + 1% ginger). The best three formulations were selected for storage studies and stored at 30 °C and 70-75% relative humidity for 12 weeks. Physico-chemical parameters, organoleptic qualities, and microbial activity (total plate count, yeast and mold, E. coli) were analyzed during the storage period; protein, fat, and ash contents were also determined. The comparison of physico-chemical and sensory qualities of the stored squashes covered the full 12-week storage period. The nutritional analysis of the freshly prepared tomato-pumpkin squash formulations showed an increasing trend in titratable acidity, pH, total sugar, non-reducing sugar, and total soluble solids, and a decreasing trend in ascorbic acid and reducing sugar, over the storage period. The chemical analysis showed significant differences (p < 0.05) between the tested formulations, and the sensory analysis likewise showed significant differences (p < 0.05) in organoleptic characteristics between formulations. The highest overall acceptability was observed for the formulation with 50% tomato + 50% pumpkin + 1% ginger, and all formulations were microbiologically safe for consumption. Based on the physico-chemical characteristics, sensory attributes, and microbial tests, the composite vegetable squash with 50% tomato + 50% pumpkin + 1% ginger was selected as the best formulation and could be stored for 12 weeks without any significant change in quality characteristics.Keywords: nutritional analysis, formulations, sensory attributes, squash
Procedia PDF Downloads 199
3700 Bayesian Parameter Inference for Continuous Time Markov Chains with Intractable Likelihood
Authors: Randa Alharbi, Vladislav Vyshemirsky
Abstract:
Systems biology is an important field of science that focuses on studying the behaviour of biological systems. Modelling is required to produce a detailed description of the elements of a biological system, their functions, and their interactions. A well-designed model requires selecting a suitable mechanism that can capture the main features of the system, defining its essential components, and representing an appropriate law for the interactions between those components. Complex biological systems exhibit stochastic behaviour, so probabilistic models are suitable for describing and analysing them. The Continuous-Time Markov Chain (CTMC) is one such probabilistic model: it describes the system as a set of discrete states with continuous-time transitions between them, so the system is characterised by a set of probability distributions describing the transition from one state to another at a given time. The evolution of these probabilities through time is governed by the chemical master equation, which is analytically intractable but can be simulated. Uncertain parameters of such a model can be inferred using methods of Bayesian inference. Yet inference in such a complex system is challenging, as it requires evaluating the likelihood, which is intractable in most cases. Different statistical methods allow simulating from the model despite the intractability of the likelihood. Approximate Bayesian computation (ABC) is a common approach that relies on simulation of the model to approximate the intractable likelihood. Particle Markov chain Monte Carlo (PMCMC) is another approach, based on using sequential Monte Carlo to estimate the intractable likelihood. However, both methods are computationally expensive. In this paper we discuss the efficiency and possible practical issues of each method, taking their computational time into account. We demonstrate likelihood-free inference by analysing a model of the Repressilator with both methods, and a detailed investigation quantifies the difference between them in terms of efficiency and computational cost.Keywords: approximate Bayesian computation (ABC), continuous-time Markov chains, sequential Monte Carlo, particle Markov chain Monte Carlo (PMCMC)
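As a hedged illustration of the likelihood-free idea described above, the sketch below applies plain ABC rejection to a toy birth-death CTMC simulated with the Gillespie algorithm; the model, prior, "observed" summary, and tolerance are hypothetical stand-ins for the Repressilator analysis in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def gillespie_birth_death(birth_rate, death_rate, x0=10, t_end=10.0):
    """Simulate a birth-death CTMC and return the final count (toy model)."""
    t, x = 0.0, x0
    while True:
        rates = np.array([birth_rate, death_rate * x])
        total = rates.sum()
        if total == 0:
            return x
        t += rng.exponential(1.0 / total)
        if t > t_end:
            return x
        x += 1 if rng.random() < rates[0] / total else -1

def abc_rejection(observed, n_samples=20_000, tolerance=2.0):
    """ABC rejection: keep prior draws whose simulated summary is close to the data."""
    accepted = []
    for _ in range(n_samples):
        birth_rate = rng.uniform(0.0, 5.0)                      # hypothetical prior
        simulated = gillespie_birth_death(birth_rate, death_rate=0.1)
        if abs(simulated - observed) <= tolerance:               # distance on a summary
            accepted.append(birth_rate)
    return np.array(accepted)

posterior = abc_rejection(observed=20)                           # synthetic "observation"
print(f"accepted {posterior.size} draws, posterior mean ~ {posterior.mean():.2f}")
```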
Procedia PDF Downloads 203
3699 Corpus-Based Neural Machine Translation: Empirical Study Multilingual Corpus for Machine Translation of Opaque Idioms - Cloud AutoML Platform
Authors: Khadija Refouh
Abstract:
Culture-bound expressions have been a bottleneck for natural language processing (NLP) and comprehension, especially in machine translation (MT). In the last decade, the field of machine translation has advanced greatly. Neural machine translation (NMT), which applies artificial intelligence and deep neural networks to language processing, has recently achieved considerable improvements in translation quality, outperforming previous traditional translation systems for many language pairs. Despite this development, serious challenges remain when NMT translates culture-bound expressions, especially for low-resource language pairs such as Arabic-English and Arabic-French, which is not the case with well-established language pairs such as English-French. Machine translation of opaque idioms from English into French is likely to be more accurate than translating them from English into Arabic. For example, the Google Translate application translated the sentence “What a bad weather! It rains cats and dogs.” into Arabic as “يا له من طقس سيء! تمطر القطط والكلاب”, an inaccurate literal translation, whereas its French output, “Quel mauvais temps! Il pleut des cordes.”, used the accurate corresponding French idiom. This paper aims to perform NMT experiments towards better translation of opaque idioms using a high-quality, clean multilingual corpus collected analytically from human-generated idiom translations. AutoML Translation, a Google neural machine translation platform, is used as a custom translation model to improve the translation of opaque idioms. The automatic evaluation of the custom model is compared to Google NMT using the Bilingual Evaluation Understudy (BLEU) score, an algorithm for evaluating the quality of text that has been machine-translated from one natural language to another. Human evaluation is integrated to test the reliability of the BLEU score. The researcher examines syntactical, lexical, and semantic features using Halliday's functional theory.Keywords: multilingual corpora, natural language processing (NLP), neural machine translation (NMT), opaque idioms
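BLEU compares candidate translations against references via modified n-gram precision and a brevity penalty. The hedged sketch below uses NLTK's sentence-level BLEU on a made-up idiom example; the sentences, references, and smoothing choice are illustrative and not taken from the study's corpus or evaluation setup.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Hypothetical French references and two candidate outputs for one opaque idiom
references = [
    "il pleut des cordes".split(),
    "il pleut à verse".split(),
]
idiomatic_candidate = "il pleut des cordes".split()
literal_candidate = "il pleut des chats et des chiens".split()

# Smoothing avoids zero scores when some higher-order n-grams are absent
smooth = SmoothingFunction().method1

print("idiomatic:", sentence_bleu(references, idiomatic_candidate, smoothing_function=smooth))
print("literal:  ", sentence_bleu(references, literal_candidate, smoothing_function=smooth))
```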
Procedia PDF Downloads 149
3698 Comparison of the Thermal Behavior of Different Crystal Forms of Manganese(II) Oxalate
Authors: B. Donkova, M. Nedyalkova, D. Mehandjiev
Abstract:
Sparingly soluble manganese oxalate is an appropriate precursor for the preparation of nanosized manganese oxides, which have a wide range of technological applications. During the precipitation of manganese oxalate, three crystal forms can be obtained: α-MnC₂O₄.2H₂O (SG C2/c), γ-MnC₂O₄.2H₂O (SG P212121), and orthorhombic MnC₂O₄.3H₂O (SG Pcca). The thermolysis of α-MnC₂O₄.2H₂O has been extensively studied over the years, while literature data for the other two forms remain quite scarce. The aim of the present communication is to highlight the influence of the initial crystal structure on the decomposition mechanism of these three forms, their magnetic properties, the structure of the anhydrous oxalates, and the nature of the resulting oxides. The samples were characterised by XRD, SEM, DTA, TG, DSC, nitrogen adsorption, and in situ magnetic measurements. Dehydration proceeds in one step for α-MnC₂O₄.2H₂O and γ-MnC₂O₄.2H₂O and in three steps for MnC₂O₄.3H₂O. The dehydration enthalpies are 97, 149, and 132 kJ/mol, respectively, the last two being reported for the first time to the best of our knowledge. The magnetic measurements show that at room temperature all samples are antiferromagnetic; however, during the dehydration of α-MnC₂O₄.2H₂O the exchange interaction is preserved, for MnC₂O₄.3H₂O it changes to ferromagnetic above 35 °C, and for γ-MnC₂O₄.2H₂O it changes twice, from antiferromagnetic to ferromagnetic, above 70 °C. The experimental magnetic results are in accordance with the computational results obtained with the Wien2k code. The difference in the initial crystal structure of the forms determines different changes in specific surface area during dehydration and a different extent of Mn(II) oxidation during decomposition in air, both being highest for α-MnC₂O₄.2H₂O. Isothermal decomposition of the different oxalate forms shows that the type and physicochemical properties of the oxides obtained at the same annealing temperature depend on the precursor used. Based on the results of the non-isothermal and isothermal experiments and the different characterisation methods, a comparison of the nature, mechanism, and peculiarities of the thermolysis of the different crystal forms of manganese oxalate was made, clearly revealing the influence of the initial crystal structure. Acknowledgment: 'Science and Education for Smart Growth', project BG05M2OP001-2.009-0028, COST Action MP1306 'Modern Tools for Spectroscopy on Advanced Materials', and project DCOST-01/18 (Bulgarian Science Fund).Keywords: crystal structure, magnetic properties, manganese oxalate, thermal behavior
Procedia PDF Downloads 171
3697 Testing Serum Proteome between Elite Sprinters and Long-Distance Runners
Authors: Hung-Chieh Chen, Kuo-Hui Wang, Tsu-Lin Yeh
Abstract:
Proteomics represents the protein complement of the genome and protein expression levels at the level of functional genomics. This study adopted proteomic strategies to compare serum proteins among three groups: elite sprinters (sprint runner group, SR), long-distance runners (long-distance runner group, LDR), and untrained controls (control group, CON). Purposes: This study aims to identify serum proteins of elite sprinters and long-distance runners and to compare the composition of their serum proteomes. Methods: Serum protein fractions were separated by sodium dodecyl sulfate-polyacrylamide gel electrophoresis (SDS-PAGE) and analyzed by quantitative nano-LC-MS/MS-based proteomic profiling. One-way analysis of variance (ANOVA) with Scheffe post hoc comparison (α = 0.05) was used to determine whether each protein level differed significantly among the three groups. Results: (1) Among the 307 identified proteins, 26 were unique to the SR group and 18 were unique to the LDR group. (2) In the LDR group, the expression levels of 7 coagulation function-associated proteins (vitronectin, serum paraoxonase/arylesterase 1, fibulin-1, complement C3, vitamin K-dependent protein, inter-alpha-trypsin inhibitor heavy chain H3, and von Willebrand factor) were significantly lower than in the SR group. (3) Compared with the SR group, the LDR group also showed significantly lower expression levels of 2 antioxidant proteins (afamin and glutathione peroxidase 3). (4) The LDR group's expression levels of seven immune function-related proteins (Ig gamma-3 chain C region, Ig lambda-like polypeptide 5, clusterin, complement C1s subcomponent, complement factor B, complement C4-A, and complement C1q subcomponent subunit A) were also significantly lower than in the SR group. Conclusion: This study identified potential serum protein markers for elite sprinters and long-distance runners. The changes in the regulation of coagulation-, antioxidant-, or immune function-specific proteins may also offer further clinical applications for these two types of track athletes.Keywords: biomarkers, coagulation, immune response, oxidative stress
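The group comparison described in Methods (one-way ANOVA followed by a Scheffe post hoc test at α = 0.05) can be sketched as follows. The protein-level data below are synthetic and purely illustrative; only the statistical procedure mirrors the abstract.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Synthetic expression levels of one protein in three groups (arbitrary units)
groups = {
    "SR":  rng.normal(10.0, 1.0, 12),
    "LDR": rng.normal(8.5, 1.0, 12),
    "CON": rng.normal(9.5, 1.0, 12),
}
data = list(groups.values())
k = len(data)
n_total = sum(len(g) for g in data)

# One-way ANOVA across the three groups
f_stat, p_value = stats.f_oneway(*data)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")

# Scheffe post hoc: pairwise statistic compared against (k-1) * critical F
ms_within = sum(((g - g.mean()) ** 2).sum() for g in data) / (n_total - k)
f_crit = stats.f.ppf(0.95, k - 1, n_total - k)

names = list(groups)
for i in range(k):
    for j in range(i + 1, k):
        gi, gj = data[i], data[j]
        scheffe = (gi.mean() - gj.mean()) ** 2 / (ms_within * (1 / len(gi) + 1 / len(gj)))
        significant = scheffe > (k - 1) * f_crit
        print(f"{names[i]} vs {names[j]}: Scheffe stat = {scheffe:.2f}, "
              f"significant at alpha=0.05: {significant}")
```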
Procedia PDF Downloads 117
3696 Association of a Genetic Polymorphism in Cytochrome P450, Family 1 with Risk of Developing Esophagus Squamous Cell Carcinoma
Authors: Soodabeh Shahid Sales, Azam Rastgar Moghadam, Mehrane Mehramiz, Malihe Entezari, Kazem Anvari, Mohammad Sadegh Khorrami, Saeideh Ahmadi Simab, Ali Moradi, Seyed Mahdi Hassanian, Majid Ghayour-Mobarhan, Gordon A. Ferns, Amir Avan
Abstract:
Background: Esophageal cancer has been reported as the eighth most common cancer worldwide and the seventh cause of cancer-related death in men. Recent studies have revealed that cytochrome P450, family 1, subfamily B, polypeptide 1 (CYP1B1), which plays a role in metabolizing xenobiotics, is associated with different cancers. Therefore, in the present study, we investigated the impact of CYP1B1 rs1056836 in esophageal squamous cell carcinoma (ESCC) patients. Method: 317 subjects, with and without ESCC, were recruited. DNA was extracted and genotyped via a real-time PCR-based TaqMan assay. Kaplan-Meier curves were used to assess overall and progression-free survival. To evaluate the relationships between patients' clinicopathological data, genotypic frequencies, disease prognosis, and patient survival, Pearson chi-square and t-tests were used. Logistic regression was used to assess the association between the risk of ESCC and genotype. Results: The genotypic frequencies for GG, GC, and CC were 58.6%, 29.8%, and 11.5%, respectively, in the healthy group and 51.8%, 36.14%, and 12% in the ESCC group. Under the recessive genetic inheritance model, an association between the GG genotype and the stage of ESCC was found, but no statistically significant association was found between this variant and the risk of ESCC. Patients with the GG genotype had a decreased risk of nodal metastasis in comparison with patients with the CC/CG genotypes, although this link was not statistically significant. Conclusion: Our findings suggest CYP1B1 rs1056836 as a potential biomarker for ESCC patients, supporting further studies in larger populations and different ethnic groups. Moreover, further investigations are warranted to evaluate the association of this emerging marker with dietary intake and lifestyle.Keywords: Cytochrome P450, esophagus squamous cell carcinoma, dietary intake, lifestyle
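The genotype-risk analysis mentioned above can be illustrated with a hedged logistic-regression sketch. The genotype and case/control data below are synthetic, and the GG vs. GC/CC coding is only one possible inheritance-model coding, not necessarily the exact one used in the study.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

# Synthetic cohort: 1 = GG genotype, 0 = GC or CC (one possible model coding)
n = 317
gg = rng.binomial(1, 0.55, n)
# Synthetic case/control status loosely correlated with genotype (illustrative only)
logit_p = -0.4 + 0.2 * gg
escc = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(gg.astype(float))       # intercept + genotype indicator
model = sm.Logit(escc, X).fit(disp=False)   # logistic regression of ESCC status on genotype

odds_ratio = np.exp(model.params[1])
ci_low, ci_high = np.exp(model.conf_int()[1])
print(f"OR for GG vs GC/CC: {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f}), "
      f"p = {model.pvalues[1]:.3f}")
```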
Procedia PDF Downloads 199