Search results for: daily probability model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 19528

9508 Loading and Unloading Scheduling Problem in a Multiple-Multiple Logistics Network: Modelling and Solving

Authors: Yasin Tadayonrad

Abstract:

Most supply chain networks have many nodes, from the suppliers’ side to the customers’ side, and each node sends/receives raw materials/products to/from other nodes. One of the major concerns in this kind of supply chain network is finding the best schedule for loading/unloading the shipments through the whole network such that all the constraints in the source and destination nodes are met and all the shipments are delivered on time. One of the main constraints in this problem is the loading/unloading capacity of each source/destination node at each time slot (e.g., per week/day/hour). Because of the different characteristics of different products/groups of products, the capacity of each node might differ for each group of products. In most supply chain networks (especially in the fast-moving consumer goods industry), there are different planners/planning teams working separately in different nodes to determine the loading/unloading timeslots in source/destination nodes to send/receive the shipments. In this paper, a mathematical model has been proposed to find the best timeslots for loading/unloading the shipments, minimizing the overall delays subject to the loading/unloading capacity of each node, the required delivery date of each shipment (considering the lead times), and the working days of each node. This model was implemented in Python and solved using Python-MIP on a sample data set. Finally, the idea of a heuristic algorithm has been proposed as a way of improving the solution method that helps to implement the model on larger data sets in real business cases, including more nodes and shipments.
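A minimal Python-MIP sketch of this kind of timeslot assignment model is shown below. It is an illustration only, not the authors' formulation: the shipments, node names, capacities, due slots and the simplified delay objective are hypothetical, and lead times are not modeled.

```python
# Minimal sketch of a loading/unloading timeslot assignment MIP with Python-MIP.
# All data (shipments, nodes, capacities, due slots) are hypothetical examples.
from mip import Model, xsum, minimize, BINARY

shipments = ["s1", "s2", "s3"]
slots = range(5)                          # discrete timeslots (e.g. days)
source = {"s1": "A", "s2": "A", "s3": "B"}
dest = {"s1": "C", "s2": "D", "s3": "C"}
due = {"s1": 2, "s2": 3, "s3": 1}         # required delivery slot per shipment
load_cap = {(n, t): 1 for n in ("A", "B") for t in slots}    # loading capacity per node/slot
unload_cap = {(n, t): 1 for n in ("C", "D") for t in slots}  # unloading capacity per node/slot

m = Model()
# x[s, t] = 1 if shipment s is loaded (and, for simplicity, unloaded) in slot t
x = {(s, t): m.add_var(var_type=BINARY) for s in shipments for t in slots}

# each shipment gets exactly one timeslot
for s in shipments:
    m += xsum(x[s, t] for t in slots) == 1

# respect loading capacity at sources and unloading capacity at destinations
for t in slots:
    for n in ("A", "B"):
        m += xsum(x[s, t] for s in shipments if source[s] == n) <= load_cap[n, t]
    for n in ("C", "D"):
        m += xsum(x[s, t] for s in shipments if dest[s] == n) <= unload_cap[n, t]

# objective: minimize total delay beyond each shipment's required delivery slot
m.objective = minimize(xsum(max(t - due[s], 0) * x[s, t] for s in shipments for t in slots))
m.optimize()
print({s: next(t for t in slots if x[s, t].x >= 0.99) for s in shipments})
```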

Keywords: supply chain management, transportation, multiple-multiple network, timeslots management, mathematical modeling, mixed integer programming

Procedia PDF Downloads 89
9507 Valorization of a Forest Waste, Modified P-Brutia Cones, by Biosorption of Methyl Green

Authors: Derradji Chebli, Abdallah Bouguettoucha, Abdelbaki Reffas, Khalil Guediri, Abdeltif Amrane

Abstract:

The removal of Methyl Green dye (MG) from aqueous solutions using modified P-brutia cones (PBH and PBN) has been investigated in this work. The effects of physical parameters such as pH, temperature, initial MG concentration and ionic strength on the sorption of the dye were examined in batch experiments. Adsorption of MG was conducted at the natural pH of 4.5 because the dye is only stable in the pH range of 3.8 to 5. It was observed in the experiments that the P-brutia cones treated with NaOH (PBN) exhibited higher affinity and adsorption capacity for MG than the P-brutia cones treated with HCl (PBH), and the biosorption capacity of the modified P-brutia cones (PBN and PBH) was enhanced by increasing the temperature. This is confirmed by the thermodynamic parameters (ΔG° and ΔH°), which show that the adsorption of MG was spontaneous and endothermic in nature. The positive values of ΔS° suggested an increase in the randomness for both adsorbents (PBN and PBH) during the adsorption process. The pseudo-first-order, pseudo-second-order and intraparticle diffusion kinetic models were examined to analyze the sorption process; they showed that the pseudo-second-order model best describes the adsorption of MG on PBN and PBH, with a correlation coefficient R² > 0.999. The ionic strength was shown to have a negative impact on the adsorption of MG on the two supports. A reduction of 68.5% of the adsorption capacity at Ce = 30 mg/L was found for PBH, while PBN did not show a significant influence of ionic strength on adsorption, especially in the presence of NaCl. Among the tested isotherm models, the Langmuir isotherm was found to be the most relevant to describe MG sorption onto modified P-brutia cones, with a correlation factor R² > 0.999. The adsorption capacity of P-brutia cones was thus confirmed for the removal of a dye, MG, from aqueous solution. We also note that P-brutia cones are a readily available forest material and a low-cost biomaterial.
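As an illustration of how the kinetic and isotherm models named above are commonly fitted, the sketch below applies scipy's curve_fit to the standard pseudo-second-order and Langmuir equations. The (t, qt) and (Ce, qe) data points are hypothetical placeholders, not the authors' measurements.

```python
# Fitting the pseudo-second-order kinetic model and the Langmuir isotherm
# to hypothetical batch adsorption data (values are illustrative only).
import numpy as np
from scipy.optimize import curve_fit

def pseudo_second_order(t, qe, k2):
    # qt = k2*qe^2*t / (1 + k2*qe*t)
    return (k2 * qe**2 * t) / (1.0 + k2 * qe * t)

def langmuir(ce, qmax, kl):
    # qe = qmax*KL*Ce / (1 + KL*Ce)
    return (qmax * kl * ce) / (1.0 + kl * ce)

# hypothetical kinetic data: contact time (min) vs adsorbed amount (mg/g)
t = np.array([5, 10, 20, 40, 60, 90, 120], dtype=float)
qt = np.array([12.0, 18.5, 24.0, 28.0, 29.5, 30.2, 30.5])

# hypothetical equilibrium data: Ce (mg/L) vs qe (mg/g)
ce = np.array([2.0, 5.0, 10.0, 20.0, 30.0, 50.0])
qe_obs = np.array([15.0, 24.0, 31.0, 36.0, 38.0, 39.5])

kin_params, _ = curve_fit(pseudo_second_order, t, qt, p0=[30.0, 0.01])
iso_params, _ = curve_fit(langmuir, ce, qe_obs, p0=[40.0, 0.1])

def r_squared(y, y_hat):
    # simple R^2, as used in the abstract to compare models
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return 1.0 - ss_res / ss_tot

print("pseudo-2nd-order qe, k2:", kin_params,
      "R2:", r_squared(qt, pseudo_second_order(t, *kin_params)))
print("Langmuir qmax, KL:", iso_params,
      "R2:", r_squared(qe_obs, langmuir(ce, *iso_params)))
```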

Keywords: adsorption, p-brutia cones, forest wastes, dyes, isotherm

Procedia PDF Downloads 371
9506 A Dual Spark Ignition Timing Influence for the High Power Aircraft Radial Engine Using a CFD Transient Modeling

Authors: Tytus Tulwin, Ksenia Siadkowska, Rafał Sochaczewski

Abstract:

A high power radial reciprocating engine is characterized by a large displacement volume of the combustion chamber. Choosing the right moment for ignition is important for high performance as well as high reliability and ignition certainty. This work presents methods of simulating the ignition process and its impact on engine parameters. For the given conditions, the flame speed is limited when deflagration combustion takes place. Therefore, the larger length scale of the combustion chamber, compared to a standard-size automotive engine, makes the combustion take a longer time to propagate. In order to speed up the mixture burn-up time, a second spark is introduced. A transient Computational Fluid Dynamics model capable of simulating multi-cycle engine processes was developed. The CFD model consists of the ECFM-3Z combustion and species transport models. The relative ignition timing difference between the two spark sources is kept constant. The temperature distribution on the engine walls was calculated in a separate conjugate heat transfer simulation. The in-cylinder pressure validation was performed for take-off power flight conditions. The influence of ignition timing on parameters such as in-cylinder temperature or rate of heat release was analyzed, and the most advantageous spark timing for the highest power output was chosen. The conditions around the spark plug locations during the pre-ignition period were also analyzed. This work has been financed by the Polish National Centre for Research and Development, INNOLOT, under Grant Agreement No. INNOLOT/I/1/NCBR/2013.

Keywords: CFD, combustion, ignition, simulation, timing

Procedia PDF Downloads 292
9505 Anti-lipidemic and Hematinic Potentials of Moringa Oleifera Leaves: A Clinical Trial on Type 2 Diabetic Subjects in a Rural Nigerian Community

Authors: Ifeoma C. Afiaenyi, Elizabeth K. Ngwu, Rufina N. B. Ayogu

Abstract:

Diabetes has crept into the rural areas of Nigeria, causing devastating effects on its sufferers, most of whom cannot afford diabetic medications. Moringa oleifera has been used extensively in animal models to demonstrate its antilipidaemic and haematinic qualities; however, there is a scarcity of data on the effect of graded levels of Moringa oleifera leaves on the lipid profile and hematological parameters of human diabetic subjects. This study determined the effect of Moringa oleifera leaves on the lipid profile and hematological parameters of type 2 diabetic subjects in Ukehe, a rural Nigerian community. Twenty-four adult male and female diabetic subjects were purposively selected for the study and divided into four groups of six subjects each. The diets used in the study were isocaloric. A control group (diabetics, group 1) was fed diets without Moringa oleifera leaves. Experimental groups 2, 3 and 4 received 20 g, 40 g and 60 g of Moringa oleifera leaves daily, respectively, in addition to the diets. The subjects' lipid profile and hematological parameters were measured before and at the end of the feeding trial, which lasted fourteen days. The data obtained were analyzed using the Statistical Product and Service Solutions (SPSS) program for Windows, version 21. A paired-samples t-test was used to compare the means of values collected before and after the feeding trial within the groups, and significance was accepted at p < 0.05. There was a non-significant (p > 0.05) decrease in the mean total cholesterol of the subjects in groups 1, 2 and 3 after the feeding trial. There was a non-significant (p > 0.05) decrease in the mean triglyceride levels of the subjects in group 1 after the feeding trial. Groups 1 and 3 subjects had a non-significant (p > 0.05) decrease in their mean low-density lipoprotein (LDL) cholesterol after the feeding trial. Groups 1, 2 and 4 had a significant (p < 0.05) increase in their mean high-density lipoprotein (HDL) cholesterol after the feeding trial. A significant (p < 0.05) decrease in the mean hemoglobin level was observed only in group 4 subjects. Similarly, there was a significant (p < 0.05) decrease in the mean packed cell volume of group 4 subjects. It was only in group 4 that a significant (p < 0.05) decrease in the mean white blood cell count of the subjects was also observed. The changes observed in the parameters assessed were not dose-dependent. Therefore, a similar study of longer duration and with more samples is imperative to confirm these results.
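The within-group before/after comparison described above is a paired-samples t-test. A minimal scipy sketch is given below; the cholesterol values are made-up illustrations, not the trial data.

```python
# Paired-samples t-test on hypothetical before/after total cholesterol values (mg/dL)
# for one feeding group; the numbers are illustrative, not the trial data.
from scipy import stats

before = [212, 198, 225, 240, 205, 218]
after = [204, 195, 219, 231, 200, 215]

t_stat, p_value = stats.ttest_rel(before, after)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
# significance accepted at p < 0.05, as in the study
print("significant at p < 0.05" if p_value < 0.05 else "not significant at p < 0.05")
```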

Keywords: anemia, diabetic subjects, lipid profile, moringa oleifera

Procedia PDF Downloads 194
9504 Integrated Dynamic Analysis of Semi-Submersible Flap Type Concept

Authors: M. Rafiur Rahman, M. Mezbah Uddin, Mohammad Irfan Uddin, M. Moinul Islam

Abstract:

With the rapid development of the offshore renewable energy industry, research activities on harnessing power from offshore wind and wave energy are increasing day by day. Integration of wind turbines and wave energy converters into one combined semi-submersible platform may be a cost-effective and beneficial option. In this paper, the coupled integrated dynamic analysis in the time domain (TD) of a simplified semi-submersible flap type concept (SFC) is accomplished via the state-of-the-art numerical code referred to as Simo-Riflex-Aerodyn (SRA). This concept is a combined platform consisting of a semi-submersible floater supporting a 5 MW horizontal axis wind turbine (WT) and three elliptically shaped flap type wave energy converters (WECs) on three pontoons. The main focus is to validate the numerical model of the SFC against experimental results and to perform the frequency domain (FD) and TD response analysis. The numerical analysis uses potential flow theory for hydrodynamics and blade element momentum (BEM) theory for aerodynamics. A variety of environmental conditions encompassing the functional and survival conditions for short-term sea states (1-hour simulations) are tested to evaluate the sustainability of the SFC. The numerical analysis is performed in full scale. Finally, the time domain analysis of heave, pitch and surge motions is performed numerically using SRA and compared with the experimental results. Due to the simplification of the model, there are some discrepancies, which are discussed briefly.

Keywords: coupled integrated dynamic analysis, SFC, time domain analysis, wave energy converters

Procedia PDF Downloads 216
9503 Knowledge Transfer through Entrepreneurship: From Research at the University to the Consolidation of a Spin-off Company

Authors: Milica Lilic, Marina Rosales Martínez

Abstract:

Academic research cannot be oblivious to social problems and needs, so projects that have the capacity for transformation and impact should have the opportunity to go beyond University circles and bring benefit to society. Apart from patents and R&D research contracts, this opportunity can be achieved through entrepreneurship as one of the most direct tools to turn knowledge into a tangible product. Thus, as an example of good practice, this paper analyzes the case of an institutional entrepreneurship program carried out at the University of Seville, aimed at researchers interested in assessing the business opportunity of their research and expanding their knowledge of procedures for the commercialization of technologies used in academic projects. The program is based on three pillars: training, teamwork sessions and networking. The training includes aspects such as product-client fit, technical-scientific and economic-financial feasibility of a spin-off, institutional organization and decision making, public and private fundraising, and making the spin-off visible in the business world (social networks, key contacts, corporate image and ethical principles). The teamwork sessions, in turn, are guided by a mentor and aimed at identifying research results with potential and clarifying financial needs and procedures to obtain the resources necessary for the consolidation of the spin-off. This part of the program is considered crucial for the participants to convert their academic findings into a business model. Finally, the networking part is oriented to workshops on the digital transformation of a project, the accurate communication of the product or service a spin-off offers to society, and the development of transferable skills necessary for managing a business. This blended program culminates in a final stage where each team, in an elevator pitch format, presents its research turned into a business model to an experienced jury. The awarded teams get starting capital for their enterprise and enjoy the opportunity of formally consolidating their spin-off company at the University. Studying the results of the program, it has been shown that many researchers have basic or no knowledge of entrepreneurship skills and of the different ways to turn their research results into a business model with a direct impact on society. Therefore, the described program has been used as an example to highlight the importance of knowledge transfer at the University and the role that this institution should have in providing the tools to promote entrepreneurship within it. Keeping in mind that the University is defined by three main activities (teaching, research and knowledge transfer), it is safe to conclude that the latter, and entrepreneurship as an expression of it, is crucial for the other two to fulfil their purpose.

Keywords: good practice, knowledge transfer, a spin-off company, university

Procedia PDF Downloads 139
9502 The Relationship between the Content of Inner Human Experience and Well-Being: An Experience Sampling Study

Authors: Xinqi Guo, Karen R. Dobkins

Abstract:

Background and Objectives: Humans are probably the only animals whose minds are constantly filled with thoughts, feelings and emotions. Previous studies have investigated the human mind along different dimensions, including the proportion of time spent not being present, its representative format, its personal relevance, its temporal locus, and affect valence. The current study aims at characterizing the human mind by employing the Experience Sampling Method (ESM), a self-report research procedure for studying daily experience. This study emphasizes answering the following questions: 1) How do the contents of inner experience vary across demographics? 2) Are certain types of inner experiences correlated with the level of mindfulness and mental well-being (e.g., are people who spend more time being present happier, and are more mindful people more at-present?)? 3) Will being prompted to report one’s inner experience increase mindfulness and mental well-being? Methods: Participants were recruited from the subject pool of UC San Diego or from social media. They began by filling out two questionnaires: 1) the Five Facet Mindfulness Questionnaire-Short Form, and 2) the Warwick-Edinburgh Mental Well-being Scale, plus demographic information. Then they participated in the ESM part by responding to prompts which contained questions about their real-time inner experience: whether they were 'at-present', 'mind-wandering', or 'zoned-out'. The temporal locus, clarity, affect valence and personal importance of the thought they had the moment before the prompt were also assessed. A mobile app, 'RealLife Exp', randomly delivered these prompts 3 times/day for 6 days during wake-time. After the 6 days, participants completed questionnaires (1) and (2) again. Their changes in score were compared to a control group who did not participate in the ESM procedure (yet completed (1) and (2) one week apart). Results: Results are currently preliminary as we continue to collect data. So far, there is a trend that participants are present, mind-wandering and zoned-out for about 53%, 23% and 24% of wake-time, respectively. Participants' thoughts are rated as clearer and more neutral when they are present vs. mind-wandering. Mind-wandering thoughts are 66% about the past, with 80% consisting of inner speech. Discussion and Conclusion: This study investigated the subjective account of the human mind with a tool of high ecological validity, and it broadens our understanding of the relationship between the contents of mind and well-being.

Keywords: experience sampling method, meta-memory, mindfulness, mind-wandering

Procedia PDF Downloads 129
9501 Finite Element Analysis of Glass Facades Supported by Pre-Tensioned Cable Trusses

Authors: Khair Al-Deen Bsisu, Osama Mahmoud Abuzeid

Abstract:

Significant technological advances have been achieved in the design and construction of steel and glass buildings in the last two decades. The metal glass support frame has been replaced by more sophisticated technological solutions, for example, point-fixed glazing systems. The minimization of the visual mass has reached extensive possibilities through the evolution of technology in glass production and a better understanding of the structural potential of glass itself, the technological development of bolted fixings, the introduction of glazing support attachments for glass suspension systems, and the use for structural stabilization of cables that reduce the amount of metal used to a minimum. The variability of solutions for tension structures, allied to the difficulties related to geometric and material non-linear behavior, usually rules out analytical solutions, leaving numerical analysis as the only general approach to the design and analysis of tension structures. With their low stiffness, light weight, and small damping, tension structures are obviously geometrically nonlinear. In fact, analysis of a cable truss is not only one of the most difficult nonlinear analyses, because the analysis path may have rigid-body modes, but also a time-consuming procedure. Non-linear theory allowing for large deflections is used. The flexibility of the supporting members was observed to influence the stresses in the pane considerably in some cases. No other class of architectural structural systems is as dependent upon the use of digital computers as tensile structures. Besides complexity, the process of design and analysis of tension structures presents a series of specificities, which usually lead to the use of special purpose programs instead of general purpose programs (GPPs), such as ANSYS. In a special purpose program, part of the design know-how is embedded in program routines, and it is very probable that this type of program will be the choice of the end user in design offices. GPPs, on the other hand, offer a range of analysis types and modeling options; besides, traditional GPPs are constantly being tested by a large number of users and are updated according to their actual demands. This work discusses the use of ANSYS for the analysis and design of tension structures, such as cable truss structures under wind and gravity loadings. A model to describe the glass panels working in coordination with the cable truss was proposed, and based on this model, an FEM model of the glass panels working in coordination with the cable truss was established.

Keywords: glass construction material, facades, finite element, pre-tensioned cable truss

Procedia PDF Downloads 274
9500 Liquefaction Phenomenon in the Kathmandu Valley during the 2015 Earthquake of Nepal

Authors: Kalpana Adhikari, Mandip Subedi, Keshab Sharma, Indra P. Acharya

Abstract:

The Gorkha, Nepal earthquake of moment magnitude (Mw) 7.8 struck the central region of Nepal on April 25, 2015, with the epicenter about 77 km northwest of Kathmandu Valley. The peak ground acceleration observed during the earthquake was 0.18g. This motion induced several geotechnical effects such as landslides, foundation failures, liquefaction, lateral spreading and settlement, and local amplification. An aftershock of moment magnitude (Mw) 7.3 hit northeast of Kathmandu on May 12, 17 days after the main shock, and caused additional damage. Kathmandu, the largest city in Nepal, has a population of over four million. As the Kathmandu Valley deposits are composed mainly of sand, silt and clay layers with a shallow groundwater table, liquefaction is highly anticipated; extensive liquefaction was also observed in Kathmandu Valley during the 1934 Nepal-Bihar earthquake. Field investigations were carried out in Kathmandu Valley immediately after the Mw 7.8, April 25 main shock and the Mw 7.3, May 12 aftershock. Geotechnical investigation of both liquefied and non-liquefied sites was conducted after the earthquake. This paper presents observations of liquefaction and liquefaction-induced damage, and the liquefaction potential assessment based on Standard Penetration Tests (SPT) for liquefied and non-liquefied sites. An SPT-based semi-empirical approach has been used for evaluating the liquefaction potential of the soil, and the Liquefaction Potential Index (LPI) has been used to determine the liquefaction probability. Recorded ground motions from the event are presented. The geological aspects of Kathmandu Valley and local site effects on the occurrence of liquefaction are described briefly, as are the observed liquefaction case studies. Typically, these are sand boils formed by freshly ejected sand forced out of over-pressurized sub-strata. At most sites, sand was ejected onto agricultural fields, forming deposits that varied from millimetres to a few centimetres thick. Liquefaction-induced damage to structures in these areas was not significant, except that buildings in some places tilted slightly. Boiled soils at liquefied sites were collected and the particle size distributions of the ejected soils were analyzed. SPT blow counts and the soil profiles at ten liquefied and non-liquefied sites were obtained. The factors of safety against liquefaction with depth and the liquefaction potential index of the ten sites were estimated and compared with the liquefaction observed after the 2015 Gorkha earthquake. The liquefaction potential indices obtained from the analysis were found to be consistent with the field observations. The field observations, along with the results from the liquefaction assessment, were compared with the existing liquefaction hazard map. It was found that the existing hazard maps are unrepresentative and underestimate the liquefaction susceptibility in Kathmandu Valley. The lessons learned from the liquefaction during this earthquake are also summarized in this paper, and some recommendations are made for seismic liquefaction mitigation in the Kathmandu Valley.
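The LPI combines the factor of safety against liquefaction over depth with a depth weighting. A minimal sketch of the common Iwasaki-type calculation is given below; the depth profile and factor-of-safety values are hypothetical, not the borehole data from the study.

```python
# Sketch of a Liquefaction Potential Index (LPI) calculation in the Iwasaki form:
# LPI = sum over 0-20 m of F(z) * w(z) * dz, with F = 1 - FS where FS < 1 (else 0)
# and depth weighting w(z) = 10 - 0.5*z. Factor-of-safety values below are hypothetical.
import numpy as np

depths = np.array([1.5, 3.0, 4.5, 6.0, 7.5, 9.0, 10.5, 12.0])  # m, layer mid-depths
fs = np.array([0.80, 0.70, 0.90, 1.10, 1.30, 0.95, 1.40, 1.60])  # FS against liquefaction
dz = 1.5                                                          # layer thickness, m

F = np.where(fs < 1.0, 1.0 - fs, 0.0)        # only layers with FS < 1 contribute
w = np.clip(10.0 - 0.5 * depths, 0.0, None)  # weighting goes to zero at 20 m depth
lpi = float(np.sum(F * w * dz))

print(f"LPI = {lpi:.2f}")
# Commonly, LPI > 15 is read as very high liquefaction potential and LPI < 5 as low.
```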

Keywords: factor of safety, geotechnical investigation, liquefaction, Nepal earthquake

Procedia PDF Downloads 321
9499 Development and Testing of Health Literacy Scales for Chinese Primary and Secondary School Students

Authors: Jiayue Guo, Lili You

Abstract:

Background: Child and adolescent health is crucial both for personal well-being and for the nation's future health landscape. Health literacy (HL) is important in enabling adolescents to self-manage their health, a fundamental step towards health empowerment. However, there are limited tools for assessing HL among elementary and junior high school students. This study aims to construct and validate a test-based HL scale for Chinese students, offering a scientific reference for cross-cultural HL tool development. Methods: We conducted a cross-sectional online survey. Participants were recruited using a stratified cluster random sampling method, yielding a total of 4,189 Chinese in-school primary and secondary students. The development of the scale comprised defining the concept of HL, establishing the item indicator system, screening items (7 health content dimensions), and evaluating reliability and validity. Delphi-method expert consultation was used to screen items, the Rasch model was used for quality analysis, and Cronbach's alpha coefficient was used to examine internal consistency. Results: We developed four versions of the HL scale, each with a total score of 100, encompassing seven key health areas: hygiene, nutrition, physical activity, mental health, disease prevention, safety awareness, and digital health literacy. Each version measures four dimensions of health competencies: knowledge, skills, motivation, and behavior. After the second round of expert consultation, the average importance score of each item given by the experts was 4.5–5.0, and the coefficient of variation was 0.000–0.174. The knowledge and skills dimensions use judgment-based and multiple-choice questions, with the Rasch model confirming unidimensionality at a 5.7% residual variance. The behavioral and motivational dimensions, measured with scale-type items, demonstrated internal consistency via Cronbach's alpha and strong inter-item correlation, with KMO values of 0.924 and 0.787, respectively. Bartlett's test of sphericity, with p-values < 0.001, further substantiates the scale's reliability. Conclusions: The new test-based scale, designed to evaluate competencies within a multifaceted framework, aligns with current international adolescent literacy theories and China's health education policies, focusing not only on knowledge acquisition but also on the application of health-related thinking and behaviors. The scale can be used as a comprehensive tool for HL evaluation and as a reference for other countries.
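The internal-consistency check mentioned above is Cronbach's alpha, which can be computed directly from the item-response matrix. The sketch below uses a small hypothetical response matrix, not the survey data.

```python
# Cronbach's alpha: alpha = k/(k-1) * (1 - sum of item variances / variance of total scores).
# The response matrix below (respondents x items, Likert-type scores) is hypothetical.
import numpy as np

responses = np.array([
    [4, 5, 4, 4],
    [3, 4, 3, 4],
    [5, 5, 4, 5],
    [2, 3, 2, 3],
    [4, 4, 5, 4],
], dtype=float)

def cronbach_alpha(x):
    k = x.shape[1]                           # number of items
    item_var = x.var(axis=0, ddof=1).sum()   # sum of item variances
    total_var = x.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1.0 - item_var / total_var)

print(f"Cronbach's alpha = {cronbach_alpha(responses):.3f}")
```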

Keywords: adolescent health, Chinese, health literacy, rasch model, scale development

Procedia PDF Downloads 18
9498 Protective Effect of Nigella sativa Oil and Its Neutral Lipid Fraction on Ethanol-Induced Hepatotoxicity in Rat Model

Authors: Asma Mosbah, Hanane Khither, Kamelia Mosbah, Noreddine Kacem Chaouche, Mustapha Benboubetra

Abstract:

In the present investigation, total oil (TO) and its neutral lipid fraction (NLF) extracted from the seeds of the well-studied medicinal plant Nigella sativa were tested for their therapeutic effect on alcohol-induced liver injury in a rat model. Male albino rats were divided into five groups of eight animals each and fed a Lieber–DeCarli liquid diet containing 5% ethanol for the experimental groups and dextran for the control group, for a period of six weeks. Afterwards, the rats received, orally, treatments with the Nigella sativa extracts (TO, NLF) and N-acetylcysteine (NAC) as a positive control for four weeks. The activities of the antioxidant enzymes superoxide dismutase (SOD) and catalase (CAT), as well as malondialdehyde (MDA) and reduced glutathione (GSH) levels, were measured. Biochemical parameters for kidney and liver function in treated and non-treated rats were evaluated throughout the time course of the experiment, and liver histological changes were taken into account. The enzymatic activities of both SOD and CAT increased significantly in rats treated with NLF and TO. MDA levels decreased while GSH levels increased significantly in TO- and NLF-treated rats. We equally noted a decrease in the liver enzymes AST, ALT, and ALP. Microscopic observation of slides from the liver of ethanol-treated rats showed severe hepatotoxicity with lesions, whereas treatment with the fractions led to an improvement in the liver lesions and a marked reduction in necrosis and infiltration. In conclusion, both extracts of Nigella sativa seeds, TO and NLF, possess an important therapeutic protective potential against ethanol-induced hepatotoxicity in rats.

Keywords: alcohol-induced hepatotoxicity, antioxidant enzymes, Nigella sativa seeds, oil fractions

Procedia PDF Downloads 163
9497 Numerical Calculation and Analysis of Fine Echo Characteristics of Underwater Hemispherical Cylindrical Shell

Authors: Hongjian Jia

Abstract:

A finite-length cylindrical shell with a spherical cap is a typical engineering approximation model of actual underwater targets. Research on the omni-directional acoustic scattering characteristics of this target model can provide a favorable basis for the detection and identification of actual underwater targets. The elastic resonance characteristics of the target result from the combined effect of the target length, shell-thickness ratio and material. Under different materials and geometric dimensions, the coincidence resonance characteristics of the target differ markedly. To address this problem, this paper obtains the omni-directional acoustic scattering field of the underwater hemispherical cylindrical shell by numerical calculation and studies, in turn, the influence of the target geometric parameters (length, shell-thickness ratio) and material parameters on the coincidence resonance characteristics of the target. The study found that the formant interval is not a stable value and changes with the incident angle. The formant interval is less affected by the target length and shell-thickness ratio and is significantly affected by the material properties, which is an effective feature for classifying and identifying targets of different materials. A quadratic polynomial is used to fit the relationship between the formant interval and the angle. The results show that the three fitting coefficients of the stainless steel and aluminum targets are significantly different and can therefore be used as effective feature parameters to characterize the target materials.
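The quadratic fit of formant interval against incident angle described above is a standard least-squares polynomial fit. The sketch below shows the idea with numpy; the (angle, interval) pairs are hypothetical placeholders, not the computed scattering data.

```python
# Quadratic fit of formant (resonance-peak) interval versus incident angle.
# The angle/interval values below are hypothetical, not the paper's results.
import numpy as np

angle_deg = np.array([0, 15, 30, 45, 60, 75, 90], dtype=float)
interval_khz = np.array([2.10, 2.05, 1.92, 1.75, 1.60, 1.52, 1.48])

# coefficients [a, b, c] of interval = a*angle^2 + b*angle + c
coeffs = np.polyfit(angle_deg, interval_khz, deg=2)
fitted = np.polyval(coeffs, angle_deg)
print("fit coefficients (a, b, c):", coeffs)
print("max absolute fit error:", np.max(np.abs(fitted - interval_khz)))
```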

Keywords: hemispherical cylindrical shell, fine echo characteristics, geometric and material parameters, formant interval

Procedia PDF Downloads 103
9496 Impact of Applying Bag House Filter Technology in Cement Industry on Ambient Air Quality - Case Study: Alexandria Cement Company

Authors: Haggag H. Mohamed, Ghatass F. Zekry, Shalaby A. Elsayed

Abstract:

Most sources of air pollution in Egypt are of anthropogenic origin. Alexandria Governorate is located in the north of Egypt. The main sectors contributing to air pollution in Alexandria are industry, transportation and area sources due to human activities; Alexandria includes more than 40% of the industrial activities in Egypt, and cement manufacture contributes a significant amount to the particulate pollution load. The area surrounding the Alexandria Portland Cement Company (APCC) was selected as the study area. Continuous monitoring data of Total Suspended Particulates (TSP) from the APCC main kiln stack were collected for the assessment of the dust emission control technology. An electrostatic precipitator (ESP) had been fitted on the cement kiln since 2002. The TSP data collected for the first quarter of 2012 were compared to those for the first quarter of 2013, after the installation of a new baghouse filter. In the present study, based on these monitoring data and meteorological data, a detailed air dispersion modeling investigation was carried out using the Industrial Source Complex Short Term model (ISC3-ST) to find out the impact of applying the new baghouse filter control technology on the neighborhood ambient air quality. The model results show a drastic reduction of the ambient TSP hourly average concentration from 44.94 µg/m³ to 5.78 µg/m³, which confirms the strongly positive impact on ambient air quality of applying baghouse filter technology to the APCC cement kiln.
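ISC3-ST is built on a steady-state Gaussian plume formulation. The sketch below shows that underlying equation for a single elevated point source with made-up emission and dispersion parameters; it is an illustration of the principle, not a substitute for the regulatory model.

```python
# Ground-level concentration from a single elevated point source using the basic
# Gaussian plume equation that underlies ISC3-ST. All inputs are hypothetical.
import numpy as np

def plume_concentration(q, u, h, y, sigma_y, sigma_z, z=0.0):
    """Concentration (g/m^3) for emission rate q (g/s), wind speed u (m/s),
    effective stack height h (m) and dispersion coefficients sigma_y, sigma_z (m)."""
    lateral = np.exp(-y**2 / (2.0 * sigma_y**2))
    vertical = (np.exp(-(z - h)**2 / (2.0 * sigma_z**2))
                + np.exp(-(z + h)**2 / (2.0 * sigma_z**2)))  # ground-reflection term
    return q / (2.0 * np.pi * u * sigma_y * sigma_z) * lateral * vertical

# hypothetical kiln-stack scenario: 50 g/s TSP, 4 m/s wind, 80 m effective height,
# sigma values corresponding to some downwind distance and stability class
c = plume_concentration(q=50.0, u=4.0, h=80.0, y=0.0, sigma_y=120.0, sigma_z=60.0)
print(f"centerline ground-level TSP concentration = {c * 1e6:.2f} ug/m^3")
```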

Keywords: air pollution modeling, ambient air quality, baghouse filter, cement industry

Procedia PDF Downloads 265
9495 A Bayesian Population Model to Estimate Reference Points of Bombay-Duck (Harpadon nehereus) in Bay of Bengal, Bangladesh Using CMSY and BSM

Authors: Ahmad Rabby

Abstract:

Demographic trends of Bombay-duck were analyzed from time-series catch data using CMSY and BSM for the first time in Bangladesh. During 2000-2018, CMSY indicates the lowest average production in 2000 and the highest in 2018; this has been used in the estimation of prior biomass by the default rules. A total of 31,030 viable trajectories for 3,422 r-k pairs were found by the CMSY analysis, and the final estimate for the intrinsic rate of population increase (r) was 1.19 year⁻¹ (95% CL = 0.957-1.48 year⁻¹). The carrying capacity (k) of Bombay-duck was 283×10³ tons (95% CL = 173×10³-464×10³ tons) and MSY was 84.3×10³ tons year⁻¹ (95% CL = 49.1×10³-145×10³ tons year⁻¹). Results from the Bayesian state-space implementation of the Schaefer production model (BSM) using catch and CPUE data gave a catchability coefficient (q) of 1.63×10⁻⁶ (lcl = 1.27×10⁻⁶ to ucl = 2.10×10⁻⁶), r = 1.06 year⁻¹ (95% CL = 0.727-1.55 year⁻¹), k = 226×10³ tons (95% CL = 170×10³-301×10³ tons) and MSY = 60×10³ tons year⁻¹ (95% CL = 49.9×10³-72.2×10³ tons year⁻¹). For Bombay-duck fishery management, the BSM assessment of the time-series catch data gave Fmsy = 0.531 (95% CL = 0.364-0.775) if B > 1/2 Bmsy (then Fmsy = 0.5r), and Fmsy = 0.531 (95% CL = 0.364-0.775) with r and Fmsy linearly reduced if B < 1/2 Bmsy. Biomass in 2018 was 110×10³ tons (2.5th to 97.5th percentile = 82.3-155×10³ tons). Relative biomass (B/Bmsy) in the last year was 0.972 (2.5th to 97.5th percentile = 0.728-1.37). Fishing mortality in the last year was 0.738 (2.5th to 97.5th percentile = 0.525-1.37), and exploitation F/Fmsy was 1.39 (2.5th to 97.5th percentile = 0.988-1.86). The biological reference point B/Bmsy was smaller than 1.0 while F/Fmsy was higher than 1.0, revealing over-exploitation of the fishery and indicating that more conservative management strategies are required for the Bombay-duck fishery.
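The reference points quoted above follow from the Schaefer surplus-production model used by CMSY/BSM, where MSY = rk/4, Bmsy = k/2 and Fmsy = r/2. The sketch below reproduces those derived quantities from the BSM point estimates reported in the abstract and projects biomass under a hypothetical catch series (the catch values are illustrative only).

```python
# Schaefer surplus-production model B[t+1] = B[t] + r*B[t]*(1 - B[t]/k) - C[t]
# and the derived reference points MSY = r*k/4, Bmsy = k/2, Fmsy = r/2.
# r and k are the BSM point estimates from the abstract; the catch series is hypothetical.
r = 1.06            # intrinsic rate of population increase, year^-1
k = 226e3           # carrying capacity, tons

msy = r * k / 4.0
bmsy = k / 2.0
fmsy = r / 2.0
print(f"MSY ~ {msy:,.0f} t/yr, Bmsy ~ {bmsy:,.0f} t, Fmsy ~ {fmsy:.2f} /yr")

def project_biomass(b0, catches):
    """Project biomass forward under Schaefer dynamics for a given catch series."""
    b = [b0]
    for c in catches:
        b_next = b[-1] + r * b[-1] * (1.0 - b[-1] / k) - c
        b.append(max(b_next, 0.0))
    return b

# hypothetical recent catch trajectory (tons/year), starting from B = 0.97 * Bmsy
trajectory = project_biomass(0.97 * bmsy, [60e3, 65e3, 70e3])
print(["%.0f" % b for b in trajectory])
```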

Keywords: biological reference points, catchability coefficient, carrying capacity, intrinsic rate of population increase

Procedia PDF Downloads 124
9494 Modeling Loads Applied to Main and Crank Bearings in the Compression-Ignition Two-Stroke Engine

Authors: Marcin Szlachetka, Mateusz Paszko, Grzegorz Baranski

Abstract:

This paper discusses AVL EXCITE Designer simulation research into the loads applied to main and crank bearings in a compression-ignition two-stroke engine. A model of the engine lubrication system was created which covers the part of this system related to the particular nodes of the bearing system, i.e. the connection of the main bearings in the engine block with the crankshaft, and the connection of the crank pins with the connecting rod. The analysis focused on the load given as a distribution of hydrodynamic oil film pressure corresponding to different values of radial internal clearance. The impact of gas force on the minimum oil film thickness in the main and crank bearings versus crankshaft rotational speed was also studied. Our model calculates the oil film parameters, the oil film pressure distribution, the oil temperature change and the dimensions of the bearings, as well as the oil temperature distribution on the surfaces of the bearing seats. Accordingly, it was possible to select, for example, a correct clearance for each of the node bearings. The research was performed for several values of engine crankshaft speed ranging from 800 RPM to 4000 RPM. Bearing oil pressure was varied with engine speed between 1 bar and 5 bar, at an oil temperature of 90°C. The main bearing clearances assumed initially for the calculations and research were 0.015 mm, 0.025 mm, 0.035 mm, 0.05 mm and 0.1 mm. The oil used for the research corresponded to the SAE 5W-40 classification. The paper presents selected research results referring to certain specific operating points and bearing radial internal clearances. Acknowledgement: This work has been realized in cooperation with the Construction Office of WSK ‘PZL-KALISZ’ S.A. and is part of Grant Agreement No. POIR.01.02.00-00-0002/15 financed by the Polish National Centre for Research and Development.

Keywords: crank bearings, diesel engine, oil film, two-stroke engine

Procedia PDF Downloads 207
9493 Computer Aided Discrimination of Benign and Malignant Thyroid Nodules by Ultrasound Imaging

Authors: Akbar Gharbali, Ali Abbasian Ardekani, Afshin Mohammadi

Abstract:

Introduction: Thyroid nodules have an incidence of 33-68% in the general population, and some 5-15% of these nodules are malignant. Early detection and treatment of thyroid nodules increase the cure rate and allow optimal treatment. Among medical imaging methods, ultrasound is the technique of choice for the assessment of thyroid nodules. Confirming the diagnosis usually demands repeated fine-needle aspiration biopsy (FNAB), so current management carries morbidity and non-zero mortality. Objective: To explore the diagnostic potential of automatic texture analysis (TA) methods in differentiating benign and malignant thyroid nodules on ultrasound imaging, in order to support reliable diagnosis and monitoring of thyroid nodules in their early stages without the need for biopsy. Material and Methods: The thyroid ultrasound image database consists of 70 patients (26 benign and 44 malignant) reported by a radiologist and proven by biopsy. Two slices per patient were loaded into Mazda software version 4.6 for automatic texture analysis. Regions of interest (ROIs) were defined within the abnormal part of the thyroid nodule ultrasound images. Gray levels within each ROI were normalized according to three normalization schemes: N1: default or original gray levels; N2: +/- 3 sigma, i.e. dynamic intensity limited to µ +/- 3σ; and N3: intensity limited to the 1%-99% range. Up to 270 multiscale texture feature parameters per ROI and per normalization scheme were computed using the well-known statistical methods employed in Mazda software. From a statistical point of view, not all of the calculated texture feature parameters are useful for texture analysis, so the features were reduced to the 10 best and most effective per normalization scheme, based on the maximum Fisher coefficient and on the minimum probability of classification error and average correlation coefficient (POE+ACC). These features were analyzed under two standardization states (standard (S) and non-standard (NS)) with Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and Non-Linear Discriminant Analysis (NDA). A 1-NN classifier was used to distinguish between benign and malignant tumors. The confusion matrix and Receiver Operating Characteristic (ROC) curve analysis were used to formulate more reliable criteria of the performance of the employed texture analysis methods. Results: The results demonstrated the influence of the normalization schemes and reduction methods on the effectiveness of the obtained features as descriptors of discrimination power and on the classification results. The subset of features selected under 1%-99% normalization, POE+ACC reduction and NDA texture analysis yielded a high discrimination performance, with an area under the ROC curve (Az) of 0.9722 in distinguishing benign from malignant thyroid nodules, corresponding to a sensitivity of 94.45%, a specificity of 100%, and an accuracy of 97.14%. Conclusions: Our results indicate that computer-aided diagnosis is a reliable method and can provide useful information to help radiologists in the detection and classification of benign and malignant thyroid nodules.
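A hedged sketch of the overall chain described above (feature standardization, dimensionality reduction, 1-NN classification and ROC evaluation) is given below with scikit-learn. It uses randomly generated feature vectors as stand-ins for the Mazda texture features and PCA/LDA rather than the Mazda-specific NDA step.

```python
# Sketch of the classification chain: standardize features, reduce with PCA or LDA,
# classify with 1-NN, evaluate with accuracy and ROC AUC. Features here are synthetic
# stand-ins for the selected texture features.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(140, 10))                 # 140 ROIs x 10 selected texture features
y = np.array([0] * 52 + [1] * 88)              # 0 = benign, 1 = malignant (2 ROIs/patient)
X[y == 1] += 0.8                               # synthetic class separation

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)

pipelines = {
    "PCA+1NN": make_pipeline(StandardScaler(), PCA(n_components=5),
                             KNeighborsClassifier(n_neighbors=1)),
    "LDA+1NN": make_pipeline(StandardScaler(), LinearDiscriminantAnalysis(n_components=1),
                             KNeighborsClassifier(n_neighbors=1)),
}
for name, pipe in pipelines.items():
    pipe.fit(X_tr, y_tr)
    pred = pipe.predict(X_te)
    print(name, "accuracy:", round(accuracy_score(y_te, pred), 3),
          "AUC:", round(roc_auc_score(y_te, pred), 3))
```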

Keywords: ultrasound imaging, thyroid nodules, computer aided diagnosis, texture analysis, PCA, LDA, NDA

Procedia PDF Downloads 276
9492 Challenging Role of Talent Management, Career Development and Compensation Management toward Employee Retention and Organizational Performance with Mediating Effect of Employee Motivation in Service Sector of Pakistan

Authors: Muhammad Younas, Sidra Sawati, M. Razzaq Athar

Abstract:

Organizational development history reveals that it has always been a challenge to identify and fathom the role of talent management, career development and compensation management in employee retention and organizational performance. Organizations strive hard to measure the impact of all the factors which affect employee retention and organizational performance, and researchers have worked in great detail to understand the relationship of the independent variables, i.e. talent management, career development and compensation management, with the dependent variables, i.e. employee retention and organizational performance. Employees equipped with up-to-date skills and long-lasting loyalty play a significant role in the successful achievement of both the short-term and long-term goals of an organization, and retention of valuable and resourceful employees for a longer time is equally essential for meeting those goals. Organizations which spend a reasonable chunk of their resources on measures that help retain their employees through talent management and satisfactory career development enjoy a competitive edge over their competitors. Human resource is regarded as one of the most precious and difficult resources to manage: it has its own needs and requirements, it becomes easy prey to monotony when it lacks career development, and its wants and aspirations are seldom met completely but can be managed through career development and compensation management. In this era of competition, organizations have to take viable steps to manage their resources, especially human resources. Top management and managers keep working towards an amenable solution to the challenges relating to career development and compensation management, as their ultimate goal is to ensure organizational performance at the optimum level. The current study was conducted to examine the impact of talent management, career development and compensation management on employee retention and organizational performance, with the mediating effect of employee motivation, in the service sector of Pakistan. The study is based on the Resource Based View (RBV) and Ability Motivation Opportunity (AMO) theories. It explains that by increasing internal resources an organization can manage employee talent, career development through compensation management and employee motivation more effectively, resulting in effective execution of HRM practices for employee retention and enabling the organization to achieve and sustain competitive advantage through optimal performance. Data collection was made through a structured questionnaire based upon adopted instruments after testing reliability and validity. A total of 300 employees of 30 firms in the service sector of Pakistan were sampled through a non-probability sampling technique. Regression analysis revealed that talent management, career development and compensation management have a significant positive impact on employee retention and perceived organizational performance. The results further showed that employee motivation has a significant mediating effect on employee retention and organizational performance. The interpretation of the findings, the limitations, and the theoretical and managerial implications are also discussed.
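As an illustration of the regression-with-mediation analysis referred to above, the sketch below runs classic step-wise (Baron and Kenny style) OLS regressions in statsmodels on simulated composite scores; the variable names, effect sizes and data are hypothetical and do not come from the survey instrument.

```python
# Baron-and-Kenny style mediation check with OLS: (1) X -> Y, (2) X -> M, (3) X + M -> Y.
# Simulated composite scores stand in for the questionnaire data.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 300
talent_mgmt = rng.normal(3.5, 0.6, n)                                   # X: talent management score
motivation = 0.5 * talent_mgmt + rng.normal(0, 0.5, n)                  # M: employee motivation
retention = 0.3 * talent_mgmt + 0.4 * motivation + rng.normal(0, 0.5, n)  # Y: employee retention

df = pd.DataFrame({"X": talent_mgmt, "M": motivation, "Y": retention})

total = sm.OLS(df["Y"], sm.add_constant(df[["X"]])).fit()          # total effect of X on Y
a_path = sm.OLS(df["M"], sm.add_constant(df[["X"]])).fit()         # X -> M
direct = sm.OLS(df["Y"], sm.add_constant(df[["X", "M"]])).fit()    # X and M -> Y

print("total effect of X:", round(total.params["X"], 3))
print("a path (X->M):", round(a_path.params["X"], 3),
      "b path (M->Y | X):", round(direct.params["M"], 3))
print("direct effect of X after adding M:", round(direct.params["X"], 3))
# mediation is suggested when the direct effect shrinks relative to the total effect
```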

Keywords: career development, compensation management, employee retention, organizational performance, talent management

Procedia PDF Downloads 315
9491 Powerful Media: Reflection of Professional Audience

Authors: Hamide Farshad, Mohammadreza Javidi Abdollah Zadeh Aval

Abstract:

As a result of the growing penetration of the media into human life, a new role under the title of 'audience' has been defined in social life, a role which has changed dramatically since its formation. This article aims to define the audience's position in the new media equation, a position which has led to a transformation of the media's role. Using the library and attributive method to study the history, the evolving outlook on the audience and the recognition of the audience-media relation in the new media context are studied. It was perceived in the past that public communication would simply result in a receiving audience. But after the emergence of interactive media and the transformation of the audience's social life, a new kind of public communication has formed, and the imaginary picture of the audience has been replaced by the audience's impact on the communication process. Part of this impact can be seen in the form of feedback, which is one of the elements of public communication. In public communication, audience feedback is completely accepted; but in many cases, along with the audience feedback, the media changes its direction, and this direction shift is known as media feedback. In this state, the media and the audience are both doers and constantly change their positions in an interaction. With the greater number of audiences and media, this process has taken a new form, and the role of the doer is sometimes taken by an audience influencing another audience, or by a medium influencing another medium. In this article, this multiple public communication process is shown by presenting a model under the title of 'The bilateral influence of the audience and the media.' Based on this model, audience power and media power are not two sides of a coin; as a result, by accepting these two as doers, the bilateral power of the audience and the media become complementary to each other. Furthermore, the compatibility between the media and the audience is analyzed under the bilateral and interactional relation hypothesis, and by analyzing the action law hypothesis, the dos and don'ts of this role are defined, which the media is obliged to know and accept in order to survive; they also have a determining role in the strategic studies of a medium.

Keywords: audience, effect, media, interaction, action laws

Procedia PDF Downloads 483
9490 An Artificial Intelligence Framework to Forecast Air Quality

Authors: Richard Ren

Abstract:

Air pollution is a serious danger to international well-being and economies - it will kill an estimated 7 million people every year, costing world economies $2.6 trillion by 2060 due to sick days, healthcare costs, and reduced productivity. In the United States alone, 60,000 premature deaths are caused by poor air quality. For this reason, there is a crucial need to develop effective methods to forecast air quality, which can mitigate air pollution’s detrimental public health effects and associated costs by helping people plan ahead and avoid exposure. The goal of this study is to propose an artificial intelligence framework for predicting future air quality based on timing variables (i.e. season, weekday/weekend), future weather forecasts, as well as past pollutant and air quality measurements. The proposed framework utilizes multiple machine learning algorithms (logistic regression, random forest, neural network) with different specifications and averages the results of the three top-performing models to eliminate inaccuracies, weaknesses, and biases from any one individual model. Over time, the proposed framework uses new data to self-adjust model parameters and increase prediction accuracy. To demonstrate its applicability, a prototype of this framework was created to forecast air quality in Los Angeles, California using datasets from the RP4 weather data repository and EPA pollutant measurement data. The results showed good agreement between the framework’s predictions and real-life observations, with an overall 92% model accuracy. The combined model is able to predict more accurately than any of the individual models, and it is able to reliably forecast season-based variations in air quality levels. Top air quality predictor variables were identified through the measurement of mean decrease in accuracy. This study proposed and demonstrated the efficacy of a comprehensive air quality prediction framework leveraging multiple machine learning algorithms to overcome individual algorithm shortcomings. Future enhancements should focus on expanding and testing a greater variety of modeling techniques within the proposed framework, testing the framework in different locations, and developing a platform to automatically publish future predictions in the form of a web or mobile application. Accurate predictions from this artificial intelligence framework can in turn be used to save and improve lives by allowing individuals to protect their health and allowing governments to implement effective pollution control measures.
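A hedged sketch of the model-averaging idea described above (logistic regression, random forest and a neural network combined by averaging their predicted probabilities) is given below with scikit-learn. Synthetic features stand in for the timing, weather-forecast and past-pollutant inputs; the data and hyperparameters are illustrative assumptions, not the study's prototype.

```python
# Averaging the predicted probabilities of three classifiers (logistic regression,
# random forest, neural network), as in the framework described above.
# Features are synthetic stand-ins for timing, weather-forecast and past-pollutant inputs.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=2000, n_features=12, n_informative=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

models = [
    LogisticRegression(max_iter=1000),
    RandomForestClassifier(n_estimators=200, random_state=0),
    MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=1000, random_state=0),
]

probas = []
for model in models:
    model.fit(X_tr, y_tr)
    probas.append(model.predict_proba(X_te)[:, 1])
    print(type(model).__name__, "accuracy:",
          round(accuracy_score(y_te, model.predict(X_te)), 3))

# combined prediction: average of the three probability estimates
combined = (np.mean(probas, axis=0) >= 0.5).astype(int)
print("combined accuracy:", round(accuracy_score(y_te, combined), 3))
```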

Keywords: air quality prediction, air pollution, artificial intelligence, machine learning algorithms

Procedia PDF Downloads 120
9489 The Chemical Transport Mechanism of Emitter Micro-Particles in Tungsten Electrode: A Metallurgical Study

Authors: G. Singh, H.Schuster, U. Füssel

Abstract:

The stability of the electric arc and the durability of the electrode tip used in Tungsten Inert Gas (TIG) welding demand a metallurgical study of the chemical transport mechanism of emitter oxide particles in the tungsten electrode under real welding conditions. Tungsten electrodes doped with emitter oxides of rare earths such as La₂O₃, Th₂O₃, Y₂O₃, CeO₂ and ZrO₂ feature a comparatively lower work function than tungsten and thus have superior emission characteristics due to the lower surface temperature of the cathode. The local change in concentration of these emitter particles in the tungsten electrode due to high-temperature diffusion (chemical transport) can change its functional properties such as electrode temperature, work function, electron emission, and the stability of the electrode tip shape. The resulting increase in tip surface temperature results in electrode material loss. It was also observed that the tungsten recrystallizes to large grains at high temperature. When the grain boundaries are granular in shape, intergranular diffusion of oxide emitter particles takes more time to reach the electrode surface. In the experimental work, the microstructure of the used electrode's tip surface will be studied by scanning electron microscopy and a reflective X-ray technique in order to gauge the extent of the diffusion and chemical reaction of the emitter particles. In addition, a simulation model is proposed to explain the effect of oxide particle diffusion on the electrode's microstructure, electron emission characteristics, and electrode tip erosion. This model suggests metallurgical modifications to the tungsten electrode to enhance its erosion resistance.

Keywords: rare-earth emitter particles, temperature-dependent diffusion, TIG welding, Tungsten electrode

Procedia PDF Downloads 183
9488 The Development of E-Commerce in Mexico: An Econometric Analysis

Authors: Alma Lucero Ortiz, Mario Gomez

Abstract:

Technological advances contribute to the well-being of humanity by allowing people to perform in a more efficient way. Technology offers tangible advantages to countries that adopt information technologies, communication, and the Internet in all social and productive sectors. The Internet is a networking infrastructure that allows communication among people throughout the world, exceeding the limits of time and space. Nowadays the Internet has changed the way of doing business, leading to a digital economy; e-commerce has thus emerged as commercial transactions conducted over the Internet. For this inquiry, e-commerce is seen as a source of economic growth for the country. Thereby, this research aims to answer the question of which main variables have affected the development of e-commerce in Mexico. The research covers the period from 1990 to 2017 and aims to gain insight into how the independent variables influence e-commerce development. The independent variables are information infrastructure construction, urbanization level, economic level, technology level, human capital level, educational level, standard of living, and price index. The results suggest that the independent variables have an impact on the development of e-commerce in Mexico. The present study is carried out in five parts. After the introduction, the second part presents a literature review of the main qualitative and quantitative studies measuring the variables under study. Next, an empirical study is carried out on time-series data, which is processed with an econometric model. In the fourth part, the analysis and discussion of the results are presented, and finally, some conclusions are included.
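A minimal sketch of the kind of time-series regression implied above is shown with statsmodels OLS. The yearly series are simulated placeholders, only a subset of the listed determinants is included, and the simple levels specification (no unit-root or cointegration treatment) is an assumption for illustration, not the authors' econometric model.

```python
# Illustrative OLS regression of e-commerce development on a few of the determinants
# named in the abstract, over yearly data 1990-2017. All series are simulated.
import numpy as np
import pandas as pd
import statsmodels.api as sm

years = np.arange(1990, 2018)
rng = np.random.default_rng(42)
internet_users = np.linspace(0, 65, len(years)) + rng.normal(0, 2, len(years))    # % of population
gdp_per_capita = np.linspace(6, 10, len(years)) + rng.normal(0, 0.2, len(years))  # thousands USD
urbanization = np.linspace(71, 80, len(years)) + rng.normal(0, 0.3, len(years))   # % urban
ecommerce = 0.8 * internet_users + 2.0 * gdp_per_capita + rng.normal(0, 3, len(years))

df = pd.DataFrame({
    "ecommerce": ecommerce,
    "internet_users": internet_users,
    "gdp_per_capita": gdp_per_capita,
    "urbanization": urbanization,
}, index=years)

X = sm.add_constant(df[["internet_users", "gdp_per_capita", "urbanization"]])
model = sm.OLS(df["ecommerce"], X).fit()
print(model.summary().tables[1])   # coefficient estimates, std errors, p-values
```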

Keywords: digital economy, e-commerce, econometric model, economic growth, internet

Procedia PDF Downloads 232
9487 Predictions of Thermo-Hydrodynamic State for Single and Three Pads Gas Foil Bearings Operating at Steady-State Based on Multi-Physics Coupling Computer Aided Engineering Simulations

Authors: Tai Yuan Yu, Pei-Jen Wang

Abstract:

Oil-free turbomachinery is considered one of the critical technologies for future green power generation systems based on rotating machinery. Oil-free technology allows clean, compact, and maintenance-free operation, and gas foil bearings (GFBs) are important to this technology. Since the first applications in auxiliary power units and air cycle machines in the 1970s, clear improvements have been made to the computational models of dynamic rotor behavior. However, many technical issues are still poorly understood or remain unsolved, among them thermal management and the pressure distribution in the bearing clearance. This paper presents a three-dimensional (3D) fluid-structure interaction model of single-pad and three-pad foil bearings to predict their working behavior so that the characteristics of the two configurations can be compared. The coupled analysis applies the dynamic working characteristics to both the gas film and the mechanical structure. Therefore, the elastic deformation of the foil structure and the hydrodynamic pressure of the gas film can both be calculated by a finite element method program. As a result, the temperature distribution pattern can also be solved iteratively within the coupled analysis. In conclusion, the state of the working fluid in the gas film of both pad configurations operating at constant rotational speed can be solved and compared with experimental results.
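The sketch below illustrates, in a deliberately simplified 1D form, the fluid-structure coupling loop at the heart of such models: a gas-film pressure solve alternates with an elastic-foundation foil deflection update until the two fields are consistent. All parameter values are assumptions, and the physics is heavily reduced compared with the 3D finite element model described in the abstract.

```python
import numpy as np

# Simplified 1D sketch of the gas-film / foil coupling loop in foil-bearing analysis.
# The gas film obeys the non-dimensional isothermal Reynolds equation
#   d/dθ( P H³ dP/dθ ) = Λ d(PH)/dθ ,  P = p/p_a, H = h/c,
# and the foil is a simple elastic foundation. All numbers are assumptions.

ntheta = 181
theta = np.linspace(0.0, 2.0 * np.pi, ntheta)
dth = theta[1] - theta[0]
eps = 0.6            # journal eccentricity ratio (assumed)
Lam = 1.0            # bearing (compressibility) number Λ (assumed)
alpha = 0.1          # non-dimensional foil compliance, w = alpha * (P - 1) (assumed)

def solve_reynolds(H, sweeps=400):
    """Gauss-Seidel sweeps of the non-dimensional compressible Reynolds equation."""
    P = np.ones(ntheta)
    for _ in range(sweeps):
        for i in range(1, ntheta - 1):
            He, Hw = 0.5 * (H[i] + H[i + 1]), 0.5 * (H[i] + H[i - 1])
            ae, aw = P[i] * He**3 / dth**2, P[i] * Hw**3 / dth**2
            rhs = Lam * (P[i + 1] * H[i + 1] - P[i - 1] * H[i - 1]) / (2.0 * dth)
            P[i] = (ae * P[i + 1] + aw * P[i - 1] - rhs) / (ae + aw)
        P[0] = P[-1] = 1.0              # ambient pressure at the free ends
    return P

w = np.zeros(ntheta)                    # non-dimensional foil deflection
for _ in range(40):                     # outer fluid-structure coupling loop
    H = 1.0 + eps * np.cos(theta) + w   # rigid film shape plus foil deflection
    P = solve_reynolds(H)
    w_new = alpha * (P - 1.0)           # elastic-foundation foil response
    if np.max(np.abs(w_new - w)) < 1e-6:
        break
    w += 0.3 * (w_new - w)              # under-relaxation keeps the loop stable

print("peak film pressure ratio p/p_a:", P.max())
```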

Keywords: fluid-structure interaction, multi-physics simulations, gas foil bearing, oil-free, transient thermo-hydrodynamic

Procedia PDF Downloads 160
9486 Student Feedback of a Major Curricular Reform Based on Course Integration and Continuous Assessment in Electrical Engineering

Authors: Heikki Valmu, Eero Kupila, Raisa Vartia

Abstract:

A major curricular reform was implemented in Metropolia UAS in 2014. The teaching was to be based on larger course entities and collaborative pedagogy. The most thorough reform was conducted in the department of electrical engineering and automation technology. It has already been shown that the reform has been extremely successful with respect to student progression and drop-out rate; the improvement in results has been much more significant in this department than in the other engineering departments, which made only minor pedagogical changes. At the beginning of the spring term of 2017, a thorough student feedback project was conducted in the department. The survey consisted of thirty questions about the implementation of the curriculum, the student workload, and other matters related to student satisfaction, and the response rate was more than 40%. The students were divided into four categories: first-year students [cat. 1] and students of the three different majors [categories 2-4]. These categories were considered valid since all students follow the same course structure in the first two semesters, after which they freely select their major. All staff members are correspondingly divided into four teams. The curriculum consists of consecutive 15-credit (ECTS) courses, each taught by a group of three to five teachers. There are to be no final exams, and continuous assessment is to be employed. In 2014 the different teacher groups were encouraged to innovate with different assessment methods within the given specifications. One of these methods has since been used in categories 1 and 2: students have to complete a number of compulsory tasks each week to pass the course, and the actual grade is defined by a smaller number of tests throughout the course. The tasks vary from homework assignments, reports, and laboratory exercises to larger projects, and the smaller tests are usually organized during regular lecture hours. The teachers of the other two majors have been pedagogically more conservative, and student progression has been better in categories 1 and 2 than in categories 3 and 4. One of the main goals of this survey was to analyze the reasons for the difference and the assessment methods in detail, in addition to general student satisfaction. The results show that in the categories following the specified assessment model more strictly, much more versatile assessment methods are used and the basic spirit of the new pedagogy is followed. Student satisfaction is also significantly better in categories 1 and 2. It may be clearly stated that continuous assessment and teacher cooperation improve learning outcomes, student progression, and student satisfaction, whereas too much academic freedom seems to lead to worse results [categories 3 and 4]. A standardized assessment model will be launched for all students in autumn 2017. This model differs from the one used so far in categories 1 and 2, allowing teacher groups more flexibility, but it will force all teacher groups to follow the general rules in order to further improve the results and student satisfaction.
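As a toy illustration of the assessment model used in categories 1 and 2 (the thresholds and grading scale below are assumptions, not the department's actual rules), the rule could be encoded roughly as follows.

```python
# Minimal sketch of the described assessment rule: weekly compulsory tasks gate the
# pass/fail decision, and the smaller tests define the grade. Thresholds are assumed.

def course_grade(tasks_completed: int, tasks_required: int, test_scores: list[float]) -> int:
    """Return a grade 0-5, where 0 means fail (assumed Finnish UAS scale)."""
    if tasks_completed < tasks_required:        # compulsory weekly tasks not done -> fail
        return 0
    avg = sum(test_scores) / len(test_scores)   # grade comes from the smaller tests only
    return max(1, min(5, round(avg / 20)))      # map a 0-100 average onto grades 1-5

print(course_grade(tasks_completed=14, tasks_required=14, test_scores=[72, 65, 80]))
```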

Keywords: continuous assessment, course integration, curricular reform, student feedback

Procedia PDF Downloads 201
9485 High Performance Computing Enhancement of Agent-Based Economic Models

Authors: Amit Gill, Lalith Wijerathne, Sebastian Poledna

Abstract:

This research presents the details of a high performance computing (HPC) extension of agent-based economic models (ABEMs) to simulate hundreds of millions of heterogeneous agents. ABEMs offer an alternative approach to study the economy as a dynamic system of interacting heterogeneous agents and are gaining popularity as an alternative to standard economic models. Over the last decade, ABEMs have been increasingly applied to study various problems related to monetary policy, bank regulations, etc. When it comes to predicting the effects of local economic disruptions, such as major disasters, policy changes, or exogenous shocks, on the economy of a country or region, it is pertinent to study how the disruptions cascade through every single economic entity, affecting its decisions and interactions and eventually the macroeconomic parameters. However, such simulations with hundreds of millions of agents are hindered by the lack of HPC-enhanced ABEMs. To address this, a scalable Distributed Memory Parallel (DMP) implementation of ABEMs has been developed using the message passing interface (MPI). A balanced distribution of the computational load among the MPI processes (i.e., CPU cores) of computer clusters, while taking all interactions among agents into account, is a major challenge for scalable DMP implementations. Economic agents interact on several random graphs, some of which are centralized (e.g., credit networks) whereas others are dense with random links (e.g., consumption markets). The agents are partitioned into mutually exclusive subsets based on a representative employer-employee interaction graph, while the remaining graphs are made available at a minimum communication cost. To minimize the number of communications among MPI processes, real-life solutions such as the introduction of recruitment agencies, sales outlets, local banks, and local branches of government in each MPI process are adopted. Efficient communication among MPI processes is achieved by combining MPI derived data types with the new features of the latest MPI functions. Most of the communications are overlapped with computations, thereby significantly reducing the communication overhead. The current implementation is capable of simulating a small open economy. As an example, a single time step of a 1:1 scale model of Austria (about 9 million inhabitants and 600,000 businesses) can be simulated in 15 seconds. The implementation is being further enhanced to simulate a 1:1 model of the Euro-zone (about 322 million agents).
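The following mpi4py sketch shows the general DMP pattern described above: a toy partition of agents across ranks, a non-blocking boundary exchange overlapped with local computation, and a reduction of a macro indicator. The agent counts, payloads, and update rule are placeholders rather than the authors' implementation.

```python
from mpi4py import MPI
import numpy as np

# Hedged sketch of a distributed agent update with communication/computation overlap.
comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n_agents_total = 1_000_000
local_agents = np.arange(rank, n_agents_total, size)      # toy partition of agent ids
local_state = np.random.default_rng(rank).random(local_agents.size)

# post a non-blocking exchange of boundary information with neighbouring ranks
right, left = (rank + 1) % size, (rank - 1) % size
send_buf = local_state[:100].copy()
recv_buf = np.empty_like(send_buf)
reqs = [comm.Isend(send_buf, dest=right, tag=0),
        comm.Irecv(recv_buf, source=left, tag=0)]

local_state *= 0.99                                        # local computation overlaps communication
MPI.Request.Waitall(reqs)                                  # complete the exchange before using remote data

total = comm.allreduce(local_state.sum(), op=MPI.SUM)      # aggregate a macro indicator
if rank == 0:
    print("aggregate indicator:", total)
```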

Keywords: agent-based economic model, high performance computing, MPI-communication, MPI-process

Procedia PDF Downloads 121
9484 Stress-Strain Relation for Human Trabecular Bone Based on Nanoindentation Measurements

Authors: Marek Pawlikowski, Krzysztof Jankowski, Konstanty Skalski, Anna Makuch

Abstract:

Nanoindentation, or the depth-sensing indentation (DSI) technique, has proven very useful for measuring the mechanical properties of various tissues at a micro-scale. Bone tissue, both trabecular and cortical, is one of the tissues most commonly tested by means of DSI. Most often such tests on bone samples are carried out to compare the mechanical properties of lamellar and interlamellar bone, osteonal bone, as well as compact and cancellous bone. In this paper, a stress-strain relation for human trabecular bone is presented, and a constitutive model is formulated on the basis of nanoindentation tests. The approach proposed by Oliver and Pharr is adapted in the study. The tests were carried out on samples of trabecular tissue extracted from human femoral heads harvested during artificial hip joint implantation surgeries. Before sample preparation, the heads were kept in 95% alcohol at 4 °C, and the cubic samples cut out of the heads were stored under the same conditions. The dimensions of the specimens were 25 mm x 25 mm x 20 mm, and twenty samples were tested. The donors were between 56 and 83 years old. The tests were conducted with a spherical indenter tip of diameter 0.200 mm, a maximum load of P = 500 mN, and a loading rate of 500 mN/min. The data obtained from the DSI tests alone describe bone behaviour only in terms of indentation force versus indentation depth. However, it is more interesting and useful to know the characteristics of trabecular bone in the stress-strain domain, as this allows one to simulate trabecular bone behaviour in a more realistic way. The stress-strain curves obtained in the study show a relation between donor age and the mechanical behaviour of trabecular bone. It was also observed that the bone matrix of trabecular tissue exhibits an ability to absorb energy.
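One common way (not necessarily the authors' exact formulation) to turn a spherical-indenter force-depth record into an indentation stress-strain curve uses Tabor-type definitions, as sketched below with a placeholder loading curve standing in for real DSI data.

```python
import numpy as np

# Sketch of a Tabor-type conversion of a spherical nanoindentation P-h curve into
# indentation stress and strain: sigma = P / (pi a^2), epsilon ≈ 0.2 a / R,
# with the contact radius a taken from the spherical geometry. The P-h data below
# are an assumed Hertz-like placeholder, not measured bone data.

R = 0.100e-3                                  # indenter tip radius [m] (0.200 mm diameter)

h = np.linspace(1e-7, 6e-6, 200)              # placeholder depths [m]
P = 3.4e7 * h**1.5                            # placeholder forces [N], ~500 mN at max depth

a = np.sqrt(np.clip(2.0 * R * h - h**2, 0.0, None))   # contact radius for a sphere
sigma = P / (np.pi * a**2)                             # mean contact pressure ("indentation stress")
epsilon = 0.2 * a / R                                  # Tabor's representative strain

slope = np.polyfit(epsilon[:20], sigma[:20], 1)[0]     # initial slope of the stress-strain curve
print(f"initial stress-strain slope ≈ {slope / 1e9:.2f} GPa")
```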

Keywords: constitutive model, mechanical behaviour, nanoindentation, trabecular bone

Procedia PDF Downloads 217
9483 Digitalization and High Audit Fees: An Empirical Study Applied to US Firms

Authors: Arpine Maghakyan

Abstract:

The purpose of this paper is to study the relationship between the level of industry digitalization and audit fees, in particular the relationship between Big 4 auditor fees and the industry digitalization level. On the one hand, automation of business processes decreases internal control weaknesses and manual mistakes and increases work effectiveness and integration. On the other hand, it may cause serious misstatements, high business risks, or even bankruptcy, typically in the early stages of automation. Incomplete automation can bring high audit risk, especially if the auditor does not fully understand the client's business automation model, and higher audit risk will consequently lead to higher audit fees. Higher audit fees for clients with a high automation level are more pronounced in Big 4 auditors' behavior. Using data on US firms from 2005-2015, we found that industry-level digitalization interacts with auditor quality in its effect on audit fees. Moreover, the choice of a Big 4 or non-Big 4 auditor is correlated with the client's industry digitalization level: a Big 4 client with a higher digitalization level pays more than one with a low digitalization level. In addition, a highly digitalized firm with a Big 4 auditor pays a higher audit fee than a non-Big 4 client. We use audit fees and firm-specific variables from the Audit Analytics and Compustat databases. We analyze the collected data using fixed effects regression methods and Wald tests for sensitivity checks. We use firm fixed effects regression models to determine the connection between technology use in business and audit fees, controlling for firm size, complexity, inherent risk, profitability, and auditor quality. We chose the fixed effects model because it makes it possible to control for variables that have not been or cannot be measured.
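A hedged sketch of the kind of firm fixed-effects specification the abstract describes is given below; the file name, variable names, and functional form are placeholders, not the authors' exact model.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative firm fixed-effects audit-fee regression; the CSV and column names
# stand in for the Audit Analytics / Compustat merged sample.
df = pd.read_csv("audit_analytics_compustat_2005_2015.csv")

# log audit fees on digitalization, Big 4 indicator and their interaction,
# with the usual controls and firm + year fixed effects via dummy variables
model = smf.ols(
    "log_audit_fee ~ digitalization * big4 + size + complexity"
    " + inherent_risk + profitability + C(firm_id) + C(year)",
    data=df,
).fit(cov_type="cluster", cov_kwds={"groups": df["firm_id"]})

print(model.params.filter(like="digitalization"))
```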

Keywords: audit fees, auditor quality, digitalization, Big 4

Procedia PDF Downloads 295
9482 Allergenic Potential of Airborne Algae Isolated from Malaysia

Authors: Chu Wan-Loy, Kok Yih-Yih, Choong Siew-Ling

Abstract:

The human health risks due to poor air quality caused by a wide array of microorganisms have attracted much interest. Airborne algae have been reported as early as the 19th century, and they can be found in the air of tropical and warm atmospheres. Airborne algae normally originate from water surfaces, soil, trees, buildings, and rock surfaces. It is estimated that a person inhales at least 2880 algal cells per day. However, relatively little data have been published on airborne algae and their related adverse health effects, apart from sporadic reports of algae-associated clinical allergenicity. A collection of airborne algae cultures was established following a recent survey of the occurrence of airborne algae in indoor and outdoor environments in Kuala Lumpur. The aim of this study was to investigate the allergenic potential of the isolated airborne green and blue-green algae, namely Scenedesmus sp., Cylindrospermum sp. and Hapalosiphon sp. Suspensions of freeze-dried airborne algae were administered to a BALB/c mouse model through the intranasal route to determine their allergenic potential. Results showed that Scenedesmus sp. (1 mg/mL) increased systemic IgE levels in mice by 3-8 fold compared to pre-treatment, whereas Cylindrospermum sp. and Hapalosiphon sp. at a similar concentration caused IgE to increase by 2-4 fold. The potential of airborne algae to cause IgE-mediated type 1 hypersensitivity was elucidated using other immunological markers, namely the cytokines interleukin (IL)-4, -5 and -6 and interferon-γ. When the amounts of interleukins in mouse serum were compared between day 0 and day 53 (day of sacrifice), Hapalosiphon sp. (1 mg/mL) increased the expression of IL-4 and IL-6 by 8 fold, while Cylindrospermum sp. (1 mg/mL) increased the expression of IL-4 and IFN-γ by 8 and 2 fold, respectively. In conclusion, repeated exposure to the three selected airborne algae may stimulate the immune response and generate IgE in a mouse model.

Keywords: airborne algae, respiratory, allergenic, immune response, Malaysia

Procedia PDF Downloads 235
9481 Towards Dynamic Estimation of Residential Building Energy Consumption in Germany: Leveraging Machine Learning and Public Data from England and Wales

Authors: Philipp Sommer, Amgad Agoub

Abstract:

The construction sector significantly impacts global CO₂ emissions, particularly through the energy usage of residential buildings. To address this, various governments, including Germany's, are focusing on reducing emissions via sustainable refurbishment initiatives. This study examines the application of machine learning (ML) to estimate energy demands dynamically in residential buildings and enhance the potential for large-scale sustainable refurbishment. A major challenge in Germany is the lack of extensive publicly labeled datasets for energy performance, as energy performance certificates, which provide critical data on building-specific energy requirements and consumption, are not available for all buildings or require on-site inspections. Conversely, England and other countries in the European Union (EU) have rich public datasets, providing a viable alternative for analysis. This research adapts insights from these English datasets to the German context by developing a comprehensive data schema and calibration dataset capable of predicting building energy demand effectively. The study proposes a minimal feature set, determined through feature importance analysis, to optimize the ML model. Findings indicate that ML significantly improves the scalability and accuracy of energy demand forecasts, supporting more effective emissions reduction strategies in the construction industry. Integrating energy performance certificates into municipal heat planning in Germany highlights the transformative impact of data-driven approaches on environmental sustainability. The goal is to identify and utilize key features from open data sources that significantly influence energy demand, creating an efficient forecasting model. Using Extreme Gradient Boosting (XGB) and data from energy performance certificates, effective features such as building type, year of construction, living space, insulation level, and building materials were incorporated. These were supplemented by data derived from descriptions of roofs, walls, windows, and floors, integrated into three datasets. The emphasis was on features accessible via remote sensing, which, along with other correlated characteristics, greatly improved the model's accuracy. The model was further validated using SHapley Additive exPlanations (SHAP) values and aggregated feature importance, which quantified the effects of individual features on the predictions. The refined model using remote sensing data showed a coefficient of determination (R²) of 0.64 and a mean absolute error (MAE) of 4.12, indicating predictions based on efficiency class 1-100 (G-A) may deviate by 4.12 points. This R² increased to 0.84 with the inclusion of more samples, with wall type emerging as the most predictive feature. After optimizing and incorporating related features like estimated primary energy consumption, the R² score for the training and test set reached 0.94, demonstrating good generalization. The study concludes that ML models significantly improve prediction accuracy over traditional methods, illustrating the potential of ML in enhancing energy efficiency analysis and planning. This supports better decision-making for energy optimization and highlights the benefits of developing and refining data schemas using open data to bolster sustainability in the building sector. 
The study underscores the importance of supporting open data initiatives to collect similar features and support the creation of comparable models in Germany, enhancing the outlook for environmental sustainability.
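A compact sketch of this kind of pipeline, gradient-boosted regression on EPC-derived features followed by SHAP attribution, is shown below; the file and column names are placeholders rather than the actual English or German datasets.

```python
import pandas as pd
import xgboost as xgb
import shap
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_error

# Hedged sketch of the modelling pipeline described above; the CSV file and the
# column names stand in for the EPC-derived calibration dataset.
df = pd.read_csv("epc_calibration_dataset.csv")
features = ["building_type", "year_of_construction", "living_space",
            "insulation_level", "wall_type", "roof_type", "window_type"]
X = pd.get_dummies(df[features])                  # simple encoding of categorical features
y = df["efficiency_score"]                        # target on the 1-100 (G-A) scale

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = xgb.XGBRegressor(n_estimators=500, max_depth=6, learning_rate=0.05)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
print("R²:", r2_score(y_te, pred), "MAE:", mean_absolute_error(y_te, pred))

# SHAP values quantify how each feature pushes an individual prediction up or down
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
```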

Keywords: machine learning, remote sensing, residential building, energy performance certificates, data-driven, heat planning

Procedia PDF Downloads 52
9480 Covid Medical Imaging Trial: Utilising Artificial Intelligence to Identify Changes on Chest X-Ray of COVID

Authors: Leonard Tiong, Sonit Singh, Kevin Ho Shon, Sarah Lewis

Abstract:

Investigation into the use of artificial intelligence in radiology continues to develop at a rapid rate. During the coronavirus pandemic, the combination of an exponential increase in chest X-rays and unpredictable staff shortages placed a huge strain on the department's workload. The World Health Organisation estimates that two-thirds of the global population does not have access to diagnostic radiology. There could therefore be demand for a program that detects acute imaging changes compatible with infection to assist with screening. We generated a convolutional neural network and tested its efficacy in recognizing changes compatible with coronavirus infection. Following ethics approval, a deidentified set of 77 normal chest X-rays and 77 abnormal chest X-rays from patients with confirmed coronavirus infection was used to train, validate, and test the algorithm. The DICOM and PNG image formats were selected because they are lossless. The model was trained with 100 images (50 positive, 50 negative), validated against 28 samples (14 positive, 14 negative), and tested against 26 samples (13 positive, 13 negative). The initial training involved teaching the network what constitutes a normal study and which changes on X-rays are compatible with coronavirus infection; the weightings were then modified, and the model was executed again. The training samples were processed in batches of 8 and underwent 25 epochs of training. The results trended towards an 85.71% true positive/true negative detection rate and an area under the curve trending towards 0.95, indicating approximately 95% accuracy in detecting changes on chest X-rays compatible with coronavirus infection. Study limitations include access to only a small dataset and the lack of specificity in the diagnosis. Following a discussion with our programmer, there are areas where the weighting of the algorithm can be modified to improve the detection rates. Given the high detection rate of the program and the potential ease of implementation, it would be effective in assisting staff who are not trained in radiology to detect otherwise subtle changes that might not be appreciated on imaging. Limitations include the lack of a differential diagnosis and of the appropriate clinical history, although this may be less of a problem in day-to-day clinical practice. It is nonetheless our belief that implementing this program and widening its scope to detect multiple pathologies, such as lung masses, will greatly assist both the radiology department and our colleagues by increasing workflow and detection rates.
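A minimal sketch of a small binary chest X-ray classifier of the kind described (batch size 8, 25 epochs) is given below; the directory layout, image size, and architecture are assumptions, not the study's actual network.

```python
import tensorflow as tf

# Hedged sketch of a small convolutional classifier for normal vs. COVID-compatible
# chest X-rays; paths, image size and layer choices are illustrative assumptions.
train_ds = tf.keras.utils.image_dataset_from_directory(
    "cxr/train", image_size=(224, 224), batch_size=8)      # e.g. 50 positive / 50 negative
val_ds = tf.keras.utils.image_dataset_from_directory(
    "cxr/val", image_size=(224, 224), batch_size=8)        # e.g. 14 positive / 14 negative

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),         # binary output
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
model.fit(train_ds, validation_data=val_ds, epochs=25)
```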

Keywords: artificial intelligence, COVID, neural network, machine learning

Procedia PDF Downloads 85
9479 Development of a CFD Model for PCM Based Energy Storage in a Vertical Triplex Tube Heat Exchanger

Authors: Pratibha Biswal, Suyash Morchhale, Anshuman Singh Yadav, Shubham Sanjay Chobe

Abstract:

Energy demands are increasing, whereas energy sources, especially non-renewable ones, are limited. Due to the intermittent nature of renewable energy sources, finding new ways to store energy has become the need of the hour. Among the various energy storage methods, latent heat thermal storage devices are becoming popular due to their high energy density per unit mass and volume at a nearly constant temperature. This work presents a computational fluid dynamics (CFD) model, built in ANSYS FLUENT 19.0, of the energy storage characteristics of a phase change material (PCM) filled in a vertical triplex tube thermal energy storage system. A vertical triplex tube heat exchanger, as its name suggests, consists of three concentric tubes (pipe sections) that partition the device into three fluid domains. The PCM is filled in the middle domain, with heat transfer fluids flowing in the outermost and innermost domains. To enhance heat transfer inside the PCM, eight fins have been incorporated between the internal and external tubes. These fins run radially outwards from the outer wall of the innermost tube to the inner wall of the middle tube, dividing the middle domain (between the innermost and middle tubes) into eight sections, which are then filled with the PCM. The model is validated against earlier work, and a grid independence test is also presented. Further studies of the freezing and melting processes were carried out. The results are presented as pictorial representations of isotherms and liquid fraction.
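As a rough, heavily reduced illustration of the liquid-fraction (enthalpy) treatment that underlies such PCM melting simulations, the 1D sketch below tracks the enthalpy field and recovers temperature and melt fraction from it; all property values are placeholders, not those of the PCM or geometry in this study.

```python
import numpy as np

# Minimal 1D enthalpy-method sketch of PCM melting, the same liquid-fraction idea
# used in solidification/melting models; every number below is an assumption.

nx, L_len = 100, 0.02                   # grid points, PCM layer thickness [m]
dx = L_len / nx
rho, cp, k = 800.0, 2000.0, 0.2         # density [kg/m³], specific heat [J/kg K], conductivity [W/m K]
Lf, Tm = 180e3, 305.0                   # latent heat [J/kg], melting temperature [K]
T_hot, T0 = 330.0, 295.0                # hot wall and initial temperatures [K]

H = rho * cp * np.full(nx, T0)          # volumetric enthalpy, fully solid start
dt = 0.4 * dx**2 / (k / (rho * cp))     # explicit stability limit

def temperature_and_fraction(H):
    """Recover T and liquid fraction f from volumetric enthalpy (isothermal phase change)."""
    Hs = rho * cp * Tm                  # enthalpy at which melting starts
    f = np.clip((H - Hs) / (rho * Lf), 0.0, 1.0)
    T = np.where(H < Hs, H / (rho * cp),                  # solid
        np.where(f < 1.0, Tm,                             # mushy zone: temperature pinned at Tm
                 Tm + (H - Hs - rho * Lf) / (rho * cp)))  # liquid
    return T, f

for _ in range(200_000):                # explicit conduction update of the enthalpy field
    T, f = temperature_and_fraction(H)
    Tb = np.concatenate(([T_hot], T, [T[-1]]))            # Dirichlet hot wall, adiabatic far end
    H += dt * k / dx**2 * (Tb[2:] - 2.0 * Tb[1:-1] + Tb[:-2])

T, f = temperature_and_fraction(H)
print(f"average liquid fraction: {f.mean():.2f}")
```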

Keywords: heat exchanger, thermal energy storage, phase change material, CFD, latent heat

Procedia PDF Downloads 150