Search results for: Future of clinical science
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2415

165 T Cell Immunity Profile in Pediatric Obesity and Asthma

Authors: Mustafa M. Donma, Erkut Karasu, Burcu Ozdilek, Burhan Turgut, Birol Topcu, Burcin Nalbantoglu, Orkide Donma

Abstract:

The mechanisms underlying the association between obesity and asthma may be related to a decreased immunological tolerance induced by a defective function of regulatory T cells (Tregs). The aim of this study is to establish the potential link between these diseases and CD4+ CD25+ FoxP3+ Tregs as well as T helper cells (Ths) in children. This is a prospective case-control study. Obese (n=40), asthmatic (n=40), asthmatic obese (n=40) and healthy children (n=40) without any acute or chronic diseases were included in this study. Obese children were evaluated according to WHO criteria, and asthmatic patients were selected based on GINA criteria. Parents were asked to fill out a questionnaire, and informed consent forms were obtained. Blood samples were stained for CD4+, CD25+ and FoxP3+ in order to determine Tregs and Ths by flow cytometry. Statistical analyses were performed, with p≤0.05 chosen as the threshold for significance. Tregs, which exhibit an anti-inflammatory nature, were significantly lower in the obese (0.16%; p≤0.001), asthmatic (0.25%; p≤0.01) and asthmatic obese (0.29%; p≤0.05) groups than in the control group (0.38%). Th counts were higher in the asthma group than in the control (p≤0.01) and obese (p≤0.001) groups. T cell immunity plays important roles in the pathogeneses of obesity and asthma. The decreased numbers of Tregs found in obese, asthmatic and asthmatic obese children may help to elucidate some questions in the pathophysiology of these diseases. No significant difference in HOMA-IR levels was noted between the control and obese groups, but statistically higher values were found for obese asthmatics. The values obtained in all groups were below the critical cut-off points. This makes the statistically significant differences observed between the Tregs of the obese, asthmatic, asthmatic obese and control groups all the more valuable. These findings will be useful in the diagnosis and treatment of these disorders, and future studies are needed. The production and propagation of Tregs may be promising in alternative asthma and obesity treatments.
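
As a rough illustration of the between-group comparisons reported above, the sketch below runs Welch's t-test on Treg percentages for an obese and a control group in Python. The group means, spreads and the choice of test are assumptions for illustration only, not the study's data or its exact statistical analysis.

```python
# Minimal sketch: comparing Treg percentages between an obese group and a
# control group with an independent (Welch) t-test. The numbers below are
# illustrative placeholders, not the study's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
treg_obese = rng.normal(loc=0.16, scale=0.05, size=40)    # % Tregs, obese (n=40)
treg_control = rng.normal(loc=0.38, scale=0.08, size=40)  # % Tregs, control (n=40)

t_stat, p_value = stats.ttest_ind(treg_obese, treg_control, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4g}")  # a gap this large gives p <= 0.001
```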

Keywords: Asthma, flow cytometry, pediatric obesity, T cells.

164 Energy Saving Suction Hood

Authors: I.Daut, N. Gomesh, M. Irwanto, Y. M. Irwan

Abstract:

Public awareness of green energy is on the rise, as evidenced by the many products being manufactured or marketed as energy-saving devices, mainly to save consumers from spending more on their utility bills. Such schemes are popular nowadays, and many household appliances are being turned into energy-saving gadgets that attract the attention of consumers. In view of public demand and purchasing patterns for home appliances, the idea of an “energy saving suction hood (ESSH)" is proposed. The ESSH can be used in many places that require smoke ventilation, or even to reduce the room temperature as many conventional suction hoods (CSH) do, but this device works automatically, using sensors that detect smoke/temperature and spin the exhaust fan. As the fan turns, its mechanical rotation drives an AC generator coupled to it, which charges the battery. The innovations of this product are, first, that it does not rely on the utility supply, as it is also hooked up to a solar panel which charges the battery; secondly, it generates energy as the exhaust fan rotates; thirdly, an energy loop-back feature is introduced which supplies the ventilator fan. Another major innovation is the interfacing of this device with an in-house-built generator, whose stator and rotor are designed to reduce losses. A comparison between the ESSH and the CSH shows that the ESSH saves 172.8 kWh/year of the utility supply used by the CSH. This amount of energy saves RM 3.14 on the monthly utility bill, a total of RM 37.67 per year. In fact, this product can generate 175 W of power from the generator (75 W) and solar panel (100 W), which can be used either to supply other household appliances or to loop back to supply the fan motor. This system's loop-back power method points the way for future equipment to be turned into standalone systems.
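
The reported figures can be cross-checked with simple arithmetic. The sketch below derives the implied electricity tariff and the monthly saving from the numbers quoted in the abstract; the assumption of a flat tariff is ours, not the paper's.

```python
# Back-of-the-envelope check of the reported savings, assuming a flat
# residential tariff derived from the figures in the abstract.
annual_saving_kwh = 172.8          # energy no longer drawn from the utility
annual_saving_rm = 37.67           # reported yearly saving
tariff_rm_per_kwh = annual_saving_rm / annual_saving_kwh
monthly_saving_rm = annual_saving_rm / 12

print(f"implied tariff  : RM {tariff_rm_per_kwh:.3f}/kWh")    # ~RM 0.218/kWh
print(f"monthly saving  : RM {monthly_saving_rm:.2f}")        # ~RM 3.14, as reported

# Generation capacity of the stand-alone supply
generator_w, solar_w = 75, 100
print(f"total generation: {generator_w + solar_w} W")         # 175 W
```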

Keywords: Energy saving suction hood (ESSH), conventional suction hoods (CSH), energy, and power

163 Mapping the Digital Landscape: An Analysis of Party Differences between Conventional and Digital Policy Positions

Authors: Daniel Schwarz, Jan Fivaz, Alessia Neuroni

Abstract:

Although digitization is a buzzword in almost every election campaign, political parties leave voters largely in the dark about their specific positions on digital issues. In the run-up to the 2019 elections in Switzerland, the ‘Digitization Monitor’ project (DMP) was launched in order to change this situation. Within the framework of the DMP, all 4,736 candidates were surveyed about their digital policy positions and values. The DMP is designed as a digital policy supplement to the existing ‘smartvote’ voting advice application. This enabled a direct comparison of the digital policy attitudes captured by the DMP with the topics of the ‘smartvote’ questionnaire, which are comprehensive in content but mainly related to conventional policy areas. This paper’s main research goal is to analyze and visualize possible differences between conventional and digital policy areas in terms of response patterns between and within political parties. The analysis is based on dimensionality reduction methods (multidimensional scaling and principal component analysis) for the visualization of inter-party differences, and on the standard deviation as a measure of variation for the evaluation of intra-party unity. The results reveal that digital issues show a lower degree of inter-party polarization compared to conventional policy areas; thus, the parties have more common ground on digitization issues than in conventional policy areas. In contrast, the study reveals a mixed picture regarding intra-party unity: homogeneous parties show a lower degree of unity on digitization issues, whereas parties with heterogeneous positions in conventional areas have more united positions in digital areas. All things considered, the findings are encouraging, as less polarized conditions apply to the debate on digital development compared to conventional politics. For the future, it would be desirable for projects similar to the DMP to emerge in other countries, to broaden the basis for conclusions.
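
A minimal sketch of the described workflow follows: a candidate-by-issue response matrix is projected into two dimensions with PCA (sklearn's MDS could be substituted), and intra-party unity is proxied by the mean standard deviation of answers within each party. The response matrix, party labels and answer scale are random placeholders, not the smartvote/DMP data.

```python
# Minimal sketch of the dimensionality-reduction workflow described above.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
answers = rng.integers(1, 5, size=(200, 40)).astype(float)  # 200 candidates, 40 items
parties = rng.choice(["A", "B", "C", "D"], size=200)

coords = PCA(n_components=2).fit_transform(answers)  # 2-D inter-party map

for p in np.unique(parties):
    spread = answers[parties == p].std(axis=0).mean()  # mean SD across items
    centre = coords[parties == p].mean(axis=0)          # party position in the map
    print(f"party {p}: position {centre.round(2)}, intra-party SD {spread:.2f}")
```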

Keywords: Comparison of political issue dimensions, digital awareness of candidates, digital policy space, party positions on digital issues.

162 The Onset of Ironing during Casing Expansion

Authors: W. Assaad, D. Wilmink, H. R. Pasaribu, H. J. M. Geijselaers

Abstract:

Shell has developed a mono-diameter well concept for oil and gas wells, as opposed to the traditional telescopic well design. A mono-diameter well design allows a well to have a single inner diameter from the surface all the way down to the reservoir, which increases production capacity, reduces material cost and reduces the environmental footprint. This is achieved by expanding the liners (casing strings) concerned using an expansion tool (e.g., a cone). Since the well is drilled in stages and liners are inserted to support the borehole, overlap sections between consecutive liners exist which must be expanded. At an overlap, the previously inserted casing, which may be expanded or unexpanded, is called the host casing, and the newly inserted casing is called the expandable casing. When the cone enters the overlap section, the expandable casing is expanded against the host casing, a cured cement layer and the formation. In overlap expansion, ironing, or lengthening, may appear in the expandable casing instead of shortening when the pressure exerted by the host casing, cured cement layer and formation exceeds a certain limit. This pressure is related to the cement strength, the thickness of the cement layer, the host casing material's mechanical properties, the host casing thickness, the formation type and the formation strength. Ironing can have implications that hinder the deployment of the technology; therefore, understanding it becomes essential. A physical model was built in-house to calculate expansion forces, stresses, strains and post-expansion casing dimensions under different conditions. In this study, only free casing expansion and overlap expansion of two casings are addressed, while the cement and formation will be incorporated in a future study. Since the axial strain can be predicted by the physical model, the onset of ironing can be confirmed. In addition, this model helps in understanding ironing and the parameters influencing it. Finally, the physical model is validated with finite element (FE) simulations and small-scale experiments. The results of the study confirm that high pressure leads to ironing when the casing is expanded in tension mode.

Keywords: Casing expansion, cement, formation, metal forming, plasticity, well design.

161 Evaluation of Seismic Damage for Gisha Bridge in Tehran by HAZUS Methodology

Authors: Langroudi B., Salehi E., Keshani S., Baghersad M.

Abstract:

Transportation is of great importance in modern life. The transportation system plays many roles, from economic development to post-catastrophe aid, such as rescue operations in the first hours and days after an earthquake. In the post-earthquake response phase, the transportation system acts as a basis for ground operations, including rescue and relief operations and the provision of food to victims. It is obvious that partial or complete obstruction of this system halts these operations. Bridges are one of the most important elements of the transportation network. Failure of a bridge, in the most optimistic case, cuts the connection between two regions and, in more developed countries, the connections among numerous regions. In this paper, to evaluate the vulnerability and estimate the damage level of Tehran bridges, the HAZUS method, developed by the Federal Emergency Management Agency (FEMA) with the aid of the National Institute of Building Sciences (NIBS), is used for the first time in Iran. In this method, fragility curves are used to evaluate the collapse probability. Iran is located on a seismic belt and is thus vulnerable to earthquakes, so studying the probability of bridge collapse during earthquakes, bridges being an important part of the transportation system, is of great importance. The purpose of this study is to provide fragility curves for the Gisha Bridge, one of the longest steel bridges in Tehran and an important lifeline element. In addition, the damage probability for this bridge during specific scenario earthquakes is calculated. The fragility curves show that, for the considered scenario, the probability of complete collapse of the bridge is 8.6%.
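
HAZUS-style fragility curves are lognormal cumulative distribution functions of a ground-motion intensity measure. The sketch below shows how such a curve converts a scenario intensity into a damage-state probability; the spectral acceleration, median capacity and dispersion are illustrative assumptions, not the values derived for the Gisha Bridge.

```python
# Hedged sketch of how a HAZUS-style fragility curve yields a damage-state
# probability: the curve is a lognormal CDF in the intensity measure.
from math import log
from scipy.stats import norm

def fragility(im, median, beta):
    """P(damage state is reached or exceeded | intensity measure im)."""
    return norm.cdf(log(im / median) / beta)

sa = 0.6                # spectral acceleration of the scenario earthquake (g), assumed
median_complete = 1.1   # median capacity for 'complete' damage (g), assumed
beta = 0.6              # lognormal dispersion, assumed

print(f"P(complete damage) = {fragility(sa, median_complete, beta):.1%}")
```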

Keywords: Bridge, Damage evaluation, Fragility curve, Lifelines, Seismic vulnerability.

160 Optimization of Shale Gas Production by Advanced Hydraulic Fracturing

Authors: Fazl Ullah, Rahmat Ullah

Abstract:

This paper presents a comprehensive study focused on the optimization of gas production in shale gas reservoirs through hydraulic fracturing. Shale gas has emerged as an important unconventional energy resource, necessitating innovative techniques to enhance its extraction. The key objective of this study is to examine the influence of fracture parameters on reservoir productivity and to formulate strategies for production optimization. A sophisticated model integrating gas flow dynamics and stress considerations is developed for hydraulic fracturing in multi-stage shale gas reservoirs. This model encompasses distinct zones: a single-porosity medium region, a dual-porosity medium region, and a hydraulic fracture region. The apparent permeability of the matrix and fracture system is modeled using principles such as effective stress mechanics, porous elastic medium theory, fractal dimension evolution, and fluid transport mechanisms. The developed model is then validated using field data from the Barnett and Marcellus formations, enhancing its reliability and accuracy. By solving the governing partial differential equations with COMSOL software, the research yields valuable insights into optimal fracture parameters. The findings reveal the influence of fracture length, flow conductivity, and width on gas production. For reservoirs with higher permeability, extending hydraulic fracture lengths proves beneficial, while complex fracture geometries offer potential for low-permeability reservoirs. Overall, this study contributes to a deeper understanding of hydraulic fracturing dynamics in shale gas reservoirs and provides essential guidance for optimizing gas production. The research findings are instrumental for energy industry professionals, researchers, and policymakers alike, shaping the future of sustainable energy extraction from unconventional resources.

Keywords: Fluid-solid coupling, apparent permeability, shale gas reservoir, fracture property, numerical simulation.

159 Simulation of Concrete Wall Subjected to Airblast by Developing an Elastoplastic Spring Model in Modelica Modelling Language

Authors: Leo Laine, Morgan Johansson

Abstract:

To meet civilization's future needs for safe living and a low environmental footprint, the engineers designing the complex systems of tomorrow will need efficient ways to model and optimize these systems for their intended purpose. For example, a civil defence shelter and its subsystem components need to withstand, e.g., airblast and ground shock from a specified design-level explosion detonating at a certain distance from the structure. In addition, the shelter needs functioning air filter systems to protect against toxic gases and provide clean air, while clean water, heat, and electricity must also be available through shock- and vibration-safe fixtures and connections. Similar complex building systems can be found in any concentrated residential or office area. In this paper, the authors use a multidomain modelling language called Modelica to model a concrete wall as a single degree of freedom (SDOF) system with elastoplastic properties and an implemented option of plastic hardening. The elastoplastic model was developed and implemented in the open source tool OpenModelica. The simulation model was tested on a case with a transient equivalent reflected pressure time history representing an airblast from 100 kg TNT detonating 15 meters from the wall. The concrete wall is idealized as a concrete strip of 1.0 m width. This load represents a realistic threat to any building in a city-like area. The OpenModelica results were compared with an Excel implementation of an SDOF model with an elastic-plastic spring using a simple fixed-timestep central difference solver. The structural displacement results agreed very well with each other in terms of plastic displacement magnitude, elastic oscillation displacement, and response times.
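
For readers who wish to reproduce the comparison solver mentioned above, the sketch below integrates an SDOF system with an elastic-perfectly-plastic spring using a fixed-timestep central difference scheme. The mass, stiffness, yield force and triangular load pulse are illustrative assumptions, not the properties of the studied wall or the 100 kg TNT load.

```python
# Minimal sketch: SDOF system with an elastic-perfectly-plastic spring,
# integrated by a fixed-timestep central difference scheme. All numbers are
# illustrative assumptions.
import numpy as np

m, k, r_y = 500.0, 2.0e6, 1.0e4       # mass [kg], stiffness [N/m], yield force [N]
dt, n = 1.0e-5, 20000                 # time step [s], number of steps
p0, td = 8.0e4, 0.005                 # peak reflected force [N], load duration [s]

def load(t):                          # idealized triangular airblast pulse
    return p0 * (1.0 - t / td) if t < td else 0.0

u = np.zeros(n)                       # displacement history (starts at rest)
r = 0.0                               # current spring (resistance) force
for i in range(1, n - 1):
    t = i * dt
    # incremental elastic trial force, capped at the plastic limit
    r = np.clip(r + k * (u[i] - u[i - 1]), -r_y, r_y)
    a = (load(t) - r) / m                             # m*u'' = p(t) - R(u)
    u[i + 1] = 2.0 * u[i] - u[i - 1] + a * dt * dt    # central difference update

print(f"peak displacement: {u.max() * 1000:.1f} mm")
```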

Keywords: Airblast from explosives, elastoplastic spring model, Modelica modelling language, SDOF, structural response of concrete structure.

158 The Experience of Middle Grade Teachers in a Culture of Collaboration

Authors: Tamara Tallman

Abstract:

Collaboration is a powerful tool for professional development and central to creating opportunities for teachers to reflect on their practice. However, school districts continue to have difficulty both implementing and sustaining collaboration. The purpose of this research was to investigate the experience of the teacher in a creative, instructional collaboration. The teachers in this study found that teacher-initiated collaboration offered them trust and that they were more open with their partners. Interpretative phenomenological analysis was chosen for this study to tell the story of the teachers' experience and to capture the complex and contextual nature of that experience within a creative, instructional collaboration. This study sought to answer the question of how teachers in a private, faith-based school experience collaboration. In particular, the researcher engaged the study's participants in interviews where they shared their unique perspectives on their experiences in relation to this phenomenon. Through interpretative phenomenological analysis, the researcher interpreted the experiences of each participant in an attempt to gain deeper insight into how teachers made sense of their understanding of collaboration. In addition to interpreting the meaning of this construct for each research participant, this study gave a voice to the individual experiences and positionality of each participant at the research site. Moreover, the key findings shed light on how teachers within this particular context participated in and made sense of the experience of creating an instructional collaborative, and speak to the meaning each participant found in helping to build a collaborative culture and its effect on professional and personal growth. The researcher provides recommendations for future practice and research possibilities. The findings demonstrate the unique experiences of each participant as well as a connection to the literature on teacher professional development. The results also support the claim that teacher collaboration can facilitate school reform: participating teachers felt less isolation and developed more teacher knowledge.

Keywords: Collaboration, professional development, teacher, growth.

157 Computer Modeling and Plant-Wide Dynamic Simulation for Industrial Flare Minimization

Authors: Sujing Wang, Song Wang, Jian Zhang, Qiang Xu

Abstract:

Flaring emissions during abnormal operating conditions such as plant start-ups, shut-downs, and upsets in the chemical process industries (CPI) are usually significant. Flare minimization can help CPI plants save raw material and energy and improve local environmental sustainability. In this paper, a systematic methodology based on plant-wide dynamic simulation is presented for CPI plant flare minimization under abnormal operating conditions. Since off-specification emission sources are inevitable during abnormal operating conditions, to significantly reduce flaring emissions in a CPI plant they must be either recycled to the upstream process for online reuse, or stored temporarily for future reprocessing once the plant returns to stable operation. Thus, the off-spec products can be reused instead of being flared. This can be achieved through the identification of viable design and operational strategies during normal and abnormal operations via plant-wide dynamic scheduling, simulation, and optimization. The proposed study includes three stages of simulation work: (i) developing and validating a steady-state model of a CPI plant; (ii) transitioning the obtained steady-state plant model to the dynamic modeling environment, then refining and validating the plant dynamic model; and (iii) developing flare minimization strategies for abnormal operating conditions of a CPI plant via the validated plant-wide dynamic model. This cost-effective methodology has two main merits: (i) it employs large-scale dynamic modeling and simulation for industrial flare minimization, involving various unit models covering hundreds of CPI plant facilities; (ii) it deals with critical abnormal operating conditions of CPI plants such as plant start-up and shut-down. Two virtual case studies on flare minimization for the start-up operation (over 50% emission savings) and shut-down operation (over 70% emission savings) of an ethylene plant demonstrate the efficacy of the proposed study.

Keywords: Flare minimization, large-scale modeling and simulation, plant shut-down, plant start-up.

156 Evaluating the Understanding of the University Students (Basic Sciences and Engineering) about the Numerical Representation of the Average Rate of Change

Authors: Saeid Haghjoo, Ebrahim Reyhani, Fahimeh Kolahdouz

Abstract:

The present study aimed to evaluate the understanding of students in Tehran universities (Iran) of the numerical representation of the average rate of change, based on the Structure of Observed Learning Outcomes (SOLO) taxonomy. In this descriptive survey, the statistical population included undergraduate students (basic sciences and engineering) in the universities of Tehran. The sample comprised 604 students selected by random multi-stage clustering. The measurement tool was a task whose face and content validity were confirmed by mathematics and mathematics education professors. Using Cronbach's alpha, the reliability coefficient of the task was 0.95, confirming its reliability. The collected data were analyzed with descriptive and inferential statistics (chi-squared and independent t-tests) in SPSS 24. According to the SOLO model, at the prestructural, unistructural, and multistructural levels, basic science students had a higher percentage of understanding than engineering students, although the outcome was reversed at the relational level. However, there was no significant difference in the average understanding of the two groups. The results indicated that students lacked a proper understanding of the numerical representation of the average rate of change and showed misconceptions when using physics formulas to solve the problem. In addition, the qualitative analysis identified multiple solution approaches along with the dominant methods used. The study proposes focusing on context problems involving approximate calculations and numerical representation, using software, and connecting common relations between mathematics and physics in the teaching practice of teachers and professors.
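
A minimal sketch of the inferential step is given below: counts of students at each SOLO level are cross-tabulated by group and compared with a chi-squared test of independence. The counts are invented for illustration and are not the study's data.

```python
# Minimal sketch of a chi-squared test of independence on SOLO-level counts.
from scipy.stats import chi2_contingency

#                prestructural  unistructural  multistructural  relational
basic_sciences = [          60,           110,             100,         34]
engineering    = [          45,            95,              90,         70]

chi2, p, dof, expected = chi2_contingency([basic_sciences, engineering])
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```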

Keywords: Average rate of change, context problems, derivative, numerical representation, SOLO taxonomy.

155 Field Trial of Resin-Based Composite Materials for the Treatment of Surface Collapses Associated with Former Shallow Coal Mining

Authors: Philip T. Broughton, Mark P. Bettney, Isla L. Smail

Abstract:

Effective treatment of ground instability is essential when managing the impacts associated with historic mining. A field trial was undertaken by the Coal Authority to investigate the geotechnical performance and potential use of composite materials comprising resin and fill or stone to safely treat surface collapses, such as crown-holes, associated with shallow mining. Test pits were loosely filled with various granular fill materials. The fill material was injected with commercially available silicate and polyurethane resin foam products. In situ and laboratory testing was undertaken to assess the geotechnical properties of the resultant composite materials. The test pits were subsequently excavated to assess resin permeation. Drilling and resin injection were easiest through clean limestone fill materials. Recycled building waste fill material proved difficult to inject with resin; this material is thus considered unsuitable for use in resin composites. Incomplete resin permeation in several of the test pits created irregular ‘blocks’ of composite. Injected resin foams significantly improve the stiffness and resistance (strength) of the uncompacted fill material. The stiffness of the treated fill material appears to be a function of the stone particle size, its associated compaction characteristics (under loose tipping) and the proportion of resin foam matrix. The type of fill material is more critical than the type of resin to the geotechnical properties of the composite materials. Resin composites can effectively support typical design imposed loads. Compared to other traditional treatment options, such as cement grouting, the use of resin composites is potentially less disruptive, particularly for sites with limited access, and is thus likely to achieve significant reinstatement cost savings. The use of resin composites is considered a suitable option for the future treatment of shallow mining collapses.

Keywords: Composite material, ground improvement, mining legacy, resin.

154 High Securing Cover-File of Hidden Data Using Statistical Technique and AES Encryption Algorithm

Authors: A. A. Zaidan, Anas Majeed, B. B. Zaidan

Abstract:

Nowadays, the rapid development of multimedia and the internet allows for the wide distribution of digital media data. It has become much easier to edit, modify and duplicate digital information. Digital documents are also easy to copy and distribute, and therefore face many threats. Security and privacy have become major issues; with the flood of information and the development of digital formats, it has become necessary to find appropriate protection because of the significance, accuracy and sensitivity of the information. Protection systems can be classified as information hiding, encryption, or a combination of the two to increase information security. The strength of information hiding lies in the absence of standard algorithms for hiding secret messages, in the randomness of hiding methods, such as combining several media (covers) with different methods to pass a secret message, and in the lack of formal methods for discovering hidden data. For these reasons, the task of this research is difficult. In this paper, a new information hiding system is presented. The proposed system aims to hide information (a data file) in any executable file (EXE) and to detect the hidden file; an implementation of a steganography system that embeds information in executable (EXE) files is presented. The system addresses the size of the cover file and makes the result undetectable by anti-virus software. The system includes two main functions: the first is the hiding of the information in a Portable Executable file (EXE), through the execution of four processes (specifying the cover file, specifying the information file, encrypting the information, and hiding the information); the second is the extraction of the hidden information through three processes (specifying the stego file, extracting the information, and decrypting the information). The system achieves its main goals: making the size of the information independent of the size of the cover file, and producing a result file that does not conflict with anti-virus software.
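
As a highly simplified illustration (not the authors' implementation), the sketch below encrypts a payload and appends it to a copy of a cover EXE behind a marker, then recovers and decrypts it. Fernet (AES-based) from the cryptography package stands in for the AES step; real PE steganography would manipulate the executable's internal structure rather than simply appending data.

```python
# Hedged sketch: encrypt a payload and append it to a copy of a cover EXE
# behind a marker, then recover and decrypt it. Appending is only the simplest
# possible illustration of hiding data alongside an executable.
from cryptography.fernet import Fernet

MARKER = b"--HIDDEN--"

def hide(cover_exe: str, secret_file: str, stego_exe: str) -> bytes:
    key = Fernet.generate_key()
    token = Fernet(key).encrypt(open(secret_file, "rb").read())
    with open(cover_exe, "rb") as src, open(stego_exe, "wb") as dst:
        dst.write(src.read() + MARKER + token)   # cover bytes stay intact
    return key                                    # key must be kept secret

def extract(stego_exe: str, key: bytes) -> bytes:
    blob = open(stego_exe, "rb").read()
    token = blob.split(MARKER, 1)[1]              # everything after the marker
    return Fernet(key).decrypt(token)
```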

Keywords: Cryptography, Steganography, Portable Executable File.

153 Trapping Efficiency of Diesel Particles Through a Square Duct

Authors: Francis William S, Imtiaz Ahmed Choudhury, Ananda Kumar Eriki, A. John Rajan

Abstract:

Diesel engines emit complex mixtures of inorganic and organic compounds in the form of both solid- and vapour-phase particles. Most of the particulates released are ultrafine nanoparticles, which are detrimental to human health and can easily enter the body by respiration. The emissions standards for particulate matter released from diesel engines are constantly upgraded within the European Union, and with future regulations based on the number of particles released instead of merely their mass, the need for effective aftertreatment devices will increase. Standard particulate filters in the form of wall-flow filters can suffer from high soot accumulation, producing a large exhaust backpressure. A potential solution would be to combine the standard filter with a flow-through filter to reduce the load on the wall-flow filter. In this paper, soot particle trapping has been simulated in different continuous-flow filters of monolithic structure, including designs with promoters, at laminar flow conditions. An Euler-Lagrange model, the discrete phase model in Ansys, was used with user-defined functions for the forces acting on the particles. A method to quickly screen the trapping of 5 nm and 10 nm particles in different catalyst designs using tracers was also developed. Simulations of square duct monoliths with promoters show that the strength of the vortices produced is not sufficient to give a high amount of particle deposition on the catalyst walls. The smallest particles in the simulations, 5 and 10 nm, were trapped to a higher extent than larger particles up to 1000 nm in all studied geometries, with the predominant deposition mechanism being Brownian diffusion. The comparison of the different filter designs with a wall-flow filter shows that there are good options for altering the design of a flow-through filter without imposing too large a pressure drop penalty.
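
The dominance of Brownian diffusion for the smallest particles follows from the Stokes-Einstein relation with the Cunningham slip correction; the sketch below evaluates the diffusivity for 5-1000 nm particles under assumed exhaust-like gas conditions (the temperature, viscosity and mean free path are illustrative values).

```python
# Sketch: Stokes-Einstein diffusivity with the Cunningham slip correction,
# showing how steeply Brownian diffusion grows as particle diameter shrinks.
from math import exp, pi

k_B = 1.380649e-23      # Boltzmann constant [J/K]
T = 600.0               # gas temperature [K], assumed exhaust condition
mu = 3.0e-5             # dynamic viscosity of the gas [Pa s], assumed
mfp = 2.0e-7            # mean free path of gas molecules [m], assumed

def diffusivity(d_p):
    kn = 2.0 * mfp / d_p                                   # Knudsen number
    cc = 1.0 + kn * (1.257 + 0.4 * exp(-1.1 / kn))         # Cunningham correction
    return k_B * T * cc / (3.0 * pi * mu * d_p)            # Stokes-Einstein [m^2/s]

for d in (5e-9, 10e-9, 100e-9, 1000e-9):
    print(f"d_p = {d * 1e9:7.1f} nm  ->  D = {diffusivity(d):.2e} m^2/s")
```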

Keywords: Diesel Engine trap, thermophoresis, Exhaust pipe, PM-Simulation modeling.

152 Improved Computational Efficiency of Machine Learning Algorithms Based on Evaluation Metrics to Control the Spread of Coronavirus in the UK

Authors: Swathi Ganesan, Nalinda Somasiri, Rebecca Jeyavadhanam, Gayathri Karthick

Abstract:

The COVID-19 crisis presents a substantial and critical hazard to worldwide health. Since the occurrence of the disease in late January 2020 in the UK, the number of people confirmed to have acquired the illness has increased tremendously across the country. The purpose of this research is to develop a predictive machine learning (ML) model that can forecast COVID-19 cases within the UK. This study concentrates on the statistical data collected from 31st January 2020 to 31st March 2021 in the United Kingdom. Information on total COVID-19 cases registered, new daily cases, total deaths registered, and daily deaths due to coronavirus is collected from the World Health Organization (WHO). Data preprocessing is carried out to identify any missing values, outliers, or anomalies in the dataset. The data are split in an 8:2 ratio for training and testing purposes to forecast future new COVID-19 cases. Support Vector Machine (SVM), Random Forest (RF), and linear regression (LR) algorithms are chosen to study model performance in the prediction of new COVID-19 cases. The statistical performance of the models in predicting new COVID-19 cases is evaluated using metrics such as the R-squared value and the mean squared error. RF outperformed the other two ML algorithms with a training accuracy of 99.47% and a testing accuracy of 98.26% when n = 30. The mean squared error obtained for RF is 4.05e11, which is lower than that of the other predictive models used in this study. From the experimental analysis, the RF algorithm performs most effectively and efficiently in predicting new COVID-19 cases, which could help the health sector take relevant measures to control the spread of the virus.
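
A minimal sketch of the modelling pipeline is given below: an 8:2 train/test split and the three regressors compared by R-squared and mean squared error. The synthetic case series and the single time feature are placeholders for the WHO data and whatever feature construction the study used.

```python
# Minimal sketch: 8:2 split, three regressors, R-squared and MSE comparison.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(42)
days = np.arange(425).reshape(-1, 1)                  # 31 Jan 2020 - 31 Mar 2021
cases = 2000 + 150 * days.ravel() + rng.normal(0, 500, days.shape[0])

X_train, X_test, y_train, y_test = train_test_split(
    days, cases, test_size=0.2, shuffle=False)        # 8:2 split, keep time order

models = {
    "Linear regression": LinearRegression(),
    "SVM (RBF)": SVR(),
    "Random forest": RandomForestRegressor(n_estimators=30, random_state=0),
}
for name, model in models.items():
    model.fit(X_train, y_train)
    pred = model.predict(X_test)
    print(f"{name:18s} R2={r2_score(y_test, pred):7.3f} "
          f"MSE={mean_squared_error(y_test, pred):.3e}")
```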

Keywords: COVID-19, machine learning, supervised learning, unsupervised learning, linear regression, support vector machine, random forest.

151 Present Status, Driving Forces and Pattern Optimization of Territory in Hubei Province, China

Authors: Tingke Wu, Man Yuan

Abstract:

The “National Territorial Planning (2016-2030)” was issued by the State Council of China in 2017. As an important step in putting it into effect, territorial planning at the provincial level makes overall arrangements for territorial development, resource and environmental protection, comprehensive renovation, and security system construction. Hubei province, as the pivot of the “Rise of Central China” national strategy, is now confronted with great opportunities and challenges in territorial development, protection, and renovation. The territorial spatial pattern has evolved over a long period under multiple internal and external driving forces, and it is not clear what the main causes of its formation are or what the effective ways of optimizing it might be. By analyzing land use data from 2016, this paper reveals the present status of territory in Hubei. Combined with economic, social and construction data, the driving forces of the territorial spatial pattern are then analyzed. The research demonstrates that the three types of territorial space aggregate in distinct ways. The driving forces comprise four aspects: the natural background, which sets the stage for the main functions; population and economic factors, which generate agglomeration effects; transportation infrastructure construction, which leads to axial expansion; and significant provincial strategies, which reinforce the established path. On this basis, targeted strategies for optimizing the territorial spatial pattern are put forward. A hierarchical protection pattern should be established based on development intensity control, out of respect for nature. By optimizing the layout of population and industry and improving the transportation network, a polycentric, network-based development pattern can be established. These findings provide a basis for Hubei territorial planning and a reference for future territorial planning in other provinces.

Keywords: Driving forces, Hubei, optimizing strategies, spatial pattern, territory.

150 A 1H NMR-Linked PCR Modelling Strategy for Tracking the Fatty Acid Sources of Aldehydic Lipid Oxidation Products in Culinary Oils Exposed to Simulated Shallow-Frying Episodes

Authors: Martin Grootveld, Benita Percival, Sarah Moumtaz, Kerry L. Grootveld

Abstract:

Objectives/Hypotheses: The adverse health effect potential of dietary lipid oxidation products (LOPs) has evoked much clinical interest. Therefore, we employed a 1H NMR-linked principal component regression (PCR) chemometrics modelling strategy to explore relationships between data matrices comprising (1) aldehydic LOP concentrations generated in culinary oils/fats when exposed to laboratory-simulated shallow frying practices, and (2) the prior saturated (SFA), monounsaturated (MUFA) and polyunsaturated fatty acid (PUFA) contents of such frying media (FM), together with their heating time-points at a standard frying temperature (180 °C). Methods: Corn, sunflower, extra virgin olive, rapeseed, linseed, canola, coconut and MUFA-rich algae frying oils, together with butter and lard, were heated according to laboratory-simulated shallow-frying episodes at 180 °C, and FM samples were collected at time-points of 0, 5, 10, 20, 30, 60, and 90 min (n = 6 replicates per sample). Aldehydes were determined by 1H NMR analysis (Bruker AV 400 MHz spectrometer). The first (dependent output variable) PCR data matrix comprised aldehyde concentration scores vectors (PC1* and PC2*), whilst the second (predictor) one incorporated those from the fatty acid content/heating time variables (PC1-PC4) and their first-order interactions. Results: Structurally complex trans,trans- and cis,trans-alka-2,4-dienals, 4,5-epoxy-trans-2-alkenals and 4-hydroxy-/4-hydroperoxy-trans-2-alkenals (group I aldehydes predominantly arising from PUFA peroxidation) loaded strongly and positively on PC1*, whereas n-alkanals and trans-2-alkenals (group II aldehydes derived from both MUFA and PUFA hydroperoxides) loaded strongly and positively on PC2*. PCR analysis of these scores vectors (SVs) demonstrated that PCs 1 (positively-loaded linoleoylglycerols and [linoleoylglycerol]:[SFA] content ratio), 2 (positively-loaded oleoylglycerols and negatively-loaded SFAs), 3 (positively-loaded linolenoylglycerols and [PUFA]:[SFA] content ratios), and 4 (exclusively orthogonal sampling time-points) all powerfully contributed to the aldehydic PC1* SVs (p values ranging from 10^-3 to < 10^-9), as did all PC1-3 x PC4 interactions (p values from 10^-5 to < 10^-9). PC2* was also markedly dependent on all the above PC SVs (PC2 > PC1 and PC3) and on the interactions of PC1 and PC2 with PC4 (p < 10^-9 in each case), but not on the PC3 x PC4 contribution. Conclusions: NMR-linked PCR analysis is a valuable strategy for (1) modelling the generation of aldehydic LOPs in heated cooking oils and other FM, and (2) tracking their unsaturated fatty acid (UFA) triacylglycerol sources therein.
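
A minimal sketch of the principal component regression step is given below: the predictor matrix is standardized, compressed by PCA and the resulting scores regressed onto an aldehyde outcome. All data are random placeholders, not the 1H NMR measurements.

```python
# Hedged sketch of principal component regression (PCR) on placeholder data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(7)
X = rng.normal(size=(70, 8))          # e.g. SFA/MUFA/PUFA contents + time terms
y = X @ rng.normal(size=8) + rng.normal(scale=0.3, size=70)   # aldehyde scores

pcr = make_pipeline(StandardScaler(), PCA(n_components=4), LinearRegression())
pcr.fit(X, y)
print(f"PCR R^2 on the training data: {pcr.score(X, y):.3f}")
```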

Keywords: Frying oils, frying episodes, lipid oxidation products, cytotoxic/genotoxic aldehydes, chemometrics, principal component regression, NMR Analysis.

149 Gender Differences in Morbid Obese Children: Clinical Significance of Two Diagnostic Obesity Notation Model Assessment Indices

Authors: Mustafa M. Donma, Orkide Donma, Murat Aydin, Muhammet Demirkol, Burcin Nalbantoglu, Aysin Nalbantoglu, Birol Topcu

Abstract:

Childhood obesity is an ever-increasing global health problem, affecting both developed and developing countries. Accurate evaluation of obesity in children requires difficult and detailed investigation. In our study, obesity in children was evaluated using new body fat ratios and indices. Assessment of anthropometric measurements, as well as some ratios, is important for the evaluation of gender differences, particularly during the late periods of obesity. A total of 239 children, 168 morbid obese (MO) (81 girls and 87 boys) and 71 normal weight (NW) (40 girls and 31 boys), participated in the study. Informed consent forms signed by the parents were obtained, and the Ethics Committee approved the study protocol. Mean ages (years)±SD for the MO group were 10.8±2.9 in girls and 10.1±2.4 in boys; the corresponding values for the NW group were 9.0±2.0 in girls and 9.2±2.1 in boys. Mean body mass index (BMI)±SD values for the MO group were 29.1±5.4 kg/m2 and 27.2±3.9 kg/m2 in girls and boys, respectively; for the NW group they were 15.5±1.0 kg/m2 in girls and 15.9±1.1 kg/m2 in boys. Groups were constituted based upon the age- and sex-specific BMI percentiles recommended by the WHO: children above the 99th percentile were grouped as MO and children between the 15th and 85th percentiles were considered NW. The anthropometric measurements were recorded and evaluated along with new ratios, such as the trunk-to-appendicular fat ratio, as well as indices such as Index-I and Index-II. Body fat percent values were obtained by bio-electrical impedance analysis. Data were entered into a database for analysis using SPSS/PASW 18 Statistics for Windows. Increased waist-to-hip circumference (C) ratios and decreased head-to-neck C, (height/2)-to-waist C and (height/2)-to-hip C ratios were observed in parallel with the development of obesity (p≤0.001). The reference value for the (height/2)-to-hip C ratio was found to be approximately 1.0. Index-II, based upon total body fat mass, showed much more significant differences between the groups than Index-I, which is based upon weight. There was no difference between the trunk-to-appendicular fat ratios of NW girls and NW boys (p≥0.05); however, significantly increased values were observed for MO girls in comparison with MO boys (p≤0.05). This parameter showed no difference between the NW and MO states in boys (p≥0.05), whereas a statistically significant increase was noted in MO girls compared to their NW counterparts (p≤0.001). The trunk-to-appendicular fat ratio was the only fat-based parameter which showed a gender difference between the NW and MO groups. This study has revealed that body ratios and formulas based upon body fat tissue are more valuable parameters than those based on weight and height for the evaluation of morbid obesity in children.
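
The circumference ratios discussed above can be illustrated with a single hypothetical set of measurements, as in the sketch below; the values are invented and serve only to show how the (height/2)-to-hip ratio lands near the reported reference value of approximately 1.0.

```python
# Illustrative calculation of the circumference ratios for one hypothetical child.
height_cm, waist_cm, hip_cm, head_cm, neck_cm = 140.0, 65.0, 72.0, 53.0, 29.0

waist_to_hip = waist_cm / hip_cm
head_to_neck = head_cm / neck_cm
half_height_to_waist = (height_cm / 2) / waist_cm
half_height_to_hip = (height_cm / 2) / hip_cm

print(f"waist/hip        = {waist_to_hip:.2f}")
print(f"head/neck        = {head_to_neck:.2f}")
print(f"(height/2)/waist = {half_height_to_waist:.2f}")
print(f"(height/2)/hip   = {half_height_to_hip:.2f}  # ~1.0 reference reported above")
```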

Keywords: Anthropometry, childhood obesity, gender, Morbid obesity.

148 Gender Justice and Feminist Self-Management Practices in the Solidarity Economy: A Quantitative Analysis of the Factors that Impact Enterprises Formed by Women in Brazil

Authors: Maria de Nazaré Moraes Soares, Silvia Maria Dias Pedro Rebouças, José Carlos Lázaro

Abstract:

The Solidarity Economy (SE) acts in the re-articulation of the economic field with the other spheres of social action. The significant participation of women in SE resulted in the formation of a national network of self-managed enterprises in Brazil: the Solidarity and Feminist Economy Network (SFEN). The objective of this research is to identify factors of gender justice and feminist self-management practices that correspond to the reality of women in SE enterprises. The conceptual apparatus related to feminist studies in this research covers Nancy Fraser's approach to gender justice, Patricia Yancey Martin's approach to feminist management practices, and postcolonial feminist authors such as Mohanty and María Lugones, who bring the discussion to peripheral contexts, a necessary perspective when observing the women's movement in SE. The research is quantitative in its data collection and analysis phases. Data were collected from two sources: the database mapped in Brazil in 2010-2013 by the National Information System on Solidarity Economy, and 150 questionnaires completed by women from 16 enterprises in SFEN, in a state in the Brazilian northeast. The data were analyzed using the multivariate statistical technique of factor analysis. The results show that the factors that define gender justice and feminist self-management practices in SE are interrelated at several levels, statistically demonstrating the intersectional condition of women's issues. The evidence from the quantitative analysis allowed us to understand the intersectionality of the dimensions of gender justice and feminist management practices; in this sense, the failure to share domestic work contributes to the under-representation of women in public spaces, especially in peripheral contexts. The study contributes important reflections to this area and can be complemented in the future by qualitative research approaching the perspective of women in the context of the SE self-management paradigm.
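
A minimal sketch of the factor-analysis step is given below, using a random questionnaire matrix as a stand-in for the SFEN responses; the numbers of respondents, items and factors are assumptions for illustration.

```python
# Minimal sketch: latent factors extracted from questionnaire responses.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(3)
responses = rng.integers(1, 6, size=(150, 20)).astype(float)  # 150 women, 20 items

fa = FactorAnalysis(n_components=4, random_state=0)
scores = fa.fit_transform(responses)       # factor scores per respondent
loadings = fa.components_                  # item loadings per factor (4 x 20)

for i, row in enumerate(loadings, start=1):
    top_items = np.argsort(np.abs(row))[::-1][:3]
    print(f"factor {i}: strongest items {top_items.tolist()}")
```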

Keywords: Feminist management practices, gender justice, self-management, solidarity economy.

147 An Introduction to the Concept of Environmental Audit: Indian Context

Authors: Pradip Kumar Das

Abstract:

Phenomenal growth of population and industry exploits the environment in varied ways. Consequently, the greenhouse effect and other allied problems are threatening mankind the world over. Protection and upgrading of the environment have therefore become a prime necessity for all of mankind and for the sustainable development of the environment. People in humbler walks of life, including corporate citizens, have become aware of the impacts of environmental pollution. Governments of various nations have entered the picture with laws and regulations to correct and cure the effects of present and past violations of environmental practices and to obstruct future violations of good environmental discipline. From this perspective, environmental audit provides verification and validation to ensure that the various environmental laws are complied with and that adequate care has been taken towards environmental protection and preservation. The discipline of environmental audit has experienced expressive development throughout the world. It examines the positive and negative effects of the activities of an enterprise on the environment and provides an in-depth study of the company's processes and its growth in realizing long-term strategic goals. Environmental audit helps corporations assess their achievements, correct deficiencies, reduce risks to health and improve safety. As a strong management tool, environmental audit should be administered by industry for its own self-assessment. Developed countries all over the globe have gone ahead in environmental quantification; unfortunately, there is a lack of awareness about pollution and environmental hazards among the common people in India. In light of this situation, the conceptual analysis of this study is concerned with the rationale of environmental audit for industry and society as a whole, and highlights the emerging dimensions in auditing theory and practice. A modest attempt has been made to throw light on the recent development of environmental audit in developing nations like India and the problems associated with its implementation. The conceptual study also reflects that, despite various obstacles, environmental audit is becoming an increasingly important aspect of the corporate sector in India. Lastly, conclusions along with suggestions are offered to improve the current scenario.

Keywords: Environmental audit, environmental hazards, environmental laws, environmental protection, environmental preservation.

146 Image-Based UAV Vertical Distance and Velocity Estimation Algorithm during the Vertical Landing Phase Using Low-Resolution Images

Authors: Seyed-Yaser Nabavi-Chashmi, Davood Asadi, Karim Ahmadi, Eren Demir

Abstract:

The landing phase of a UAV is very critical, as there are many uncertainties in this phase which can easily entail a hard landing or even a crash. In this paper, the estimation of the relative distance and velocity to the ground, one of the most important processes during the landing phase, is studied. Using accurate measurement sensors as an alternative approach can be very expensive for sensors like LIDAR, or can come with a limited operational range for sensors like ultrasonic sensors. Additionally, absolute positioning systems like GPS or an IMU cannot provide the distance to the ground independently. The focus of this paper is to determine whether the relative distance and velocity between the UAV and the ground can be measured in the landing phase using only low-resolution images taken by a monocular camera. The Lucas-Kanade feature tracking technique is employed to extract the most suitable features in a series of images taken during the UAV landing. Two different approaches based on the Extended Kalman Filter (EKF) are proposed, and their performance in estimating the relative distance and velocity is compared. The first approach uses the kinematics of the UAV as the process model and the calculated optical flow as the measurement. The second approach uses the feature's projection on the camera plane (pixel position) as the measurement, while employing both the kinematics of the UAV and the dynamics of the projected point's variation as the process model, to estimate both relative distance and relative velocity. To verify the results, a sequence of low-quality images taken by a camera moving on a specifically developed testbed has been used to compare the performance of the proposed algorithms. The case studies show that the image quality introduces considerable noise, which reduces the performance of the first approach. Using the projected feature position, on the other hand, is much less sensitive to the noise and estimates the distance and velocity with relatively high accuracy. This approach can also be used to predict the future projected feature position, which can drastically decrease the computational workload, an important criterion for real-time applications.
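
A hedged sketch of the feature-tracking front end is given below: corners detected in one low-resolution frame are tracked into the next with OpenCV's pyramidal Lucas-Kanade routine, producing the pixel displacements that either EKF formulation would consume. The file names and parameter values are placeholders.

```python
# Hedged sketch: corner detection plus pyramidal Lucas-Kanade tracking between
# two consecutive low-resolution frames.
import cv2
import numpy as np

prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)  # placeholder file names
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# Pick a handful of strong corners in the previous frame
pts_prev = cv2.goodFeaturesToTrack(prev, maxCorners=20, qualityLevel=0.01,
                                   minDistance=10)

# Track them into the current frame (pyramidal Lucas-Kanade)
pts_curr, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, pts_prev, None)

flow = (pts_curr - pts_prev)[status.ravel() == 1]   # per-feature pixel motion
print("mean optical flow [px]:", np.mean(flow, axis=0))
```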

Keywords: Automatic landing, multirotor, nonlinear control, parameter estimation, optical flow.

145 Simulation and Design of an Aerospace Mission Powered by “Candy” Type Fuel Engines

Authors: N. Hernández Huertas, F. Rojas Mora

Abstract:

Sounding rockets are aerospace vehicles that were developed in the mid-20th century, and since then numerous investigations have been carried out with the aim of innovating in this type of technology. However, the costs associated with producing this type of technology are usually quite high, and therefore the challenge today is to reduce them. The main objective of this document is to present the design process of a Colombian aerospace mission capable of reaching the thermosphere using low-cost “Candy” type solid fuel engines. This mission is the latest development of the Uniandes Aerospace Project (PUA for its Spanish acronym), an undergraduate and postgraduate research group at Universidad de los Andes (Bogotá, Colombia) dedicated to venturing into this type of technology. The investigations carried out on Candy-type solid fuel, a propellant composed of potassium nitrate and sorbitol, have allowed the production of engines powerful enough to reach space, which represents a unique technological advance in Latin America and an important development in experimental rocketry. Following an iterative engineering design methodology, it was possible to design a two-stage sounding rocket with one solid fuel engine in each stage, which was then simulated in RockSim V9.0 software and reached an apogee of approximately 150 km above sea level. Similarly, a speed of Mach 5 was obtained, and a finite element analysis showed that the rocket is strong enough to withstand such speeds. Under these premises, it was demonstrated that it is possible to build a high-power aerospace mission at low cost using Candy-type solid fuel engines. The feasibility of carrying out similar missions therefore depends mainly on the ability to replicate the engines properly, since, as mentioned above, the design of the rocket is adequate to reach supersonic speeds and reach space. Consequently, with a team of at least 3 members, the mission can be completed in less than 3 months. In publishing this project, it is intended to serve as a reference for future research in this field and to benefit the industry.

Keywords: Aerospace missions, candy type solid propellant engines, design of solid rockets, experimental rocketry, low costs missions.

144 A Literature Review on the Effect of Industrial Clusters and the Absorptive Capacity on Innovation

Authors: Enrique Claver Cortés, Bartolomé Marco Lajara, Eduardo Sánchez García, Pedro Seva Larrosa, Encarnación Manresa Marhuenda, Lorena Ruiz Fernández, Esther Poveda Pareja

Abstract:

In recent decades, the analysis of the effects of clustering as an essential factor for the development of innovations and the competitiveness of enterprises has raised great interest in different areas. Nowadays, companies have access to almost all tangible and intangible resources located and/or developed in any country in the world. However, despite the obvious advantages that this situation entails for companies, their geographical location has shown itself, increasingly clearly, to be a fundamental factor that positively influences their innovative performance and competitiveness. Industrial clusters could represent a unique level of analysis, positioned between the individual company and the industry, which makes them an ideal unit of analysis for determining the effects derived from a company's membership of a cluster. In addition, absorptive capacity (hereinafter 'AC') can mediate the process of innovation development by companies located in a cluster: the transformation and exploitation of knowledge could have a mediating effect between knowledge acquisition and innovative performance. The main objective of this work is to determine the key factors that affect the degree to which companies generate and use knowledge from their environment and, consequently, their innovative performance and competitiveness. The elements analyzed are the companies' membership of a cluster and their AC. To this end, the 30 most relevant papers published on this subject in the "Web of Science" database have been reviewed. Our findings show that, within a cluster, knowledge coming from the companies' environment can significantly influence their innovative performance and competitiveness, although in this relationship the degree to which companies access and exploit this knowledge plays a fundamental role, which in turn depends on a series of elements both internal and external to the company.

Keywords: Absorptive capacity, clusters, innovation, knowledge.

143 An Investigation into the Potential of Industrial Low Grade Heat in Membrane Distillation for Freshwater Production

Authors: Yehia Manawi, Ahmad Kayvani Fard

Abstract:

Membrane distillation is an emerging technology which has been used to produce freshwater and purify different types of aqueous mixtures. Qatar is an arid country where almost 100% of the freshwater demand is supplied through the energy-intensive thermal desalination process. The country's need for water has reached an all-time high, which necessitates finding an alternative way to augment the freshwater supply without any drastic effect on the environment. The objective of this paper was to investigate the potential of using industrial low-grade waste heat to produce freshwater using membrane distillation. The main part of this work was a heat audit conducted on selected Qatari chemical industries to estimate both the amount of waste heat that can potentially be recovered and the amount of freshwater that could be produced if such waste heat were recovered.

The heat audit showed that around 605 MW of waste heat can be recovered from the studied Qatari chemical industries, corresponding to a total daily production of 5078.7 cubic meters of freshwater.

This water can be used in a wide variety of applications, such as human consumption or industry. The amount of produced freshwater may look small when compared to that produced by thermal desalination plants; however, one must bear in mind that this water comes from waste and can be used to supply water to small cities or remote areas which are not connected to the water grid. The idea of producing freshwater from two widely available waste streams (thermally rejected brine and waste heat) seems promising, as fewer environmental and economic impacts will be associated with freshwater production, and in the near future this may augment the conventional route of producing freshwater, currently thermal desalination. This work has shown that low-grade waste heat in the chemical industries of Qatar, and perhaps the rest of the world, can contribute to additional freshwater production using membrane distillation without significantly adding to the environmental impact.
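
The two headline figures above can be linked by simple arithmetic; the sketch below derives the implied specific thermal energy per cubic metre of freshwater, under our assumption that the recovered heat is available around the clock.

```python
# Back-of-the-envelope link between the recoverable waste heat and the daily
# freshwater production, assuming the heat is available 24 hours a day.
waste_heat_mw = 605.0
freshwater_m3_per_day = 5078.7

daily_heat_mwh = waste_heat_mw * 24.0
specific_energy_kwh_per_m3 = daily_heat_mwh * 1000.0 / freshwater_m3_per_day

print(f"implied specific thermal energy: {specific_energy_kwh_per_m3:,.0f} kWh/m^3")
# roughly 2,900 kWh of thermal energy per cubic metre under this assumption
```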

Keywords: Membrane distillation, desalination, heat recovery, environment.

142 Creating Smart and Healthy Cities by Exploring the Potentials of Emerging Technologies and Social Innovation for Urban Efficiency: Lessons from the Innovative City of Boston

Authors: Mohammed Agbali, Claudia Trillo, Yusuf Arayici, Terrence Fernando

Abstract:

The widespread adoption of the Smart City concept has introduced a new era of computing paradigms, with opportunities for city administrators and stakeholders in various sectors to re-think the concept of urbanization and the development of healthy cities. With the world population rapidly becoming urban-centric, especially in the emerging economies, social innovation will greatly assist in deploying emerging technologies to address the development challenges in core sectors of future cities. In this context, sustainable healthcare delivery and improved quality of life are considered to be at the heart of the healthy city agenda. This paper examines the Boston innovation landscape from the perspective of smart services and the innovation ecosystem for sustainable development, especially in transportation and healthcare. It investigates the policy implementation process of the Healthy City agenda and eHealth economy innovation based on the experience of the City of Boston, Massachusetts. For this purpose, three emerging areas are emphasized, namely the eHealth concept, innovation hubs, and the emerging technologies that drive innovation. This was carried out through an empirical analysis of public-sector and industry-wide interviews and surveys about Boston’s current initiatives and the enabling environment. The paper highlights a few potential research directions for service integration and social innovation in deploying emerging technologies within the healthy city agenda. The study therefore suggests the need to prioritize social innovation as an overarching strategy for building sustainable Smart Cities in order to avoid technology lock-in. Finally, it concludes that the Boston example of an innovation economy is unique in view of its existing platforms for innovation and a proper understanding of their dynamics, which is imperative in building smart and healthy cities where the quality of life of the citizenry can be improved.

Keywords: Smart city, social innovation, eHealth, innovation hubs, emerging technologies, equitable healthcare, healthy cities.

141 A Comprehensive Survey on Machine Learning Techniques and User Authentication Approaches for Credit Card Fraud Detection

Authors: Niloofar Yousefi, Marie Alaghband, Ivan Garibay

Abstract:

With the increase in credit card usage, the volume of credit card misuse has also increased significantly, which may cause appreciable financial losses for both credit card holders and the financial organizations issuing credit cards. As a result, financial organizations are working hard on developing and deploying credit card fraud detection methods, in order to adapt to ever-evolving, increasingly sophisticated defrauding strategies and to identify illicit transactions as quickly as possible to protect themselves and their customers. Compounding the complex nature of such adverse strategies, credit card fraudulent activities are rare events compared to the number of legitimate transactions. Hence, the challenge of developing fraud detection methods that are accurate and efficient is substantially intensified and, as a consequence, credit card fraud detection has lately become a very active area of research. In this work, we provide a survey of current techniques most relevant to the problem of credit card fraud detection. We carry out our survey in two main parts. In the first part, we focus on studies utilizing classical machine learning models, which mostly employ traditional transactional features to make fraud predictions. These models typically rely on static characteristics, such as what the user knows (knowledge-based methods) or what he/she has access to (object-based methods). In the second part of our survey, we review more advanced techniques of user authentication, which use behavioral biometrics to identify an individual based on his/her unique behavior while interacting with his/her electronic devices. These approaches rely on how people behave (instead of what they know or have), which cannot be easily forged. By providing an overview of current approaches and the results reported in the literature, this survey aims to drive the future research agenda for the community in order to develop more accurate, reliable and scalable models of credit card fraud detection.
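
As a minimal illustration of the classical, transaction-feature-based models reviewed in the first part of the survey, the sketch below trains a classifier on synthetic, highly imbalanced data and handles the rare-event nature of fraud through class weighting. The data, features, and parameters are hypothetical.

```python
# Hypothetical sketch of a classical fraud-detection pipeline on imbalanced data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic "transactions" with ~1% fraud, mimicking the rare-event nature of the problem
X, y = make_classification(n_samples=20000, n_features=12, weights=[0.99, 0.01],
                           random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

# class_weight="balanced" re-weights the rare fraud class during training
clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=42)
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test), target_names=["legit", "fraud"]))
```

In practice, precision-recall trade-offs matter more than raw accuracy for such skewed data, which is one reason the survey contrasts these static models with behavioral-biometric authentication.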

Keywords: Credit card fraud detection, user authentication, behavioral biometrics, machine learning, literature survey.

140 Seismic Protection of Automated Stocker System by Customized Viscous Fluid Dampers

Authors: Y. P. Wang, J. K. Chen, C. H. Lee, G. H. Huang, M. C. Wang, S. W. Chen, Y. T. Kuan, H. C. Lin, C. Y. Huang, W. H. Liang, W. C. Lin, H. C. Yu

Abstract:

The hi-tech industries in the Science Park in southern Taiwan were heavily damaged by a strong earthquake in early 2016. The financial loss in this event was attributed primarily to the automated stocker systems handling fully processed products, and recovery of the automated stocker system from the aftermath accounted for a major share of the lead time. Therefore, the development of effective means of protecting stockers against earthquakes has become the highest priority for risk minimization and business continuity. This study proposes to mitigate the seismic response of the stockers by introducing viscous fluid dampers between the ceiling and the top of the stockers. The stocker is expected to vibrate less violently with a passive control force applied at its top. A linear damper is considered in this application, with an optimal damping coefficient determined from a preliminary parametric study. The damper is small in size compared with those adopted for building or bridge applications. Component tests of the dampers have been carried out to make sure they meet the design requirements. Shake table tests have been further conducted to verify the proposed scheme under realistic earthquake conditions. Encouraging results have been achieved, with seismic responses reduced by up to 60% and the FOUPs (front-opening unified pods) prevented from falling off the shelves, as would otherwise be the case if left unprotected. The effectiveness of adopting a viscous fluid damper for seismic control of the stocker against the ceiling has been confirmed. This technique has been adopted by Macronix International Co., LTD for the seismic retrofit of existing stockers. Demonstration projects on the application of the proposed technique are being planned for other companies in the display industry as well.
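
The simplified sketch below, which is not the authors' design model, illustrates the underlying idea: idealizing the stocker as a single-degree-of-freedom system and adding a supplemental linear viscous damping term sharply reduces the peak response under a near-resonant base excitation. All masses, stiffnesses, damping values, and the excitation are hypothetical.

```python
# Illustrative SDOF model of a stocker with and without a supplemental viscous damper.
import numpy as np
from scipy.integrate import solve_ivp

m, k = 2000.0, 8.0e4                          # kg, N/m -> natural frequency ~1 Hz
c_struct = 2 * 0.02 * np.sqrt(k * m)          # 2% inherent damping
a_g = lambda t: 2.0 * np.sin(2 * np.pi * 1.0 * t)   # near-resonant base acceleration (m/s^2)

def peak_response(c_damper):
    """Peak relative displacement (m) for a given supplemental damping coefficient."""
    def ode(t, y):
        x, v = y
        return [v, (-(c_struct + c_damper) * v - k * x) / m - a_g(t)]
    sol = solve_ivp(ode, (0, 20), [0.0, 0.0], max_step=0.005)
    return np.max(np.abs(sol.y[0]))

peak_bare = peak_response(0.0)
peak_damped = peak_response(0.3 * 2 * np.sqrt(k * m))   # damper adding ~30% of critical damping
print(f"response reduction: {100 * (1 - peak_damped / peak_bare):.0f}%")
```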

Keywords: Hi-tech industries, seismic protection, automated stocker system, viscous fluid damper.

139 Evaluation of Optimum Performance of Lateral Intakes

Authors: Mohammad Reza Pirestani, Hamid Reza Vosoghifar, Pegah Jazayeri

Abstract:

In designing river intakes and diversion structures, it is paramount that the sediments entering the intake are minimized or, if possible, completely excluded. Due to high water velocity, sediments can significantly damage hydraulic structures, especially when mechanical equipment like pumps and turbines is used. This subsequently results in wasted water, electricity and further costs. Therefore, it is prudent to investigate and analyze the performance of lateral intakes affected by sediment control structures. Laboratory experiments, despite their vast potential and benefits, can face certain limitations and challenges. Some of these include limitations in equipment and facilities, space constraints, equipment errors (including a lack of adequate precision or mal-operation) and, finally, human error. Research has shown that in order to achieve the ultimate goal of intake structure design, namely long-lasting and proficient structures, the best combination of sediment control structures (such as sills and submerged vanes) should be determined along with the parameters that increase their performance (such as diversion angle and location). Cost, difficulty of execution and environmental impacts should also be included in evaluating the optimal design. This solution can then be applied to similar problems in the future. Consequently, the model used to arrive at the optimal design requires a high level of accuracy and precision in order to avoid improper design and execution of projects. The process of creating and executing the design should be as comprehensive and applicable as possible. Therefore, it is important that influential parameters and vital criteria are fully understood and applied at all stages of choosing the optimal design. In this article, the parameters influencing optimal intake performance, the advantages and disadvantages, and the efficiency of a given design are studied. Then, a multi-criteria decision matrix is utilized to choose the optimal model, which can be used to determine the proper parameters for constructing the intake.
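
A minimal sketch of the kind of multi-criteria decision matrix described above is shown below, using a simple weighted-sum score. The alternatives, criteria, scores and weights are hypothetical placeholders rather than values from the study.

```python
# Hypothetical weighted-sum decision matrix for selecting a sediment control scheme.
import numpy as np

alternatives = ["sill only", "submerged vanes", "sill + vanes"]
criteria = ["sediment control", "cost", "ease of execution", "environmental impact"]
# Scores normalized to 0-1, higher is always better (cost/impact already inverted)
scores = np.array([
    [0.6, 0.8, 0.9, 0.7],
    [0.7, 0.6, 0.6, 0.8],
    [0.9, 0.4, 0.5, 0.6],
])
weights = np.array([0.4, 0.25, 0.2, 0.15])    # relative importance of each criterion

totals = scores @ weights                      # weighted-sum score per alternative
for name, score in zip(alternatives, totals):
    print(f"{name:>15}: {score:.2f}")
print("selected design:", alternatives[int(np.argmax(totals))])
```

More elaborate schemes (e.g. AHP or TOPSIS) follow the same pattern of scoring alternatives against weighted criteria.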

Keywords: Diversion structures, lateral intake, multi-criteria decision making, optimal design, sediment control.

138 Client Satisfaction: Does Private or Public Health Sector Make a Difference? Results from Secondary Data Analysis in Sindh, Pakistan

Authors: Wajiha Javed, Arsalan Jabbar, Nelofer Mehboob, Muhammad Tafseer, Zahid Memon

Abstract:

Introduction: Researchers globally have strived to explore the diverse factors that augment the continuation and uptake of family planning methods. Client satisfaction is one of the core determinants facilitating continuation of family planning methods. There is a major debate, yet scanty evidence, contrasting the public and private sectors with respect to client satisfaction. The objective of this study is to compare the quality of care provided by the public and private sectors of Pakistan through a client satisfaction lens. Methods: We used the Pakistan Demographic and Health Survey 2012-13 dataset on 3133 women. Ten different multivariate models were built to explore the relationship between client satisfaction and the dependent outcome after adjusting for all known confounding factors; results are presented as OR and AOR (95% CI). Results: Multivariate analyses showed that clients were less satisfied with contraceptive provision from the private sector than from the public sector (AOR 0.92, 95% CI 0.63-1.68), although the result was not statistically significant. Clients were more satisfied with the private sector than the public sector with respect to other determinants of quality of care: follow-up care (AOR 3.29, 95% CI 1.95-5.55), infection prevention (AOR 2.41, 95% CI 1.60-3.62), counseling services (AOR 2.01, 95% CI 1.27-3.18), timely treatment (AOR 3.37, 95% CI 2.20-5.15), attitude of staff (AOR 2.23, 95% CI 1.50-3.33), punctuality of staff (AOR 2.28, 95% CI 1.92-4.13), timely referral (AOR 2.34, 95% CI 1.63-3.35), staff cooperation (AOR 1.75, 95% CI 1.22-2.51) and complication handling (AOR 2.27, 95% CI 1.56-3.29). Discussion: The public sector has attained substantial satisfaction levels with respect to the provision of contraceptives, which contrasts with previous literature from multi-country studies. Our study is, however, in concordance with a study from Tanzania where the public sector was more likely to offer family planning services to clients than private facilities. Conclusion: In the majority of developing countries, the public sector is more involved in family planning service provision; in Pakistan, however, client satisfaction is higher in the private sector, which opens doors for public-private partnerships and collaboration in the near future.
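
For readers unfamiliar with the reported statistics, the sketch below shows how adjusted odds ratios (AOR) with 95% confidence intervals are typically obtained: a logistic regression of satisfaction on sector membership plus confounders, with exponentiated coefficients. The variables and data are illustrative and do not reproduce the DHS analysis.

```python
# Hypothetical sketch of computing adjusted odds ratios via logistic regression.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 3000
df = pd.DataFrame({
    "satisfied": rng.integers(0, 2, n),        # illustrative binary outcome
    "private_sector": rng.integers(0, 2, n),   # 1 = private, 0 = public
    "age": rng.integers(18, 50, n),            # example confounder
    "urban": rng.integers(0, 2, n),            # example confounder
})

model = smf.logit("satisfied ~ private_sector + age + urban", data=df).fit(disp=0)
aor = np.exp(model.params)                     # adjusted odds ratios
ci = np.exp(model.conf_int())                  # 95% confidence intervals
print(pd.concat([aor.rename("AOR"), ci.rename(columns={0: "2.5%", 1: "97.5%"})], axis=1))
```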

Keywords: Client satisfaction, family planning, public-private partnership, quality of care.

137 Case-Based Reasoning Application to Predict Geological Features at Site C Dam Construction Project

Authors: S. Behnam Malekzadeh, I. Kerr, T. Kaempffer, T. Harper, A. Watson

Abstract:

The Site C Hydroelectric dam is currently being constructed in north-eastern British Columbia on sub-horizontal sedimentary strata that dip approximately 15 meters from one bank of the Peace River to the other. More than 615 pressure sensors (vibrating wire piezometers) have been installed on bedding planes (BPs) since construction began, with over 80 more planned before project completion. These pressure measurements are essential for monitoring the stability of the rock foundation during and after construction and for dam safety purposes. BPs are identified by their clay gouge infilling, which varies in thickness from less than 1 mm to 20 mm and can be challenging to identify, as the core drilling process often disturbs or washes away the gouge material. Without depth predictions from nearby boreholes, stratigraphic markers, and downhole geophysical data, it is difficult to confidently identify BP targets for the sensors. In this paper, a Case-Based Reasoning (CBR) method was used to develop an empirical model called the Bedding Plane Elevation Prediction (BPEP) to help geologists and geotechnical engineers predict geological features and BPs at new locations in a fast and accurate manner. To develop the CBR model, a database was built from 64 pressure sensors already installed on the key bedding planes BP25, BP28, and BP31 on the Right Bank, including BP elevations and coordinates. Thirteen (20%) of the most recent cases were selected to validate and evaluate the accuracy of the developed model, with similarity defined as the distance between previous and recent cases used to predict the depth of significant BPs. The average difference between actual and predicted BP elevations for the above BPs was ±55 cm, 69% of predicted elevations were within ±79 cm of actual BP elevations, and 100% of predicted elevations for new cases were within a ±99 cm range. Eventually, the actual results will be used to extend the database and improve BPEP so that it functions as a learning machine, predicting more accurate BP elevations for future sensor installations.
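
A simplified sketch of the case-based idea behind BPEP follows: the elevation of a bedding plane at a new sensor location is predicted from the most similar previous cases, with similarity taken as horizontal distance and closer cases weighted more heavily. The coordinates and elevations below are invented placeholders, not Site C data.

```python
# Simplified, hypothetical case-based prediction of a bedding plane elevation.
import numpy as np

# Previous cases: (easting, northing, BP elevation) for sensors already installed on one BP
cases = np.array([
    [1000.0, 2000.0, 412.3],
    [1050.0, 2010.0, 411.8],
    [1100.0, 1990.0, 411.1],
    [1150.0, 2030.0, 410.6],
])

def predict_bp_elevation(x, y, cases, k=3):
    """Inverse-distance-weighted average of the k nearest previous cases."""
    d = np.hypot(cases[:, 0] - x, cases[:, 1] - y)   # horizontal distance to each case
    nearest = np.argsort(d)[:k]                       # retrieve the k most similar cases
    w = 1.0 / (d[nearest] + 1e-6)                     # closer cases weigh more
    return float(np.sum(w * cases[nearest, 2]) / np.sum(w))

print(f"predicted BP elevation: {predict_bp_elevation(1075.0, 2005.0, cases):.2f} m")
```

Each newly installed sensor would then be appended to `cases`, which is the "learning machine" aspect mentioned above.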

Keywords: Case-Based Reasoning, CBR, geological feature, geology, piezometer, pressure sensor, core logging, dam construction.

136 A Dataset of Program Educational Objectives Mapped to ABET Outcomes: Data Cleansing, Exploratory Data Analysis and Modeling

Authors: Addin Osman, Anwar Ali Yahya, Mohammed Basit Kamal

Abstract:

Datasets or collections are becoming important assets in themselves and can now be accepted as a primary intellectual output of research. The quality and usage of a dataset depend mainly on the context under which it has been collected, processed, analyzed, validated, and interpreted. This paper presents a collection of program educational objectives mapped to student outcomes, gathered from self-study reports prepared by 32 engineering programs accredited by ABET. The manual mapping (classification) of these data is a notoriously tedious, time-consuming process. In addition, it requires experts in the area, who are mostly not available. The operational settings under which the collection was produced are described. The collection has been cleansed and preprocessed, features have been selected, and a preliminary exploratory data analysis has been performed in order to illustrate the properties and usefulness of the collection. Finally, the collection has been benchmarked using nine of the most widely used supervised multi-label classification techniques (Binary Relevance, Label Powerset, Classifier Chains, Pruned Sets, Random k-label sets, Ensemble of Classifier Chains, Ensemble of Pruned Sets, Multi-Label k-Nearest Neighbors and Back-Propagation Multi-Label Learning). The techniques have been compared to each other using five well-known measures, including Accuracy, Hamming Loss, Micro-F, and Macro-F. The Ensemble of Classifier Chains and Ensemble of Pruned Sets achieved encouraging performance compared to the other multi-label classification methods tested, while the Classifier Chains method showed the worst performance. To recap, the benchmark has achieved promising results by utilizing the preliminary exploratory data analysis performed on the collection, proposing new trends for research and providing a baseline for future studies.
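
As a pointer for readers who want to reproduce a baseline, the sketch below implements one of the benchmarked techniques, Binary Relevance, as one binary classifier per outcome over TF-IDF features. The two example objectives and their outcome tags are invented placeholders, not items from the actual collection.

```python
# Minimal Binary Relevance baseline: one binary classifier per ABET student outcome.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

objectives = [
    "Graduates will apply mathematics and engineering principles to solve problems.",
    "Graduates will communicate effectively and work in multidisciplinary teams.",
]
labels = [["outcome_a"], ["outcome_d", "outcome_g"]]     # hypothetical ABET outcome tags

mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)                             # binary indicator matrix of labels

# OneVsRestClassifier trains an independent binary classifier per label (Binary Relevance)
clf = make_pipeline(TfidfVectorizer(), OneVsRestClassifier(LogisticRegression(max_iter=1000)))
clf.fit(objectives, Y)
pred = clf.predict(["Graduates will design systems and communicate results to stakeholders."])
print(mlb.inverse_transform(pred))
```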

Keywords: Benchmark collection, program educational objectives, student outcomes, ABET, accreditation, machine learning, supervised multi-label classification, text mining.
