Search results for: intensive unit scoring system
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 19624

9634 Application of Combined Cluster and Discriminant Analysis to Make the Operation of Monitoring Networks More Economical

Authors: Norbert Magyar, Jozsef Kovacs, Peter Tanos, Balazs Trasy, Tamas Garamhegyi, Istvan Gabor Hatvani

Abstract:

Water is one of the most important common resources, and as a result of urbanization, agriculture, and industry, it is becoming more and more exposed to potential pollutants. Preventing the deterioration of water quality is a crucial task for environmental scientists. To achieve this aim, the operation of monitoring networks is necessary. In general, these networks have to meet many important requirements, such as representativeness and cost efficiency. However, existing monitoring networks often include sampling sites that are unnecessary. With the elimination of these sites, the monitoring network can be optimized and operated more economically. The aim of this study is to illustrate the applicability of CCDA (Combined Cluster and Discriminant Analysis) to the field of water quality monitoring and to optimize the monitoring networks of a river (the Danube), a wetland-lake system (Kis-Balaton & Lake Balaton), and two surface-subsurface water systems on the watershed of Lake Neusiedl/Lake Fertő and in the Szigetköz area over a period of approximately two decades. CCDA combines two multivariate data analysis methods: hierarchical cluster analysis and linear discriminant analysis. Its goal is to determine homogeneous groups of observations, in our case sampling sites, by comparing the goodness of preconceived classifications obtained from hierarchical cluster analysis with random classifications. The main idea behind CCDA is that if the ratio of correctly classified cases for a grouping is higher than at least 95% of the ratios for the random classifications, then at the given level of significance (α = 0.05) the sampling sites do not form a homogeneous group. Because sampling on Lake Neusiedl/Lake Fertő was conducted at the same time at all sampling sites, it was possible to visualize the differences between sampling sites belonging to the same or different groups on scatterplots.
Based on the results, the monitoring network of the Danube yields redundant information over certain sections, so that of 12 sampling sites, 3 could be eliminated without loss of information. In the case of the wetland (Kis-Balaton), one pair of sampling sites out of 12 could be discarded, and in the case of Lake Balaton, 5 out of 10. For the groundwater system of the catchment area of Lake Neusiedl/Lake Fertő, all 50 monitoring wells are necessary; there is no redundant information in the system. The number of sampling sites on Lake Neusiedl/Lake Fertő itself can be reduced to approximately half the original number. Furthermore, neighbouring sampling sites were compared pairwise using CCDA, and the results were plotted on diagrams or isoline maps showing the location of the greatest differences. These results can help researchers decide where to place new sampling sites. The application of CCDA proved to be a useful tool in the optimization of monitoring networks for different types of water bodies. Based on the results obtained, the monitoring networks can be operated more economically.
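As a rough illustration of the CCDA decision rule described above, the following sketch compares the correct-classification ratio of a preconceived grouping of sampling sites against random groupings. It is not the authors' implementation: a leave-one-out nearest-centroid classifier stands in for the linear discriminant analysis step, and the data, labels, and function names are hypothetical.

```python
import random

def correct_ratio(data, labels):
    """Leave-one-out nearest-centroid classification accuracy (stand-in for LDA)."""
    n = len(data)
    correct = 0
    for i in range(n):
        sums, counts = {}, {}
        for j in range(n):
            if j == i:
                continue
            g = labels[j]
            if g not in sums:
                sums[g] = [0.0] * len(data[j])
                counts[g] = 0
            sums[g] = [s + v for s, v in zip(sums[g], data[j])]
            counts[g] += 1
        best_g, best_d = None, float("inf")
        for g, s in sums.items():
            centroid = [v / counts[g] for v in s]
            d = sum((a - b) ** 2 for a, b in zip(data[i], centroid))
            if d < best_d:
                best_g, best_d = g, d
        if best_g == labels[i]:
            correct += 1
    return correct / n

def ccda_homogeneous(data, labels, n_random=199, alpha=0.05, seed=0):
    """Return (is_homogeneous, observed_ratio) per the CCDA permutation rule."""
    rng = random.Random(seed)
    observed = correct_ratio(data, labels)
    random_ratios = []
    for _ in range(n_random):
        shuffled = list(labels)
        rng.shuffle(shuffled)
        random_ratios.append(correct_ratio(data, shuffled))
    exceeds = sum(observed > r for r in random_ratios) / n_random
    # separability above the 95th percentile of random groupings means
    # the sampling sites do NOT form a homogeneous group
    return exceeds < 1.0 - alpha, observed
```

With two well-separated clusters of sites, the observed ratio beats nearly all random groupings, so the test correctly reports that the sites are not homogeneous.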

Keywords: combined cluster and discriminant analysis, cost efficiency, monitoring network optimization, water quality

Procedia PDF Downloads 335
9633 The Problem of the Use of Learning Analytics in Distance Higher Education: An Analytical Study of the Open and Distance University System in Mexico

Authors: Ismene Ithai Bras-Ruiz

Abstract:

Learning Analytics (LA) is employed by universities not only as a tool but as a specialized field to support students and professors. However, not all academic programs apply LA with the same goal or use the same tools. In fact, LA comprises five main fields of study (academic analytics, action research, educational data mining, recommender systems, and personalized systems). These fields can help not just to inform academic authorities about the situation of a program, but also to detect at-risk students, professors with needs, or general problems. At the highest level, Artificial Intelligence techniques are applied to support learning practices. LA has adopted different techniques: statistics, ethnography, data visualization, machine learning, natural language processing, and data mining. Each academic program is expected to decide which field to adopt on the basis of its academic interests, but also of its capacities in terms of professors, administrators, systems, logistics, data analysts, and academic goals. The Open and Distance University System (SUAYED in Spanish) of the National Autonomous University of Mexico (UNAM) has been working for forty years as an alternative to traditional programs; one of its main supports has been the use of new information and communication technologies (ICT). Today, UNAM has one of the largest networks of higher education programs, with twenty-six academic programs in different faculties. This means that every faculty works with heterogeneous populations and academic problems. In this sense, every program has developed its own Learning Analytics techniques to address academic issues. In this context, an investigation was carried out to assess the application of LA in the academic programs of the different faculties.
The premise of the study was that not all the faculties have utilized advanced LA techniques, and it is probable that they do not know which field of study is closest to their program goals. Consequently, not all the programs know about LA; however, this does not mean they do not work with LA in a veiled or less explicit sense. It is very important to know the degree of knowledge about LA for two reasons: 1) it allows the administration's work to improve the quality of teaching to be appreciated, and 2) it shows whether other LA techniques could be adopted. For this purpose, three instruments were designed to determine the experience and knowledge of LA. These were applied to ten faculty coordinators and their personnel; thirty members were consulted (academic secretary, systems manager or data analyst, and coordinator of the program). The final report showed that almost all the programs work with basic statistical tools and techniques; this helps the administration only to know what is happening inside the academic program, but the programs are not ready to move up to the next level, that is, applying Artificial Intelligence or Recommender Systems to reach a personalized learning system. This situation is not related to knowledge of LA, but to the clarity of the long-term goals.

Keywords: academic improvements, analytical techniques, learning analytics, personnel expertise

Procedia PDF Downloads 115
9632 Stroke Rehabilitation via Electroencephalogram Sensors and an Articulated Robot

Authors: Winncy Du, Jeremy Nguyen, Harpinder Dhillon, Reinardus Justin Halim, Clayton Haske, Trent Hughes, Marissa Ortiz, Rozy Saini

Abstract:

Stroke often causes death or cerebrovascular (CV) brain damage. Most patients with CV brain damage lose motor control of their limbs. This paper focuses on developing a reliable, safe, and non-invasive EEG-based robot-assisted stroke rehabilitation system to help stroke survivors rapidly restore motor control of their limbs. An electroencephalogram (EEG) recording device (EPOC Headset) was used to detect a patient's brain activity. The EEG signals were then processed, classified, and interpreted as motion intentions, and converted to a series of robot motion commands. A six-axis articulated robot (AdeptSix 300) was employed to provide the intended motions based on these commands. To ensure the EEG device, the computer, and the robot can communicate with each other, an Arduino microcontroller is used to translate the program's output into a series of output-pin states (HIGH or LOW). These "hardware" commands are then sent to a 24 V relay to trigger the robot's motion. A lookup table of various motion intentions and the associated EEG signal patterns was created (through training) and installed in the microcontroller. Thus, the motion intention can be directly determined by comparing the EEG patterns obtained from the patient with the lookup table's EEG patterns, and the corresponding motion commands are sent to the robot to provide the intended motion without going through feature extraction and interpretation each time (a time-consuming process). For safety's sake, an extender was designed and attached to the robot's end effector to ensure the patient remains beyond the robot's workspace. The gripper is also designed to hold the patient's limb. The test results of this rehabilitation system show that it can accurately interpret the patient's motion intention and move the patient's arm to the intended position.
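The lookup-table matching described above can be sketched as a simple nearest-pattern search. The feature values, command names, and table entries below are hypothetical placeholders, not the trained patterns used in the study:

```python
import math

# hypothetical lookup table created through training:
# EEG feature pattern -> robot motion command
LOOKUP = {
    (0.9, 0.1, 0.2): "ARM_UP",
    (0.1, 0.8, 0.3): "ARM_DOWN",
    (0.2, 0.2, 0.9): "CLOSE_GRIPPER",
}

def match_intention(pattern):
    """Return the command whose stored EEG pattern is closest (Euclidean) to the input."""
    best_cmd, best_dist = None, float("inf")
    for ref, cmd in LOOKUP.items():
        d = math.dist(ref, pattern)
        if d < best_dist:
            best_cmd, best_dist = cmd, d
    return best_cmd
```

A noisy measured pattern is mapped to the command of its nearest stored pattern, avoiding repeated feature extraction at run time.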

Keywords: brain waves, EEG sensor, motion control, robot-assistant stroke rehabilitation

Procedia PDF Downloads 368
9631 Muscle Neurotrophins Family Response to Resistance Exercise

Authors: Rasoul Eslami, Reza Gharakhanlou

Abstract:

NT-4/5 and TrkB have been proposed to be involved in the coordinated adaptations of the neuromuscular system to elevated levels of activity. Despite the persistence of this neurotrophin and its receptor expression in adult skeletal muscle, little attention has been paid to the functional significance of this complex in the mature neuromuscular system. Therefore, the purpose of this research was to study the effect of one session of resistance exercise on the mRNA expression of NT-4/5 and TrkB in slow and fast muscles of Wistar rats. Male Wistar rats (10 months of age, from the Pasteur Institute) were housed under similar living conditions in cages (in groups of four) at room temperature under a controlled 12-h light/dark cycle with ad libitum access to food and water. Sixteen rats were randomly divided into two groups (resistance exercise (T) and control (C); n = 8 per group). The resistance training protocol consisted of climbing a 1-meter-long ladder with a weight attached to a tail sleeve. Twenty-four hours following the main training session, rats of the T and C groups were anaesthetized, and the right soleus and flexor hallucis longus (FHL) muscles were removed under sterile conditions via an incision on the dorsolateral aspect of the hind limb. For NT-4/5 and TrkB expression, quantitative real-time RT-PCR was used. SPSS software and the independent-samples t-test were used for data analysis. The level of significance was set at P < 0.05. The data indicate that resistance training significantly (P < 0.05) decreased mRNA expression of NT-4/5 in the soleus muscle. However, no significant alteration was detected in the FHL muscle (P > 0.05). Our results also indicate that no significant alterations were detected in TrkB mRNA expression in the soleus or FHL muscles (P > 0.05). The decrease in mRNA expression of NT-4/5 in the soleus muscle may be a result of post-translational regulation following resistance training.
Also, the absence of alteration in TrkB mRNA expression may indicate a probable role of the p75 receptor.
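The abstract reports an independent-samples t-test at P < 0.05 (run in SPSS). A minimal sketch of the equivalent computation, with hypothetical expression values and the pooled-variance t statistic, might look like this:

```python
from math import sqrt
from statistics import mean, stdev

def t_independent(x, y):
    """Pooled-variance independent-samples t statistic."""
    nx, ny = len(x), len(y)
    sp2 = ((nx - 1) * stdev(x) ** 2 + (ny - 1) * stdev(y) ** 2) / (nx + ny - 2)
    return (mean(x) - mean(y)) / sqrt(sp2 * (1 / nx + 1 / ny))

T_CRIT = 2.145  # two-tailed critical value, alpha = 0.05, df = 14 (n = 8 per group)

def significant(x, y):
    """True if |t| exceeds the critical value for this design."""
    return abs(t_independent(x, y)) > T_CRIT
```

With eight animals per group the degrees of freedom are 14, hence the tabulated critical value above; the expression values themselves are invented for illustration.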

Keywords: neurotrophin-4/5 (NT-4/5), TrkB receptor, resistance training, slow and fast muscles

Procedia PDF Downloads 430
9630 Effect of Ethanolic Extract of Keladi Tikus (Typhonium flagelliforme) on the Level of IFNγ (Interferon Gamma), Vascular Endothelial Growth Factor (VEGF) and Caspase 3 Expression

Authors: Chodidjah, Edi Dharmana, Hardhono, Sarjadi

Abstract:

Breast cancer treatment options, including surgery, radiation therapy, chemotherapy, and immunotherapy, have not been fully effective and, moreover, have side effects. Keladi Tikus (Typhonium flagelliforme) has been shown to improve the immune system, suppress tumor growth, and induce apoptosis. Parameters for the immune system, tumor growth, and apoptosis are IFNγ (Interferon γ), VEGF (Vascular Endothelial Growth Factor), and caspase 3, respectively. The aim of this study was to examine the effect of the administration of Keladi Tikus tuber extract at doses of 200 mg/kg BW, 400 mg/kg BW, and 800 mg/kg BW on the level of IFNγ, VEGF, and caspase 3 expression. In this experimental study, using a post-test randomized control group design, 24 C3H mice with tumors were randomly divided into 4 groups: a control group and treated groups given 0.2 cc of Keladi Tikus extract at doses of 200 mg/kg BW, 400 mg/kg BW, and 800 mg/kg BW, respectively, for 30 days. On day 31, the lymphatic tissue was taken and its level of IFNγ evaluated using ELISA. The tumor tissue was taken and subjected to immunohistochemical staining to evaluate VEGF and caspase 3 expression. The data on IFNγ, VEGF, and caspase 3 expression were analyzed using one-way ANOVA with a significance level of 0.05. The one-way ANOVA resulted in p < 0.05. The LSD test showed that the levels of IFNγ and caspase 3 for the control group differed from those of the treated groups. There was no significant difference between the treated groups of 400 mg/kg BW and 800 mg/kg BW. VEGF expression differed significantly among all the treated groups. In conclusion, the oral administration of ethanolic extract of Keladi Tikus (Typhonium flagelliforme) at doses of 200 mg/kg BW, 400 mg/kg BW, and 800 mg/kg BW increases IFNγ and caspase 3 and decreases VEGF expression in C3H mice with mammary adenocarcinoma.
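The one-way ANOVA used above can be sketched as follows: the F statistic is the ratio of between-group to within-group mean squares. The group values in the test are hypothetical, not the study's measurements:

```python
from statistics import mean

def one_way_anova_f(groups):
    """F statistic for a one-way ANOVA over a list of sample groups."""
    k = len(groups)                                # number of groups
    n = sum(len(g) for g in groups)                # total observations
    grand = mean(x for g in groups for x in g)     # grand mean
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))
```

A large F relative to the critical value at α = 0.05 leads to the post-hoc LSD comparisons reported in the abstract.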

Keywords: Typhonium flagelliforme, IFNγ, caspase 3, VEGF

Procedia PDF Downloads 411
9629 Fast and Non-Invasive Patient-Specific Optimization of Left Ventricle Assist Device Implantation

Authors: Huidan Yu, Anurag Deb, Rou Chen, I-Wen Wang

Abstract:

The use of left ventricle assist devices (LVADs) has been a proven and effective therapy for patients with severe end-stage heart failure. Due to the limited availability of suitable donor hearts, LVADs will probably become the alternative solution for patients with heart failure in the near future. While the LVAD is being continuously improved toward enhanced performance, increased device durability, and reduced size, a better understanding of implantation management becomes critical in order to achieve better long-term blood supply and fewer post-surgical complications such as thrombus generation. Important issues related to LVAD implantation include the location of the outflow grafting (OG), the angle of the OG, the combination of LVAD and native heart pumping, and uniform versus pulsatile flow at the OG. We have hypothesized that the optimal implantation of an LVAD is patient specific. To test this hypothesis, we employ a novel in-house computational modeling technique, named InVascular, to conduct a systematic evaluation of cardiac output at the aortic arch, together with other pertinent hemodynamic quantities, for each patient under various implantation scenarios, aiming at an optimal implantation strategy. InVascular is a powerful computational modeling technique that integrates unified mesoscale modeling for both image segmentation and fluid dynamics with cutting-edge GPU parallel computing. It first segments the aortic artery from the patient's CT image, then seamlessly feeds the extracted morphology, together with the velocity waveform from an echocardiographic ultrasound image of the same patient, to the computational model to quantify 4-D (time + space) velocity and pressure fields. Using one NVIDIA Tesla K40 GPU card, InVascular completes a computation from CT image to 4-D hemodynamics within 30 minutes. Thus, it has great potential for massive numerical simulation and analysis.
The systematic evaluation for one patient includes three OG anastomosis sites (ascending aorta, descending thoracic aorta, and subclavian artery), three combinations of LVAD and native heart pumping (1:1, 1:2, and 1:3), three angles of OG anastomosis (inclined upward, perpendicular, and inclined downward), and two LVAD inflow conditions (uniform and pulsatile). The optimal LVAD implantation is suggested through a comprehensive analysis of the cardiac output and related hemodynamics from the simulations over the fifty-four scenarios. To confirm the hypothesis, five random patient cases will be evaluated.
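The fifty-four implantation scenarios enumerated above follow directly from the Cartesian product of the four design factors; a quick sketch, with the factor labels taken from the abstract:

```python
from itertools import product

ANASTOMOSIS_SITES = ["ascending aorta", "descending thoracic aorta", "subclavian artery"]
PUMP_RATIOS = ["1:1", "1:2", "1:3"]            # LVAD : native heart pumping
OG_ANGLES = ["inclined upward", "perpendicular", "inclined downward"]
INFLOW_CONDITIONS = ["uniform", "pulsatile"]

# every combination of the four design factors: 3 * 3 * 3 * 2 = 54 scenarios
scenarios = list(product(ANASTOMOSIS_SITES, PUMP_RATIOS, OG_ANGLES, INFLOW_CONDITIONS))
```

Each tuple in `scenarios` identifies one simulation case to run through the hemodynamic model.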

Keywords: graphic processing unit (GPU) parallel computing, left ventricle assist device (LVAD), lumped-parameter model, patient-specific computational hemodynamics

Procedia PDF Downloads 124
9628 Soybean Seed Composition Prediction From Standing Crops Using Planet Scope Satellite Imagery and Machine Learning

Authors: Supria Sarkar, Vasit Sagan, Sourav Bhadra, Meghnath Pokharel, Felix B. Fritschi

Abstract:

Soybeans and their derivatives are very important agricultural commodities around the world because of their wide applicability in human food, animal feed, biofuel, and industry. However, the significance of soybean production depends on the quality of the soybean seeds rather than the yield alone. Seed composition depends widely on plant physiological properties, aerobic and anaerobic environmental conditions, nutrient content, and plant phenological characteristics, which can be captured by high-temporal-resolution remote sensing datasets. PlanetScope (PS) satellite images have high potential for capturing sequential information on crop growth due to their frequent revisits throughout the world. In this study, we estimate soybean seed composition while the plants are still in the field by utilizing PS satellite images and different machine learning algorithms. Several experimental fields were established with varying genotypes, and different seed compositions were measured from the samples as ground truth data. The PS images were processed to extract 462 hand-crafted vegetative and textural features. Four machine learning algorithms, i.e., partial least squares regression (PLSR), random forest regression (RFR), gradient boosting machine (GBM), and support vector regression (SVR), and two recurrent neural network architectures, i.e., long short-term memory (LSTM) and gated recurrent unit (GRU), were used in this study to predict the oil, protein, sucrose, ash, starch, and fiber of soybean seed samples. The GRU and LSTM architectures had two separate branches, one for vegetative features and the other for texture features, which were later concatenated to predict seed composition. The results show that sucrose, ash, protein, and oil yielded comparable prediction results. The machine learning algorithms that best predicted the six seed composition traits differed.
GRU worked well for oil (R²: 0.53) and protein (R²: 0.36), whereas SVR and PLSR showed the best results for sucrose (R²: 0.74) and ash (R²: 0.60), respectively. Although RFR and GBM provided comparable performance, these models tended to overfit severely. Among the features, vegetative features were found to be more important than texture features. It is suggested to utilize many vegetation indices for machine learning training and to select the best ones using feature selection methods. Overall, the study reveals the feasibility and efficiency of PS images and machine learning for plot-level seed composition estimation. However, special care should be given to designing the plot size in the experiments to avoid mixed-pixel issues.
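As a minimal, self-contained stand-in for the regression models above (PLSR, SVR, GRU, etc.), the sketch below fits a univariate least-squares line from a single "vegetation index" to a seed trait and scores it with R², the metric reported in the study. The data and variable names are hypothetical:

```python
from statistics import mean

def fit_line(x, y):
    """Ordinary least squares for one feature: returns (slope, intercept)."""
    mx, my = mean(x), mean(y)
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
            / sum((xi - mx) ** 2 for xi in x)
    return slope, my - slope * mx

def r_squared(x, y, slope, intercept):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    my = mean(y)
    ss_res = sum((yi - (slope * xi + intercept)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot
```

The actual study uses hundreds of features and nonlinear models; this merely illustrates how a per-trait R² such as 0.74 for sucrose is computed.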

Keywords: agriculture, computer vision, data science, geospatial technology

Procedia PDF Downloads 121
9627 A Fast Multi-Scale Finite Element Method for Geophysical Resistivity Measurements

Authors: Mostafa Shahriari, Sergio Rojas, David Pardo, Angel Rodriguez-Rozas, Shaaban A. Bakr, Victor M. Calo, Ignacio Muga

Abstract:

Logging-While-Drilling (LWD) is a technique to record down-hole logging measurements while drilling the well. Nowadays, LWD devices (e.g., nuclear, sonic, resistivity) are mostly used commercially for geo-steering applications. Modern borehole resistivity tools are able to measure all components of the magnetic field by incorporating tilted coils. The depth of investigation of LWD tools is limited compared to the thickness of the geological layers. Thus, it is common practice to approximate the Earth's subsurface with a sequence of 1D models. For a 1D model, we can reduce the dimensionality of the problem using a Hankel transform. We can solve the resulting system of ordinary differential equations (ODEs) either (a) analytically, which results in a so-called semi-analytic method after performing a numerical inverse Hankel transform, or (b) numerically. Semi-analytic methods are used by the industry due to their high performance. However, they have major limitations. First, the analytical solution of the aforementioned system of ODEs exists only for piecewise constant resistivity distributions; for arbitrary resistivity distributions, the solution of the system of ODEs is unknown to date. Second, in geo-steering we need to solve inverse problems with respect to the inversion variables (e.g., the constant resistivity value of each layer and the bed boundary positions) using a gradient-based inversion method, and thus need to compute the corresponding derivatives; however, the analytical derivatives for cross-bedded formations and the analytical derivatives with respect to the bed boundary positions have not been published, to the best of our knowledge. The main contribution of this work is to overcome the aforementioned limitations of semi-analytic methods by solving each 1D model (associated with each Hankel mode) using an efficient multi-scale finite element method.
The main idea is to divide our computations into two parts: (a) offline computations, which are independent of the tool positions, are precomputed only once, and are reused for all logging positions; and (b) online computations, which depend upon the logging position. With the above method, (a) we can consider arbitrary resistivity distributions along the 1D model, and (b) we can easily and rapidly compute the derivatives with respect to any inversion variable at negligible additional cost by using an adjoint-state formulation. Although the proposed method is slower than semi-analytic methods, its computational efficiency is still high. In the presentation, we shall derive the mathematical variational formulation, describe the proposed multi-scale finite element method, and verify the accuracy and efficiency of our method by performing a wide range of numerical experiments and comparing the numerical solutions to semi-analytic ones when the latter are available.
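The offline/online split above can be illustrated schematically with a 1D tridiagonal system of the kind a 1D finite element discretization produces: factor the position-independent matrix once offline (Thomas algorithm), then reuse the factorization for each logging position online. This is only a generic stand-in, not the authors' multi-scale method:

```python
def tridiag_factor(sub, diag, sup):
    """Offline step: LU-factor a tridiagonal matrix.

    sub[0] is unused; sub, diag have length n; sup has length n - 1.
    """
    n = len(diag)
    mult = [0.0] * n   # elimination multipliers (L)
    piv = [0.0] * n    # modified diagonal (U)
    piv[0] = diag[0]
    for i in range(1, n):
        mult[i] = sub[i] / piv[i - 1]
        piv[i] = diag[i] - mult[i] * sup[i - 1]
    return mult, piv

def tridiag_solve(mult, piv, sup, rhs):
    """Online step: forward/back substitution for one right-hand side."""
    n = len(piv)
    y = [0.0] * n
    y[0] = rhs[0]
    for i in range(1, n):
        y[i] = rhs[i] - mult[i] * y[i - 1]
    x = [0.0] * n
    x[-1] = y[-1] / piv[-1]
    for i in range(n - 2, -1, -1):
        x[i] = (y[i] - sup[i] * x[i + 1]) / piv[i]
    return x
```

Once factored, each additional right-hand side (logging position) costs only O(n), which is the point of precomputing the position-independent part.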

Keywords: logging-while-drilling, resistivity measurements, multi-scale finite elements, Hankel transform

Procedia PDF Downloads 374
9626 Sustainable Refrigerated Transport Engineering

Authors: A. A, F. Belmir, A. El Bouari, Y. Abboud

Abstract:

This article presents a study of the thermal performance of a new solar mobile refrigeration prototype for the preservation of perishable foods. The simulation of the refrigeration cycle and the calculation of the thermal balances made it possible to estimate its consumption and to evaluate the capacity of each photovoltaic component necessary for the production of energy. The study provides a description of the refrigerator construction and operation, including an energy balance analysis of the refrigerator performance under typical loads. The photovoltaic system requirements are also detailed.

Keywords: composite, material, photovoltaic, refrigeration, thermal

Procedia PDF Downloads 226
9625 Fostering Student Interest in Senior Secondary Two Biology Using Prior Knowledge of Behavioural Objectives and Assertive Questioning Strategies in Benue State, Nigeria

Authors: John Odo Ogah

Abstract:

The study investigated ways of fostering students' interest in senior secondary two Biology using prior knowledge of behavioural objectives and assertive questioning strategies in Benue State, Nigeria. A quasi-experimental research design was adopted; the population comprised 8,571 senior secondary two students. The sample consisted of 265 SSII biology students selected from six government schools in the study area using a multi-stage sampling technique. Data were generated using the Biology Interest Inventory (BII). The instrument was validated and subjected to reliability analysis using Cronbach's alpha formula, which yielded a coefficient of 0.73. Three research questions guided the study, while three hypotheses were formulated and tested. The data collected were analyzed using means, bar graphs, and standard deviations to answer the research questions, while analysis of covariance (ANCOVA) was employed in testing the hypotheses at the 0.05 level of significance. The findings revealed a significant difference in the mean interest ratings of students taught cellular respiration and the excretory system using the assertive questioning strategy, the prior knowledge of behavioural objectives strategy, and the lecture method (p = 0.000 < 0.05). There was no significant difference in the mean interest ratings of male and female students taught cellular respiration and the excretory system using the assertive questioning strategy (p = 0.790 > 0.05). There was a significant difference in the mean interest ratings of male and female students taught cellular respiration and the excretory system using the prior knowledge of behavioural objectives strategy (p = 0.028 < 0.05). It was recommended, among other things, that teachers should endeavor to utilize the prior knowledge of behavioural objectives strategy in teaching biology in order to harness its benefits, as it enhances students' interest.

Keywords: interest, assertive, questioning, prior, knowledge

Procedia PDF Downloads 33
9624 Experimental Study of Boost Converter Based PV Energy System

Authors: T. Abdelkrim, K. Ben Seddik, B. Bezza, K. Benamrane, Aeh. Benkhelifa

Abstract:

This paper proposes an implementation of a boost converter for a resistive load using photovoltaic energy as a source. The model of the photovoltaic cell and the operating principle of the boost converter are presented. A PIC microcontroller is used in the closed-loop control to generate pulses for controlling the converter circuit. To evaluate the performance of the boost converter, the output voltage of the PV panel is varied by shading one and two cells.
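For reference, an ideal (lossless, continuous-conduction) boost converter obeys Vout = Vin / (1 − D), where D is the duty cycle of the control pulses; a drop in PV panel voltage from shaded cells therefore lowers the output for a fixed duty cycle. A small sketch of this relation (not the paper's PIC firmware):

```python
def boost_vout(vin, duty):
    """Ideal boost-converter output voltage: Vout = Vin / (1 - D), for 0 <= D < 1."""
    if not 0.0 <= duty < 1.0:
        raise ValueError("duty cycle must lie in [0, 1)")
    return vin / (1.0 - duty)
```

In closed-loop control, the microcontroller adjusts D to hold the output at the setpoint as the panel voltage changes.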

Keywords: boost converter, microcontroller, photovoltaic power generation, shading cells

Procedia PDF Downloads 860
9623 Intelligent Process and Model Applied for E-Learning Systems

Authors: Mafawez Alharbi, Mahdi Jemmali

Abstract:

E-learning is a developing area, especially in education, and can provide several benefits to learners. An intelligent system that collects all the components satisfying user preferences is therefore important. This research presents an approach capable of personalizing e-information and meeting users' needs according to their preferences. The proposal can build up knowledge from successive evaluations made by the user. In addition, it can learn from the user's habits. Finally, we show a walk-through to demonstrate how the intelligent process works.

Keywords: artificial intelligence, architecture, e-learning, software engineering, processing

Procedia PDF Downloads 175
9622 Evaluation of Batch Splitting in the Context of Load Scattering

Authors: S. Wesebaum, S. Willeke

Abstract:

Production companies are faced with an increasingly turbulent business environment, which demands very high flexibility in production volumes and delivery dates. If decoupling by storage stages is not possible (e.g., at a contract manufacturing company) or is undesirable from a logistical point of view, load scattering affects the production processes. 'Load' characterizes the timing and quantity of production orders (e.g., in work content hours) arriving at workstations in production, which results in specific capacity requirements. Insufficient coordination between load (capacity demand) and capacity supply results in heavy load scattering, which can be described by deviations and uncertainties in the input behavior of a capacity unit. In order to respond to fluctuating loads, companies try to implement consistent and realizable input behavior using the available capacity supply. For example, a uniform and high level of equipment capacity utilization keeps production costs down. In contrast, strong load scattering at workstations leads to performance loss or disproportionately fluctuating WIP, whereby the logistics objectives are affected negatively. Options for reducing load scattering include shifting the start and end dates of orders, batch splitting, and outsourcing of operations or shifting them to other workstations. This leads to an adjustment of load to capacity supply and thus to a reduction of load scattering. If the adaptation of load to capacity cannot be satisfied completely, flexible capacity may have to be used to ensure that the performance of a workstation does not decrease for a given load. Whereas the use of flexible capacities normally raises costs, an adjustment of load to capacity supply reduces load scattering and, in consequence, costs. The literature mostly offers qualitative statements for describing load scattering; quantitative evaluation methods that describe load mathematically are rare.
In this article, the authors discuss existing approaches for calculating load scattering and their various disadvantages, such as the lack of an opportunity for normalization. These approaches are the basis for the development of our mathematical quantification approach for describing load scattering, which compensates for the disadvantages of the current quantification approaches. After presenting our mathematical quantification approach, the method of batch splitting is described. Batch splitting allows the adaptation of load to capacity in order to reduce load scattering. After describing the method, it is explicitly analyzed in the context of the logistic curve theory by Nyhuis, using the stretch factor α1, in order to evaluate the impact of batch splitting on load scattering and on logistic curves. The conclusion of this article shows how the methods and approaches presented can help companies in a turbulent environment to quantify the occurring work load scattering accurately and to apply an efficient method for adjusting work load to capacity supply. In this way, the achievement of the logistical objectives is increased without causing additional costs.
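As one possible normalized measure in the spirit of the quantification approach discussed above (the authors' actual formula is not reproduced here), the coefficient of variation of per-period load is dimensionless and therefore comparable across workstations:

```python
from statistics import mean, pstdev

def load_scattering(loads):
    """Coefficient of variation of per-period load (hypothetical normalized measure).

    loads: work content (e.g., in hours) arriving at a workstation per period.
    A value of 0 means perfectly uniform input behavior.
    """
    m = mean(loads)
    return pstdev(loads) / m if m else float("inf")
```

Batch splitting smooths the arrival of work content across periods, which drives this measure toward zero for the same total load.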

Keywords: batch splitting, production logistics, production planning and control, quantification, load scattering

Procedia PDF Downloads 386
9621 Transforming Health Information from Manual to Digital (Electronic) World: A Reference and Guide

Authors: S. Karthikeyan, Naveen Bindra

Abstract:

Introduction: To update ourselves and understand the concept of latest electronic formats available for Health care providers and how it could be used and developed as per standards. The idea is to correlate between the patients Manual Medical Records keeping and maintaining patients Electronic Information in a Health care setup in this world. Furthermore this stands with adapting to the right technology depending upon the organization and improve our quality and quantity of Healthcare providing skills. Objective: The concept and theory is to explain the terms of Electronic Medical Record (EMR), Electronic Health Record (EHR) and Personal Health Record (PHR) and selecting the best technical among the available Electronic sources and software before implementing. It is to guide and make sure the technology used by the end users without any doubts and difficulties. The idea is to evaluate is to admire the uses and barriers of EMR-EHR-PHR. Aim and Scope: The target is to achieve the health care providers like Physicians, Nurses, Therapists, Medical Bill reimbursements, Insurances and Government to assess the patient’s information on easy and systematic manner without diluting the confidentiality of patient’s information. Method: Health Information Technology can be implemented with the help of Organisations providing with legal guidelines and help to stand by the health care provider. The main objective is to select the correct embedded and affordable database management software and generating large-scale data. The parallel need is to know how the latest software available in the market. Conclusion: The question lies here is implementing the Electronic information system with healthcare providers and organisation. The clinicians are the main users of the technology and manage us to ‘go paperless’. The fact is that day today changing technologically is very sound and up to date. Basically the idea is to tell how to store the data electronically safe and secure. 
All three formats exemplify the fact that an electronic format has its own benefits as well as barriers.

Keywords: medical records, digital records, health information, electronic record system

Procedia PDF Downloads 442
9620 Modeling of Timing in a Cyber Conflict to Inform Critical Infrastructure Defense

Authors: Brian Connett, Bryan O'Halloran

Abstract:

Systems assets within critical infrastructures once seemed safe from exploitation or attack by nefarious cyberspace actors. Now critical infrastructure is a target, and the resources to exploit its cyber-physical systems exist. These resources are characterized by patience, stealth, replicability, and extraordinary robustness. System owners are obligated to maintain a high level of protection, but the difficulty lies in knowing when to fortify a critical infrastructure against an impending attack. Models currently exist that demonstrate the value of knowing the attacker's capabilities in the cyber realm and the strength of the target. Their shortcoming is that they are not designed to respond to the inherently fast timing of an attack, an impetus that can be derived from open-source reporting, common knowledge of exploits, and the physical architecture of the infrastructure. A useful model will inform system owners how to align infrastructure architecture in a manner that is responsive to the capability, willingness, and timing of the attacker. This research group has used an existing theoretical model to estimate parameters and, through analysis, to develop a decision tool for would-be target owners. The research continues by estimating the model's variable parameters; understanding these estimates uniquely positions the decision maker to posture defenses once an attacker's persistence and stealth have been revealed. The work explores different approaches to improving current attacker-defender models that focus on cyber threats. An existing foundational model takes the point of view of an attacker who must decide what cyber resource to use, and when to use it, to exploit a system vulnerability.
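The timing trade-off at the heart of such attacker-defender models can be illustrated with a toy expected-value calculation. The decay and reconnaissance rates below are illustrative assumptions, not parameters from the cited model:

```python
import math

def expected_payoff(t, value0=100.0, patch_rate=0.1, recon_rate=0.5):
    """Hypothetical expected attacker payoff for firing an exploit at day t.

    Assumes (for illustration only) that the exploit's value decays as the
    target gets patched, while waiting improves reconnaissance and stealth.
    """
    surviving_value = value0 * math.exp(-patch_rate * t)  # patching erodes value
    recon_gain = 1.0 - math.exp(-recon_rate * t)          # recon/stealth improves
    return surviving_value * recon_gain

# Crude grid search for the attacker's optimal timing: the quantity a
# defender would want to estimate in order to schedule fortification.
best_t = max(range(0, 60), key=expected_payoff)
```

Even this caricature exhibits the key feature of the timing problem: striking immediately wastes reconnaissance, while waiting too long lets the window close, so the defender's fortification schedule should anticipate the interior optimum.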

Keywords: critical infrastructure, cyber physical systems, modeling, exploitation

Procedia PDF Downloads 182
9619 Application of Industrial Ecology to the INSPIRA Zone: Territory Planification and New Activities

Authors: Mary Hanhoun, Jilla Bamarni, Anne-Sophie Bougard

Abstract:

INSPIR’ECO is an 18-month research and innovation project that aims to specify and develop a tool offering new services, based on industrial ecology principles, to industrial firms and territorial planners/managers. The project is carried out on the territory of Salaise-Sablons, and the services are designed to be deployable on other territories. The Salaise-Sablons area is located at the boundary of five departments, on a major European economic axis with multimodal traffic (river, rail, and road). The perimeter of 330 ha includes 90 hectares occupied by 20 companies with a total of 900 jobs, and represents a significant potential basin of development. The project involves five multidisciplinary partners (Syndicat Mixte INSPIRA, ENGIE, IDEEL, IDEAs Laboratory, and TREDI). INSPIR’ECO is based on the principle that local stakeholders need services to pool and share their activities, equipment, purchases, and materials. These services aim to: 1. initiate and promote exchanges between existing companies, and 2. identify synergies between pre-existing industries and future companies that could set up in INSPIRA. These eco-industrial synergies can relate to: the recovery/exchange of industrial flows (industrial wastewater, waste, by-products, etc.); the pooling of business services (collective waste management, stormwater collection and reuse, transport, etc.); the sharing of equipment (boilers, steam production, a wastewater treatment unit, etc.) or resources (splitting job costs, etc.); and the creation of new activities (interface activities necessary for by-product recovery, development of products or services from a newly identified resource, etc.). These services are supported by an IT tool intended to let the interested local stakeholders make decisions.
Thus, this IT tool includes an economic and environmental assessment of each implantation or pooling/sharing scenario for existing or future industries; is meant for industrial and territorial managers/planners; and is designed to be used for each new industrial project. The specification of the IT tool is developed through an agile process running throughout the INSPIR’ECO project, fed with users' expectations (gathered in workshop sessions where mock-up interfaces are displayed) and data availability (based on a local and industrial data inventory). These inputs allow the tool to be specified not only against technical and methodological constraints (notably those of the economic and environmental assessments) but also against data availability and user expectations. A review of innovative resource-management initiatives in port areas was carried out at the beginning of the project to feed the service-design step.

Keywords: development opportunities, INSPIR’ECO, INSPIRA, industrial ecology, planification, synergy identification

Procedia PDF Downloads 146
9618 Developments in Performance of Autistic Students in the Egyptian School System

Authors: Magy Atef Awad Attia

Abstract:

The objective of this study was to examine the effect of social stories on the social interaction of students with autism. The sample was one level-5 student with autism from a university demonstration school, diagnosed by a physician with high-functioning autism, since he was able to read, write, and calculate and was studying in an inclusive classroom. However, he still had difficulty with the social interaction needed to participate in group social activities and communication. He could not learn how to develop friendships or build relationships, behaved inappropriately in social contexts, and did not understand complex social situations. In addition, he seemed unaware of time and place, was unable to understand his own feelings or those of others, and consequently could not express his emotions appropriately. He neither understood nor used non-verbal language for communicating with friends, lacked common interests or emotions with those around him, greeted people inappropriately or showed no interest in greeting, made no eye contact, and used inadequate language. He was selected by purposive sampling, and his parents were willing to let him participate in this study. The research instruments were the lesson plan of social stories and the picture book of social stories; the instrument used for data collection was the social interaction evaluation for autistic students. The research was experimental, following a one-group pre-test/post-test design: a pre-test was administered, the social-stories intervention was conducted, and then the post-test was implemented. Data were analysed statistically, and the results were reported on a rating scale. The results revealed that the autistic student showed better social interaction after being taught with social stories.

Keywords: autism, autistic behavior, social stories, social interaction

Procedia PDF Downloads 23
9617 Thermodynamic Phase Equilibria and Formation Kinetics of Cyclopentane, Cyclopentanone and Cyclopentanol Hydrates in the Presence of Gaseous Guest Molecules including Methane and Carbon Dioxide

Authors: Sujin Hong, Seokyoon Moon, Heejoong Kim, Yunseok Lee, Youngjune Park

Abstract:

Gas hydrate is an inclusion compound in which a low-molecular-weight gas or organic molecule is trapped inside a three-dimensional lattice structure created by water molecules via intermolecular hydrogen bonding. It generally forms at low temperature and high pressure and exists as crystal structures of the cubic system (structure I and structure II) and the hexagonal system (structure H). Many efforts have been made to apply gas hydrates to various energy and environmental fields such as gas transportation and storage, CO₂ capture and separation, and desalination of seawater. In particular, studies on the behavior of gas hydrates formed with new organic materials for CO₂ storage and other applications are underway. In this study, thermodynamic and spectroscopic analyses of the gas hydrate system were performed, focusing on cyclopentanol, an organic molecule that forms gas hydrate at relatively low pressure. The thermodynamic equilibria of CH₄ and CO₂ hydrate systems including cyclopentanol were measured, and spectroscopic analyses by XRD and Raman were performed. The differences in thermodynamic behavior and formation kinetics of CO₂-added cyclopentane, cyclopentanol, and cyclopentanone hydrate systems were compared. From the thermodynamic point of view, cyclopentanol was found to be a hydrate promoter. Spectroscopic analyses showed that cyclopentanol formed a hydrate crystal structure of cubic structure II in the presence of CH₄ and CO₂. The differences in functional groups among the organic guest molecules significantly affected the rate of hydrate formation and the total amount of CO₂ stored in the hydrate systems: the total amount of CO₂ stored in the cyclopentanone hydrate was found to be twice that stored in the cyclopentane and cyclopentanol hydrates. The findings are expected to open up new opportunities to develop gas hydrate-based wastewater desalination technology.

Keywords: gas hydrate, CO₂, separation, desalination, formation kinetics, thermodynamic equilibria

Procedia PDF Downloads 249
9616 Interface Fracture of Sandwich Composite Influenced by Multiwalled Carbon Nanotube

Authors: Alak Kumar Patra, Nilanjan Mitra

Abstract:

A higher strength-to-weight ratio is the main advantage of sandwich composite structures, but interfacial delamination between the face sheet and the core is a major problem in these structures. Much research has been devoted to improving the interfacial fracture toughness of composites, the majority of it on nano- and laminated composites; work on the influence of a multiwalled carbon nanotube (MWCNT) dispersed resin system on the interface fracture of glass-epoxy PVC-core sandwich composites is extremely limited. Here, a finite element study is followed by an experimental investigation of the interface fracture toughness of glass-epoxy (G/E) PVC-core sandwich composites with and without MWCNT. Results demonstrate an improvement in interface fracture toughness values (Gc) of samples at certain percentages of MWCNT. In addition, the approach used in this study, dispersing MWCNT in epoxy resin through sonication followed by mixing of the hardener and vacuum resin infusion (VRI), is an easy and cost-effective methodology in comparison to previously adopted methods, which were limited to laminated composites. The study also identifies the optimum weight percentage of MWCNT addition in the resin system for maximum gain in interfacial fracture toughness. The results agree with the finite element study, high-resolution transmission electron microscopy (HRTEM) analysis, and fracture micrographs from field emission scanning electron microscopy (FESEM). The interface fracture toughness (Gc) of the DCB sandwich samples is calculated using the compliance calibration (CC) method, modified to account for shear: compliance (C) vs. crack length (a) data of the modified sandwich DCB specimen are fitted to a power function of crack length.
The calculated mean value of the exponent n from the plots of the experimental results is 2.22, which differs from the value (n = 3) prescribed in ASTM D5528-01 for mode I fracture toughness of laminated composites (the basis for the modified compliance calibration method). Differentiating C with respect to crack length (a) and substituting it into the expression for Gc provides its value. The research demonstrates an improvement of 14.4% in peak load-carrying capacity and 34.34% in interface fracture toughness Gc for samples with 1.5 wt% MWCNT (weight percentage taken with respect to the weight of resin) in comparison to samples without MWCNT. The paper thus reports a significant improvement in experimentally determined interface fracture toughness of sandwich samples with MWCNT over samples without, using the much simpler method of sonication. Good dispersion of MWCNT was observed in HRTEM at 1.5 wt% MWCNT in comparison to the other percentages, and FESEM studies also demonstrated good dispersion and fiber bridging of MWCNT in the resin system. Ductility was also observed to be higher for samples with MWCNT than for samples without.
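The CC data reduction described above can be sketched in a few lines. The compliance data here are synthetic, generated from an exact power law with the reported exponent of 2.22 rather than measured; the function implements G_C = n P² C / (2 b a), which follows from differentiating C = k·aⁿ so that dC/da = n C / a:

```python
import math

# Hypothetical DCB data: crack length a and compliance C (consistent units).
a_data = [30.0, 40.0, 50.0, 60.0, 70.0]
C_data = [9e-6 * a ** 2.22 for a in a_data]  # synthetic power law, n = 2.22

# Fit C = k * a^n by least squares in log-log space.
xs = [math.log(a) for a in a_data]
ys = [math.log(C) for C in C_data]
mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
n = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
k = math.exp(my - n * mx)

def interface_toughness(P, a, b, k_, n_):
    """G_C = n * P^2 * C / (2 * b * a), using dC/da = n*C/a for C = k*a^n."""
    C = k_ * a ** n_
    return n_ * P ** 2 * C / (2.0 * b * a)
```

With real test data the fitted exponent would come out near 2.22 rather than exactly, which is precisely the deviation from the ASTM value of n = 3 that the abstract reports.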

Keywords: carbon nanotube, epoxy resin, foam, glass fibers, interfacial fracture, sandwich composite

Procedia PDF Downloads 292
9615 BIM4Cult: Leveraging BIM and IoT for Enhancing Fire Safety in Historical Buildings

Authors: Anastasios Manos, Despina Elisabeth Filippidou

Abstract:

Introduction: Historical buildings are an integral part of the cultural heritage of every place, and beyond the obvious need for protection against risks, they have specific requirements regarding the handling of hazards and disasters such as fire, floods, earthquakes, etc. Ensuring high levels of protection and safety for these buildings is imperative for two distinct but interconnected reasons: a) they themselves constitute cultural heritage, and b) they are often used as museums/cultural spaces, necessitating the protection of both human life (visitors and workers) and the cultural treasures they house. However, these buildings present serious constraints to implementing the necessary protective measures due to their unique architecture, construction methods, and/or the structural materials used in the past, which have created an existing condition that is sometimes challenging to reshape and operate within the framework of modern regulations and protection measures. One of the most devastating risks threatening historical buildings is fire, and catastrophic fires demonstrate the need for timely evaluation of fire safety measures in historical buildings. Recognizing the criticality of protecting historical buildings from the risk of fire, the Confederation of Fire Protection Associations in Europe (CFPA E) issued specific guidelines in 2013 (CFPA-E Guideline No 30:2013 F) for the fire protection of historical buildings at the European level. Until now, however, few actions have been taken toward leveraging modern technologies in the construction and maintenance of buildings, such as Building Information Modeling (BIM) and the Internet of Things (IoT), for the protection of historical buildings from risks like fires and floods. The project BIM4Cult has been developed to fill this gap.
It is a tool for the timely assessment and monitoring of the fire safety level of historical buildings, using BIM and IoT technologies in an integrated manner. The tool serves as a decision-support expert system for improving the fire safety of historical buildings by continuously monitoring, controlling, and assessing critical risk factors for fire.
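As a rough illustration of what such a rule-based decision-support layer might look like, the sketch below scores IoT sensor readings against thresholds. The sensor names and thresholds are invented for illustration and are not taken from BIM4Cult:

```python
# Hypothetical rules: each maps a sensor reading to a triggered/not-triggered flag.
RULES = {
    "temperature_c": lambda v: v > 60,   # abnormal heat near monitored element
    "smoke_ppm":     lambda v: v > 150,  # smoke concentration above alarm level
    "exit_blocked":  lambda v: bool(v),  # escape route reported obstructed
}

def fire_risk_level(readings):
    """Return 'low' / 'elevated' / 'critical' from the number of triggered rules."""
    hits = sum(1 for key, rule in RULES.items() if rule(readings.get(key, 0)))
    return ("low", "elevated", "critical")[min(hits, 2)]

# Example reading set: only the temperature rule fires.
alert = fire_risk_level({"temperature_c": 72, "smoke_ppm": 40, "exit_blocked": 0})
```

A production expert system would of course weight factors, reason over the BIM geometry, and log its decisions, but the monitor-then-assess loop is the same.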

Keywords: IoT, fire, BIM, expert system

Procedia PDF Downloads 51
9614 Life Cycle Assessment of a Parabolic Solar Cooker

Authors: Bastien Sanglard, Lou Magnat, Ligia Barna, Julian Carrey, Sébastien Lachaize

Abstract:

Cooking is a primary need for humans, and several techniques are used around the globe based on different sources of energy: electricity, solid fuel (wood, coal, etc.), fuel, or liquefied petroleum gas. However, all of them lead to direct or indirect greenhouse gas emissions and sometimes to health damage in the household. Concentrated solar power therefore represents a great option to lower these damages thanks to a cleaner use phase. Nevertheless, the construction phase of a solar cooker still requires primary energy and materials, which leads to environmental impacts. The aim of this work is to analyse the ecological impacts of a commercial aluminium parabola and to compare it with other means of cooking, taking the boiling of 2 litres of water three times a day for 40 years as the functional unit. A life cycle assessment was performed using the software Umberto and the EcoInvent database. Calculations were carried out over more than 13 criteria using two methods: the Intergovernmental Panel on Climate Change (IPCC) method and the ReCiPe method. For the reflector itself, different aluminium provenances were compared, as well as the use of recycled aluminium; for the structure, aluminium was compared to iron (primary and recycled) and wood. Results show that the climate impact of the studied parabola was 0.0353 kgCO2eq/kWh when built with Chinese aluminium and can be reduced by a factor of 4 using aluminium from Canada. The assessment also showed that using 32% recycled aluminium would reduce the impact by factors of 1.33 and 1.43 compared to primary Canadian and primary Chinese aluminium, respectively, while the exclusive use of recycled aluminium lowers the impact by a factor of 17. Besides, the use of iron (recycled or primary) or wood for the structure supporting the reflector significantly lowers the impact.
The impact categories of the ReCiPe method show that the parabola made from Chinese aluminium has the heaviest impact (except for metal resource depletion) compared to aluminium from Canada, recycled aluminium, or iron. The impact of solar cooking was then compared to that of a gas stove and of induction cooking. The gas stove model was a cast-iron tripod supporting the cooking pot, and the induction model was a single-spot plate. Results show that the parabolic solar cooker has the lowest ecological impact over the 13 criteria of the ReCiPe method and over the global warming potential compared to the two other technologies. The climate impact of gas cooking is 0.628 kgCO2eq/kWh when used with natural gas and 0.723 kgCO2eq/kWh when used with bottled gas; in each case, the main part of the emissions comes from gas burning. Induction cooking has a global warming potential of 0.12 kgCO2eq/kWh with the electricity mix of France, 96.3% of the impact being due to electricity production. The electricity mix is therefore a key factor for this impact: with the electricity mixes of Germany and Poland, for instance, the impacts are 0.81 kgCO2eq/kWh and 1.39 kgCO2eq/kWh, respectively. The parabolic solar cooker thus has real ecological advantages compared to both the gas stove and the induction plate.
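The comparison can be sanity-checked with a back-of-the-envelope calculation over the stated functional unit. The per-kWh factors come from the abstract, while the heat required per boiling (2 kg of water heated by 80 K at ideal efficiency) is an assumption of this sketch:

```python
# Energy per boiling: m * c * dT, converted from J to kWh.
E_boil_kwh = 2.0 * 4186 * 80 / 3.6e6       # ~0.186 kWh per 2 L boiled
E_total = E_boil_kwh * 3 * 365 * 40        # functional unit: 3x/day for 40 years

impact_per_kwh = {                         # kgCO2eq/kWh, figures from the abstract
    "solar (Chinese Al)": 0.0353,
    "gas stove (natural gas)": 0.628,
    "induction (FR mix)": 0.12,
}
footprint = {tech: f * E_total for tech, f in impact_per_kwh.items()}
```

Under these assumptions the lifetime functional unit is roughly 8,000 kWh of useful heat, and the ordering of the lifetime footprints matches the abstract's conclusion: solar well below induction on the French mix, which in turn sits well below gas.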

Keywords: life cycle assessment, solar concentration, cooking, sustainability

Procedia PDF Downloads 161
9613 A Prototype of an Information and Communication Technology Based Intervention Tool for Children with Dyslexia

Authors: Rajlakshmi Guha, Sajjad Ansari, Shazia Nasreen, Hirak Banerjee, Jiaul Paik

Abstract:

Dyslexia is a neurocognitive disorder affecting around fifteen percent of the Indian population. The symptoms include difficulty in reading alphabets, words, and sentences; the difficulty can occur at the phonemic or recognition level and may further affect lexical structures. Therapeutic intervention for dyslexic children after assessment is generally done by special educators and psychologists through one-on-one interaction. Considering the large number of children affected and the scarcity of experts, access to care is limited in India. Moreover, the unavailability of resources and of timely communication with caregivers adds to the problem of proper intervention. With the development of educational technology and its use in India, access to information and care has improved in this large and diverse country. In this context, this paper proposes an ICT-enabled, home-based intervention program for dyslexic children that supports the child and provides an interactive interface between expert, parents, and students. The paper discusses the details of the database design and system layout of the program. It also highlights the development of the different technical aids required to build personalized Android applications for the Indian dyslexic population: a speech database for children, an automatic speech recognition system, serious games, and color-coded fonts. The paper also describes the games developed to assist the dyslexic child in cognitive training, primarily for attention, working memory, and spatial reasoning, and the specific elements of the interactive intervention tool that make it effective for home-based intervention of dyslexia.

Keywords: Android applications, cognitive training, dyslexia, intervention

Procedia PDF Downloads 281
9612 Human Beta Defensin 1 as Potential Antimycobacterial Agent against Active and Dormant Tubercle Bacilli

Authors: Richa Sharma, Uma Nahar, Sadhna Sharma, Indu Verma

Abstract:

Counteracting the deadly pathogen Mycobacterium tuberculosis (M. tb) effectively is still a global challenge, and scrutinizing alternative weapons like antimicrobial peptides to strengthen the existing tuberculosis artillery is urgently required. Considering the antimycobacterial potential of Human Beta Defensin 1 (HBD-1) alongside isoniazid, the present study was designed to explore the ability of HBD-1 to act against active and dormant M. tb. HBD-1 was screened in silico using antimicrobial peptide prediction servers to identify its short antimicrobial motif. The activity of both HBD-1 and its selected motif (Pep B) was determined at different concentrations against actively growing M. tb in vitro and ex vivo in monocyte-derived macrophages (MDMs). Log-phase M. tb was grown with HBD-1 and Pep B for 7 days, and M. tb-infected MDMs were treated with HBD-1 and Pep B for 72 hours. Thereafter, colony forming unit (CFU) enumeration was performed to determine the activity of both peptides against actively growing in vitro and intracellular M. tb. Dormant M. tb models were prepared following two approaches and treated with different concentrations of HBD-1 and Pep B. First, 20-22-day-old M. tb H37Rv was grown in potassium-deficient Sauton medium for 35 days, and the presence of dormant bacilli was confirmed by Nile red staining. The dormant bacilli were then treated with rifampicin, isoniazid, HBD-1, and its motif for 7 days, and the effect of both peptides on latent bacilli was assessed by CFU and most probable number (MPN) enumeration. Second, a human PBMC granuloma model was prepared by infecting PBMCs seeded on a collagen matrix with M. tb (MOI 0.1) for 10 days; histopathology was done to confirm granuloma formation. The granuloma thus formed was incubated for 72 hours with rifampicin, HBD-1, and Pep B individually, and the difference in bacillary load was determined by CFU enumeration.
The minimum inhibitory concentrations of HBD-1 and Pep B restricting the growth of mycobacteria in vitro were 2 μg/ml and 20 μg/ml, respectively. The intracellular mycobacterial load was reduced significantly by HBD-1 and Pep B at 1 μg/ml and 5 μg/ml, respectively. A Nile red-positive bacterial population, a high MPN/low CFU count, and tolerance to isoniazid confirmed the establishment of the potassium deficiency-based dormancy model. HBD-1 (8 μg/ml) showed 96% and 99% killing, and Pep B (40 μg/ml) lowered the dormant bacillary load by 68.89% and 92.49%, based on CFU and MPN enumeration, respectively. Further, H&E-stained aggregates of macrophages and lymphocytes, acid-fast bacilli surrounded by cellular aggregates, and rifampicin resistance indicated the establishment of the human granuloma dormancy model. HBD-1 (8 μg/ml) led to an 81.3% reduction in CFU, whereas its motif Pep B (40 μg/ml) showed only a 54.66% decrease in bacterial load inside the granuloma. Thus, the present study indicates that HBD-1 and its motif are effective antimicrobial players against both actively growing and dormant M. tb, and they should be explored further to tap their potential in designing a powerful weapon for combating tuberculosis.

Keywords: antimicrobial peptides, dormant, human beta defensin 1, tuberculosis

Procedia PDF Downloads 250
9611 Vertebrate Model to Examine the Biological Effectiveness of Different Radiation Qualities

Authors: Rita Emília Szabó, Róbert Polanek, Tünde Tőkés, Zoltán Szabó, Szabolcs Czifrus, Katalin Hideghéty

Abstract:

Purpose: Several features of zebrafish make them amenable to investigations of therapeutic approaches such as ionizing radiation. The establishment of a zebrafish model for comprehensive radiobiological research is the focus of our investigation, comparing the dose-response curves of neutron and photon irradiation. Our final aim is to develop an appropriate vertebrate model to investigate the relative biological effectiveness (RBE) of laser-driven ionizing radiation. Methods and Materials: After careful dosimetry, series of viable zebrafish embryos were exposed at 24 hours post-fertilization (hpf) to single-fraction whole-body neutron irradiation (1.25, 1.875, 2, or 2.5 Gy) at the research reactor of the Technical University of Budapest and to a conventional 6 MeV photon beam. The survival and morphologic abnormalities (pericardial edema, spine curvature) of each embryo were assessed in each experiment at 24-hour intervals from the point of fertilization up to 168 hpf, defining the dose lethal to 50% of embryos (LD50). Results: In the zebrafish embryo model, an LD50 of 20 Gy was defined for photons, and the same lethality was found at a 2 Gy dose from the reactor neutron beam, resulting in an RBE of 10. Dose-dependent organ perturbations were detected at the macroscopic level (shortening of the body length, spine curvature, microcephaly, micro-ophthalmia, micrognathia, pericardial edema, and inhibition of yolk sac resorption) and the microscopic level (marked cellular changes in the skin, cardiac, and gastrointestinal systems) with the same magnitude of dose difference. Conclusion: We found that the zebrafish embryo model can be used to investigate the effects of different types of ionizing radiation, and this system proved to be a highly efficient vertebrate model for preclinical examinations.
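The reported RBE follows directly from the ratio of iso-effect doses, here the photon and neutron doses producing the same endpoint (50% lethality at 168 hpf):

```python
def rbe(reference_dose_gy, test_dose_gy):
    """Relative biological effectiveness: reference (photon) dose divided by
    the test-radiation dose that produces the same biological effect."""
    return reference_dose_gy / test_dose_gy

# LD50 was 20 Gy for the photon beam and 2 Gy for the reactor neutron beam.
rbe_neutron = rbe(20.0, 2.0)
```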

Keywords: ionizing radiation, LD50, relative biological effectiveness, zebrafish embryo

Procedia PDF Downloads 295
9610 Optimization of Manufacturing Process Parameters: An Empirical Study from Taiwan's Tech Companies

Authors: Chao-Ton Su, Li-Fei Chen

Abstract:

Parameter design is crucial to improving the uniformity of a product or process. In the product design stage, parameter design aims to determine the optimal settings for the parameters of each element in the system, thereby minimizing the functional deviations of the product. In the process design stage, parameter design aims to determine the operating settings of the manufacturing processes so that non-uniformity in manufacturing can be minimized. Parameter design, which tries to minimize the influence of noise on the manufacturing system, plays an important role in high-tech companies. Taiwan has many well-known high-tech companies, which play key roles in the global economy, and quality remains the most important factor that enables these companies to sustain their competitive advantage. In Taiwan, however, many high-tech companies face various quality problems. A common challenge relates to root causes and defect patterns: in the R&D stage, root causes are often unknown and defect patterns are difficult to classify. Additionally, data collection is not easy, and even when high-volume data can be collected, data interpretation is difficult. To overcome these challenges, high-tech companies in Taiwan use more advanced quality improvement tools. In addition to traditional statistical methods and quality tools, the new trend is the application of powerful tools such as neural networks, fuzzy theory, data mining, industrial engineering, operations research, and innovation skills. In this study, several examples of optimizing the parameter settings for manufacturing processes in Taiwan's tech companies are presented to illustrate the proposed approach's effectiveness. Finally, the use of traditional experimental design versus the proposed approach for process optimization is discussed.
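One of the traditional parameter-design tools alluded to above is the Taguchi signal-to-noise (S/N) ratio, which rewards both a good mean and low variability across noise replicates. The sketch below uses illustrative data, not figures from the study:

```python
import math

def sn_larger_is_better(ys):
    """S/N = -10*log10(mean(1/y^2)); higher is better (e.g. yield)."""
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in ys) / len(ys))

def sn_smaller_is_better(ys):
    """S/N = -10*log10(mean(y^2)); higher is better (e.g. defect count)."""
    return -10.0 * math.log10(sum(y ** 2 for y in ys) / len(ys))

# Hypothetical yields (%) for two levels of a factor, three replicates each.
levels = {"A1": [92.0, 95.0, 93.0], "A2": [85.0, 99.0, 80.0]}
best = max(levels, key=lambda name: sn_larger_is_better(levels[name]))
```

Note how the noisy level A2 loses despite containing the single best replicate: the S/N ratio penalizes the spread, which is exactly the robustness-against-noise idea behind parameter design.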

Keywords: quality engineering, parameter design, neural network, genetic algorithm, experimental design

Procedia PDF Downloads 131
9609 Designing Energy Efficient Buildings for Seasonal Climates Using Machine Learning Techniques

Authors: Kishor T. Zingre, Seshadhri Srinivasan

Abstract:

Energy consumption by the building sector is increasing at an alarming rate throughout the world, leading to more building-related CO₂ emissions into the environment. In buildings, the main contributors to energy consumption are heating, ventilation, and air-conditioning (HVAC) systems, lighting, and electrical appliances. It is hypothesised that energy efficiency in buildings can be achieved by implementing sustainable technologies: i) enhancing the thermal resistance of fabric materials to reduce heat gain (in hotter climates) and heat loss (in colder climates), ii) enhancing the daylight and lighting systems, iii) improving the HVAC system, and iv) occupant localization. The energy performance of these sustainable technologies is highly dependent on climatic conditions. This paper investigated the use of machine learning techniques for accurate prediction of air-conditioning energy in seasonal climates. The data required to train the machine learning techniques were obtained from computational simulations performed on a 3-story commercial building using the EnergyPlus program plugged into OpenStudio and Google SketchUp. The EnergyPlus model was calibrated against experimental measurements of surface temperatures and heat flux prior to being employed for the simulations. The simulations showed that the performance of sustainable fabric materials (for walls, roofs, and windows) such as phase change materials, insulation, cool roofs, etc., varies with the climate conditions. Various renewable technologies were also applied to the buildings' flat roofs in various climates to investigate the potential for electricity generation. The proposed technique was observed to overcome the shortcomings of existing approaches, such as local linearization or over-simplifying assumptions. In addition, the proposed method can be used for real-time estimation of building air-conditioning energy.
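The prediction idea can be sketched as a regression from weather features to air-conditioning energy. The data below are synthetic stand-ins for the EnergyPlus simulation outputs, and the linear form is a deliberate simplification standing in for whatever machine learning model the study actually trained:

```python
import random

random.seed(0)
# Synthetic training set: (outdoor temp C, solar gain kWh) -> daily AC energy kWh.
samples = []
for _ in range(200):
    t = random.uniform(24.0, 36.0)
    s = random.uniform(2.0, 10.0)
    e = 2.5 * t + 1.2 * s - 30.0 + random.gauss(0.0, 0.5)  # "true" model + noise
    samples.append((t, s, e))

# Ordinary least squares via the normal equations, solved by Gauss-Jordan.
X = [(1.0, t, s) for t, s, _ in samples]
y = [e for _, _, e in samples]
A = [[sum(r[i] * r[j] for r in X) for j in range(3)]
     + [sum(r[i] * yi for r, yi in zip(X, y))] for i in range(3)]
for i in range(3):
    piv = A[i][i]
    A[i] = [v / piv for v in A[i]]
    for k in range(3):
        if k != i:
            A[k] = [vk - A[k][i] * vi for vk, vi in zip(A[k], A[i])]
coef = [A[i][3] for i in range(3)]

def predict_ac_energy(temp_c, solar_kwh):
    """Estimated daily air-conditioning energy (kWh) for given weather."""
    return coef[0] + coef[1] * temp_c + coef[2] * solar_kwh
```

The same train-then-predict pipeline carries over directly when the features are richer (humidity, occupancy, fabric properties) and the model is nonlinear; the calibrated simulation simply plays the role of the labelled dataset.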

Keywords: building energy efficiency, energyplus, machine learning techniques, seasonal climates

Procedia PDF Downloads 105
9608 Optimal Design of Wind Turbine Blades Equipped with Flaps

Authors: I. Kade Wiratama

Abstract:

As a result of the significant growth of wind turbines in size, blade load control has become the main challenge for large wind turbines. Many advanced techniques have been investigated with the aim of developing control devices to ease blade loading. Among them, trailing-edge flaps have been proven effective for load alleviation. The present study aims at investigating the potential benefits of flaps in enhancing energy capture rather than alleviating blade loads. A software tool was developed specifically for the aerodynamic simulation of wind turbines utilising blades equipped with flaps. As part of the aerodynamic simulation of these wind turbines, the control system must also be simulated; this is carried out by solving an optimisation problem that gives the best value of the controlling parameter at each wind turbine operating condition. By developing a genetic algorithm optimisation tool designed specifically for wind turbine blades and integrating it with the aerodynamic performance evaluator, a design optimisation tool for blades equipped with flaps was constructed and employed to carry out design case studies. The results of design case studies on the AWT 27 wind turbine reveal that, as expected, the location of the flap is a key parameter influencing the amount of improvement in power extraction. The best location for a flap is at about 70% of the blade span from the root of the blade. The size of the flap also has a significant effect on the enhancement in average power; this effect, however, diminishes dramatically as the size increases. For constant-speed rotors, adding flaps without re-designing the topology of the blade can improve the power extraction capability by about 5%, while re-designing the blade pre-twist can raise the overall improvement to as much as 12%.
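The genetic-algorithm design loop described above can be caricatured in a few lines. The fitness function below is a made-up surrogate whose optimum is placed near 70% span to echo the reported result; it is not the paper's aerodynamic model, and the population sizes and rates are arbitrary:

```python
import random

random.seed(1)

def fitness(loc):
    """Hypothetical power-gain surrogate, peaking at 70% of blade span."""
    return -(loc - 0.70) ** 2

# Evolve a flap spanwise location (fraction of span) within design bounds.
pop = [random.uniform(0.2, 0.95) for _ in range(30)]
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                        # truncation selection
    children = []
    while len(children) < 20:
        a, b = random.sample(parents, 2)
        child = 0.5 * (a + b)                 # blend crossover
        child += random.gauss(0.0, 0.02)      # Gaussian mutation
        children.append(min(0.95, max(0.2, child)))
    pop = parents + children

best_loc = max(pop, key=fitness)
```

In the actual tool the fitness evaluation would be a full aerodynamic performance run per candidate blade, which is exactly why a derivative-free optimiser like a GA is a natural fit.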

Keywords: flaps, blade design, optimisation, simulation, genetic algorithm, WTAero

Procedia PDF Downloads 325
9607 An Ergonomic Evaluation of Three Load Carriage Systems for Reducing Muscle Activity of Trunk and Lower Extremities during Giant Puppet Performing Tasks

Authors: Cathy SW. Chow, Kristina Shin, Faming Wang, B. C. L. So

Abstract:

During some dynamic giant puppet performances, an ergonomically designed load carrier system is necessary for puppeteers to carry a giant puppet body’s heavy load with minimum muscle stress. A load carrier (i.e., the prototype) was designed with two small wheels on the foot and a hybrid spring device on the knee to assist the sliding and knee-bending movements, respectively. The purpose of this study was therefore to evaluate the effect of three load carriers: two commercially available load mounting systems, Tepex and SuitX, and the prototype. Ten male participants were recruited for the experiment. Surface electromyography (sEMG) was used to record the participants’ muscle activities during forward moving and bouncing tasks, with and without a load of 11.1 kg positioned 60 cm above the shoulder. Five bilateral muscles were selected for data collection: the lumbar erector spinae (LES), rectus femoris (RF), biceps femoris (BF), tibialis anterior (TA), and gastrocnemius (GM). During the forward moving task, the sEMG data showed that the Tepex harness consistently produced the lowest muscle activities; compared with it, the prototype and SuitX were significantly higher on the left LES by 68.99% and 64.99%, right LES by 26.57% and 82.45%, left RF by 87.71% and 47.61%, right RF by 143.57% and 24.28%, left BF by 80.21% and 22.23%, right BF by 96.02% and 21.83%, right TA by 6.32% and 4.47%, and left GM by 5.89% and 12.35%, respectively. These results reflect that mobility was highly restricted by the tested exoskeleton devices. In the bouncing task, by contrast, the prototype consistently produced the lowest muscle activities; compared with it, the Tepex harness and SuitX were significantly higher on the left LES by 6.65% and 104.93%, right LES by 23.56% and 92.19%, left BF by 33.21% and 93.26%, right BF by 24.70% and 81.16%, left TA by 46.51% and 191.02%, right TA by 12.75% and 125.76%, left GM by 31.54% and 68.36%, and right GM by 95.95% and 96.43%, respectively.
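A minimal sketch of how sEMG recordings are typically summarised for this kind of comparison, assuming RMS amplitude as the activity measure and a simple percentage increase relative to the best-performing carrier. The signals and numbers below are illustrative only, not the study’s recordings.

```python
import math

def rms_amplitude(signal):
    # Root-mean-square amplitude, a standard summary of sEMG activity level
    return math.sqrt(sum(x * x for x in signal) / len(signal))

def percent_increase(test_rms, reference_rms):
    # Activity of a tested carrier relative to the lowest-activity (reference) carrier
    return 100.0 * (test_rms - reference_rms) / reference_rms

# Illustrative signal fragments only (arbitrary units), not the study's data
reference = [0.10, -0.12, 0.11, -0.09]   # e.g. prototype, bouncing task, one muscle
tested = [0.14, -0.15, 0.16, -0.13]      # e.g. SuitX, same task and muscle

increase = percent_increase(rms_amplitude(tested), rms_amplitude(reference))
```

Percentages of this kind, computed per muscle and per task, are what the bilateral comparisons in the abstract report.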

Keywords: exoskeleton, giant puppet performers, load carriage system, surface electromyography

Procedia PDF Downloads 93
9606 Predicting Loss of Containment in Surface Pipeline Using Computational Fluid Dynamics and Supervised Machine Learning Model to Improve Process Safety in Oil and Gas Operations

Authors: Muhammad Riandhy Anindika Yudhy, Harry Patria, Ramadhani Santoso

Abstract:

Loss of containment is the primary hazard with which process safety management in the oil and gas industry is concerned. Escalation to more serious consequences begins with loss of containment: oil and gas released through leakage or spillage from primary containment can result in pool fires, jet fires, and even explosions when it meets the various ignition sources present in operations. The heart of process safety management is therefore avoiding loss of containment and mitigating its impact through the implementation of safeguards. The most effective safeguard in this case is an early detection system that alerts operations to take action before a potential loss of containment. The value of such a detection system increases when it is applied to a long surface pipeline, which is naturally difficult to monitor at all times and is exposed to multiple causes of loss of containment, from natural corrosion to illegal tapping. Based on prior research, detecting loss of containment accurately in a surface pipeline is difficult, and the trade-off between cost-effectiveness and high accuracy has been the main issue when selecting a traditional detection method. The current best-performing method, the Real-Time Transient Model (RTTM), requires analysis of closely positioned pressure, flow, and temperature (PVT) points in the pipeline to be accurate. Having multiple adjacent PVT sensors along the pipeline is expensive, and hence generally not viable from an economic standpoint. A conceptual approach combining mathematical modeling using computational fluid dynamics with a supervised machine learning model has shown promising results for predicting leakage in the pipeline. Mathematical modeling is used to generate simulation data, which are then used to train the leak detection and localization models. Mathematical models and simulation software have also been shown to match experimental data with very high levels of accuracy.
While a supervised machine learning model requires a large training dataset for the development of accurate models, mathematical modeling has been shown to be able to generate the datasets required to justify the application of data analytics for the development of model-based leak detection systems for petroleum pipelines. This paper presents a review of key leak detection strategies for oil and gas pipelines, with a specific focus on crude oil applications, and presents the opportunities for using data analytics tools and mathematical modeling to develop a robust real-time leak detection and localization system for surface pipelines. A case study is also presented.
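A minimal sketch of the simulate-then-train pipeline described above, assuming a toy steady-state simulator in place of the CFD model and a nearest-centroid classifier in place of the study’s supervised model. The feature names, units, and leak signatures are illustrative assumptions.

```python
import random

def simulate_segment(leak, rng):
    # Toy stand-in for CFD/RTTM output for one pipeline segment:
    # inlet-outlet pressure drop and flow imbalance (hypothetical units).
    pressure_drop = rng.gauss(5.0, 0.3) + (1.5 if leak else 0.0)
    flow_imbalance = rng.gauss(0.0, 0.05) + (0.4 if leak else 0.0)
    return [pressure_drop, flow_imbalance], int(leak)

def train_centroids(samples):
    # Nearest-centroid classifier trained on simulated labelled data
    sums = {0: [0.0, 0.0], 1: [0.0, 0.0]}
    counts = {0: 0, 1: 0}
    for x, y in samples:
        counts[y] += 1
        sums[y] = [s + v for s, v in zip(sums[y], x)]
    return {y: [s / counts[y] for s in sums[y]] for y in sums}

def predict(centroids, x):
    # Assign the label of the nearest class centroid (squared distance)
    def dist2(c):
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(centroids, key=lambda y: dist2(centroids[y]))

rng = random.Random(0)
train = [simulate_segment(leak, rng) for leak in [False, True] * 200]
model = train_centroids(train)
test = [simulate_segment(leak, rng) for leak in [False, True] * 50]
accuracy = sum(predict(model, x) == y for x, y in test) / len(test)
```

The point of the sketch is the workflow, not the classifier: the simulator supplies the labelled training set that a real pipeline cannot economically provide.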

Keywords: pipeline, leakage, detection, AI

Procedia PDF Downloads 172
9605 A Proposed Optimized and Efficient Intrusion Detection System for Wireless Sensor Network

Authors: Abdulaziz Alsadhan, Naveed Khan

Abstract:

In recent years, intrusions on computer networks have been the major security threat, so it is important to impede them. Preventing such intrusions relies entirely on their detection, which is the primary concern of any security tool such as an Intrusion Detection System (IDS). It is therefore imperative to detect network attacks accurately. Numerous intrusion detection techniques are available, but the main issue is their performance. The performance of an IDS can be improved by increasing the accurate detection rate and reducing false positives. Existing intrusion detection techniques have the limitation of using the raw data set for classification; the classifier may be confused by redundancy, which results in incorrect classification. To minimize this problem, Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), and Local Binary Pattern (LBP) can be applied to transform the raw features into a principal feature space and select features based on their sensitivity, where eigenvalues can be used to determine the sensitivity. To refine the selected features further, greedy search, backward elimination, and Particle Swarm Optimization (PSO) can be used to obtain a subset of features with optimal sensitivity and the highest discriminatory power. This optimal feature subset is then used to perform classification. For classification, a Support Vector Machine (SVM) and a Multilayer Perceptron (MLP) are used due to their proven ability in classification. The Knowledge Discovery and Data Mining (KDD’99) Cup dataset was considered as a benchmark for evaluating security detection mechanisms. The proposed approach can provide an optimal intrusion detection mechanism that outperforms existing approaches and has the capability to minimize the number of features and maximize the detection rate.
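A hedged sketch of the PCA step described above, assuming synthetic data in place of KDD’99: redundant raw features are transformed into a principal feature space, and components are ranked and selected by eigenvalue (the "sensitivity" the abstract refers to). The variance threshold and the data are illustrative assumptions, and the downstream SVM/MLP classification is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic stand-in for raw connection records (not KDD'99 itself):
# 200 samples, 6 raw features, where 5 are linear mixtures of 2 latent
# factors (i.e. heavily redundant) plus one near-constant noise feature.
base = rng.normal(size=(200, 2))
X = np.hstack([base,
               base @ rng.normal(size=(2, 3)),
               rng.normal(scale=0.01, size=(200, 1))])

# PCA: centre, eigendecompose the covariance, rank components by eigenvalue
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)      # eigh returns ascending order
order = np.argsort(eigvals)[::-1]           # most "sensitive" directions first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Keep components explaining 99% of variance; the redundant raw features
# collapse onto a few principal directions.
explained = np.cumsum(eigvals) / eigvals.sum()
k = int(np.searchsorted(explained, 0.99)) + 1
Z = Xc @ eigvecs[:, :k]                     # reduced feature space for the classifier
```

Because five of the six raw features lie in a two-dimensional subspace, only a handful of components survive the threshold, which is exactly the redundancy removal the abstract motivates.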

Keywords: Particle Swarm Optimization (PSO), Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Local Binary Pattern (LBP), Support Vector Machine (SVM), Multilayer Perceptron (MLP)

Procedia PDF Downloads 350