Search results for: STEP fault
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3437

2627 Adjustment of the Whole-Body Center of Mass during Trunk-Flexed Walking across Uneven Ground

Authors: Soran Aminiaghdam, Christian Rode, Reinhard Blickhan, Astrid Zech

Abstract:

Despite considerable studies on the impact of imposed trunk posture on human walking, less is known about such locomotion while negotiating changes in ground level. The aim of this study was to investigate the behavior of the vertical position of the body center of mass (VBCOM) in response to a two-fold expected perturbation, namely alterations in body posture and in ground level. To this end, the kinematic data and ground reaction forces of twelve able-bodied participants were collected. We analyzed the VBCOM from the ground, determined by the body segmental analysis method relative to the laboratory coordinate system, at touchdown and toe-off instants during walking across uneven ground — characterized by a perturbation contact (a 10-cm visible drop) and pre- and post-perturbation contacts — in comparison to an unperturbed level contact, while maintaining three postures (regular erect, ~30° and ~50° of trunk flexion from the vertical). The VBCOM was normalized to the distance between the greater trochanter marker and the lateral malleolus marker at the instant of touchdown. Moreover, we calculated the backward rotation during step-down as the difference between the maximum trunk angle in the pre-perturbation contact and the minimum trunk angle in the perturbation contact. Two-way repeated measures ANOVAs revealed contact-specific effects of posture on the VBCOM at touchdown (F = 5.96, p = 0.00). As indicated by the analysis of simple main effects, during unperturbed level and pre-perturbation contacts, no between-posture differences for the VBCOM at touchdown were found. In the perturbation contact, trunk-flexed gaits showed a significant increase in the VBCOM as compared to the pre-perturbation contact. In the post-perturbation contact, the VBCOM demonstrated a significant decrease in all gait postures relative to the preceding corresponding contacts, with no between-posture differences. Main effects of posture revealed that the VBCOM at toe-off significantly decreased in trunk-flexed gaits relative to the regular erect gait. For the main effect of contact, the VBCOM at toe-off demonstrated changes across perturbation and post-perturbation contacts as compared to the unperturbed level contact. Furthermore, participants exhibited a backward trunk rotation during step-down, possibly to control the angular momentum of their whole body. A more pronounced backward trunk rotation (2- to 3-fold compared with level contacts) in trunk-flexed walking contributed to the observed elevated VBCOM during the step-down, which may have facilitated drop negotiation. These results may shed light on the interaction between posture and locomotion in able-bodied gait, and specifically on the behavior of the body center of mass during perturbed locomotion.
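
For readers who want to reproduce the statistical step, the sketch below shows a normalization and two-way repeated measures ANOVA of the kind described above, using statsmodels rather than the authors' own software; the file and column names (subject, posture, contact, vbcom_td, leg_length) are hypothetical placeholders.

```python
# Sketch only: hypothetical file and column names, not the authors' actual analysis pipeline.
import pandas as pd
from statsmodels.stats.anova import AnovaRM

df = pd.read_csv("vbcom_touchdown.csv")  # columns: subject, posture, contact, vbcom_td, leg_length

# Normalize the vertical COM position by the trochanter-to-malleolus distance at touchdown
df["vbcom_norm"] = df["vbcom_td"] / df["leg_length"]

# Two-way repeated measures ANOVA: posture (3 levels) x contact (4 levels), within subjects
res = AnovaRM(df, depvar="vbcom_norm", subject="subject",
              within=["posture", "contact"]).fit()
print(res)
```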

Keywords: center of mass, perturbation, posture, uneven ground, walking

Procedia PDF Downloads 167
2626 Anesthetic Considerations for Carotid Endarterectomy: Prospective Study Based on Clinical Trials

Authors: Ahmed Yousef A. Al Sultan

Abstract:

Introduction: This review is based on clinical research studying the changes in middle cerebral artery velocity, measured with transcranial Doppler (TCD), and cerebral oxygen saturation, measured with cerebral oximetry, in patients undergoing carotid endarterectomy (CEA) surgery under local anesthesia (LA). Patients with or without neurological symptoms during surgery took part in this study, which used the triple method of cerebral oximetry, transcranial Doppler and awake testing to detect any cerebral ischemic symptoms. Methods: About one hundred patients took part during their CEA surgeries under local anesthesia, monitored with the triple assessment method described above; patients requiring general anesthesia were excluded from the analysis. All data were recorded separately at eight stages of surgery. Results: Overall, regional cerebral oxygen saturation (rSO2), middle cerebral artery (MCA) velocity, and the pulsatility index decreased significantly during the carotid artery clamping step of the CEA procedure on the targeted carotid side, with the largest changes observed in MCA velocity. Discussion: Cerebral oxygen saturation and MCA velocity decreased significantly during the clamping step of the procedure on the targeted side. The group with neurological symptoms during the procedure showed larger changes in rSO2 and MCA velocity than the group without neurological symptoms. Cerebral rSO2 and MCA velocity increased significantly immediately after de-clamping of the internal carotid artery on the affected side.

Keywords: awake testing, carotid endarterectomy, cerebral oximetry, transcranial Doppler

Procedia PDF Downloads 154
2625 Selecting the Best Sub-Region Indexing the Images in the Case of Weak Segmentation Based on Local Color Histograms

Authors: Mawloud Mosbah, Bachir Boucheham

Abstract:

The color histogram is considered the oldest method used by CBIR systems for indexing images. Global histograms, however, do not include spatial information; this is why later techniques have attempted to overcome this limitation by involving a segmentation task as a preprocessing step. Weak segmentation is employed by local histograms, while other methods such as CCV (Color Coherence Vector) are based on strong segmentation. Indexing based on local histograms consists of splitting the image into N overlapping blocks or sub-regions and then computing the histogram of each block. Measuring the dissimilarity between two images is consequently reduced to computing the distances between the N local histograms of the two images, resulting in N*N values; generally, the lowest value is taken into account to rank images, which means that the lowest value designates which sub-region is used to index the images of the collection being queried. In this paper, we examine the local histogram indexing method in order to compare its results against those given by the global histogram. We also address another noteworthy issue that arises when relying on local histograms, namely which value, among the N*N values, to trust when comparing images; in other words, on which of the sub-region pairs the image indexing should be based. Based on the results achieved here, it seems that relying on local histograms, which imposes extra overhead on the system by involving an additional preprocessing step, namely segmentation, does not necessarily produce better results. In addition, we propose some ideas for selecting the local histogram used to encode the image, rather than relying on the local histogram having the lowest distance to the query histograms.
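
To make the indexing procedure concrete, the following sketch computes per-block color histograms and the N*N inter-block distances whose minimum is used for ranking; it is a simplified illustration (non-overlapping grid, Euclidean distance), not the authors' exact implementation.

```python
# Sketch of local-histogram indexing: split each image into blocks, compute a color
# histogram per block, and compare all N*N block pairs between two images.
# Simplified: non-overlapping grid (the paper uses overlapping blocks), Euclidean distance.
import numpy as np

def block_histograms(img, grid=(2, 2), bins=8):
    """img: HxWx3 uint8 array. Returns one normalized color histogram per block."""
    h, w, _ = img.shape
    hists = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            block = img[i*h//grid[0]:(i+1)*h//grid[0], j*w//grid[1]:(j+1)*w//grid[1]]
            hist, _ = np.histogramdd(block.reshape(-1, 3), bins=(bins,)*3, range=[(0, 256)]*3)
            hists.append(hist.ravel() / hist.sum())
    return np.array(hists)                     # shape: (N, bins**3)

def min_pair_distance(hists_a, hists_b):
    """All N*N Euclidean distances between local histograms; the lowest is used for ranking."""
    d = np.linalg.norm(hists_a[:, None, :] - hists_b[None, :, :], axis=2)
    return d.min(), np.unravel_index(d.argmin(), d.shape)   # (distance, (block_a, block_b))
```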

Keywords: CBIR, color global histogram, color local histogram, weak segmentation, Euclidean distance

Procedia PDF Downloads 351
2624 Evaluating the Effectiveness of Plantar Sensory Insoles and Remote Patient Monitoring for Early Intervention in Diabetic Foot Ulcer Prevention in Patients with Peripheral Neuropathy

Authors: Brock Liden, Eric Janowitz

Abstract:

Introduction: Diabetic peripheral neuropathy (DPN) affects 70% of individuals with diabetes [1]. DPN causes a loss of protective sensation, which can lead to tissue damage and diabetic foot ulcer (DFU) formation [2]. These ulcers can result in infections and lower-extremity amputations of toes, the entire foot, and the lower leg. Even after a DFU has healed, recurrence is common, with 49% of DFU patients developing another ulcer within a year and 68% within 5 years [3]. This case series examines the use of sensory insoles, the newly available plantar data they provide (pressure, temperature, step count, adherence), and remote patient monitoring in patients at risk of DFU. Methods: Participants were provided with custom-made sensory insoles to monitor plantar pressure, temperature, step count, and daily use, and received real-time cues for pressure offloading as they went about their daily activities. The sensory insoles were used to track subject compliance, ulceration, and response to feedback from real-time alerts. Patients were remotely monitored by a qualified healthcare professional, who contacted them when areas of concern were seen and provided coaching on reducing risk factors as well as overall support to improve foot health. Results: Of the 40 participants provided with the sensory insole system, 4 presented with a DFU. Based on flags generated from the available plantar data, patients were contacted by the remote monitor to address potential concerns. A standard clinical escalation protocol detailed when and how concerns should be escalated to the provider by the remote monitor. Upon escalation to the provider, patients were brought into the clinic as needed, allowing any issues to be addressed before more serious complications could arise. Conclusion: This case series explores the use of innovative sensory technology to collect plantar data (pressure, temperature, step count, and adherence) for DFU detection and early intervention. The results suggest the importance of sensory technology and remote patient monitoring in providing proactive, preventative care for patients at risk of DFU. These robust plantar data, combined with remote patient monitoring, allow patients to be seen in the clinic when concerns arise, giving providers the opportunity to intervene early and prevent more serious complications, such as wounds, from occurring.

Keywords: diabetic foot ulcer, DFU prevention, digital therapeutics, remote patient monitoring

Procedia PDF Downloads 65
2623 Research the Causes of Defects and Injuries of Reinforced Concrete and Stone Construction

Authors: Akaki Qatamidze

Abstract:

Implementation of the project will be a step forward for the reliability of construction in Georgia and for the improvement and development of the construction industry. Completion of the project is expected to result in a complete body of knowledge for assessing the technical condition of reinforced concrete and stone structures. The method is based on a detailed examination of the structure in order to identify injuries and to establish whether the structural scheme can be changed to meet new requirements while preserving the architecture. In this research project on reinforced concrete and stone structures, systematic analysis is the key approach for optimizing the research process and developing new knowledge in neighboring areas. In addition, the problem of rationally reconciling physical and mathematical models is addressed: the main pillar is physical (in-situ) data together with mathematical calculation models, with physical experiments used only to specify and verify the calculation model. To enhance the effectiveness of research into the causes of defects and failures of reinforced concrete and stone construction, to maximize automation, and to reduce the expenditure of resources, a methodological concept based on system analysis is recommended; as one of the main particularities of modern science and technology, it allows whole families of structures to be examined with the same work stages and procedures, which makes it possible to exclude subjectivity and to address the problem in the optimal direction. The methodology of the project is discussed and constitutes a major step forward in the construction trades, offering practical assistance to engineers, supervisors, and technical experts in settling construction problems.

Keywords: building, reinforced concrete, expertise, stone structures

Procedia PDF Downloads 323
2622 A Randomized Control Trial Intervention to Combat Childhood Obesity in Negeri Sembilan: The Hebat! Program

Authors: Siti Sabariah Buhari, Ruzita Abdul Talib, Poh Bee Koon

Abstract:

This study aims to develop and evaluate an intervention to improve the eating habits, active lifestyle and weight status of overweight and obese children in Negeri Sembilan. The H.E.B.A.T! Program involved children, parents and schools and focused on behaviour and environment modification to achieve its goal. The intervention consisted of the H.E.B.A.T! Camp, a parents' workshop and school-based activities. A total of 21 children from the intervention school and 22 children from the control school who had a BMI-for-age z-score ≥ +1 SD participated in the study. The mean age of subjects was 10.8 ± 0.3 years. Four phases were included in the development of the intervention. Evaluation of the intervention was conducted through process, impact and outcome evaluation. The process evaluation found that the intervention program was implemented successfully with minimal modification and without any technical problems. Impact and outcome evaluation was based on dietary intake, average step counts, BMI-for-age z-score, body fat percentage and waist circumference at pre-intervention (T0), post-intervention 1 (T1) and post-intervention 2 (T2). There was a significant reduction in energy (14.8%) and fat (21.9%) intakes (p < 0.05) at post-intervention 1 (T1) in the intervention group. Controlling for sex as a covariate, there was a significant intervention effect on average step counts, BMI-for-age z-score and waist circumference (p < 0.05). In conclusion, the intervention had an impact on positive behavioural intentions and improved the weight status of the children. It is expected that the HEBAT! Program could be adopted and implemented by the government and the private sector, as well as by policy-makers, in formulating childhood obesity interventions.

Keywords: childhood obesity, diet, obesity intervention, physical activity

Procedia PDF Downloads 283
2621 In-Silico Fusion of Bacillus Licheniformis Chitin Deacetylase with Chitin Binding Domains from Chitinases

Authors: Keyur Raval, Steffen Krohn, Bruno Moerschbacher

Abstract:

Chitin, the biopolymer of N-acetylglucosamine, is the most abundant biopolymer on the planet after cellulose. Industrially, chitin is isolated and purified from the shell residues of shrimps. Chitosan, a deacetylated derivative of chitin, has greater market value and more applications than the parent polymer owing to its solubility and overall cationic charge. On an industrial scale, this deacetylation is performed chemically using alkalis such as sodium hydroxide, a reaction that is hazardous to the environment owing to its negative impact on the marine ecosystem. A greener option is the enzymatic process: in nature, native chitin is converted to chitosan by chitin deacetylase (CDA). On the industrial scale, however, this enzymatic conversion is hampered by the crystallinity of chitin, because the enzyme requires the substrate, i.e. chitin, to be soluble, which is technically difficult and energy consuming. In this project, we address this shortcoming of CDA by modeling a fusion protein combining CDA with an auxiliary protein, the main interest being to increase the accessibility of the enzyme towards crystalline chitin. Similar fusion work with chitinases has improved their catalytic ability towards insoluble chitin. In the first step, suitable fusion partners were searched for in the Protein Data Bank (PDB), where the domain architectures were examined. The next step was to create models of the fused product using various in silico techniques. The models were created with MODELLER and evaluated for properties such as the energy or the impairment of the binding sites. A fusion PCR has been designed based on the linker sequences generated by MODELLER, and the construct will be tested for its activity towards insoluble chitin.
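
For orientation, the snippet below is a minimal comparative-modeling run of the kind MODELLER supports for building such fusion models from an alignment (classic automodel API); the file names and sequence codes are hypothetical placeholders, not the actual constructs of this study.

```python
# Minimal comparative-modeling sketch with MODELLER (classic automodel API).
# File names and sequence codes are hypothetical placeholders.
from modeller import *
from modeller.automodel import *

env = environ()                      # set up the MODELLER environment
env.io.atom_files_directory = ['.']  # where the template PDB files are found

a = automodel(env,
              alnfile='fusion.ali',                     # alignment of the fusion sequence vs. templates
              knowns=('cda_template', 'cbd_template'),  # template structure codes (assumed names)
              sequence='cda_cbd_fusion')                # code of the target fusion sequence
a.starting_model = 1
a.ending_model = 5                   # build five candidate models and compare their scores
a.make()
```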

Keywords: chitin deacetylase, modeling, chitin binding domain, chitinases

Procedia PDF Downloads 234
2620 Sports Business Services Model: A Research Model Study in Regional Sport Authority of Thailand

Authors: Siriraks Khawchaimaha, Sangwian Boonto

Abstract:

The Sport Authority of Thailand (SAT) is a state enterprise that promotes and supports all kinds of sports, both professional and athlete competitions, and is administered under government policy by government officers; therefore, all financial support, whether cash inflows or cash outflows, is strictly committed to the government budget and limited to projects planned at least 12 to 16 months ahead of reality, resulting in ineffectiveness in sport events, administration and competitions. In order to remain competitive in sports challenges around the world, SAT needs its own sports business services model for each stadium, region and set of athletes' competencies. Based on the HMK model of Khawchaimaha, S. (2007), this research study is carried out in each of the 10 regional stadiums to detail the root characteristics of fans, athletes, coaches, equipment and facilities, and stadiums. The research design begins with the evaluation of external factors: the hardware, i.e. competition or practice stadiums, playgrounds, facilities and equipment. Secondly, it examines the software: the organizational structure, staff and management, the administrative model, rules and practices, as well as budget allocation and budget administration with the operating plan and expenditure plan. The third step identifies issues and limitations that require an action plan for further development and support, or a decision to cease unskilled sports. In the final step, based on the HMK model and the business model canvas of Alexander O. and Yves P. (2010), a template Sports Business Services Model is generated for each of SAT's 10 regional stadiums.

Keywords: HMK model, not for profit organization, sport business model, sport services model

Procedia PDF Downloads 295
2619 Aluminum Matrix Composites Reinforced by Glassy Carbon-Titanium Spatial Structure

Authors: B. Hekner, J. Myalski, P. Wrzesniowski

Abstract:

This study presents aluminium matrix composites reinforced by glassy carbon (GC) and titanium (Ti). In the first step, the heterophase (GC+Ti), spatial (skeleton-like) form of reinforcement was obtained via our own method: a polyurethane foam (with a spatial, open-cell structure) coated with a suspension of Ti particles in phenolic resin was pyrolyzed. In the second step, the prepared heterogeneous foams were infiltrated with aluminium alloy. The manufactured composites are intended for industrial application, especially as a material used in the tribological field. From this point of view, the glassy carbon was applied to stabilize the coefficient of friction at the required value of 0.6 and to reduce wear. Furthermore, wear can be limited by the titanium phase, which exhibits high mechanical properties. Moreover, fabricating a thin titanium layer on the carbon skeleton reduces the contact between the aluminium alloy and the carbon and thus the formation of the aluminium carbide phase. The main modification, however, is the manufacturing of the reinforcement in the form of a 3D skeleton foam. This kind of reinforcement offers a few important advantages compared to the classical form of reinforcement (particles): the possibility of controlling the homogeneity of the reinforcement phase in the composite material; a simple composite manufacturing technique (infiltration); the possibility of applying the reinforcement only in the required places of the material; strict control of the phase composition; and high-quality bonding between the components of the material. This research is funded by NCN under grant UMO-2016/23/N/ST8/00994.

Keywords: metal matrix composites, MMC, glassy carbon, heterophase composites, tribological application

Procedia PDF Downloads 109
2618 Thermodynamics of Water Condensation on an Aqueous Organic-Coated Aerosol Aging via Chemical Mechanism

Authors: Yuri S. Djikaev

Abstract:

A large subset of aqueous aerosols can be initially (immediately upon formation) coated with various organic amphiphilic compounds whereof the hydrophilic moieties are attached to the aqueous aerosol core while the hydrophobic moieties are exposed to the air thus forming a hydrophobic coating thereupon. We study the thermodynamics of water condensation on such an aerosol whereof the hydrophobic organic coating is being concomitantly processed by chemical reactions with atmospheric reactive species. Such processing (chemical aging) enables the initially inert aerosol to serve as a nucleating center for water condensation. The most probable pathway of such aging involves atmospheric hydroxyl radicals that abstract hydrogen atoms from hydrophobic moieties of surface organics (first step), the resulting radicals being quickly oxidized by ubiquitous atmospheric oxygen molecules to produce surface-bound peroxyl radicals (second step). Taking these two reactions into account, we derive an expression for the free energy of formation of an aqueous droplet on an organic-coated aerosol. The model is illustrated by numerical calculations. The results suggest that the formation of aqueous cloud droplets on such aerosols is most likely to occur via Kohler activation rather than via nucleation. The model allows one to determine the threshold parameters necessary for their Kohler activation. Numerical results also corroborate previous suggestions that one can neglect some details of aerosol chemical composition in investigating aerosol effects on climate.
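
As a point of reference for the activation argument, the sketch below evaluates the classical Köhler approximation ln S ≈ A/r - B/r³ and its critical radius and supersaturation; the coefficients and solute amount are illustrative values only and do not reproduce the free-energy model derived in the paper.

```python
# Classical Koehler-curve sketch: ln S ~ A/r - B/r^3 (curvature vs. solute terms).
# Illustrative numbers only; the paper derives the free energy for a chemically aged,
# organic-coated aerosol, which this simple form does not capture.
import numpy as np

sigma, M_w, rho_w, R, T = 0.072, 0.018, 1000.0, 8.314, 293.15   # SI units
A = 2 * sigma * M_w / (R * T * rho_w)          # Kelvin (curvature) coefficient, m
n_s = 1e-17                                     # moles of dissolved solute (assumed)
B = 3 * n_s * M_w / (4 * np.pi * rho_w)         # Raoult (solute) coefficient, m^3

r = np.logspace(-8, -5, 500)                    # droplet radius, m
lnS = A / r - B / r**3                          # equilibrium supersaturation curve

r_crit = np.sqrt(3 * B / A)                     # radius at the Koehler maximum
S_crit = np.exp(np.sqrt(4 * A**3 / (27 * B)))   # critical saturation ratio
print(f"critical radius ~ {r_crit*1e6:.3f} um, critical supersaturation ~ {(S_crit-1)*100:.3f} %")
```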

Keywords: aqueous aerosols, organic coating, chemical aging, cloud condensation nuclei, Kohler activation, cloud droplets

Procedia PDF Downloads 381
2617 Reinforced Concrete Foundation for Turbine Generators

Authors: Siddhartha Bhattacharya

Abstract:

Steam turbine-generators (STG) and combustion turbine-generators (CTG) are used in almost all modern petrochemical, LNG and power plant facilities. The reinforced concrete table top foundations required to support these heavy, high-speed rotating machines are among the most critical and challenging structures on any industrial project. The paper illustrates, through a practical example, the step-by-step procedure adopted in designing a table top foundation supported on piles for a steam turbine generator with an operating speed of 60 Hz. A finite element model of the table top foundation is generated in ANSYS. Piles are modeled as spring-damper elements (COMBIN14). Basic loads are adopted in the analysis and design of the foundation based on the vendor requirements, industry standards, and relevant ASCE and ACI code provisions. Static serviceability checks are performed with the help of the Misalignment Tolerance Matrix (MTM) method, in which the percentage of misalignment at a given bearing due to displacement at another bearing is calculated and kept within the criteria stipulated by the vendor so that the machine rotor can sustain the stresses developed due to this misalignment. Dynamic serviceability checks are performed through modal and forced vibration analyses, where the foundation is checked for resonance and allowable amplitudes as stipulated by the machine manufacturer. The reinforced concrete design of the foundation is performed by calculating the axial force, bending moment and shear at each of the critical sections. These values are calculated through area integrals of the element stresses at these critical locations. The design is done as per ACI 318-05.
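
A minimal sketch of the resonance screening performed in the dynamic serviceability check is given below; the ±20% separation margin and the modal frequencies are illustrative assumptions, since the governing criteria come from the machine manufacturer and the project specification.

```python
# Resonance screening sketch: flag modes whose natural frequency falls within an assumed
# +/-20% band around the 60 Hz operating frequency. All frequencies here are illustrative.
operating_freq = 60.0                    # Hz, machine operating speed
margin = 0.20                            # +/-20% separation margin (assumed rule of thumb)

modal_freqs = [12.4, 31.8, 47.2, 55.1, 66.9, 88.0]   # Hz, from the FE modal analysis (example values)

lo, hi = operating_freq * (1 - margin), operating_freq * (1 + margin)
for i, f in enumerate(modal_freqs, start=1):
    status = "POTENTIAL RESONANCE" if lo <= f <= hi else "ok"
    print(f"mode {i}: {f:5.1f} Hz  ({f/operating_freq:4.2f} x operating)  -> {status}")
```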

Keywords: steam turbine generator foundation, finite element, static analysis, dynamic analysis

Procedia PDF Downloads 282
2616 Massive Open Online Course about Content Language Integrated Learning: A Methodological Approach for Content Language Integrated Learning Teachers

Authors: M. Zezou

Abstract:

This paper focuses on the design of a Massive Open Online Course (MOOC) about Content Language Integrated Learning (CLIL), and more specifically about how teachers can use CLIL as an educational approach that also incorporates technology in their teaching. All four weeks of the MOOC will be presented, and a step-by-step analysis of each lesson will be offered. Additionally, the paper includes detailed lesson plans for CLIL lessons, with proposed CLIL activities and games in which technology plays a central part. The MOOC is structured on the basis of certain criteria in order to ensure its success, as well as a positive experience for the learners after completing it. It is addressed to all language teachers who would like to implement CLIL in their teaching. In other words, it presents the methodology that needs to be followed in order to successfully carry out a CLIL lesson and achieve the learning objectives set at the beginning of the course. Firstly, it is important to give the definitions of MOOCs and LMOOCs, to explore the difference between a structure-based MOOC (xMOOC) and a connectivist MOOC (cMOOC), and to present the criteria of a successful MOOC. Moreover, the notion of CLIL will be explored, as it is necessary to fully understand this concept before moving on to the design of the MOOC. The four weeks of the MOOC will then be introduced and the lesson plans presented: the type of activities, the aims of each activity, and the methodology that teachers have to follow. Emphasis will be placed on the role of technology in foreign language learning and on the ways in which we can involve technology in teaching a foreign language. Final remarks will be made and a summary of the main points will be offered at the end.

Keywords: CLIL, cMOOC, lesson plan, LMOOC, MOOC criteria, MOOC, technology, xMOOC

Procedia PDF Downloads 180
2615 Don't Just Guess and Slip: Estimating Bayesian Knowledge Tracing Parameters When Observations Are Scant

Authors: Michael Smalenberger

Abstract:

Intelligent tutoring systems (ITS) are computer-based platforms which can incorporate artificial intelligence to provide step-by-step guidance as students practice problem-solving skills. ITS can replicate and even exceed some benefits of one-on-one tutoring, foster transactivity in collaborative environments, and lead to substantial learning gains when used to supplement the instruction of a teacher or when used as the sole method of instruction. A common facet of many ITS is their use of Bayesian Knowledge Tracing (BKT) to estimate parameters necessary for the implementation of the artificial intelligence component, and for the probability of mastery of a knowledge component relevant to the ITS. While various techniques exist to estimate these parameters and probability of mastery, none directly and reliably ask the user to self-assess these. In this study, 111 undergraduate students used an ITS in a college-level introductory statistics course for which detailed transaction-level observations were recorded, and users were also routinely asked direct questions that would lead to such a self-assessment. Comparisons were made between these self-assessed values and those obtained using commonly used estimation techniques. Our findings show that such self-assessments are particularly relevant at the early stages of ITS usage while transaction level data are scant. Once a user’s transaction level data become available after sufficient ITS usage, these can replace the self-assessments in order to eliminate the identifiability problem in BKT. We discuss how these findings are relevant to the number of exercises necessary to lead to mastery of a knowledge component, the associated implications on learning curves, and its relevance to instruction time.
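
For readers unfamiliar with BKT, the sketch below implements the standard knowledge-tracing update with the four parameters discussed here (prior mastery, learn, guess, slip); the parameter values are illustrative, not estimates from this study.

```python
# Standard Bayesian Knowledge Tracing update (illustrative parameter values).
def bkt_trace(observations, p_init=0.3, p_learn=0.1, p_guess=0.2, p_slip=0.1):
    """observations: list of 1 (correct) / 0 (incorrect). Returns P(mastery) after each step."""
    p_mastery = p_init
    trace = []
    for correct in observations:
        if correct:   # posterior given a correct response
            post = p_mastery * (1 - p_slip) / (
                   p_mastery * (1 - p_slip) + (1 - p_mastery) * p_guess)
        else:         # posterior given an incorrect response
            post = p_mastery * p_slip / (
                   p_mastery * p_slip + (1 - p_mastery) * (1 - p_guess))
        p_mastery = post + (1 - post) * p_learn   # learning transition
        trace.append(p_mastery)
    return trace

print(bkt_trace([1, 0, 1, 1, 1]))   # mastery estimate climbing toward 1.0
```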

Keywords: Bayesian Knowledge Tracing, Intelligent Tutoring System, in vivo study, parameter estimation

Procedia PDF Downloads 161
2614 Tectono-Stratigraphic Architecture, Depositional Systems and Salt Tectonics to Strike-Slip Faulting in Kribi-Campo-Cameroon Atlantic Margin with an Unsupervised Machine Learning Approach (West African Margin)

Authors: Joseph Bertrand Iboum Kissaaka, Charles Fonyuy Ngum Tchioben, Paul Gustave Fowe Kwetche, Jeannette Ngo Elogan Ntem, Joseph Binyet Njebakal, Ribert Yvan Makosso-Tchapi, François Mvondo Owono, Marie Joseph Ntamak-Nida

Abstract:

Located in the Gulf of Guinea, the Kribi-Campo sub-basin belongs to the Aptian salt basins along the West African Margin. In this paper, we investigated the tectono-stratigraphic architecture of the basin, focusing on the role of salt tectonics and strike-slip faults along the Kribi Fracture Zone with implications for reservoir prediction. Using 2D seismic data and well data interpreted through sequence stratigraphy with integrated seismic attributes analysis with Python Programming and unsupervised Machine Learning, at least six second-order sequences, indicating three main stages of tectono-stratigraphic evolution, were determined: pre-salt syn-rift, post-salt rift climax and post-rift stages. The pre-salt syn-rift stage with KTS1 tectonosequence (Barremian-Aptian) reveals a transform rifting along NE-SW transfer faults associated with N-S to NNE-SSW syn-rift longitudinal faults bounding a NW-SE half-graben filled with alluvial to lacustrine-fan delta deposits. The post-salt rift-climax stage (Lower to Upper Cretaceous) includes two second-order tectonosequences (KTS2 and KTS3) associated with the salt tectonics and Campo High uplift. During the rift-climax stage, the growth of salt diapirs developed syncline withdrawal basins filled by early forced regression, mid transgressive and late normal regressive systems tracts. The early rift climax underlines some fine-grained hangingwall fans or delta deposits and coarse-grained fans from the footwall of fault scarps. The post-rift stage (Paleogene to Neogene) contains at least three main tectonosequences KTS4, KTS5 and KTS6-7. The first one developed some turbiditic lobe complexes considered as mass transport complexes and feeder channel-lobe complexes cutting the unstable shelf edge of the Campo High. The last two developed submarine Channel Complexes associated with lobes towards the southern part and braided delta to tidal channels towards the northern part of the Kribi-Campo sub-basin. The reservoir distribution in the Kribi-Campo sub-basin reveals some channels, fan lobes reservoirs and stacked channels reaching up to the polygonal fault systems.
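
A minimal sketch of the unsupervised step is given below: standardizing a set of seismic attributes and clustering them with k-means to map seismic facies; the attribute names, file name and number of clusters are assumptions for illustration, not the study's actual workflow.

```python
# Unsupervised seismic-facies sketch: cluster standardized seismic attributes with k-means.
# Attribute names, file name and k are illustrative assumptions.
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

attrs = pd.read_csv("seismic_attributes.csv")      # e.g. columns: envelope, inst_freq, sweetness, coherence
cols = ["envelope", "inst_freq", "sweetness", "coherence"]
X = StandardScaler().fit_transform(attrs[cols])    # standardize so no attribute dominates

kmeans = KMeans(n_clusters=6, n_init=10, random_state=0).fit(X)
attrs["facies"] = kmeans.labels_                   # cluster label per sample/trace
print(attrs.groupby("facies")[cols].mean())        # attribute signature of each seismic facies
```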

Keywords: tectono-stratigraphic architecture, Kribi-Campo sub-basin, machine learning, pre-salt sequences, post-salt sequences

Procedia PDF Downloads 33
2613 Analyzing Transit Network Design versus Urban Dispersion

Authors: Hugo Badia

Abstract:

This research answers which transit network structure is the most suitable to serve specific demand requirements in an increasing urban dispersion process. Two main approaches to network design are found in the literature. On the one hand, a traditional answer, widespread in our cities, that develops a high number of lines to connect most origin-destination pairs by direct trips; an approach based on the idea that users are averse to transfers. On the other hand, some authors advocate an alternative design characterized by simple networks where transferring is essential to complete most trips. To answer which of them is the best option, we use a two-step methodology. First, by means of an analytical model, three basic network structures are compared: a radial scheme, the starting point for the other two structures; a direct trip-based network; and a transfer-based one, the latter two representing the two alternative transit network designs. The model optimizes the network configuration with regard to the total cost for each structure. For a given scenario of dispersion, the best alternative is the structure with the minimum cost. This dispersion degree is defined in a simple way, considering that only a central area attracts all trips. If this area is small, we have a highly concentrated mobility pattern; if this area is too large, the city is highly decentralized. In this first step, we can determine the area of applicability for each structure as a function of that urban dispersion degree. The analytical results show that a radial structure is suitable when the demand is highly centralized; however, when this demand starts to scatter, new transit lines should be implemented to avoid transfers. If the urban dispersion advances, the introduction of more lines is no longer a good alternative; in this case, the best solution is a change of structure, from direct trips to a network based on transfers. The area of applicability of each network strategy is not constant; it depends on the characteristics of demand, city and transport technology. In the second step, we translate the analytical results to a real case study through the relationship between the dispersion parameters of the model and direct measures of dispersion in a real city. Two dimensions of the urban sprawl process are considered: concentration, defined by the Gini coefficient, and centralization, defined by an area-based centralization index. Once the real dispersion degree is estimated, we are able to identify in which area of applicability the city is located. In summary, from a strategic point of view, we can obtain with this methodology the best network design approach for a city by comparing the theoretical results with the real dispersion degree.
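
As a pointer to the second step, the sketch below computes a Gini coefficient of trip-end concentration across zones, one of the two dispersion measures named above; the zonal trip counts are illustrative.

```python
# Gini coefficient of trip-end concentration across zones (illustrative counts).
import numpy as np

def gini(x):
    """Gini coefficient of a non-negative array (0 = perfectly even, 1 = fully concentrated)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    cum = np.cumsum(x)
    # Standard Lorenz-curve based formula
    return (n + 1 - 2 * np.sum(cum) / cum[-1]) / n

trip_ends = [1200, 300, 250, 220, 180, 150, 90, 60, 30, 20]   # trips attracted per zone (example)
print(f"Gini concentration index: {gini(trip_ends):.3f}")
```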

Keywords: analytical network design model, network structure, public transport, urban dispersion

Procedia PDF Downloads 221
2612 Integrating Deterministic and Probabilistic Safety Assessment to Decrease Risk & Energy Consumption in a Typical PWR

Authors: Ebrahim Ghanbari, Mohammad Reza Nematollahi

Abstract:

Integrated deterministic and probabilistic safety assessment (IDPSA) is one of the most commonly used approaches in the field of safety analysis of power plant accidents. It is also recognized today that the role of human error in causing these accidents is no less important than that of system errors, so both human interventions and system errors must be included in the fault and event sequences. The integration of these analyses is reflected in the core damage frequency and also in the study of the use of water resources during an accident such as the loss of all electrical power of the plant. In this regard, a station blackout (SBO) accident was simulated for the pressurized water reactor as the deterministic analysis, and by analyzing the operator's behavior in controlling the accident, the results of the combined deterministic and probabilistic assessment were identified. The results showed that the best performance of the plant operator would reduce the risk of an accident by 10% and decrease the use of the plant's water sources by 6.82 liters/second.

Keywords: IDPSA, human error, SBO, risk

Procedia PDF Downloads 119
2611 Microfluidic Plasmonic Bio-Sensing of Exosomes by Using a Gold Nano-Island Platform

Authors: Srinivas Bathini, Duraichelvan Raju, Simona Badilescu, Muthukumaran Packirisamy

Abstract:

A bio-sensing method, based on the plasmonic property of gold nano-islands, has been developed for detection of exosomes in a clinical setting. The position of the gold plasmon band in the UV-Visible spectrum depends on the size and shape of gold nanoparticles as well as on the surrounding environment. By adsorbing various chemical entities, or binding them, the gold plasmon band will shift toward longer wavelengths and the shift is proportional to the concentration. Exosomes transport cargoes of molecules and genetic materials to proximal and distal cells. Presently, the standard method for their isolation and quantification from body fluids is by ultracentrifugation, not a practical method to be implemented in a clinical setting. Thus, a versatile and cutting-edge platform is required to selectively detect and isolate exosomes for further analysis at clinical level. The new sensing protocol, instead of antibodies, makes use of a specially synthesized polypeptide (Vn96), to capture and quantify the exosomes from different media, by binding the heat shock proteins from exosomes. The protocol has been established and optimized by using a glass substrate, in order to facilitate the next stage, namely the transfer of the protocol to a microfluidic environment. After each step of the protocol, the UV-Vis spectrum was recorded and the position of gold Localized Surface Plasmon Resonance (LSPR) band was measured. The sensing process was modelled, taking into account the characteristics of the nano-island structure, prepared by thermal convection and annealing. The optimal molar ratios of the most important chemical entities, involved in the detection of exosomes were calculated as well. Indeed, it was found that the results of the sensing process depend on the two major steps: the molar ratios of streptavidin to biotin-PEG-Vn96 and, the final step, the capture of exosomes by the biotin-PEG-Vn96 complex. The microfluidic device designed for sensing of exosomes consists of a glass substrate, sealed by a PDMS layer that contains the channel and a collecting chamber. In the device, the solutions of linker, cross-linker, etc., are pumped over the gold nano-islands and an Ocean Optics spectrometer is used to measure the position of the Au plasmon band at each step of the sensing. The experiments have shown that the shift of the Au LSPR band is proportional to the concentration of exosomes and, thereby, exosomes can be accurately quantified. An important advantage of the method is the ability to discriminate between exosomes having different origins.
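
A short sketch of the quantification idea follows: fitting the measured LSPR band shift against exosome concentration with a simple linear calibration and inverting it for an unknown sample; all numbers are illustrative, not measured values from this work.

```python
# Calibration sketch: LSPR peak shift vs. exosome concentration (illustrative data only).
import numpy as np

conc = np.array([0.0, 1e8, 2e8, 4e8, 8e8])       # exosomes/mL (assumed calibration points)
shift_nm = np.array([0.0, 1.1, 2.3, 4.4, 8.9])   # red-shift of the Au LSPR band, nm

slope, intercept = np.polyfit(conc, shift_nm, 1)  # linear calibration: shift = slope*conc + intercept

unknown_shift = 3.2                               # nm, measured for an unknown sample
estimated_conc = (unknown_shift - intercept) / slope
print(f"estimated concentration ~ {estimated_conc:.2e} exosomes/mL")
```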

Keywords: exosomes, gold nano-islands, microfluidics, plasmonic biosensing

Procedia PDF Downloads 161
2610 Electrical Power Distribution Reliability Improvement by Retrofitting 4.16 kV Vacuum Contactor in Badak LNG Plant

Authors: David Hasurungan

Abstract:

The objective of this paper is to assess the power distribution reliability improvement achieved by retrofitting obsolete vacuum contactors. A case study in the Badak Liquefied Natural Gas (LNG) plant is presented. To support plant operation, Badak LNG is equipped with 4.16 kV switchgear supplying the storage and loading facilities, utilities facilities, and train facilities. However, two of the sixteen switchgears have a problem: the obsolescence of their vacuum contactors. In addition, the same switchgear has also suffered an electrical fault due to contact finger misalignment. In order to improve the reliability of the switchgear, a vacuum contactor retrofit project was carried out. The retrofit introduces a new vacuum contactor design. A comparison between the existing design and the new design is presented in this paper, and the reliability assessment and calculation are performed using the ReliaSoft 7 software.

Keywords: reliability, obsolescence, retrofit, vacuum contactor

Procedia PDF Downloads 283
2609 One Step Further: Pull-Process-Push Data Processing

Authors: Romeo Botes, Imelda Smit

Abstract:

In today's modern age of technology, vast amounts of data need to be processed in real time to keep users satisfied. This data comes from various sources and in many formats, including electronic and mobile devices such as GPRS modems and GPS devices. These devices make use of different protocols, including TCP, UDP, and HTTP/S, for data communication to web servers and eventually to users. The data obtained from these devices may provide valuable information to users, but it is mostly in an unreadable format which needs to be processed to provide information and business intelligence. This data is not always current; it is mostly historical data. The data is also not subject to the consistency and redundancy measures that most other data usually is. Most important to the users is that the data be processed into a readable format when it is entered into the database. To accomplish this, programmers build processing programs and scripts to decode and process the information stored in databases. Programmers make use of various techniques in such programs, but sometimes neglect the effect some of these techniques may have on database performance. One technique generally used is to pull data from the database server, process it and push it back to the database server in one single step. Since the processing of the data usually takes some time, this keeps the database busy and locked for the period of time that the processing takes place. Because of this, it decreases the overall performance of the database server and therefore of the system. This paper follows on a paper discussing the performance increase that may be achieved by utilizing array lists along with a pull-process-push data processing technique split into three steps. The purpose of this paper is to expand the number of clients when comparing the two techniques, in order to establish the impact this may have on the performance of the CPU, storage and processing time.
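
The sketch below illustrates the three-step variant discussed here, buffering rows in an in-memory list so the database is touched only briefly at the start and end; the table, column and function names are hypothetical and the decoding step is a stand-in for real protocol parsing.

```python
# Pull-process-push in three separate steps, buffering rows in a list so the database
# is only held briefly at the start and end. Table and column names are hypothetical.
import sqlite3

def decode(raw):                       # stand-in for real GPRS/GPS payload decoding
    return raw.strip().upper()

conn = sqlite3.connect("telemetry.db")
cur = conn.cursor()

# Step 1: pull - read the unprocessed rows and release the database quickly
rows = cur.execute("SELECT id, raw_payload FROM readings WHERE processed = 0").fetchall()

# Step 2: process - decode entirely in memory (no open transaction while this runs)
processed = [(decode(raw), row_id) for row_id, raw in rows]

# Step 3: push - write the results back in one short batch
cur.executemany("UPDATE readings SET decoded = ?, processed = 1 WHERE id = ?", processed)
conn.commit()
conn.close()
```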

Keywords: performance measures, algorithm techniques, data processing, push data, process data, array list

Procedia PDF Downloads 232
2608 Development of an Implicit Physical Influence Upwind Scheme for Cell-Centered Finite Volume Method

Authors: Shidvash Vakilipour, Masoud Mohammadi, Rouzbeh Riazi, Scott Ormiston, Kimia Amiri, Sahar Barati

Abstract:

An essential component of a finite volume method (FVM) is the advection scheme that estimates values on the cell faces based on the calculated values at the nodes or cell centers. The most widely used advection schemes are upwind schemes. These schemes have been developed in FVM on different kinds of structured and unstructured grids. In this research, the physical influence scheme (PIS) is developed for a cell-centered FVM that uses an implicit coupled solver. Results are compared with the exponential differencing scheme (EDS) and the skew upwind differencing scheme (SUDS). The accuracy of these schemes is evaluated for a lid-driven cavity flow at Re = 1000, 3200, and 5000 and a backward-facing step flow at Re = 800. Simulations show considerable differences between the results of the EDS scheme and the benchmarks, especially for the lid-driven cavity flow at high Reynolds numbers. These differences occur due to false diffusion. Comparing the SUDS and PIS schemes shows relatively close results for the backward-facing step flow and different results for the lid-driven cavity flow. The poor results of SUDS in the lid-driven cavity flow can be related to its lack of sensitivity to the pressure difference between the cell face and upwind points, which is critical for the prediction of such vortex-dominant flows.
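
To make the upwind idea in the first sentence concrete for readers outside CFD, the sketch below advances a 1D scalar advection equation with first-order upwind face values; it illustrates only the generic upwind concept, not the PIS, EDS or SUDS formulations compared in the paper.

```python
# First-order upwind advection sketch in 1D: each face value is taken from the upstream cell.
# Illustrates the generic upwind idea only, not the PIS/EDS/SUDS schemes of the paper.
import numpy as np

nx, L, u, cfl = 100, 1.0, 1.0, 0.5
dx = L / nx
dt = cfl * dx / abs(u)

phi = np.where(np.linspace(0, L, nx) < 0.2, 1.0, 0.0)    # initial step profile

for _ in range(80):
    # Upwind face values: for u > 0 the right face of a cell takes that cell's own value
    face = phi if u > 0 else np.roll(phi, -1)
    flux = u * face
    phi = phi - dt / dx * (flux - np.roll(flux, 1))       # finite-volume update (periodic domain)

print(phi.round(2))
```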

Keywords: cell-centered finite volume method, coupled solver, exponential differencing scheme (EDS), physical influence scheme (PIS), pressure weighted interpolation method (PWIM), skew upwind differencing scheme (SUDS)

Procedia PDF Downloads 271
2607 Towards Binder-Free and Self Supporting Flexible Supercapacitor from Carbon Nano-Onions and Their Composite with CuO Nanoparticles

Authors: Debananda Mohapatra, Subramanya Badrayyana, Smrutiranjan Parida

Abstract:

Recognizing the upcoming era of carbon nanostructures and their revolutionary applications, we investigated the formation and supercapacitor application of highly pure and hydrophilic carbon nano-onions (CNOs) prepared by an economical one-step flame-synthesis procedure. The facile and scalable method uses an easily available organic carbon source, clarified butter, and avoids the use of any catalyst, sophisticated instrumentation, high vacuum, or post-processing purification procedure. The active material was conformally coated onto a locally available cotton wipe by a "sonicating and drying" process to obtain novel, lightweight, inexpensive, flexible, binder-free electrodes with strong adhesion between the nanoparticles and the porous wipe. This interesting electrode with CNO as the active material delivers a specific capacitance of 102.16 F/g, an energy density of 14.18 Wh/kg and a power density of 2448 W/kg, which are the highest values reported so far in a symmetrical two-electrode cell configuration with 1 M Na2SO4 as the electrolyte. Incorporation of CuO nanoparticles into these functionalized CNOs by a one-step hydrothermal method adds up to a significant specific capacitance of 420 F/g, with deliverable energy and power densities of 58.33 Wh/kg and 4228 W/kg, respectively. The free-standing CNO as well as CNO-CuO composite electrodes showed excellent cyclic performance and stability, retaining 95% and 90% of the initial capacitance, respectively, even after 5000 charge-discharge cycles at a current density of 5 A/g. This work presents a new platform for high-performance supercapacitors for next-generation wearable electronic devices.
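
For context, the sketch below shows how such headline metrics are commonly derived from a galvanostatic discharge in a two-electrode cell (C = IΔt/(mΔV), E = CV²/7.2, P = 3600E/Δt); the input values are illustrative and merely of the same order as the reported results.

```python
# Supercapacitor metrics from a galvanostatic discharge in a two-electrode cell.
# Illustrative input values, chosen only to be of the same order as the results reported above.
I = 0.005      # discharge current, A
dt = 20.0      # discharge time, s
dV = 1.0       # discharge voltage window (after IR drop), V
m = 0.001      # total active mass of both electrodes, g

C_sp = I * dt / (m * dV)          # specific capacitance, F/g
E = C_sp * dV**2 / 7.2            # energy density, Wh/kg (with C in F/g and V in volts)
P = E * 3600.0 / dt               # average power density, W/kg

print(f"C = {C_sp:.1f} F/g, E = {E:.1f} Wh/kg, P = {P:.0f} W/kg")
```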

Keywords: binder-free, flame synthesis, flexible, carbon nano-onion

Procedia PDF Downloads 184
2606 Development and Evaluation of a Psychological Adjustment and Adaptation Status Scale for Breast Cancer Survivors

Authors: Jing Chen, Jun-E Liu, Peng Yue

Abstract:

Objective: The objective of this study was to develop a psychological adjustment and adaptation status scale for breast cancer survivors and to examine the reliability and validity of the scale. Method: 37 breast cancer survivors were recruited for the qualitative research; a five-dimension theoretical framework and an item pool of 150 items were derived from the interview data. In order to evaluate and select items and establish preliminary validity and reliability for the original scale, the suggestions of study group members, experts and breast cancer survivors were taken into account, and statistical methods were applied step by step in a sample of 457 breast cancer survivors. Results: An original 24-item scale was developed. The five dimensions "domestic affections", "interpersonal relationship", "attitude of life", "health awareness" and "self-control/self-efficacy" explained 58.053% of the total variance. The content validity was assessed by experts; the CVI was 0.92. The construct validity was examined in a sample of 264 breast cancer survivors. The fit indexes of the confirmatory factor analysis (CFA) showed a good fit of the five-dimension model. The criterion-related validity of the total scale with the PTGI was satisfactory (r = 0.564, p < 0.001). The internal consistency reliability and test-retest reliability were tested. Cronbach's alpha (0.911) showed good internal consistency reliability, and the intraclass correlation coefficient (ICC = 0.925, p < 0.001) showed satisfactory test-retest reliability. Conclusions: The scale is brief and easy to understand and is suitable for breast cancer patients whose physical strength and energy are limited.
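
A brief sketch of the internal-consistency check reported above follows: computing Cronbach's alpha from a respondents-by-items score matrix; the scores are random placeholders, not the survey data.

```python
# Cronbach's alpha from a respondents x items score matrix (placeholder random data).
import numpy as np

def cronbach_alpha(scores):
    """scores: 2D array, rows = respondents, columns = scale items."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
demo = rng.integers(1, 6, size=(100, 24))   # 100 respondents x 24 items; real items would be correlated
print(f"alpha = {cronbach_alpha(demo):.3f}")
```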

Keywords: breast cancer survivors, rehabilitation, psychological adaption and adjustment, development of scale

Procedia PDF Downloads 503
2605 Vibration Analysis of Stepped Nanoarches with Defects

Authors: Jaan Lellep, Shahid Mubasshar

Abstract:

A numerical solution is developed for simply supported nanoarches based on the non-local theory of elasticity. The nanoarch under consideration has a step-wise variable cross-section and is weakened by crack-like defects. It is assumed that the cracks are stationary and the mechanical behaviour of the nanoarch can be modeled by Eringen’s non-local theory of elasticity. The physical and thermal properties are sensitive with respect to changes of dimensions in the nano level. The classical theory of elasticity is unable to describe such changes in material properties. This is because, during the development of the classical theory of elasticity, the speculation of molecular objects was avoided. Therefore, the non-local theory of elasticity is applied to study the vibration of nanostructures and it has been accepted by many researchers. In the non-local theory of elasticity, it is assumed that the stress state of the body at a given point depends on the stress state of each point of the structure. However, within the classical theory of elasticity, the stress state of the body depends only on the given point. The system of main equations consists of equilibrium equations, geometrical relations and constitutive equations with boundary and intermediate conditions. The system of equations is solved by using the method of separation of variables. Consequently, the governing differential equations are converted into a system of algebraic equations whose solution exists if the determinant of the coefficients of the matrix vanishes. The influence of cracks and steps on the natural vibration of the nanoarches is prescribed with the aid of additional local compliance at the weakened cross-section. An algorithm to determine the eigenfrequencies of the nanoarches is developed with the help of computer software. The effects of various physical and geometrical parameters are recorded and drawn graphically.
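
A generic sketch of the final numerical step is shown below: scanning the frequency axis for sign changes of the characteristic determinant and refining each root with a bracketing solver; the matrix used is a toy stand-in, not the nanoarch system of equations.

```python
# Eigenfrequency search sketch: find values of omega where det(M(omega)) = 0 by a sign-change
# scan plus bracketed root refinement. M(omega) below is a toy stand-in for the real system matrix.
import numpy as np
from scipy.optimize import brentq

def char_det(omega):
    M = np.array([[np.cos(omega), 1.0],
                  [1.0, np.cosh(omega) * np.cos(omega) + 1.0]])   # placeholder matrix
    return np.linalg.det(M)

omegas = np.linspace(0.1, 15.0, 3000)
vals = [char_det(w) for w in omegas]

roots = [brentq(char_det, omegas[i], omegas[i + 1])
         for i in range(len(omegas) - 1) if vals[i] * vals[i + 1] < 0]
print(np.round(roots, 4))    # candidate natural frequencies (dimensionless in this toy example)
```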

Keywords: crack, nanoarches, natural frequency, step

Procedia PDF Downloads 120
2604 A Randomized, Controlled Trial to Test Behavior Change Techniques to Improve Low Intensity Physical Activity in Older Adults

Authors: Ciaran Friel, Jerry Suls, Mark Butler, Patrick Robles, Samantha Gordon, Frank Vicari, Karina W. Davidson

Abstract:

Physical activity guidelines focus on increasing moderate-intensity activity for older adults, but adherence to recommendations remains low. This is despite the fact that scientific evidence supports that any increase in physical activity is positively correlated with health benefits. Behavior change techniques (BCTs) have demonstrated effectiveness in reducing sedentary behavior and promoting physical activity. This pilot study uses a Personalized Trials (N-of-1) design to evaluate the efficacy of using four BCTs to promote an increase in low-intensity physical activity (2,000 steps of walking per day) in adults aged 45-75 years old. The 4 BCTs tested were goal setting, action planning, feedback, and self-monitoring. BCTs were tested in random order and delivered by text message prompts requiring participant engagement. The study recruited health system employees in the target age range, without mobility restrictions and demonstrating interest in increasing their daily activity by a minimum of 2,000 steps per day for a minimum of five days per week. Participants were sent a Fitbit® fitness tracker with an established study account and password. Participants were recommended to wear the Fitbit device 24/7 but were required to wear it for a minimum of ten hours per day. Baseline physical activity was measured by Fitbit for two weeks. In the 8-week intervention phase of the study, participants received each of the four BCTs, in random order, for a two-week period. Text message prompts were delivered daily each morning at a consistent time. All prompts required participant engagement to acknowledge receipt of the BCT message. Engagement is dependent upon the BCT message and may have included recording that a detailed plan for walking has been made or confirmed a daily step goal (action planning, goal setting). Additionally, participants may have been directed to a study dashboard to view their step counts or compare themselves to their baseline average step count (self-monitoring, feedback). At the end of each two-week testing interval, participants were asked to complete the Self-Efficacy for Walking Scale (SEW_Dur), a validated measure that assesses the participant’s confidence in walking incremental distances, and a survey measuring their satisfaction with the individual BCT that they tested. At the end of their trial, participants received a personalized summary of their step data in response to each individual BCT. The analysis will examine the novel individual-level heterogeneity of treatment effect made possible by N-of-1 design and pool results across participants to efficiently estimate the overall efficacy of the selected behavioral change techniques in increasing low-intensity walking by 2,000 steps, five days per week. Self-efficacy will be explored as the likely mechanism of action prompting behavior change. This study will inform the providers and demonstrate the feasibility of an N-of-1 study design to effectively promote physical activity as a component of healthy aging.

Keywords: aging, exercise, habit, walking

Procedia PDF Downloads 82
2603 Quasi-Federal Structure of India: Fault-Lines Exposed in COVID-19 Pandemic

Authors: Shatakshi Garg

Abstract:

As the world continues to grapple with the COVID-19 pandemic, India, one of the most populous democratic federal developing nations, continues to report the highest number of active cases and deaths and struggles to keep its health infrastructure from succumbing to the exponentially growing requirement of hospital beds, ventilators and oxygen needed to save the thousands of lives at risk daily. In this context, the paper outlines the handling of the COVID-19 pandemic since it first hit India in January 2020 – the policy decisions taken by the Union and the State governments from the larger perspective of its federal structure. The Constitution of India, adopted in 1950, enshrined the federal relations between the Union and the State governments by way of the constitutional division of revenue-raising and expenditure responsibilities. By way of the 73rd and 74th Amendments to the Constitution, powers and functions were devolved further to the third tier, namely the local governments, with the intention of further strengthening the federal structure of the country. However, with time, several constitutional amendments have shifted the scales in favour of the Union government. The paper briefly traces some of these major amendments as well as some policy decisions which made federal relations asymmetrical. Drawing on data on key fiscal parameters, it establishes how the Union government gained the upper hand at the expense of weak State governments, reducing the local governments to mere constitutional bodies without adequate funds and fiscal autonomy to carry out their assigned functions. This quasi-federal structure of India, with the Union government amassing the majority of power in terms of 'funds, functions and functionaries', exposed the perils of weakening sub-national governments once the COVID-19 pandemic struck. With a complex quasi-federal structure and a heterogeneous population of over 1.3 billion, the announcement of a sudden nationwide lockdown by the Union government was followed by the plight of migrants struggling to reach home safely in the absence of adequate arrangements for travel and a safety net from the Union government. With the limited autonomy they enjoy, the States were largely dictated to by the Union government on most aspects of handling the pandemic, including protocols for lockdown, re-opening after lockdown, and the vaccination drive. The paper suggests that certain policy decisions, such as demonetization and the introduction of GST, taken by the incumbent government since 2014, when it first came to power, have further weakened the State and local governments and have amounted to catastrophic losses, both economic and human. The roles of the executive, the legislature and the judiciary are explored to establish how all three arms of the government have worked simultaneously to further weaken and expose the fault lines of the federal structure of India, which has left the nation incapacitated in handling this pandemic. The paper then argues the urgency of re-examining the federal structure of the country and undertaking measures that strengthen sub-national governments and restore the federal spirit enshrined in the Constitution, so as to avoid mammoth human and economic losses from a pandemic of this sort.

Keywords: COVID-19 pandemic, India, federal structure, economic losses

Procedia PDF Downloads 161
2602 Thermal Securing of Electrical Contacts inside Oil Power Transformers

Authors: Ioan Rusu

Abstract:

In the operation of 110 kV/MV power transformers in substations, these transformers carry fault currents resulting from MV line damage. Defective electrical contacts heat up when fault currents pass through them. When high temperatures around 135 °C are reached, the electrical insulating oil in the vicinity of the faulty contacts releases gases and activates the electrical protection. To avoid auto-flammability of the electro-insulating oil, we designed a system for thermally securing defective electrical contacts by pouring fire-resistant polyurethane foam, mastic or fireproof mortar inside a cardboard electro-insulating cylinder. Practical experience with the operation of 110 kV/MV power transformers in electro-insulating oil has recorded some transient disconnections commanded by the gas protection at internal defects. In normal operation and at optimal load, nominal currents do not require thermal securing of the contacts inside electrical transformers; contacts are made at fabrication according to the design, or during repair by soldering. In the case of external short circuits close to the substation, the contacts inside electrical transformers, even if well made with resistances of about Rcontact = 10⁻⁶ Ω, are subjected to short-circuit currents of the order of 10 kA-20 kA, which lead to the dissipation of significant electric power, 100 W-400 W, at the contact. Under internal or external factors acting on the electrical contacts, including electrodynamic forces during short circuits, the contacts can degrade over time to values in the range of 10⁻⁴ Ω to 10⁻⁵ Ω, and if the protection action time is long, of the order of seconds, the power dissipated at the electrical contacts reaches high values of 1.0 kW to 40.0 kW. This power leads to strong local heating, of hundreds of degrees Celsius, and can initiate self-ignition and burning of the oil in the vicinity of the contacts, with operation of the gas relay. Degradation of the electrical contacts inside power transformers cannot be ruled out over the duration of their operation. In order to avoid oil burning with gas release near the electrical contacts at short-circuit currents of 10 kA-20 kA, we have outlined the following solution: covering the electrical contacts in fireproof materials that prevent direct burning of the oil at short circuit and allow the heat from the electrical contact to be transmitted along the conductors and dissipated gradually over time into a large cooling volume. The flame-retardant materials are polyurethane foam, mastic and cement (concrete). In the normal operating condition of the transformer, the coil conductors are insulated with paper and insulating oil. The ignition points of these two components are approximately 135 °C for the oil and 200 °C for the paper. In the case of a faulty electrical contact of about 10⁻³ Ω, at short circuit the temperature can reach, for a short time, values of 300 °C-400 °C, which ignite the paper and also the oil. The burning oil releases local gases that disconnect the power transformer. Thermally securing the electrical contacts inside the transformer, in a cardboard tube with polyurethane foam, mastic or cement, avoids gas release and operation of the gas protection.
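
The power figures quoted above follow directly from P = I²R; the short sketch below reproduces that arithmetic for the contact resistances and short-circuit currents mentioned in the text.

```python
# Joule heating at a contact: P = I^2 * R, for the currents and resistances quoted above.
currents = [10e3, 20e3]                 # short-circuit currents, A
resistances = [1e-6, 1e-5, 1e-4]        # contact resistance: sound contact vs. degraded contact, ohm

for R in resistances:
    for I in currents:
        P = I**2 * R
        print(f"R = {R:.0e} ohm, I = {I/1e3:.0f} kA -> P = {P/1e3:.1f} kW")
```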

Keywords: power transformer, oil insulation, electrical contacts, Buchholz relay

Procedia PDF Downloads 144
2601 Mega Development Projects' Problems and Challenges From a Social Science Perspective: A Critical Review

Authors: Shakir Ullah

Abstract:

This article draws on social science scholarship to explore the challenges that megaprojects face before and after implementation. It also sheds light on the problems directly and indirectly caused by mega development projects in the areas where they are implemented. Using a qualitative approach based on thematic analysis, the article draws on recent literature, including published articles, government reports and books, to cite examples of megaprojects worldwide. The study reports that mega development projects are a necessary element of the modern infrastructural development process, as they are widely presented as flagship examples of urban socioeconomic development. They are introduced and implemented by multinational companies with the support of state authorities with the stated aim of producing a common good. However, they are not free of critical challenges and bring both implicit and explicit problems to the targeted localities. The article draws on insights from social science research to suggest how to reduce the challenges faced by project implementers and the problems experienced by local people as a result of the fault lines of such projects.

Keywords: development, mega-projects, challenges, problems

Procedia PDF Downloads 92
2600 Hardware Error Analysis and Severity Characterization in Linux-Based Server Systems

Authors: Nikolaos Georgoulopoulos, Alkis Hatzopoulos, Konstantinos Karamitsios, Konstantinos Kotrotsios, Alexandros I. Metsai

Abstract:

In modern server systems, business-critical applications run on different types of infrastructure, such as cloud systems, physical machines and virtualized environments. Often, due to high load and ageing, various hardware faults occur in servers and translate into errors, resulting in malfunctions or even complete server breakdown. CPU, RAM and hard drives (HDD) are the hardware components whose errors concern server administrators the most. In this work, selected RAM, HDD and CPU errors that have been observed in, or can be simulated and recorded in, kernel ring buffer log files from two groups of Linux servers are investigated. Moreover, a severity characterization is given for each error type. A better understanding of such errors can lead to more efficient analysis of the kernel logs that are usually exploited for fault diagnosis and prediction. In addition, this work summarizes ways of simulating hardware errors in RAM and HDD in order to test the error detection and correction mechanisms of a Linux server.
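
As a rough illustration of the kind of kernel-log analysis described above, the sketch below scans ring buffer (dmesg-style) lines for a few well-known RAM, HDD and CPU error signatures and maps each to a coarse severity; the patterns and severity labels are illustrative assumptions, not the classification used by the authors.

```python
import re

# Illustrative error signatures for kernel ring buffer (dmesg) lines.
# Patterns and severity labels are assumptions for demonstration only.
SIGNATURES = [
    (re.compile(r"Machine Check Exception|mce: .*Hardware Error", re.I), "CPU", "critical"),
    (re.compile(r"EDAC .*UE|uncorrectable", re.I),                       "RAM", "critical"),
    (re.compile(r"EDAC .*CE|corrected error", re.I),                     "RAM", "warning"),
    (re.compile(r"I/O error, dev sd|ata\d+.*failed command", re.I),      "HDD", "major"),
]

def classify(line: str):
    """Return (component, severity) for a kernel log line, or None if no signature matches."""
    for pattern, component, severity in SIGNATURES:
        if pattern.search(line):
            return component, severity
    return None

if __name__ == "__main__":
    sample = [
        "mce: [Hardware Error]: CPU 3: Machine Check Exception: 4 Bank 2",
        "EDAC MC0: 1 CE memory read error on CPU_SrcID#0_Channel#1",
        "blk_update_request: I/O error, dev sda, sector 123456",
    ]
    for line in sample:
        print(classify(line), "<-", line)
```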

Keywords: hardware errors, kernel logs, Linux servers, RAM, hard disk, CPU

Procedia PDF Downloads 145
2599 Numerical Simulation of a Combined Impact of Cooling and Ventilation on the Indoor Environmental Quality

Authors: Matjaz Prek

Abstract:

The impact of three different combinations of cooling and ventilation systems on the indoor environmental quality (IEQ) has been studied. Chilled ceiling cooling combined with displacement ventilation, cooling with a fan coil unit, and cooling with flat wall displacement outlets were compared. All three combinations were evaluated against whole-body and local thermal comfort criteria as well as ventilation effectiveness. The comparison was made on the basis of numerical simulation with DesignBuilder and Fluent. The numerical simulations were carried out in two steps. First, the DesignBuilder software environment was used to model the building's thermal performance and to evaluate the interaction between the environment and the building; the heat gains of the building and of the individual space, as well as the heat losses on the boundary surfaces of the room, were calculated. In the second step, the Fluent software environment was used to simulate the response of the indoor environment, evaluating the interaction between the building and its occupants using the results obtained in the first step. Among the systems studied, the ceiling cooling system in combination with displacement ventilation was found to be the most suitable, as it offers a high level of thermal comfort with adequate ventilation effectiveness. Fan coil cooling proved inadequate from the standpoint of thermal comfort, whereas flat wall displacement outlets were inadequate from the standpoint of ventilation effectiveness. The study showed the need to evaluate the indoor environment not solely from the energy-use point of view but also from the point of view of indoor environmental quality.
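
The ventilation effectiveness used to compare such systems is commonly expressed as the contaminant-removal effectiveness eps = (Ce - Cs) / (Cz - Cs), where Cs, Ce and Cz are the contaminant concentrations in the supply air, the exhaust air and the occupied zone. The sketch below only illustrates this standard definition; the metric and the example values are not taken from the paper.

```python
def ventilation_effectiveness(c_supply: float, c_exhaust: float, c_zone: float) -> float:
    """Contaminant-removal (ventilation) effectiveness:
        eps = (C_exhaust - C_supply) / (C_zone - C_supply)
    eps ~ 1 for fully mixed air; values above 1 indicate that displacement-type
    airflow removes contaminants from the occupied zone more effectively.
    Concentrations can be in any consistent unit (e.g. ppm of CO2)."""
    return (c_exhaust - c_supply) / (c_zone - c_supply)


# Illustrative values (ppm CO2), not results from the simulations in the paper:
print(ventilation_effectiveness(400.0, 900.0, 800.0))  # 1.25 -> displacement-like behaviour
print(ventilation_effectiveness(400.0, 900.0, 900.0))  # 1.0  -> fully mixed air
```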

Keywords: cooling, ventilation, thermal comfort, ventilation effectiveness, indoor environmental quality, IEQ, computational fluid dynamics

Procedia PDF Downloads 177
2598 A Data-Driven Monitoring Technique Using Combined Anomaly Detectors

Authors: Fouzi Harrou, Ying Sun, Sofiane Khadraoui

Abstract:

Anomaly detection based on Principal Component Analysis (PCA) has been studied intensively and widely applied to multivariate processes with highly cross-correlated process variables. Monitoring metrics such as Hotelling's T² and the Q statistic are usually used in PCA-based monitoring to elucidate pattern variations in the principal and residual subspaces, respectively. However, these metrics are ill-suited to detecting small faults. In this paper, Exponentially Weighted Moving Average (EWMA) schemes based on the T² and Q statistics, T²-EWMA and Q-EWMA, were developed for detecting faults in the process mean. The performance of the proposed methods was compared with that of the conventional PCA-based fault detection method using synthetic data. The results clearly show the benefit and the effectiveness of the proposed methods over the conventional PCA method, especially for detecting small faults in highly correlated multivariate data.
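
A minimal sketch of the idea, smoothing the PCA residual Q statistic with an EWMA before comparing it to a control limit, is given below; the forgetting factor, the simple mean-plus-three-sigma limit and the synthetic data are illustrative assumptions, not the authors' exact scheme.

```python
import numpy as np

def pca_q_ewma(X_train, X_test, n_components=1, lam=0.2, n_sigma=3.0):
    """Illustrative PCA-based monitoring with an EWMA-smoothed Q (SPE) statistic.
    lam is the EWMA forgetting factor; the control limit is a simple
    mean + n_sigma * std estimate from the smoothed training residuals."""
    # Scale with training statistics and fit the PCA model via SVD.
    mu, sigma = X_train.mean(axis=0), X_train.std(axis=0)
    Z_train = (X_train - mu) / sigma
    _, _, Vt = np.linalg.svd(Z_train, full_matrices=False)
    P = Vt[:n_components].T                      # retained loadings

    def q_statistic(Z):
        residual = Z - Z @ P @ P.T               # part not explained by the PCA model
        return np.sum(residual ** 2, axis=1)

    def ewma(x):
        out, acc = np.empty_like(x), x[0]
        for i, v in enumerate(x):
            acc = lam * v + (1.0 - lam) * acc    # exponential smoothing
            out[i] = acc
        return out

    q_train = ewma(q_statistic(Z_train))
    limit = q_train.mean() + n_sigma * q_train.std()
    q_test = ewma(q_statistic((X_test - mu) / sigma))
    return q_test, limit, q_test > limit         # smoothed statistic, limit, alarms

# Synthetic example: three correlated variables; a small shift is added to one
# variable in the test data to emulate a small fault in the process mean.
rng = np.random.default_rng(0)
base = rng.normal(size=(300, 1))
X = np.hstack([base,
               base + 0.1 * rng.normal(size=(300, 1)),
               base + 0.1 * rng.normal(size=(300, 1))])
X_faulty = X[200:].copy()
X_faulty[:, 0] += 0.5                             # small mean shift in one variable
q, limit, alarms = pca_q_ewma(X[:200], X_faulty)
print(f"limit={limit:.3f}, alarms={alarms.sum()} of {alarms.size}")
```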

Keywords: data-driven method, process control, anomaly detection, dimensionality reduction

Procedia PDF Downloads 283