Search results for: available transfer capability
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3957

507 Clinical and Analytical Performance of Glial Fibrillary Acidic Protein and Ubiquitin C-Terminal Hydrolase L1 Biomarkers for Traumatic Brain Injury in the Alinity Traumatic Brain Injury Test

Authors: Raj Chandran, Saul Datwyler, Jaime Marino, Daniel West, Karla Grasso, Adam Buss, Hina Syed, Zina Al Sahouri, Jennifer Yen, Krista Caudle, Beth McQuiston

Abstract:

The Alinity i TBI test, registered with the Therapeutic Goods Administration (TGA), is a panel of in vitro diagnostic chemiluminescent microparticle immunoassays for the measurement of glial fibrillary acidic protein (GFAP) and ubiquitin C-terminal hydrolase L1 (UCH-L1) in plasma and serum. The Alinity i TBI performance was evaluated in a multi-center pivotal study to demonstrate its capability to assist in determining the need for a CT scan of the head in adult subjects (age 18+) presenting with suspected mild TBI (traumatic brain injury) and a Glasgow Coma Scale score of 13 to 15. TBI has been recognized as an important cause of death and disability and is a growing public health problem; an estimated 69 million people globally experience a TBI annually. Blood-based biomarkers such as GFAP and UCH-L1 have shown utility in predicting acute traumatic intracranial injury on head CT scans after TBI. A pivotal study using prospectively collected archived (frozen) plasma specimens was conducted to establish the clinical performance of the TBI test on the Alinity i system. The specimens were originally collected in a prospective, multi-center clinical study, and testing was performed at three clinical sites in the United States. Performance characteristics such as detection limits, imprecision, linearity, measuring interval, expected values, and interferences were established following Clinical and Laboratory Standards Institute (CLSI) guidance. Of the 1899 mild TBI subjects, 120 had positive head CT scan results; 116 of these 120 specimens had a positive TBI interpretation (sensitivity 96.7%; 95% CI: 91.7%, 98.7%). Of the 1779 subjects with negative CT scan results, 713 had a negative TBI interpretation (specificity 40.1%; 95% CI: 37.8%, 42.4%). The negative predictive value (NPV) of the test was 99.4% (713/717; 95% CI: 98.6%, 99.8%). The analytical measuring interval (AMI) extends from the lower limit of quantitation (LoQ) to the upper LoQ and is determined by the range that demonstrates acceptable performance for linearity, imprecision, and bias. The AMI is 6.1 to 42,000 pg/mL for GFAP and 26.3 to 25,000 pg/mL for UCH-L1. Overall within-laboratory imprecision (20-day) ranged from 3.7 to 5.9% CV for GFAP and 3.0 to 6.0% CV for UCH-L1 when including lot and instrument variances. The Alinity i TBI clinical performance results demonstrated high sensitivity and high NPV, supporting its utility in determining the need for a head CT scan in subjects presenting to the emergency department with suspected mild TBI. The GFAP and UCH-L1 assays show robust analytical performance across a broad concentration range and may serve as a valuable tool to help evaluate TBI patients across the spectrum of mild to severe injury.
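The reported sensitivity, specificity, and NPV follow directly from the counts above. A minimal sketch, assuming Wilson score intervals as one common choice of confidence interval (the study's exact CI method is not stated here), reproduces the point estimates:

```python
from math import sqrt

def wilson_ci(k, n, z=1.96):
    """Wilson score 95% confidence interval for a proportion k/n."""
    p = k / n
    centre = (p + z**2 / (2 * n)) / (1 + z**2 / n)
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / (1 + z**2 / n)
    return centre - half, centre + half

# Counts reported in the abstract
tp, fn = 116, 4          # CT-positive subjects with positive / negative TBI result
tn, fp = 713, 1066       # CT-negative subjects with negative / positive TBI result

sensitivity = tp / (tp + fn)   # 116/120  ~ 0.967
specificity = tn / (tn + fp)   # 713/1779 ~ 0.401
npv         = tn / (tn + fn)   # 713/717  ~ 0.994

print(f"Sensitivity {sensitivity:.1%}, CI {wilson_ci(tp, tp + fn)}")
print(f"Specificity {specificity:.1%}, CI {wilson_ci(tn, tn + fp)}")
print(f"NPV {npv:.1%}, CI {wilson_ci(tn, tn + fn)}")
```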

Keywords: biomarker, diagnostic, neurology, TBI

Procedia PDF Downloads 35
506 Numerical Solution of Steady Magnetohydrodynamic Boundary Layer Flow Due to Gyrotactic Microorganism for Williamson Nanofluid over Stretched Surface in the Presence of Exponential Internal Heat Generation

Authors: M. A. Talha, M. Osman Gani, M. Ferdows

Abstract:

This paper focuses on the study of two-dimensional magnetohydrodynamic (MHD) steady incompressible viscous Williamson nanofluid flow with exponential internal heat generation containing gyrotactic microorganisms over a stretching sheet. The governing equations and auxiliary conditions are reduced to a set of non-linear coupled differential equations with the appropriate boundary conditions using a similarity transformation. The transformed equations are solved numerically through the spectral relaxation method. The influences of various parameters, such as the Williamson parameter γ, power constant λ, Prandtl number Pr, magnetic field parameter M, Peclet number Pe, Lewis number Le, bioconvection Lewis number Lb, Brownian motion parameter Nb, thermophoresis parameter Nt, and bioconvection constant σ, are studied to obtain the momentum, heat, mass, and microorganism distributions. Momentum, heat, mass, and gyrotactic microorganism profiles are explored through graphs and tables. We computed the heat transfer rate, mass flux rate, and the density number of motile microorganisms near the surface. Our numerical results are in good agreement with existing calculations. The residual error of the obtained solutions is determined in order to assess the convergence rate against iteration count. Faster convergence is achieved when internal heat generation is absent. Increasing the magnetic parameter M decreases the momentum boundary layer thickness but increases the thermal boundary layer thickness. It is apparent that the bioconvection Lewis number and the bioconvection parameter have a pronounced effect on the microorganism boundary layer. Increasing the Brownian motion parameter and the Lewis number decreases the thermal boundary layer thickness. Furthermore, the magnetic field parameter and the thermophoresis parameter have a marked effect on the concentration profiles.
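As a hedged illustration of the general workflow (similarity reduction followed by a numerical boundary-value solution), the sketch below solves the classical Blasius boundary-layer equation with SciPy's collocation solver. It is a stand-in only: it does not implement the authors' coupled Williamson nanofluid system or the spectral relaxation method.

```python
import numpy as np
from scipy.integrate import solve_bvp

# Classical Blasius similarity equation f''' + 0.5*f*f'' = 0,
# with f(0) = f'(0) = 0 and f'(inf) = 1, as a simple stand-in for the
# (much larger) coupled similarity system of the Williamson nanofluid.
def rhs(eta, y):
    f, fp, fpp = y
    return np.vstack([fp, fpp, -0.5 * f * fpp])

def bc(ya, yb):
    return np.array([ya[0], ya[1], yb[1] - 1.0])

eta = np.linspace(0, 10, 100)     # truncate "infinity" at eta = 10
y0 = np.zeros((3, eta.size))
y0[1] = eta / eta[-1]             # crude initial guess for f'
sol = solve_bvp(rhs, bc, eta, y0)

print("wall shear f''(0) ≈", sol.sol(0.0)[2])   # ~0.332 for Blasius
```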

Keywords: convection flow, similarity, numerical analysis, spectral method, Williamson nanofluid, internal heat generation

Procedia PDF Downloads 150
505 Unveiling Drought Dynamics in the Cuneo District, Italy: A Machine Learning-Enhanced Hydrological Modelling Approach

Authors: Mohammadamin Hashemi, Mohammadreza Kashizadeh

Abstract:

Droughts pose a significant threat to sustainable water resource management, agriculture, and socioeconomic sectors, particularly in the field of climate change. This study investigates drought simulation using rainfall-runoff modelling in the Cuneo district, Italy, over the past 60-year period. The study leverages the TUW model, a lumped conceptual rainfall-runoff model with a semi-distributed operation capability. Similar in structure to the widely used Hydrologiska Byråns Vattenbalansavdelning (HBV) model, the TUW model operates on daily timesteps for input and output data specific to each catchment. It incorporates essential routines for snow accumulation and melting, soil moisture storage, and streamflow generation. Multiple catchments' discharge data within the Cuneo district form the basis for thorough model calibration employing the Kling-Gupta Efficiency (KGE) metric. A crucial metric for reliable drought analysis is one that can accurately represent low-flow events during drought periods. This ensures that the model provides a realistic picture of water availability during these critical times. Subsequent validation of monthly discharge simulations thoroughly evaluates overall model performance. Beyond model development, the investigation delves into drought analysis using the robust Standardized Runoff Index (SRI). This index allows for precise characterization of drought occurrences within the study area. A meticulous comparison of observed and simulated discharge data is conducted, with particular focus on low-flow events that characterize droughts. Additionally, the study explores the complex interplay between land characteristics (e.g., soil type, vegetation cover) and climate variables (e.g., precipitation, temperature) that influence the severity and duration of hydrological droughts. The study's findings demonstrate successful calibration of the TUW model across most catchments, achieving commendable model efficiency. Comparative analysis between simulated and observed discharge data reveals significant agreement, especially during critical low-flow periods. This agreement is further supported by the Pareto coefficient, a statistical measure of goodness-of-fit. The drought analysis provides critical insights into the duration, intensity, and severity of drought events within the Cuneo district. This newfound understanding of spatial and temporal drought dynamics offers valuable information for water resource management strategies and drought mitigation efforts. This research deepens our understanding of drought dynamics in the Cuneo region. Future research directions include refining hydrological modelling techniques and exploring future drought projections under various climate change scenarios.
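The Kling-Gupta Efficiency used for calibration combines correlation, a variability ratio, and a bias ratio into a single score (1 is a perfect fit). A minimal sketch with hypothetical discharge values rather than the Cuneo data:

```python
import numpy as np

def kge(sim, obs):
    """Kling-Gupta Efficiency (Gupta et al., 2009); 1 indicates a perfect fit."""
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    r     = np.corrcoef(sim, obs)[0, 1]   # linear correlation
    alpha = sim.std() / obs.std()         # variability ratio
    beta  = sim.mean() / obs.mean()       # bias ratio
    return 1.0 - np.sqrt((r - 1)**2 + (alpha - 1)**2 + (beta - 1)**2)

# Hypothetical daily discharges (m^3/s) for illustration only
obs = np.array([12.0, 9.5, 8.1, 20.3, 15.2, 7.4])
sim = np.array([11.2, 10.1, 7.9, 18.8, 16.0, 8.0])
print(f"KGE = {kge(sim, obs):.3f}")
```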

Keywords: hydrologic extremes, hydrological drought, hydrological modelling, machine learning, rainfall-runoff modelling

Procedia PDF Downloads 12
504 Price Gouging in Time of Covid-19 Pandemic: When National Competition Agencies are Weak Institutions that Exacerbate the Effects of Exploitative Economic Behaviour

Authors: Cesar Leines

Abstract:

The social effects of the pandemic are significant and diverse, and most of them have widened the gap of economic inequality. Without a doubt, each country faces difficulties associated with the strengths and weaknesses of its own institutions in addressing these causes and consequences. Around the world, pricing practices that have no connection to production costs have been used extensively in numerous markets beyond those relating to the supply of essential goods and services. Although it is not unlawful to adjust pricing in light of increased demand for certain products, shortages, and disruption of supply chains, illegitimate pricing practices may arise; these tend to transfer wealth from consumers to producers, erode the purchasing power of the former, and make people worse off. High prices with no objective justification indicate a poor state of the competitive process in any market, and the impact of the underlying competition issues leading to inefficiency is increased when national competition agencies are weak and ineffective in enforcing competition law and policy. It has been observed that in countries where competition authorities are perceived as weak or ineffective, price increases across a wide range of products and services were more significant during the pandemic than in countries where the perceived effectiveness of the competition agency is high. Where a competition authority is perceived as highly effective, one whose enforcement and non-enforcement activities fulfill its substantive function of protecting competition as the means to create efficient markets, the price rises observed in markets under its jurisdiction are low. A case study focused on the effectiveness of the national competition agency in Mexico (COFECE) points to institutional weakness as one of the causes leading to excessive pricing. Many factors contribute to its low effectiveness and have, in turn, led to a very significant price hike, potentiated by the pandemic. This paper contributes to the discussion of these factors and proposes different steps that, overall, help COFECE or any other competition agency increase the perception of effectiveness for the benefit of consumers.

Keywords: agency effectiveness, competition, institutional weakness, price gouging

Procedia PDF Downloads 151
503 Investigation of Nucleation and Thermal Conductivity of Waxy Crude Oil on Pipe Wall via Particle Dynamics

Authors: Jinchen Cao, Tiantian Du

Abstract:

Waxy crude oil is prone to crystallization and deposition on the pipeline wall, which causes pipeline clogging and reduces oil and gas gathering and transmission efficiency. In this paper, a mesoscopic-scale dissipative particle dynamics method is employed, and four pipe wall models are constructed: a smooth wall (SW), a hydroxylated wall (HW), a rough wall (RW), and a single-layer graphene wall (GW). Snapshots of the simulation output trajectories show that paraffin molecules interact with each other to form a network structure that constrains water molecules as their nucleation sites. Meanwhile, it is observed that the paraffin molecules on the near-wall side are adsorbed horizontally in the inter-lattice gaps of the solid wall. In the pressure range of 0-50 MPa, pressure changes have little effect on the affinity properties of the SW, HW, and GW walls; for the RW wall, however, the contact angle of paraffin wax was found to decrease with increasing pressure, while that of the water molecules showed the opposite trend. This phenomenon is attributed to the pressure-induced transition of paraffin wax molecules from an amorphous to a crystalline state. Meanwhile, the minimum crystalline phase pressure (MCPP) was proposed to describe the lowest pressure at which crystallization of paraffin molecules occurs. The maximum number of crystalline clusters formed by paraffin molecules at the MCPP in the system showed NSW (0.52 MPa) > NHW (0.55 MPa) > NRW (0.62 MPa) > NGW (0.75 MPa). The highest MCPP and the smallest number of clusters were found on the graphene surface, indicating that the addition of graphene inhibits the crystallization of paraffin deposits on the wall surface. Finally, the thermal conductivity was calculated. The results show that, on the near-wall side, the thermal conductivity changes drastically due to the adsorption crystallization of paraffin waxes, while on the fluid side the thermal conductivity gradually stabilizes; the average thermal conductivities of the four wall models ranged from 0.188 to 0.254 W/(m·K). This study provides a theoretical basis for improving the transport efficiency and heat transfer characteristics of waxy crude oil in terms of wall type, wall roughness, and MCPP.

Keywords: waxy crude oil, thermal conductivity, crystallization, dissipative particle dynamics, MCPP

Procedia PDF Downloads 40
502 Interruption Overload in an Office Environment: Hungarian Survey Focusing on the Factors that Affect Job Satisfaction and Work Efficiency

Authors: Fruzsina Pataki-Bittó, Edit Németh

Abstract:

On the one hand, new technologies and communication tools improve employee productivity and accelerate information and knowledge transfer; on the other hand, information overload and continuous interruptions make it even harder to concentrate at work. It is a great challenge for companies to find the right balance, while there is also an ongoing demand to recruit and retain talented employees who are able to adopt the modern work style and effectively use modern communication tools. For this reason, this research does not focus on objective measures of office interruptions but aims to find the disruption factors that influence the comfort and job satisfaction of employees and how they generally feel at work. The focus of this research is on how employees feel about the different types of interruptions, which ones they themselves identify as hindering factors, and which they feel are stress factors. By identifying and then reducing these destructive factors, job satisfaction can reach a higher level and employee turnover can be reduced. During the research, we collected information from in-depth interviews and questionnaires asking about the work environment, communication channels used in the workplace, individual communication preferences, factors considered as disruptions, and individual steps taken to avoid interruptions. The questionnaire was completed by 141 office workers from several types of workplaces based in Hungary. Even though 66 respondents work at Hungarian offices of multinational companies, the research concerns the characteristics of the Hungarian labor force. The most important result of the research shows that while more than one third of the respondents consider office noise a disturbing factor, personal inquiries are welcome and considered useful, even if in such cases the work environment becomes less suitable for solving tasks requiring concentration. Analyzing office sizes, in an open-space environment the rate of those who consider office noise a disturbing factor is surprisingly lower than in smaller office rooms. Opinions are more diverse regarding information communication technologies. In addition to the interruption factors affecting the employees' job satisfaction, the research also focuses on the role of offices in the 21st century.

Keywords: information overload, interruption, job satisfaction, office environment, work efficiency

Procedia PDF Downloads 207
501 Seismic Perimeter Surveillance System (Virtual Fence) for Threat Detection and Characterization Using Multiple ML Based Trained Models in Weighted Ensemble Voting

Authors: Vivek Mahadev, Manoj Kumar, Neelu Mathur, Brahm Dutt Pandey

Abstract:

Perimeter guarding and protection of critical installations require prompt intrusion detection and assessment to take effective countermeasures. Currently, visual and electronic surveillance are the primary methods used for perimeter guarding. These methods can be costly and complicated, requiring careful planning according to the location and terrain. Moreover, they often struggle to detect stealthy and camouflaged insurgents. The objective of the present work is to devise a surveillance technique using seismic sensors that overcomes the limitations of existing systems. The aim is to improve intrusion detection, assessment, and characterization by utilizing seismic sensors. Most similar systems can only distinguish two types of intrusion, viz., human or vehicle. In our work, we could categorize further to identify types of intrusion activity such as walking, running, group walking, fence jumping, tunnel digging, and vehicular movements. A virtual fence of 60 meters at GCNEP, Bahadurgarh, Haryana, India, was created by installing four underground geophones at a distance of 15 meters each. The signals received from these geophones are then processed to find unique seismic signatures called features. Various feature optimization and selection methodologies, such as LightGBM, Boruta, Random Forest, logistic regression, Recursive Feature Elimination, Chi-squared, and Pearson correlation, were used to identify the best features for training the machine learning models. The trained models were developed using algorithms such as the supervised support vector machine (SVM) classifier, kNN, Decision Tree, Logistic Regression, Naïve Bayes, and Artificial Neural Networks. These models were then used to predict the category of events, employing weighted ensemble voting to analyze and combine their results. The models were trained with 1940 training events, and results were evaluated with 831 test events. It was observed that using weighted ensemble voting improved the prediction performance. In this study, we successfully developed and deployed the virtual fence using geophones. Since these sensors are passive, do not radiate any energy, and are installed underground, it is impossible for intruders to locate and nullify them. Their flexibility, quick and easy installation, low cost, hidden deployment, and unattended surveillance make such systems especially suitable for critical installations and remote facilities with difficult terrain. This work demonstrates the potential of utilizing seismic sensors for creating better perimeter guarding and protection systems using multiple machine learning models in weighted ensemble voting. In this study, the virtual fence achieved an intruder detection efficiency of over 97%.
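A minimal sketch of weighted soft voting over the classifier families named above, using scikit-learn; the synthetic features, labels, and weights are placeholders rather than the GCNEP geophone data:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

# Placeholder dataset sized like the study (1940 training + 831 test events)
X, y = make_classification(n_samples=2771, n_features=20, n_classes=3,
                           n_informative=8, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=1940, random_state=0)

ensemble = VotingClassifier(
    estimators=[("svm", SVC(probability=True)),
                ("knn", KNeighborsClassifier()),
                ("tree", DecisionTreeClassifier()),
                ("logreg", LogisticRegression(max_iter=1000)),
                ("nb", GaussianNB()),
                ("ann", MLPClassifier(max_iter=1000))],
    voting="soft",
    weights=[3, 1, 1, 2, 1, 2],   # per-model weights would come from validation scores
)
ensemble.fit(X_tr, y_tr)
print("test accuracy:", ensemble.score(X_te, y_te))
```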

Keywords: geophone, seismic perimeter surveillance, machine learning, weighted ensemble method

Procedia PDF Downloads 40
500 Superparamagnetic Sensor with Lateral Flow Immunoassays as Platforms for Biomarker Quantification

Authors: M. Salvador, J. C. Martinez-Garcia, A. Moyano, M. C. Blanco-Lopez, M. Rivas

Abstract:

Biosensors play a crucial role in the detection of molecules nowadays due to their user-friendliness, high selectivity, real-time analysis, and suitability for in-situ applications. Among them, lateral flow immunoassays (LFIAs) stand out among point-of-care bioassay technologies for their affordability, portability, and low cost. They have been widely used for the detection of a vast range of biomarkers, which include not only proteins but also nucleic acids and even whole cells. Although the LFIA has traditionally been a positive/negative test, tremendous efforts are being made to add quantification capability to the method, based on the combination of suitable labels and a proper sensor. One of the most successful approaches involves the use of magnetic sensors for the detection of magnetic labels. Bringing together the required characteristics mentioned before, our research group has developed a biosensor to detect biomolecules. Superparamagnetic nanoparticles (SPNPs) together with LFIAs play the fundamental roles. SPNPs are detected by their interaction with a high-frequency current flowing on a printed micro track. By means of the instant and proportional variation of the impedance of this track provoked by the presence of the SPNPs, a quantitative and rapid measurement of the number of particles can be obtained. This mode of detection requires no external magnetic field, which reduces the device complexity. On the other hand, the major limitation of LFIAs is that they are only qualitative or semiquantitative when traditional gold or latex nanoparticles are used as color labels. Moreover, the necessity of constant ambient conditions to get reproducible results, the exclusive detection of the nanoparticles on the surface of the membrane, and the short durability of the signal are drawbacks that can be advantageously overcome with the design of magnetically labeled LFIAs. The approach followed was to coat the SPNPs, via chemical bonds, with a specific monoclonal antibody that targets the protein under consideration. Then, a sandwich-type immunoassay was prepared by printing onto the nitrocellulose membrane strip a second antibody against a different epitope of the protein (test line) and an IgG antibody (control line). When the sample flows along the strip, the SPNP-labeled proteins are immobilized at the test line, which provides the magnetic signal as described before. Preliminary results using this practical combination for the detection and quantification of Prostate-Specific Antigen (PSA) show the validity and consistency of the technique in the clinical range, where a PSA level of 4.0 ng/mL is the established upper normal limit. Moreover, a limit of detection (LOD) of 0.25 ng/mL was calculated using a factor of 3, according to the IUPAC Gold Book definition. Its versatility has also been proven with the detection of other biomolecules such as troponin I (a cardiac injury biomarker) and histamine.
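A minimal sketch of the "3-sigma" LOD estimate referred to above (LOD = 3 x SD of the blank / calibration slope); the calibration points and blank readings below are hypothetical placeholders, not the PSA data of the study:

```python
import numpy as np

# Hypothetical calibration of magnetic signal vs PSA concentration;
# LOD = 3 * SD(blank) / slope, per the "3-sigma" convention cited above.
conc   = np.array([0.0, 0.5, 1.0, 2.0, 4.0, 8.0])        # ng/mL (placeholder points)
signal = np.array([0.02, 0.41, 0.83, 1.62, 3.20, 6.45])  # arbitrary impedance units
blank  = np.array([0.018, 0.022, 0.019, 0.021, 0.020])   # replicate blank readings

slope, intercept = np.polyfit(conc, signal, 1)
lod = 3 * blank.std(ddof=1) / slope
print(f"slope = {slope:.3f}, LOD ≈ {lod:.3f} ng/mL")
```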

Keywords: biosensor, lateral flow immunoassays, point-of-care devices, superparamagnetic nanoparticles

Procedia PDF Downloads 209
499 Clinical Evidence of the Efficacy of ArtiCovid (Artemisia Annua Extract) on Covid-19 Patients in DRC

Authors: Munyangi Wa Nkola Jerome, MD, MCS, MPH

Abstract:

COVID-19 is a contagious respiratory disease caused by the recently discovered SARS-CoV-2 (Severe Acute Respiratory Syndrome Coronavirus 2). The majority of people infected with SARS-CoV-2 remain asymptomatic or mildly ill, but about 14% of patients develop severe illness requiring hospitalization and oxygen support, and 5% of these are transferred to an intensive care unit. There is an urgent need for new treatments that can be used quickly to avoid the transfer of patients to intensive care and death. Objective: to evaluate the clinical activity (efficacy) of ArtiCovid. Hypothesis: administration of one teaspoon three times a day to COVID-19 patients (symptomatic, mild, or moderate forms) results in the disappearance of symptoms and improvement of biological parameters (including viral suppression). Clinical efficacy was defined as the disappearance of clinical signs after seven days of treatment, a reduction in the rate of patients transferred to intensive care units for mechanical ventilation, and a decrease in mortality related to this infection. Paraclinical efficacy was defined as improvement of biological parameters (mainly D-dimer and CRP). Virological efficacy was defined as suppression of the viral load after seven days of treatment (a negative control test on the seventh day). This pilot study used a standardized solution based on Artemisia annua (ArtiCovid). Authorization was obtained from the health authorities of the province of Central Kongo, volunteer patients were recruited mainly at the Kinkanda Hospital, and tests and analyses were carried out before and after treatment. The protocol obtained the approval of the ethics committee. The 50 patients who completed the treatment were aged between 2 and 70 years, with an average age of 36 years. More than half were male (56%), and one in four patients was a health professional (25%); of the 12 health professionals, 4 were physicians. For those who reported the date of onset of the disease, the average duration between the appearance of the first symptoms and the medical consultation was 5 days. All 50 patients put on ArtiCovid were discharged alive with CRP levels substantially normalized, and after seven to eight days the control test came back negative. This pilot study suggests that ArtiCovid may be effective against COVID-19 infection.

Keywords: ArtiCovid, DRC, COVID-19, SARS-CoV-2

Procedia PDF Downloads 90
498 Observation of a Phase Transition in Adsorbed Hydrogen at 101 Kelvin

Authors: Raina J. Olsen, Andrew K. Gillespie, John W. Taylor, Cristian I. Contescu, Peter Pfeifer, James R. Morris

Abstract:

While adsorbent surfaces such as graphite are known to increase the melting temperature of solid H2, this effect is normally rather small, increasing to 20 Kelvin (K) relative to 14 K in the bulk. An as-yet unidentified phase transition has been observed in a system of H2 adsorbed in a porous, locally graphitic, Saran carbon with sub-nanometer sized pores at temperatures (74-101 K) and pressures ( > 76 bar) well above the critical point of bulk H2 using hydrogen adsorption and neutron scattering experiments. Adsorption data shows a discontinuous pressure jump in the kinetics at 76 bar after nearly an hour of equilibration time, which is identified as an exothermic phase transition. This discontinuity is observed in the 87 K isotherm, but not the 77 K isotherm. At higher pressures, the measured isotherms show greater excess adsorption at 87 K than 77 K. Inelastic neutron scattering measurements also show a striking phase transition, with the amount of high angle scattering (corresponding to large momentum transfer/ large effective mass) increasing by up to a factor of 5 in the novel phase. During the course of the neutron scattering experiment, three of these reversible spectral phase transitions were observed to occur in response to only changes in sample temperature. The novel phase was observed by neutron scattering only at high H2 pressure (123 bar and 187 bar) and temperatures between 74-101 K in the sample of interest, but not at low pressure (30 bar), or in a control activated carbon at 186 bar of H2 pressure. Based on several of the more unusual observations, such as the slow equilibration and the presence of both an upper and lower temperature bound, a reasonable hypothesis is that this phase forms only in the presence of a high concentration of ortho-H2 (nuclear spin S=1). The increase in adsorption with temperature, temperatures which cross the lower temperature bound observed by neutron scattering, indicates that this novel phase is denser. Structural characterization data on the adsorbent shows that it may support a commensurate solid phase denser than those known to exist on graphite at much lower temperatures. Whatever this phase is eventually proven to be, these results show that surfaces can have a more striking effect on hydrogen phases than previously thought.

Keywords: adsorbed phases, hydrogen, neutron scattering, nuclear spin

Procedia PDF Downloads 440
497 Cognitive Radio in Aeronautic: Comparison of Some Spectrum Sensing Technics

Authors: Abdelkhalek Bouchikhi, Elyes Benmokhtar, Sebastien Saletzki

Abstract:

The aeronautical field is experiencing RF spectrum congestion due to the constant increase in the number of flights, aircraft, and on-board telecom systems. In addition, these systems are bulky in size, weight, and energy consumption. Cognitive radio helps solve the spectrum congestion issue in particular through its capacity to detect idle frequency channels, allowing opportunistic exploitation of the RF spectrum. The present work aims to propose a new use case for aeronautical spectrum sharing and to study the performance of three different detection techniques, the energy detector, the matched filter, and the cyclostationary detector, within this aeronautical use case. In the proposed cognitive radio, the spectrum is allocated dynamically, and each cognitive radio follows a cognitive cycle in which spectrum sensing is a crucial step. The goal of sensing is to gather data about the surrounding environment; a cognitive radio can use different sensors: antennas, cameras, accelerometers, thermometers, etc. In the IEEE 802.22 standard, for example, a primary user (PU) always has priority to communicate. When a frequency channel used by the primary user is idle, the secondary user (SU) is allowed to transmit in this channel. The Distance Measuring Equipment (DME) is composed of a UHF transmitter/receiver (interrogator) in the aircraft and a UHF receiver/transmitter on the ground, while future cognitive radios will be used jointly to alleviate the spectrum congestion issue in the aeronautical field; LDACS, for example, is a good candidate, providing two isolated data links: ground-to-air and air-to-ground. The first contribution of the present work is a strategy for sharing the L-band. The adopted spectrum sharing strategy is as follows: the DME plays the role of the PU, which is the licensed user, and the LDACS1 systems are the SUs. The SUs may use the L-band channels opportunistically as long as they do not cause harmful interference affecting the QoS of the DME system. Spectrum sensing is a key step: it helps detect spectrum holes by determining whether the primary signal is present in a given frequency channel. A missed detection of the primary user's presence creates interference between PU and SU and seriously affects the QoS of the legacy radio. In this study, brief definitions, concepts, and the state of the art of cognitive radio are first presented. Then, a study of three communication channel detection algorithms in a cognitive radio context is carried out from the point of view of functions, material requirements, and signal detection capability in the aeronautical field. The detection problem is modeled with the three methods (energy, matched filter, and cyclostationary), and an algorithmic description of these detectors is given. We then study and compare the performance of the algorithms. Simulations were carried out using MATLAB software, and we analyzed the results based on ROC curves for SNR between -10 dB and 20 dB. The three detectors were tested with synthetic and real-world signals.
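The authors' simulations were carried out in MATLAB; as a hedged illustration only, the Python sketch below shows the core idea of the energy detector, i.e., comparing the average received energy against a threshold under the noise-only (H0) and signal-plus-noise (H1) hypotheses. The SNR, sample length, and toy waveform are assumptions, not the DME/LDACS parameters of the study:

```python
import numpy as np

# Monte-Carlo sketch of the energy detector: decide H1 (PU present) when the
# average received energy exceeds a threshold.
rng = np.random.default_rng(0)
N, trials, snr_db = 1024, 5000, -10.0
snr = 10 ** (snr_db / 10)

noise  = rng.normal(size=(trials, N))
signal = np.sqrt(snr) * np.sin(2 * np.pi * 0.1 * np.arange(N))  # toy PU waveform

t_h0 = (noise ** 2).mean(axis=1)             # test statistic, noise only
t_h1 = ((signal + noise) ** 2).mean(axis=1)  # test statistic, PU present

for thr in np.linspace(0.95, 1.15, 5):
    pfa = (t_h0 > thr).mean()   # false-alarm probability
    pd  = (t_h1 > thr).mean()   # detection probability
    print(f"threshold {thr:.3f}: Pfa = {pfa:.3f}, Pd = {pd:.3f}")
```

Sweeping the threshold (or the SNR) and plotting Pd against Pfa yields the ROC curves used in the comparison above.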

Keywords: aeronautic, communication, navigation, surveillance systems, cognitive radio, spectrum sensing, software defined radio

Procedia PDF Downloads 144
496 Wide Dissemination of CTX-M-Type Extended-Spectrum β-Lactamases in Korean Swine Farms

Authors: Young Ah Kim, Hyunsoo Kim, Eun-Jeong Yoon, Young Hee Seo, Kyungwon Lee

Abstract:

Extended-spectrum β-lactamase (ESBL)-producing Escherichia coli from food animals are considered a reservoir for the transmission of ESBL genes to humans. The aim of this study is to assess the prevalence and molecular epidemiology of ESBL-producing E. coli colonization in pigs, farm workers, and farm environments to elucidate the transmission of multidrug-resistant clones from animals to humans. Nineteen pig farms across Korea were enrolled from August to December 2017. ESBL-producing E. coli isolates were detected in 190 pigs, 38 farm workers, and 112 sites of farm environments using ChromID ESBL (bioMerieux, Marcy l'Etoile, France), directly (stool or perirectal swab) or after enrichment (sewage). Antimicrobial susceptibility tests were done with disk diffusion methods, and blaTEM, blaSHV, and blaCTX-M were detected with PCR and sequencing. The genomes of four CTX-M-55-producing E. coli isolates from various sources in one farm were entirely sequenced to assess the relatedness of the strains. Whole genome sequencing (WGS) was performed with the PacBio RS II system (Pacific Biosciences, Menlo Park, CA, USA). Among the 145 isolates in total, the ESBL genotypes were 85 of the CTX-M-1 group (one CTX-M-3, 23 CTX-M-15, one CTX-M-28, 59 CTX-M-55, one CTX-M-69) and 60 of the CTX-M-9 group (41 CTX-M-14, one CTX-M-17, one CTX-M-27, 13 CTX-M-65, 4 CTX-M-102). The rectal colonization rates were 53.2% (101/190) in pigs and 39.5% (15/38) in farm workers. In WGS, sequence types (STs) were determined as ST69 (E. coli isolate PJFH115 from a human carrier), ST457 (two E. coli isolates, PJFE101 recovered from a fence and PJFA1104 from a pig), and ST5899 (E. coli isolate PJFA173 from the other pig). The four plasmids encoding CTX-M-55 (88,456 to 149,674 base pairs), whether of the IncFIB or IncFIC-IncFIB type, shared an IncF backbone furnishing the conjugal elements, suggesting that the genes originated from the same ancestor. In conclusion, the prevalence of ESBL-producing E. coli in swine farms was surprisingly high, and many of the isolates shared common ESBL genotypes of clinical isolates in Korea, such as CTX-M-14, -15, and -55. These could spread by horizontal transfer between isolates from different reservoirs (human-animal-environment).

Keywords: Escherichia coli, extended-spectrum β-lactamase, prevalence, whole genome sequencing

Procedia PDF Downloads 175
495 Statistical Modeling of Constituents in Ash Evolved From Pulverized Coal Combustion

Authors: Esam Jassim

Abstract:

Industries using conventional fossil fuels have an interest in better understanding the mechanism of particulate formation during combustion, since it is responsible for the emission of undesired inorganic elements that directly impact atmospheric pollution levels. Fine and ultrafine particulates tend to escape flue gas cleaning devices into the atmosphere. They also preferentially collect on surfaces in power systems, resulting in increased corrosion, reduced heat transfer in the thermal unit, and severe impacts on human health. This adverse effect manifests particularly in regions of the world where coal is the dominant source of energy. This study highlights the behavior of calcium transformation as mineral grains versus organically associated inorganic components during pulverized coal combustion. The influence of the existing type of calcium on the coarse, fine, and ultrafine mode formation mechanisms is also presented. The impact of two sub-bituminous coals on particle size and calcium composition evolution during combustion is assessed. Three mixed blends, named Blends 1, 2, and 3, are selected according to the ratio of coal A to coal B by weight; the calcium percentage in the original coal increases from Blend 1 to Blend 3. A mathematical model and a new approach to describing constituent distribution are proposed. The analysis of calcium distribution in ash is also modeled using the Poisson distribution. A novel parameter, called the elemental index λ, is introduced as a measure of element distribution. Results show that calcium in ash that was originally present in coal as mineral grains has an index of 17, whereas organically associated calcium transformed to fly ash is best described by an elemental index λ of 7. As an alkaline-earth element, calcium is considered the fundamental element responsible for boiler deficiency, since it is the major player in the mechanism of the ash slagging process. The mechanism of particle size distribution and the mineral species of ash particles are presented using CCSEM and size-segregated ash characteristics. Conclusions are drawn from the analysis of pulverized coal ash generated from a utility-scale boiler.
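A minimal sketch of a Poisson fit of the kind described, assuming the elemental index λ corresponds to the Poisson mean (whose maximum-likelihood estimate is the sample mean); the per-particle counts below are hypothetical placeholders, not the CCSEM data:

```python
import numpy as np
from scipy.stats import poisson

# Hypothetical per-particle calcium "hit" counts for illustration only.
counts = np.array([15, 18, 16, 17, 19, 14, 18, 17, 16, 20])

lam = counts.mean()   # maximum-likelihood estimate of the Poisson mean (elemental index)
k = np.arange(counts.min(), counts.max() + 1)
expected = poisson.pmf(k, lam) * counts.size   # expected frequencies under the fit

print(f"elemental index (lambda) ≈ {lam:.1f}")
for ki, ei in zip(k, expected):
    print(f"k = {ki:2d}: expected frequency {ei:.2f}")
```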

Keywords: coal combustion, inorganic element, calcium evolution, fluid dynamics

Procedia PDF Downloads 307
494 Adapting an Accurate Reverse-time Migration Method to USCT Imaging

Authors: Brayden Mi

Abstract:

Reverse time migration has been widely used in the petroleum exploration industry since the early 1980s to reveal subsurface images and to detect rock and fluid properties. The seismic technology involves the construction of a velocity model through interpretive model building, seismic tomography, or full waveform inversion, and the reverse-time propagation of the acquired seismic data and of the original wavelet used in the acquisition. The methodology has matured from 2D imaging of simple media to present-day handling of full 3D imaging challenges in extremely complex geological conditions. Conventional ultrasound computed tomography (USCT) utilizes travel-time inversion to reconstruct the velocity structure of an organ. With the velocity structure, USCT data can be migrated with the "bent-ray" method. Its seismic counterpart is called Kirchhoff depth migration, in which the source of reflective energy is traced by ray-tracing and summed to produce a subsurface image. It is well known that ray-tracing-based migration has severe limitations in strongly heterogeneous media and irregular acquisition geometries. Reverse time migration (RTM), on the other hand, fully accounts for the wave phenomena, including multiple arrivals and turning rays due to complex velocity structures. It has the capability to fully reconstruct the image detectable within its acquisition aperture. RTM algorithms typically require a rather accurate velocity model and demand high computing power, so they may not be applicable to the real-time imaging normally required in day-to-day medical operations. However, with the improvement of computing technology, this computational bottleneck may not present a challenge in the near future. Present-day RTM algorithms in the seismic industry are typically implemented from a flat datum, but they can be modified to accommodate any acquisition geometry and aperture, as long as sufficient illumination is provided. This flexibility of RTM can be conveniently exploited for USCT imaging if the spatial coordinates of the transmitters and receivers are known and enough data are collected to provide full illumination. This paper proposes an implementation of a full 3D RTM algorithm for USCT imaging to produce an accurate 3D acoustic image based on the phase-shift-plus-interpolation (PSPI) method for wavefield extrapolation. In this method, each acquired data set (shot) is propagated back in time, and a known ultrasound wavelet is propagated forward in time, with PSPI wavefield extrapolation and a piece-wise constant velocity model of the organ (breast). The imaging condition is then applied to produce a partial image. Although each image is subject to the limitation of its own illumination aperture, the stack of multiple partial images produces a full image of the organ, with a much-reduced noise level compared with the individual partial images.
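As a hedged illustration of the wavefield extrapolation underlying PSPI, the sketch below implements a single constant-velocity (Gazdag) phase-shift step in the frequency-wavenumber domain; PSPI itself repeats this step for several reference velocities and interpolates the results, and the grid spacings and velocity used here are assumptions, not the authors' implementation:

```python
import numpy as np

def phase_shift_step(p_tx, dt, dx, dz, v):
    """One constant-velocity downward-continuation step (Gazdag phase shift).

    p_tx : 2-D wavefield sampled in (time, lateral position) at depth z.
    Returns the wavefield extrapolated to depth z + dz.
    """
    nt, nx = p_tx.shape
    P = np.fft.fft2(p_tx)                       # to (omega, kx) domain
    w  = 2 * np.pi * np.fft.fftfreq(nt, dt)     # angular frequency
    kx = 2 * np.pi * np.fft.fftfreq(nx, dx)     # horizontal wavenumber
    W, KX = np.meshgrid(w, kx, indexing="ij")
    kz2 = (W / v) ** 2 - KX ** 2
    kz = np.sqrt(np.maximum(kz2, 0.0))
    propagating = kz2 > 0                       # drop evanescent components
    P = np.where(propagating, P * np.exp(1j * kz * dz), 0.0)
    return np.real(np.fft.ifft2(P))

# Toy usage: random snapshot, 1500 m/s acoustic speed (roughly water/soft tissue)
field = np.random.default_rng(0).normal(size=(256, 128))
next_slice = phase_shift_step(field, dt=1e-7, dx=5e-4, dz=5e-4, v=1500.0)
print(next_slice.shape)
```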

Keywords: illumination, reverse time migration (RTM), ultrasound computed tomography (USCT), wavefield extrapolation

Procedia PDF Downloads 48
493 Chemically Enhanced Primary Treatment: Full Scale Trial Results Conducted at a South African Wastewater Works

Authors: Priyanka Govender, S. Mtshali, Theresa Moonsamy, Zanele Mkwanazi, L. Mthembu

Abstract:

Chemically enhanced primary treatment (CEPT) can be used at wastewater works to improve the quality of the final effluent discharge, provided that the plant has spare anaerobic digestion capacity. CEPT can transfer part of the organic load to the digesters thereby effectively relieving the hydraulic loading on the plant and in this way can allow the plant to continue operating long after the hydraulic capacity of the plant has been exceeded. This can allow a plant to continue operating well beyond its original design capacity, requiring only fairly simple and inexpensive modifications to the primary settling tanks as well as additional chemical costs, thereby delaying or even avoiding the need for expensive capital upgrades. CEPT can also be effective at plants where high organic loadings prevent the wastewater discharge from meeting discharge standards, especially in the case of COD, phosphates and suspended solids. By increasing removals of these pollutants in the primary settling tanks, CEPT can enable the plant to conform to specifications without the need for costly upgrades. Laboratory trials were carried out recently at the Umbilo WWTW in Durban and these were followed by a baseline assessment of the current plant performance and a subsequent full scale trial on the Conventional plant i.e. West Plant. The operating conditions of the plant are described and the improvements obtained in COD, phosphate and suspended solids, are discussed. The PST and plant overall suspended solids removal efficiency increased by approximately 6% during the trial. Details regarding the effect that CEPT had on sludge production and the digesters are also provided. The cost implications of CEPT are discussed in terms of capital costs as well as operation and maintenance costs and the impact of Ferric chloride on the infrastructure was also studied and found to be minimal. It was concluded that CEPT improves the final quality of the discharge effluent, thereby improving the compliance of this effluent with the discharge license. It could also allow for a delay in upgrades to the plant, allowing the plant to operate above its design capacity. This will be elaborated further upon presentation.

Keywords: chemically enhanced, ferric, wastewater, primary

Procedia PDF Downloads 270
492 An Electrochemical Enzymatic Biosensor Based on Multi-Walled Carbon Nanotubes and Poly (3,4 Ethylenedioxythiophene) Nanocomposites for Organophosphate Detection

Authors: Navpreet Kaur, Himkusha Thakur, Nirmal Prabhakar

Abstract:

The use of organophosphate insecticides is one of the most controversial issues in crop production. Many reports indicate that, among the broad range of pesticides, organophosphate (OP) insecticides are mainly involved in acute and chronic poisoning cases. OP detection is therefore of crucial importance for health protection and for food and environmental safety. In our study, a nanocomposite of poly(3,4-ethylenedioxythiophene) (PEDOT) and multi-walled carbon nanotubes (MWCNTs) has been deposited electrochemically onto the surface of fluorine-doped tin oxide (FTO) sheets for the analysis of the OP malathion. The MWCNTs were -COOH functionalized for covalent binding with the amino groups of the AChE enzyme. The PEDOT-MWCNT films exhibited excellent conductivity, enabled fast transfer kinetics, and provided a favourable biocompatible microenvironment for AChE, allowing significant malathion OP detection. The prepared biosensors were characterized by Fourier transform infrared spectrometry (FTIR), field emission scanning electron microscopy (FE-SEM), and electrochemical studies. Optimization studies were done for different parameters, including pH (7.5), AChE concentration (50 mU), substrate concentration (0.3 mM), and inhibition time (10 min). Substrate kinetics were studied for the determination of the Michaelis-Menten constant. The detection limit for malathion was calculated to be 1 fM within the linear range of 1 fM to 1 µM. The activity of the inhibited AChE enzyme was restored to 98% of its original value by treatment with 2-pyridine aldoxime methiodide (2-PAM) (5 mM) for 11 min. The oxime 2-PAM is able to remove malathion from the active site of AChE by means of a trans-esterification reaction. The storage stability and reusability of the prepared biosensor were observed to be 30 days and seven uses, respectively. The application of the developed biosensor was also evaluated for a spiked lettuce sample, with recoveries of malathion ranging between 96% and 98%. The low detection limit makes the developed biosensor a reliable, sensitive, and low-cost option.
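A minimal sketch of a Michaelis-Menten fit of the kind mentioned above (v = Vmax·S/(Km + S)); the substrate and response values are synthetic placeholders, not the PEDOT-MWCNT/AChE measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def michaelis_menten(S, Vmax, Km):
    """Michaelis-Menten rate law: v = Vmax * S / (Km + S)."""
    return Vmax * S / (Km + S)

S = np.array([0.05, 0.1, 0.2, 0.3, 0.5, 1.0])   # substrate concentration (mM), synthetic
v = np.array([0.9, 1.6, 2.4, 2.8, 3.3, 3.8])    # measured response (arbitrary units), synthetic

popt, pcov = curve_fit(michaelis_menten, S, v, p0=[4.0, 0.2])
Vmax, Km = popt
print(f"Vmax ≈ {Vmax:.2f}, Km ≈ {Km:.2f} mM")
```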

Keywords: PEDOT-MWCNT, malathion, organophosphates, acetylcholinesterase, biosensor, oxime (2-PAM)

Procedia PDF Downloads 423
491 To Investigate a Discharge Planning Connect with Long Term Care 2.0 Program in a Medical Center in Taiwan

Authors: Chan Hui-Ya, Ding Shin-Tan

Abstract:

Background and Aim: Discharge planning is considered helpful in reducing hospital length of stay and readmission rate, thereby increasing satisfaction with healthcare for patients and professionals. In order to decrease the waiting time for long-term care and boost the quality of care for patients after discharge from the hospital, the Ministry of Health and Welfare in Taiwan initiated the program "discharge planning connects with long-term care 2.0 services" in 2017. The purpose of this study is to investigate the outcome of the pilot of this program in a medical center. Methods: By purposive sampling, the study chose five wards in a medical center as pilot units. The researchers compared the number of service beds, the number of cases transferred to the long-term care center, and the monthly transfer rates between the pilot units and the other units, and analyzed the basic data, the long-term care service needs, and the approved service items of cases transferred to the long-term care center in the pilot units. Results: From June to September 2017, a total of 92 referrals were made, and 51 patients were enrolled in the pilot program. There was a significant difference in transfer rate between the pilot units and the other units (χ² = 702.67, p < 0.001). Only 20 cases (a 39.2% success rate) were approved for some of the long-term care service items in the pilot units. The most frequently approved item was the respite care service (n = 13; 65%), although it ranked only third among the needs listed during the service-linking process. Among the reasons given by patients who cancelled their requests, 38.71% related to services that could not match the patients' needs and expectations. Conclusion: The results indicate a need to modify the long-term care services to fit the needs of the cases. The researchers suggest estimating potential cases by screening data from hospital informatics systems and hiring more case managers according to the service time of potential cases. Meanwhile, strategies such as shortening the assessment scale and authorizing hospital case managers to approve some items of long-term care should be considered.

Keywords: discharge planning, long-term care, case manager, patient care

Procedia PDF Downloads 260
490 Experience of Inpatient Life in Korean Complex Regional Pain Syndrome: A Phenomenological Study

Authors: Se-Hwa Park, En-Kyung Han, Jae-Young Lim, Hye-Jung Ahn

Abstract:

Purpose: The objective of this study is to provide basic data for understanding the substance of inpatient life with CRPS (complex regional pain syndrome) and for developing efficient and effective nursing interventions. Methods: From September to November 2018, we interviewed 10 CRPS patients about their inpatient experiences. To understand the meaning and intrinsic structure of inpatient life with CRPS, we used the guiding question: 'What is your experience of inpatient life with CRPS?'. For data analysis, the phenomenological method suggested by Colaizzi was applied. Results: According to the analysis, the participants' inpatient life was structured into six categories: (a) breakthrough pain experiences, (b) the limitations of pain treatment, (c) factors worsening pain during the inpatient period, (d) methods of treating pain, (e) positive experiences during the inpatient period, and (f) requirements for the medical team, family, and people in the hospital room. Conclusion: Inpatients with CRPS experienced breakthrough pain. They expected immediate treatment for breakthrough pain, but they experienced severe pain because immediate treatment was not implemented. Pain-worsening factors reported by patients with CRPS were as follows: personal factors arising from negative emotions such as insomnia, stress, and a sensitive character; touching of the painful part or vibration stimulus on the bed; physical factors such as high thresholds or rapid speed during transfers; conflict with other people; climate factors such as humidity or low temperature; noise; smell; and lack of space because of many visitors. Patients actively managed the pain by committing to other tasks or diversions, and passively managed it by simply suppressing it or giving up. They thought positively about rehabilitation treatment, and they required understanding and sympathy from other people, emotional support, and immediate intervention from the medical team. Based on the results of this study, we propose a guideline for systematic breakthrough pain management for the relief of sudden pain, including notices cautioning against touch or vibration, and we need to develop non-pharmacological pain management nursing interventions.

Keywords: breakthrough pain, CRPS, complex regional pain syndrome, inpatient life experiences, phenomenological method

Procedia PDF Downloads 108
489 An eHealth Intervention Using Accelerometer- Smart Phone-App Technology to Promote Physical Activity and Health among Employees in a Military Setting

Authors: Emilia Pietiläinen, Heikki Kyröläinen, Tommi Vasankari, Matti Santtila, Tiina Luukkaala, Kai Parkkola

Abstract:

Working in the military places special demands on physical fitness. However, reduced physical activity levels among employees in the Finnish Defence Forces (FDF), a trend also seen among the working-age population in Finland, are leading to reduced physical fitness and an increased risk of cardiovascular and metabolic diseases, which also increases human resource costs. Therefore, the aim of the present study was to develop an eHealth intervention using an accelerometer-smartphone app feedback technique, telephone counseling, and physical activity recordings to increase the physical activity of the personnel and thereby improve their health. Specific aims were to reduce stress, improve quality of sleep, improve mental and physical performance and ability to work, and reduce sick leave absences. Employees from six military brigades around Finland were invited to participate in the study, and 260 volunteers were finally included (66 women, 194 men). The participants were randomized into an intervention group (156) and a control group (104). The eHealth intervention group used accelerometers measuring daily physical activity and the duration and quality of sleep for six months. The accelerometers transmitted the data to smartphone apps, giving feedback about daily physical activity and sleep. The intervention group participants were also encouraged to exercise for two hours a week during working hours, a benefit already offered to employees under existing FDF guidelines. To separate the exercise done during working hours from the accelerometer data, the intervention group recorded this exercise in an exercise diary. The intervention group also participated in telephone counseling about their physical activity. The control group participants, on the other hand, continued with their normal exercise routine without the accelerometer and feedback. They could utilize the benefit of being able to exercise during working hours, but they were not separately encouraged to do so, nor was the exercise diary used. The participants were measured at baseline, after the entire intervention period, and six months after the end of the intervention. The measurements included accelerometer recordings, biochemical laboratory tests, body composition measurements, physical fitness tests, and a wide-ranging questionnaire focusing on sociodemographic factors, physical activity, and health. In terms of results, the primary indicators of effectiveness are increased physical activity and fitness, improved health status, and reduced sick leave absences. The evaluation of the present research is based on the data collected during the baseline measurements. Maintenance of the studied outcomes is assessed by comparing the results of the control group measured at baseline and at the one-year follow-up. Results of the study are not yet available but will be presented at the conference. The present findings will help to develop an easy and cost-effective model to support the health and working capability of employees in the military and other workplaces.

Keywords: accelerometer, health, mobile applications, physical activity, physical performance

Procedia PDF Downloads 167
488 CFD-DEM Modelling of Liquid Fluidizations of Ellipsoidal Particles

Authors: Esmaeil Abbaszadeh Molaei, Zongyan Zhou, Aibing Yu

Abstract:

The applications of liquid fluidization have increased in many industries, such as particle classification, backwashing of granular filters, crystal growth, leaching and washing, and bioreactors, due to highly efficient liquid-solid contact, favorable mass and heat transfer, high operational flexibility, and reduced back-mixing of phases. In most of these multiphase operations, particle properties, i.e., size, density, and shape, may change during the process because of attrition, coalescence, or chemical reactions. Previous experimental and numerical studies have mainly focused on liquid-solid fluidized beds containing spherical particles; however, the role of particle shape in the hydrodynamics of liquid fluidized beds is still not well known. A three-dimensional discrete element model (DEM) and computational fluid dynamics (CFD) are coupled to study the influence of particle shape on particle and liquid flow patterns in liquid-solid fluidized beds. In the simulations, ellipsoidal particles are used to study the shape factor, since they can represent a wide range of particle shapes from oblate through spherical to prolate. Different particle shapes, from oblate (disk shape) to elongated (rod shape), are selected to investigate the effect of aspect ratio on flow characteristics such as the general particle and liquid flow pattern, the pressure drop, and particle orientation. First, the model is verified against experimental observations; then further detailed analyses are made. It was found that spherical particles showed a uniform particle distribution in the bed, which resulted in a uniform pressure drop along the bed height. However, for particles with aspect ratios less than one (disk-shaped), some particles were carried into the freeboard region, the interface between the bed and the freeboard was not easy to determine, and a few particles tended to leave the bed. On the other hand, prolate particles showed different behaviour in the bed: they caused an unstable interface, and some flow channeling was observed at low liquid velocities. Because of the non-uniform particle flow pattern for particles with aspect ratios below one (oblate) and above one (prolate), the pressure drop distribution in the bed was not as uniform as that found for spherical particles.

Keywords: CFD, DEM, ellipsoid, fluidization, multiphase flow, non-spherical, simulation

Procedia PDF Downloads 283
487 The Origins of Representations: Cognitive and Brain Development

Authors: Athanasios Raftopoulos

Abstract:

In this paper, an attempt is made to explain the evolution or development of human’s representational arsenal from its humble beginnings to its modern abstract symbols. Representations are physical entities that represent something else. To represent a thing (in a general sense of “thing”) means to use in the mind or in an external medium a sign that stands for it. The sign can be used as a proxy of the represented thing when the thing is absent. Representations come in many varieties, from signs that perceptually resemble their representative to abstract symbols that are related to their representata through conventions. Relying the distinction among indices, icons, and symbols, it is explained how symbolic representations gradually emerged from indices and icons. To understand the development or evolution of our representational arsenal, the development of the cognitive capacities that enabled the gradual emergence of representations of increasing complexity and expressive capability should be examined. The examination of these factors should rely on a careful assessment of the available empirical neuroscientific and paleo-anthropological evidence. These pieces of evidence should be synthesized to produce arguments whose conclusions provide clues concerning the developmental process of our representational capabilities. The analysis of the empirical findings in this paper shows that Homo Erectus was able to use both icons and symbols. Icons were used as external representations, while symbols were used in language. The first step in the emergence of representations is that a sensory-motor purely causal schema involved in indices is decoupled from its normal causal sensory-motor functions and serves as a representation of the object that initially called it into play. Sensory-motor schemes are tied to specific contexts of the organism-environment interactions and are activated only within these contexts. For a representation of an object to be possible, this scheme must be de-contextualized so that the same object can be represented in different contexts; a decoupled schema loses its direct ties to reality and becomes mental content. The analysis suggests that symbols emerged due to selection pressures of the social environment. The need to establish and maintain social relationships in ever-enlarging groups that would benefit the group was a sufficient environmental pressure to lead to the appearance of the symbolic capacity. Symbols could serve this need because they can express abstract relationships, such as marriage or monogamy. Icons, by being firmly attached to what can be observed, could not go beyond surface properties to express abstract relations. The cognitive capacities that are required for having iconic and then symbolic representations were present in Homo Erectus, which had a language that started without syntactic rules but was structured so as to mirror the structure of the world. This language became increasingly complex, and grammatical rules started to appear to allow for the construction of more complex expressions required to keep up with the increasing complexity of social niches. This created evolutionary pressures that eventually led to increasing cranial size and restructuring of the brain that allowed more complex representational systems to emerge.

Keywords: mental representations, iconic representations, symbols, human evolution

Procedia PDF Downloads 21
486 Effect of Using PCMs and Transparency Ratios on Energy Efficiency and Thermal Performance of Buildings in Hot Climatic Regions: A Simulation-Based Evaluation

Authors: Eda K. Murathan, Gulten Manioglu

Abstract:

In the building design process, reducing heating and cooling energy consumption in accordance with the climatic conditions of the building's region is an important issue to be considered in order to provide thermal comfort in the indoor environment. Applying a phase-change material (PCM) to the surface of the building envelope is a new approach for controlling heat transfer through the envelope over the year. The transparency ratio of the window is also a determinant of the amount of solar radiation gained in the space, and thus of thermal comfort and energy expenditure. In this study, a simulation-based evaluation was carried out using EnergyPlus to determine the effect of coupling PCM and transparency ratio when integrated into the building envelope. A three-storey building with a 30 m x 30 m floor area and a 10 m x 10 m courtyard was taken as an example of the courtyard building model, which is frequently seen in the traditional architecture of hot climatic regions. Eight zones (10 m x 10 m), each with two exterior façades oriented in different directions, were obtained on each floor. The percentage of transparent components on the PCM-applied surface was increased at every step (30%, 40%, 50%). For every differently oriented zone, annual heating and cooling energy consumption and thermal comfort based on the Fanger method were calculated. All calculations were made for the zones of the intermediate floor of the building. The study was carried out for Diyarbakır, representing the hot-dry climate region, and Antalya, representing the hot-humid climate region. The increase in the transparency ratio led to a decrease in heating energy consumption but an increase in cooling energy consumption for both provinces. When PCM was applied to all developed options, heating and cooling energy consumption decreased in both Antalya (6.06%-19.78% and 1%-3.74%) and Diyarbakır (2.79%-3.43% and 2.32%-4.64%), respectively. When the building is evaluated under passive conditions for the 21st of July, which represents the hottest day of the year, the user feels comfortable between 11 pm and 10 am with the effect of night ventilation for both provinces.
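As an illustrative aside (not part of the study), a minimal Python sketch of the post-processing arithmetic implied above: computing the percentage reduction in annual heating and cooling energy when PCM is added, using hypothetical kWh figures rather than the reported results.

```python
# Minimal sketch (hypothetical values): percentage reduction in annual
# heating/cooling energy when a PCM layer is added to the envelope.
# The numbers below are illustrative, not the study's results.

def percent_reduction(baseline_kwh: float, pcm_kwh: float) -> float:
    """Relative saving of the PCM case with respect to the baseline, in %."""
    return 100.0 * (baseline_kwh - pcm_kwh) / baseline_kwh

# Hypothetical annual results for one 10 m x 10 m zone, 40% transparency ratio.
cases = {
    "heating": (1200.0, 1150.0),   # (baseline, with PCM) in kWh/year
    "cooling": (3400.0, 3270.0),
}

for load, (base, pcm) in cases.items():
    print(f"{load}: {percent_reduction(base, pcm):.2f}% reduction with PCM")
```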

Keywords: building envelope, heating and cooling energy consumptions, phase change material, transparency ratio

Procedia PDF Downloads 149
485 Effect of Varying Zener-Hollomon Parameter (Temperature and Flow Stress) and Stress Relaxation on Creep Response of Hot Deformed AA3104 Can Body Stock

Authors: Oyindamola Kayode, Sarah George, Roberto Borrageiro, Mike Shirran

Abstract:

Our industrial partner has identified a phenomenon in which AA3104 can body stock (CBS) transfer bars experience sag during transportation of the slab from the breakdown mill to the finishing mill. Excessive sag results in bottom scuffing of the slab on the roller table, resulting in surface defects on the final product. It has been found that increasing the strain rate on the breakdown mill final pass results in a slab that is resistant to sag. The creep response of material hot deformed at different Zener-Hollomon parameter values needs to be evaluated experimentally to gain a better understanding of the operating mechanism. This study investigates the identified phenomenon through laboratory simulation of the breakdown mill conditions at various strain rates, utilizing the Gleeble at the UCT Centre for Materials Engineering. The experiment will determine the creep response for a range of conditions as well as quantify the associated material microstructure (sub-grain size, grain structure, etc.). The experimental matrices were determined from conditions approximating industrial hot breakdown rolling and were carried out on the Gleeble 3800 at the Centre for Materials Engineering, University of Cape Town. Plane strain compression samples were used for this series of tests at an applied load that allows for better contact and exaggerated creep displacement. A tantalum barrier layer was used for increased conductivity and decreased risk of anvil welding. One set of tests with no in-situ hold time was performed, where the samples were quenched after deformation. The samples were retained for microstructural analysis: micrographs from light microscopy (LM), quantitative data and images from scanning electron microscopy (SEM) and energy-dispersive X-ray spectroscopy (EDX), and sub-grain size and grain structure from electron backscatter diffraction (EBSD).
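For readers unfamiliar with the parameter, the standard definition is Z = (strain rate) x exp(Q/RT), combining strain rate and deformation temperature. A minimal Python sketch follows; the activation energy and conditions are illustrative assumptions, not values from this study.

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def zener_hollomon(strain_rate: float, temp_c: float, q_def: float) -> float:
    """Z = strain_rate * exp(Q / (R*T)), with T in kelvin.

    strain_rate : applied strain rate, 1/s
    temp_c      : deformation temperature, deg C
    q_def       : activation energy for hot deformation, J/mol
    """
    temp_k = temp_c + 273.15
    return strain_rate * math.exp(q_def / (R * temp_k))

# Illustrative values only (not the study's conditions):
# Q ~ 156 kJ/mol is a commonly quoted order of magnitude for aluminium alloys.
for rate in (1.0, 10.0):   # candidate breakdown-mill strain rates, 1/s
    print(f"strain rate {rate} 1/s at 500 C: Z = {zener_hollomon(rate, 500.0, 156e3):.3e}")
```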

Keywords: aluminium alloy, can-body stock, hot rolling, creep response, Zener-Hollomon parameter

Procedia PDF Downloads 58
484 Modelling Tyre Rubber Materials for High Frequency FE Analysis

Authors: Bharath Anantharamaiah, Tomas Bouda, Elke Deckers, Stijn Jonckheere, Wim Desmet, Juan J. Garcia

Abstract:

Automotive tyres have recently been gaining importance in terms of their noise emission, not only with respect to noise reduction but also with respect to how that noise is perceived and detected. Tyres exhibit a mechanical noise generation mechanism up to 1 kHz. However, because a tyre is a composite of several materials, it has been difficult to model it using finite elements to predict noise at high frequencies. Currently available FE models are reliable up to about 500 Hz, a limit that is not sufficient to capture the roughness or sharpness of tyre noise. These noise components are important for alerting pedestrians on the street to slowly passing vehicles, especially electric vehicles. In order to model tyre noise behaviour up to 1 kHz, the dynamic behaviour of the tyre must be accurately modelled up to that limit using finite elements. Materials play a vital role in modelling the dynamic tyre behaviour precisely. Since a tyre is composed of several components, their precise definition in finite element simulations is necessary. However, during the tyre manufacturing process, these components are subjected to various pressures and temperatures, due to which their properties can change. Hence, material definitions are better described based on the tyre responses. In this work, the hyperelasticity of the tyre component rubbers is calibrated, using the design of experiments technique, from the tyre characteristic responses measured on a stiffness measurement machine. The viscoelasticity of the rubbers is defined by Prony series, which are determined from the loss factor relationship between the loss and storage moduli, assuming that the rubbers are excited within their linear viscoelastic ranges. These loss factor values are measured and theoretically expressed as a function of rubber Shore hardness or hyperelasticity. The results show a good correlation between the measured and simulated vibrational transfer functions up to 1 kHz. The model also allows flexibility, i.e., the frequency limit can be extended, if required, by calibrating the Prony parameters of the rubbers for the frequency range of interest. As future work, these tyre models will be used for noise generation at high frequencies and thus for tyre noise perception.
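As a hedged illustration of the loss factor relationship mentioned above (not the authors' calibration code), the storage and loss moduli of a shear-relaxation Prony series can be evaluated at a given frequency and combined as tan δ = G''/G'. The two-term series below uses hypothetical values.

```python
import math

def prony_moduli(omega, g_inf, terms):
    """Storage and loss moduli of a Prony series at angular frequency omega.

    g_inf : long-term (equilibrium) shear modulus, Pa
    terms : list of (g_i, tau_i) pairs -- modulus weight (Pa) and relaxation time (s)
    """
    storage = g_inf + sum(g * (omega * tau) ** 2 / (1 + (omega * tau) ** 2)
                          for g, tau in terms)
    loss = sum(g * (omega * tau) / (1 + (omega * tau) ** 2)
               for g, tau in terms)
    return storage, loss

# Hypothetical two-term series for a tread-like rubber (illustrative values only).
terms = [(2.0e6, 1e-3), (1.0e6, 1e-5)]
for f_hz in (100, 500, 1000):
    g_storage, g_loss = prony_moduli(2 * math.pi * f_hz, g_inf=1.5e6, terms=terms)
    print(f"{f_hz} Hz: loss factor tan(delta) = {g_loss / g_storage:.3f}")
```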

Keywords: tyre dynamics, rubber materials, Prony series, hyperelasticity

Procedia PDF Downloads 163
483 The Accuracy of an In-House Developed Computer-Assisted Surgery Protocol for Mandibular Micro-Vascular Reconstruction

Authors: Christophe Spaas, Lies Pottel, Joke De Ceulaer, Johan Abeloos, Philippe Lamoral, Tom De Backer, Calix De Clercq

Abstract:

We aimed to evaluate the accuracy of an in-house developed low-cost computer-assisted surgery (CAS) protocol for osseous free flap mandibular reconstruction. All patients who underwent primary or secondary mandibular reconstruction with a free (solely or composite) osseous flap, either a fibula free flap or an iliac crest free flap, between January 2014 and December 2017 were evaluated. The low-cost protocol consisted of a virtual surgical plan, a pre-bent custom reconstruction plate, and an individualized free flap positioning guide. The accuracy of the protocol was evaluated by comparing the postoperative outcome with the 3D virtual planning, based on measurement of the following parameters: intercondylar distance, mandibular angle (axial and sagittal), inner angular distance, anterior-posterior distance, length of the fibular/iliac crest segments, and osteotomy angles. A statistical analysis of the obtained values was performed. Virtual 3D surgical planning and cutting guide design were performed with Proplan CMF® software (Materialise, Leuven, Belgium) and IPS Gate (KLS Martin, Tuttlingen, Germany). Segmentation of the DICOM data as well as outcome analysis were done with BrainLab iPlan® Software (Brainlab AG, Feldkirchen, Germany). A cost analysis of the protocol was done. Twenty-two patients (11 fibula / 11 iliac crest) were included and analyzed. Based on voxel-based registration on the cranial base, the 3D virtual planning landmark parameters did not significantly differ from those measured on the actual treatment outcome (p-values > 0.05). A cost evaluation of the in-house developed CAS protocol revealed a 1750 euro cost reduction in comparison with a standard CAS protocol with a patient-specific reconstruction plate. Our results indicate that an accurate transfer of the planning with our in-house developed low-cost CAS protocol is feasible at a significantly lower cost.
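As a sketch of the kind of planned-versus-outcome comparison described (the exact statistical test is not specified in the abstract, so a paired t-test is assumed here), using hypothetical intercondylar distances:

```python
from scipy import stats

# Hypothetical planned vs. postoperative intercondylar distances (mm) for a
# handful of cases -- illustrative numbers only, not the study's data.
planned = [112.4, 108.9, 115.2, 110.7, 109.3]
outcome = [111.8, 109.6, 114.5, 111.2, 108.7]

t_stat, p_value = stats.ttest_rel(planned, outcome)
print(f"paired t-test: t = {t_stat:.3f}, p = {p_value:.3f}")
# p > 0.05 would indicate no significant difference between plan and outcome,
# i.e. an accurate transfer of the virtual plan for this parameter.
```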

Keywords: CAD/CAM, computer-assisted surgery, low-cost, mandibular reconstruction

Procedia PDF Downloads 117
482 Prandtl Number Influence Analysis on Droplet Migration in Natural Convection Flow Using the Level Set Method

Authors: Isadora Bugarin, Taygoara F. de Oliveira

Abstract:

Multiphase flows are currently regarded as a key enabler of technological advances in the energy and thermal sciences. The understanding of droplet motion and behavior in non-isothermal flows is, however, rather limited. The present work investigates the migration of a 2D droplet in natural convection inside a square enclosure with differentially heated walls. The investigation concerns the effects on droplet motion of imposing different combinations of Prandtl and Rayleigh numbers while placing the droplet at distinct initial positions. The finite difference method was used to compute the Navier-Stokes and energy equations for a laminar flow, considering the Boussinesq approximation. A high-order level set method was applied to simulate the two-phase flow. A previous analysis by the authors had shown that, for fixed values of the Rayleigh and Prandtl numbers, varying the droplet's initial position delivered different patterns of motion; for Ra≥10⁴ the droplet presents two very specific behaviors: it either travels along a helical path towards the center or describes cyclic circular paths, resulting in closed orbits once the stationary regime is reached. When varying the Prandtl number for different Rayleigh regimes, it was observed that this parameter also affects the migration of the droplet, altering the motion patterns as its value is increased. At higher Prandtl values, the droplet follows wider paths with larger amplitudes, traveling closer to the walls and taking longer to reach the stationary regime. It is important to highlight that drastic changes in droplet behavior in the stationary regime were not observed, but the path traveled from the beginning of the simulation until the stationary regime was significantly altered, resulting in distinct turnover frequencies. The flow's unsteady Nusselt number is also recorded for each case studied, enabling a discussion of the overall effects on heat transfer.
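For reference, the governing dimensionless groups are Ra = gβΔTL³/(να) and Pr = ν/α. A minimal Python sketch with illustrative (air-like) property values, not the parameters of the simulations:

```python
def rayleigh(g, beta, delta_t, length, nu, alpha):
    """Ra = g * beta * dT * L^3 / (nu * alpha)."""
    return g * beta * delta_t * length**3 / (nu * alpha)

def prandtl(nu, alpha):
    """Pr = nu / alpha (ratio of momentum to thermal diffusivity)."""
    return nu / alpha

# Illustrative values only: air-like fluid, 0.1 m cavity, 10 K wall temperature difference.
g, beta, dT, L = 9.81, 3.4e-3, 10.0, 0.1
nu, alpha = 1.6e-5, 2.2e-5
print(f"Ra = {rayleigh(g, beta, dT, L, nu, alpha):.2e}, Pr = {prandtl(nu, alpha):.2f}")
```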

Keywords: droplet migration, level set method, multiphase flow, natural convection in enclosure, Prandtl number

Procedia PDF Downloads 95
481 Influence of Strike-Slip Faulting in the Tectonic Evolution of North-Eastern Tunisia

Authors: Aymen Arfaoui, Abdelkader Soumaya, Ali Kadri, Noureddine Ben Ayed

Abstract:

The major contractional events, characterized by strike-slip faulting, folding, and thrusting, occurred in the Eocene, Late Miocene, and Quaternary along the NE Tunisian domain between Bou Kornine-Ressas-Messella and the Cap Bon Peninsula. During the Plio-Quaternary, the Grombalia and Mornag grabens show a maximum of collapse parallel to the NNW-SSE SHmax direction and developed as third-order extensional regions within a regional compressional regime. Using available tectonic and geophysical data supplemented by new fault-kinematic observations, we show that the Cenozoic deformations are dominated by the reactivation of first-order N-S faults; this sinistral wrench system is responsible for the formation of strike-slip duplexes, thrusts, folds, and grabens. Based on our new structural interpretation, the major faults of the N-S Axis, Bou Kornine-Ressas-Messella (MRB), and Hammamet-Korbous (HK) form an N-S first-order restraining stepover within a left-lateral strike-slip duplex. The N-S master MRB fault is dominated by contractional imbricate fans, while the parallel HK fault is characterized by trailing extensional imbricate fans. The Eocene and Miocene compression phases in the study area caused sinistral strike-slip reactivation of pre-existing N-S faults, reverse reactivation of NE-SW trending faults, and normal-oblique reactivation of NW-SE faults, creating a NE-SW to N-S trending system of east-verging folds and overlaps. Seismic tomography images reveal a key role for the lithospheric subvertical tear, or STEP (Slab Transfer Edge Propagator) fault, evidenced below this region, in the development of the MRB and HK relay zone. The presence of extensive syntectonic Pliocene sequences above this crustal-scale fault may be the result of recent lithospheric vertical motion on this STEP fault due to the rollback and lateral eastward migration of the Calabrian slab.

Keywords: Tunisia, strike-slip fault, contractional duplex, tectonic stress, restraining stepover, STEP fault

Procedia PDF Downloads 102
480 Performance Study of Neodymium Extraction by Carbon Nanotubes Assisted Emulsion Liquid Membrane Using Response Surface Methodology

Authors: Payman Davoodi-Nasab, Ahmad Rahbar-Kelishami, Jaber Safdari, Hossein Abolghasemi

Abstract:

High-purity rare earth elements (REEs) have been widely used in the fields of chemical engineering, metallurgy, nuclear energy, optical, magnetic, luminescence and laser materials, superconductors, ceramics, alloys, catalysts, etc. Neodymium is one of the most abundant rare earths. With the development of neodymium-iron-boron (Nd-Fe-B) permanent magnets, the importance of neodymium has dramatically increased. Solvent extraction processes have many operational limitations, such as a large inventory of extractants, loss of solvent due to its solubility in aqueous solutions, volatilization of diluents, etc. One of the promising liquid membrane processes is the emulsion liquid membrane (ELM), which offers an alternative to solvent extraction. In this work, a study on Nd extraction through a multi-walled carbon nanotube (MWCNT) assisted ELM using response surface methodology (RSM) was performed. The ELM was composed of diisooctylphosphinic acid (CYANEX 272) as carrier, MWCNTs as nanoparticles, Span-85 (sorbitan trioleate) as surfactant, kerosene as organic diluent, and nitric acid as the internal phase. The effects of the important operating variables, namely surfactant concentration, MWCNT concentration, and treatment ratio, were investigated. Results were optimized using a central composite design (CCD), and a regression model for extraction percentage was developed. The 3D response surfaces of Nd(III) extraction efficiency were obtained, and the significance of the three variables and their interactions on extraction efficiency was determined. Results indicated that introducing MWCNTs into the ELM process increased Nd extraction owing to higher membrane stability and enhanced mass transfer. A MWCNT concentration of 407 ppm, a Span-85 concentration of 2.1% (v/v), and a treatment ratio of 10 were found to be the optimum conditions. At the optimum conditions, the extraction of Nd(III) reached a maximum of 99.03%.
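As a rough sketch of the response-surface step (not the authors' model or data), a full second-order polynomial can be fitted to CCD-style observations by ordinary least squares; all numbers below are hypothetical.

```python
import numpy as np

# Hypothetical CCD-style observations:
# columns = (MWCNT conc. in ppm, Span-85 in %v/v, treatment ratio); y = extraction %.
X_raw = np.array([
    [200, 1.0,  5], [200, 3.0,  5], [600, 1.0,  5], [600, 3.0,  5],
    [200, 1.0, 15], [200, 3.0, 15], [600, 1.0, 15], [600, 3.0, 15],
    [100, 2.0, 10], [700, 2.0, 10], [400, 2.0, 10], [400, 2.0, 10],
], dtype=float)
y = np.array([82, 85, 88, 90, 80, 84, 86, 89, 78, 91, 97, 96], dtype=float)

def quadratic_design_matrix(X):
    """Full second-order model: intercept, linear, two-way interaction, and squared terms."""
    x1, x2, x3 = X.T
    return np.column_stack([
        np.ones(len(X)), x1, x2, x3,
        x1 * x2, x1 * x3, x2 * x3,
        x1**2, x2**2, x3**2,
    ])

coeffs, *_ = np.linalg.lstsq(quadratic_design_matrix(X_raw), y, rcond=None)
print("fitted second-order coefficients:", np.round(coeffs, 6))
```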

Keywords: emulsion liquid membrane, extraction of neodymium, multi-walled carbon nanotubes, response surface method

Procedia PDF Downloads 228
479 Study on the Voltage Induced Wrinkling of Elastomer with Different Electrode Areas

Authors: Zhende Hou, Fan Yang, Guoli Zhang

Abstract:

Dielectric elastomers are a promising class of electroactive polymers that can deform in response to an applied electric field. Compared with conventional smart materials, dielectric elastomers are more compliant and can achieve higher energy densities, which makes them suitable for diverse applications such as actuators, artificial muscles, soft robotics, and energy harvesters. The electromechanical coupling arises when the elastomer is sandwiched between two compliant electrodes: when a voltage is applied across the electrodes, the positive and negative charges on the two electrodes compress the polymer, so that it reduces in thickness and expands in area. However, a pre-stretched dielectric elastomer film not only can achieve large electric-field-induced deformation but is also prone to wrinkling under the interaction of its own strain energy and the applied electric field energy. For a uniaxially pre-stretched dielectric elastomer film, the electrode area is an important parameter for the electric-field-induced deformation and may also be a key factor affecting film wrinkling. To determine and quantify this effect experimentally, VHB 9473 tapes were employed, and compliant electrodes with different areas were painted on each of them. Each tape was first stretched to a uniaxial stretch ratio of 8. A DC voltage was then applied to the electrodes and increased gradually until wrinkling occurred in the film. In this way, the critical wrinkling voltages of films with different electrode areas were obtained, and the wrinkle wavelengths were measured simultaneously to analyze the wrinkling characteristics. The experimental results indicate that when the electrode area is smaller, the wrinkling voltage is higher; as the electrode area increases, the wrinkling voltage decreases rapidly up to a specific area, beyond which it gradually increases again with area. The wrinkle wavelength decreases monotonically with increasing voltage. That is, the relation between the critical wrinkling voltage and the electrode area is U-shaped. The analysis suggests that film wrinkling is a local effect; the interaction and energy transfer between the electrode region and the non-electrode region have a great influence on wrinkling. In the experiment, very thin copper wires were used as electrode leads, just touching the electrodes, so that the stiffness of the leads would not affect the wrinkling.
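For context, standard dielectric elastomer theory commonly writes the effective electrostatic (Maxwell) pressure as p = ε₀εᵣ(V/t)². A minimal Python sketch with illustrative VHB-like values; the permittivity and stretched-film thickness are assumptions, not measurements from this work.

```python
EPS0 = 8.854e-12  # vacuum permittivity, F/m

def electrostatic_pressure(voltage, thickness, eps_r):
    """Effective electrostatic (Maxwell) pressure p = eps0 * eps_r * (V/t)^2, in Pa."""
    return EPS0 * eps_r * (voltage / thickness) ** 2

# Illustrative values: VHB-like relative permittivity ~4.7, pre-stretched film ~60 um thick.
for kv in (2, 4, 6):
    p = electrostatic_pressure(kv * 1e3, 60e-6, 4.7)
    print(f"{kv} kV -> {p / 1e3:.1f} kPa")
```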

Keywords: elastomers, uniaxial stretch, electrode area, wrinkling

Procedia PDF Downloads 213
478 Design, Construction and Evaluation of a Mechanical Vapor Compression Distillation System for Wastewater Treatment in a Poultry Company

Authors: Juan S. Vera, Miguel A. Gomez, Omar Gelvez

Abstract:

Water is Earth's most valuable resource, and its scarcity is a critical problem in today's society. Untreated wastewaters contribute to this situation, especially those coming from industrial activities, as they reduce the quality of water bodies, annihilating all kinds of life and bringing disease to people in contact with them. An effective solution to this problem is distillation, which removes most contaminants. However, this approach must also be energetically efficient in order to appeal to industry. Most water distillation treatments fail in this respect, with the exception of the Mechanical Vapor Compression (MVC) distillation system, which achieves high efficiency thanks to the energy input of a compressor and the exchange of latent heat. This paper presents the design, construction, and evaluation of a Mechanical Vapor Compression (MVC) distillation system for the main Colombian poultry company, Avidesa Macpollo SA. The system will be located in the principal slaughterhouse in the state of Santander, and it will work along with the Gas Energy Mixing (GEM) system to treat the wastewaters from the plant. The main goal of the MVC distiller, rarely used in this type of application, is to reduce the chlorides, Chemical Oxygen Demand (COD), and Biological Oxygen Demand (BOD) levels to those required by the state regulations, since the GEM cannot decrease them enough. The MVC distillation system consists of three components: the evaporator/condenser heat exchanger, where the distillation takes place; a low-pressure compressor, which supplies the energy to create the temperature differential between the evaporator and condenser cavities; and a preheater to recover the remaining energy in the distillate. The model equations used to describe how the compressor power consumption, heat exchange area, and distilled water production are related are based on a thermodynamic balance and heat transfer analysis, with correlations taken from the literature. Finally, the design calculations and the measurements of the installation are compared, showing agreement with the predictions for distillate production and power consumption as the evaporator/condenser temperature difference is varied.
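A simplified sketch of the sizing logic described (latent heat balance plus Q = UAΔT), with rough illustrative property values rather than the authors' correlations:

```python
# Simplified MVC sizing sketch: latent-heat balance plus Q = U * A * dT.
# Property values are rough, illustrative constants, not the design correlations.

H_FG = 2.26e6  # latent heat of vaporization of water near 100 C, J/kg

def mvc_estimates(m_dot, dT_evap_cond, u_overall, comp_work_per_kg):
    """Return (heat duty W, exchanger area m^2, compressor power W) for a
    distillate mass flow m_dot (kg/s) and evaporator/condenser dT (K)."""
    q_duty = m_dot * H_FG                       # latent heat exchanged across the surface
    area = q_duty / (u_overall * dT_evap_cond)  # Q = U * A * dT
    power = m_dot * comp_work_per_kg            # compressor power from specific work
    return q_duty, area, power

# Illustrative case: 0.05 kg/s of distillate, 5 K driving dT,
# U ~ 2500 W/(m^2*K), ~30 kJ of compressor work per kg of vapor.
q, a, w = mvc_estimates(0.05, 5.0, 2500.0, 30e3)
print(f"duty = {q/1e3:.0f} kW, area = {a:.1f} m^2, compressor = {w/1e3:.1f} kW")
```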

Keywords: mechanical vapor compression, distillation, wastewater, design, construction, evaluation

Procedia PDF Downloads 135