Search results for: real estate price prediction
604 The Impact of Undisturbed Flow Speed on the Correlation of Aerodynamic Coefficients as a Function of the Angle of Attack for the Gyroplane Body
Authors: Zbigniew Czyz, Krzysztof Skiba, Miroslaw Wendeker
Abstract:
This paper discusses the results of an aerodynamic investigation of the Tajfun gyroplane body designed by a Polish company, Aviation Artur Trendak. The gyroplane was studied as a 1:8 scale model. Scaling objects for aerodynamic investigation is an inherent part of any design process, and when scaling, the criteria of similarity need to be satisfied. The basic criteria of similarity are geometric, kinematic and dynamic. Although the results of aerodynamic research are often reduced to aerodynamic coefficients, attention should be paid to how the values of these coefficients behave when particular criteria are to be satisfied. To satisfy the dynamic criterion, for example, the Reynolds number should be considered; it expresses the ratio of inertial to viscous forces. Since its numerator is the product of the flow speed and the characteristic dimension (with a constant kinematic viscosity coefficient), the flow speed in wind tunnel research should be increased by the same factor by which the object is scaled down. The aerodynamic coefficients determined in this research depend on the real forces acting on the object, its characteristic dimension, the flow speed and variations in the density of the medium. Rapid prototyping with a 3D printer was applied to create the research object. The research was performed in the T-1 low-speed wind tunnel (with a measurement-volume diameter of 1.5 m) using a six-component internal aerodynamic balance, WDP1, at the Institute of Aviation in Warsaw. The T-1 is a continuous-operation low-speed wind tunnel with an open test section. The research covered several selected speeds of undisturbed flow, i.e. V = 20, 30 and 40 m/s, corresponding to Reynolds numbers (referred to 1 m) of Re = 1.31·10⁶, 1.96·10⁶ and 2.62·10⁶, for angles of attack in the range -15° ≤ α ≤ 20°. The research yielded the basic aerodynamic characteristics and allowed us to observe the impact of the undisturbed flow speed on the aerodynamic coefficients as a function of the angle of attack of the gyroplane body. When the speed of undisturbed flow in the wind tunnel changes, the aerodynamic coefficients are significantly affected. Between 20 m/s and 30 m/s, the drag coefficient, Cx, changes by 2.4% up to 9.9%, whereas the lift coefficient, Cz, changes by -25.5% up to 15.7% if the angle of attack of 0° is excluded, or by -25.5% up to 236.9% if it is included. Within the same speed range, the pitching moment coefficient, Cmy, changes by -21.1% up to 7.3% if the angles of attack of -15° and -10° are excluded, or by -142.8% up to 618.4% if they are included. These discrepancies in the aerodynamic force coefficients definitely need to be considered while designing the aircraft; for example, if the load on certain aircraft surfaces is calculated, additional correction factors need to be applied. This study allows us to estimate the discrepancies in the aerodynamic forces when scaling the aircraft. This work has been financed by the Polish Ministry of Science and Higher Education.
Keywords: aerodynamics, criteria of similarity, gyroplane, research tunnel
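As a quick, hedged illustration of the similarity argument in this abstract, the sketch below recomputes the quoted Reynolds numbers from Re = V·L/ν and shows how much the tunnel speed would nominally have to rise to match full-scale Reynolds number for a 1:8 model; the kinematic viscosity and the full-scale speed are assumed illustrative values, not figures taken from the paper.

```python
# Hedged sketch: Reynolds-number similarity for the scaled gyroplane body.
# Assumption: kinematic viscosity of air nu ~ 1.52e-5 m^2/s (standard conditions);
# the 1 m reference length follows the abstract ("as referred to 1 m").

NU_AIR = 1.52e-5   # m^2/s, assumed
L_REF = 1.0        # m, reference length used in the abstract

def reynolds(speed_m_s: float, length_m: float = L_REF, nu: float = NU_AIR) -> float:
    """Re = V * L / nu, the ratio of inertial to viscous forces."""
    return speed_m_s * length_m / nu

for v in (20.0, 30.0, 40.0):
    print(f"V = {v:4.1f} m/s -> Re = {reynolds(v):.2e}")   # ~1.3e6, 2.0e6, 2.6e6

# Strict dynamic similarity for a 1:8 model (same nu) would require the tunnel
# speed to be 8x the full-scale speed, which is rarely practical:
scale = 1 / 8
full_scale_speed = 40.0                      # m/s, illustrative value only
print(f"Speed needed to match full-scale Re: {full_scale_speed / scale:.0f} m/s")
```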
Procedia PDF Downloads 393
603 The Role of People and Data in Complex Spatial-Related Long-Term Decisions: A Case Study of Capital Project Management Groups
Authors: Peter Boyes, Sarah Sharples, Paul Tennent, Gary Priestnall, Jeremy Morley
Abstract:
Significant long-term investment projects can involve complex decisions. These are often described as capital projects, and the factors that contribute to their complexity include budgets, motivating reasons for investment, stakeholder involvement, interdependent projects, and the delivery phases required. The complexity of these projects often requires management groups to be established involving stakeholder representatives; these teams are inherently multidisciplinary. This study uses two university campus capital projects as case studies for this type of management group. Due to the interaction of projects with wider campus infrastructure and users, decisions are made at varying spatial granularity throughout the project lifespan. This spatial-related context brings complexity to the group decisions. Sensemaking is the process used to achieve group situational awareness of a complex situation, enabling the team to arrive at a consensus and make a decision. The purpose of this study is to understand the role of people and data in the complex spatial related long-term decision and sensemaking processes. The paper aims to identify and present issues experienced in practical settings of these types of decision. A series of exploratory semi-structured interviews with members of the two projects elicit an understanding of their operation. From two stages of thematic analysis, inductive and deductive, emergent themes are identified around the group structure, the data usage, and the decision making within these groups. When data were made available to the group, there were commonly issues with the perception of veracity and validity of the data presented; this impacted the ability of group to reach consensus and, therefore, for decisions to be made. Similarly, there were different responses to forecasted or modelled data, shaped by the experience and occupation of the individuals within the multidisciplinary management group. This paper provides an understanding of further support required for team sensemaking and decision making in complex capital projects. The paper also discusses the barriers found to effective decision making in this setting and suggests opportunities to develop decision support systems in this team strategic decision-making process. Recommendations are made for further research into the sensemaking and decision-making process of this complex spatial-related setting.Keywords: decision making, decisions under uncertainty, real decisions, sensemaking, spatial, team decision making
Procedia PDF Downloads 131
602 Visual Aid and Imagery Ramification on Decision Making: An Exploratory Study Applicable in Emergency Situations
Authors: Priyanka Bharti
Abstract:
Decades ago, designs were based on common sense and tradition, but with advances in visualization technology and research we are now able to comprehend the cognitive processes involved in decoding visual information. However, many aspects of visuals still need intense research before the underlying events can be explained efficiently. Visuals are a mode of representing information through images, symbols and graphics. They play an impactful role in decision making by facilitating quick recognition, comprehension and analysis of a situation, and they enhance problem-solving capabilities by enabling more data to be processed without overloading the decision maker. Research suggests that visuals offer an improved learning environment by a factor of 400 compared to textual information. Visual information engages learners at a cognitive level and triggers the imagination, which enables the user to process the information faster (visuals are processed 60,000 times faster in the brain than text). Appropriate information, its visualization and its presentation are known to aid and intensify the decision-making process for users. However, most of the literature discusses the role of visual aids in comprehension and decision making under normal conditions alone. Unlike emergencies, in a normal situation (e.g. our day-to-day life) users are neither exposed to stringent time constraints nor face the anxiety of survival, and they have sufficient time to evaluate various alternatives before making any decision. An emergency is an unexpected, possibly fatal real-life situation which may inflict serious consequences on both human life and material possessions unless corrective measures are taken instantly. The situation demands that the exposed user negotiate a dynamic and unstable scenario with little or no preparation, yet still take swift and appropriate decisions to save lives or possessions. The resulting stress and anxiety restrict cue sampling, decrease vigilance, reduce the capacity of working memory, cause premature closure in evaluating alternative options, and result in task shedding. Limited time, uncertainty, high stakes and vague goals negatively affect the cognitive abilities needed to take appropriate decisions. Moreover, the theory of naturalistic decision making by experts has been understood in far more depth than that of an ordinary user. Therefore, in this study, the author aims to understand the role of visual aids in supporting rapid comprehension so that appropriate decisions can be taken during an emergency situation.
Keywords: cognition, visual, decision making, graphics, recognition
Procedia PDF Downloads 268
601 Retrospective Cartography of Tbilisi and Surrounding Area
Authors: Dali Nikolaishvili, Nino Khareba, Mariam Tsitsagi
Abstract:
Tbilisi has been the capital of Georgia since the 5ᵗʰ century. In the historical past, the city area was covered by forest, but the situation has been changing dramatically. Dozens of problems are caused by the damage and destruction of the green cover, and the solution, at first glance, seems uncomplicated (planting trees and creating green quarters); on the other hand, given the increasing tendency towards construction, the problem of built-up areas remains unsolved. Finding ways to overcome such obstacles is important even for protecting the health of society. The main aim of the research was a retrospective cartography of the forest area of Tbilisi using GIS technology and remote sensing. Research on the dynamics of forest cover in Tbilisi and its surroundings included the following steps: assessment of the dynamics of the forest in Tbilisi and its surroundings, based mainly on the retrospective mapping method; studying, comparing and identifying the narrative sources using GIS technology; and analysis of the changes from the 1980s to the present day on the basis of the interpretation of remotely sensed images. After creating a unified cartographic basis, maps and plans of different periods were linked to this geodatabase. Data about green parks and individual old plants in private yards, together with respondents' information (collected according to a questionnaire created in advance), were added to the basic database, as were the general plan of Tbilisi and scientific works. On the basis of an analysis of historical, including cartographic, sources, forest-cover maps for different periods were produced. In addition, a catalogue of individual green parks was compiled (location, area, typical composition, name and so on), which formed the basis for several thematic maps. Areas with a high rate of green-area degradation were identified. Several maps depicting the dynamics of the forest cover of Tbilisi were created and analyzed. Methods for linking the data of old cartographic sources to a modern basis were also developed, and the results may be used in the urban planning of Tbilisi. Understanding, perceiving and analyzing the real condition of the green cover in Tbilisi and its problems will, in turn, help in taking appropriate measures for the maintenance of ancient plants, developing forests, and properly planning parks, squares and recreational sites, because a healthy environment is the main condition for human health and contributes to the rational development of the city.
Keywords: catalogue of green area, GIS, historical cartography, cartography, remote sensing, Tbilisi
Procedia PDF Downloads 137
600 Rapid Detection of Cocaine Using Aggregation-Induced Emission and Aptamer Combined Fluorescent Probe
Authors: Jianuo Sun, Jinghan Wang, Sirui Zhang, Chenhan Xu, Hongxia Hao, Hong Zhou
Abstract:
In recent years, the diversification and industrialization of drug-related crimes have posed significant threats to public health and safety globally. The widespread and increasingly younger demographics of drug users and the persistence of drug-impaired driving incidents underscore the urgency of this issue. Drug detection, a specialized forensic activity, is pivotal in identifying and analyzing substances involved in drug crimes. It relies on pharmacological and chemical knowledge and employs analytical chemistry and modern detection techniques. However, current drug detection methods are limited by their inability to perform semi-quantitative, real-time field analyses. They require extensive, complex laboratory-based preprocessing, expensive equipment, and specialized personnel and are hindered by long processing times. This study introduces an alternative approach using nucleic acid aptamers and Aggregation-Induced Emission (AIE) technology. Nucleic acid aptamers, selected artificially for their specific binding to target molecules and stable spatial structures, represent a new generation of biosensors following antibodies. Rapid advancements in AIE technology, particularly in tetraphenyl ethene-based luminous, offer simplicity in synthesis and versatility in modifications, making them ideal for fluorescence analysis. This work successfully synthesized, isolated, and purified an AIE molecule and constructed a probe comprising the AIE molecule, nucleic acid aptamers, and exonuclease for cocaine detection. The probe demonstrated significant relative fluorescence intensity changes and selectivity towards cocaine over other drugs. Using 4-Butoxytriethylammonium Bromide Tetraphenylethene (TPE-TTA) as the fluorescent probe, the aptamer as the recognition unit, and Exo I as an auxiliary, the system achieved rapid detection of cocaine within 5 mins in aqueous and urine, with detection limits of 1.0 and 5.0 µmol/L respectively. The probe-maintained stability and interference resistance in urine, enabling quantitative cocaine detection within a certain concentration range. This fluorescent sensor significantly reduces sample preprocessing time, offers a basis for rapid onsite cocaine detection, and promises potential for miniaturized testing setups.Keywords: drug detection, aggregation-induced emission (AIE), nucleic acid aptamer, exonuclease, cocaine
Procedia PDF Downloads 61
599 Dynamic Characterization of Shallow Aquifer Groundwater: A Lab-Scale Approach
Authors: Anthony Credoz, Nathalie Nief, Remy Hedacq, Salvador Jordana, Laurent Cazes
Abstract:
Groundwater monitoring is classically performed through a network of piezometers in industrial sites. Groundwater flow parameters, such as direction, sense and velocity, are deduced from indirect measurements between two or more piezometers. Groundwater sampling is generally done on the whole column of water inside each borehole to provide concentration values for each piezometer location. These flow and concentration values give a global 'static' image of the potential evolution of a contaminant plume in the shallow aquifer, with large uncertainties in time and space scales and in the dynamics of mass discharge. The TOTAL R&D Subsurface Environmental team is challenging this classical approach with an innovative, dynamic way of characterizing shallow aquifer groundwater. The current study aims at optimizing the tools and methodologies for (i) a direct and multilevel measurement of groundwater velocities in each piezometer and (ii) a calculation of the potential flux of dissolved contaminants in the shallow aquifer. Lab-scale experiments were designed to test commercial and R&D tools in a controlled sandbox. Multiphysics modeling was performed, taking into account the Darcy equation in the porous medium and the Navier-Stokes equation in the borehole. The first step of the current study focused on groundwater flow at the porous medium/piezometer interface. Large discrepancies between direct flow rate measurements in the borehole and the Darcy flow rate in the porous medium were characterized during the experiments and modeling. The structure and location of the tools in the borehole also affected the results and the uncertainties of the velocity measurement. In parallel, a direct-push tool was tested and gave more accurate results. The second step of the study focused on the mass flux of dissolved contaminants in groundwater. Several active and passive commercial and R&D tools were tested in the sandbox, and reactive transport modeling was performed to validate the experiments at the lab scale. Some tools will be selected and deployed in field assays to better assess the mass discharge of dissolved contaminants at an industrial site. The long-term subsurface environmental strategy targets in-situ, real-time, remote and cost-effective monitoring of groundwater.
Keywords: dynamic characterization, groundwater flow, lab-scale, mass flux
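For readers unfamiliar with the two quantities this study couples, a minimal formulation of the Darcy flux and the dissolved-contaminant mass flux is given below; these are standard textbook definitions, not equations quoted from the abstract.

```latex
% Darcy flux (specific discharge) in the porous medium
q = -\frac{k}{\mu}\,\nabla p \;=\; -K\,\nabla h
% Seepage (pore) velocity, with effective porosity n_e
v = \frac{q}{n_e}
% Mass flux of a dissolved contaminant of concentration C, and the
% mass discharge obtained by integrating over a control plane A
J = q\,C, \qquad \dot{m} = \int_{A} q\,C \,\mathrm{d}A
```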
Procedia PDF Downloads 167
598 Quercetin and INT3 Inhibits Endocrine Therapy Resistance and Epithelial to Mesenchymal Transition in MCF7 Breast Cancer Cells
Authors: S. Pradhan, D. Pradhan, G. Tripathy
Abstract:
Resistance to anti-estrogen treatment is a noteworthy cause of disease relapse and mortality in estrogen receptor alpha (ERα)-positive breast cancers. Tamoxifen or estrogen withdrawal increases the dependence of breast cancer cells on INT3 signaling. Here, we investigated the contribution of Quercetin and INT3 signaling in endocrine-resistant breast cancer cells. Methods: We utilized two models of endocrine-therapy-resistant (ETR-) breast cancer: tamoxifen-resistant (TamR) and long-term estrogen-deprived (LTED) MCF7 cells. We assessed the migratory and invasive capacity of these cells by Transwell assay. Expression of epithelial-to-mesenchymal transition (EMT) regulators as well as INT3 receptors and targets was assessed by real-time PCR and western blot analysis. Furthermore, we tested in vitro anti-Quercetin monoclonal antibodies (mAbs) and gamma-secretase inhibitors (GSIs) as potential EMT-reversal therapeutic agents. Finally, we generated stable Quercetin-overexpressing MCF7 cells and assessed their EMT features and response to tamoxifen. Results: We found that ETR cells acquired an epithelial-to-mesenchymal transition (EMT) phenotype and showed increased levels of Quercetin and INT3 targets. Interestingly, we detected higher levels of INT3 but lower levels of INT31 and INT32, suggesting a switch to signaling through different INT3 receptors after acquisition of resistance. Anti-Quercetin monoclonal antibodies and the GSI PF03084014 were effective in blocking the Quercetin/INT3 axis and in partly inhibiting the EMT process. As a consequence, cell migration and invasion were attenuated, and the stem-cell-like population was considerably decreased. Genetic silencing of Quercetin and INT3 led to equivalent effects. Finally, stable overexpression of Quercetin was sufficient to make MCF7 cells unresponsive to tamoxifen through INT3 activation. Conclusions: ETR cells express abnormal levels of Quercetin and INT3, whose activation ultimately drives invasive behavior. Anti-Quercetin mAbs and the GSI PF03084014 reduce the expression of EMT molecules, decreasing cellular invasiveness. Quercetin overexpression induces tamoxifen resistance linked to the acquisition of an EMT phenotype. Our findings suggest that targeting Quercetin and/or INT3 warrants further clinical evaluation as a valid therapeutic approach in endocrine-resistant breast cancer.
Keywords: quercetin, INT3, mesenchymal transition, MCF7 breast cancer cells
Procedia PDF Downloads 311
597 Relation of Consumer Satisfaction on Organization by Focusing on the Different Aspects of Buying Behavior
Abstract:
Introduction: Buying behavior is a sequence of practices or patterns that consumers follow before making a purchase. It begins when the shopper becomes aware of a need or wish for a product and concludes with the purchase transaction. Entrepreneurs cannot always simply shake hands with their target audience and get to know them; research is often necessary, so every organization primarily needs to carry out continuous research to understand and satisfy patterns of consumer needs. Aims and Objectives: The aim of the present study is to examine the different behaviors of the consumer, including pre-purchase, purchase, and post-purchase behavior. Materials and Methods: In order to obtain results, face-to-face interviews were held with 80 people, the larger part of whom were female and of upper- or middle-class status. The prime source of data collection was primary; however, the study has also used the theoretical contributions of many researchers in their respective fields. Results: The majority of the respondents were females (70%) from the age group of 20-50. The collected data were analyzed through hypothesis-testing statistical techniques such as correlation analysis, simple regression analysis, and ANOVA, which rejected the null hypothesis that there is no relation between researching consumer behavior at different stages and organizational performance. The real finding of this study is that simply focusing on the buying stage is not enough to gain profits and recognition; rather, understanding the pre-purchase, purchase and post-purchase behavior of consumers plays a huge role in organizational success. The outcomes demonstrated that an organization that attends to all three phases of research on buying behavior is able to establish a stronger brand image than its competitors. In addition, such enterprises can observe customer behavior in a considerably more efficient manner. Conclusion: The analysis of consumer behavior presented in this study is an attempt to understand the factors affecting consumer purchasing behavior. This study has revealed that those corporations are more successful which work on understanding buying behavior instead of focusing only on selling products. As a result, such organizations perform well and grow rapidly, because consumers are the ones who can make or break a company. The face-to-face interviews clearly revealed that those organizations reach the top whose consumers are satisfied not just with the product but also with the services of the company. The study does not target a particular class of audience; however, it brings benefits to the masses, and in particular to business organizations.
Keywords: consumer behavior, pre purchase, post purchase, consumer satisfaction
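To make the analysis pipeline named above more concrete, here is a minimal, hedged sketch of the kind of hypothesis testing the abstract mentions (correlation, simple regression and one-way ANOVA); the variable names and the synthetic survey scores are illustrative assumptions, not the study's actual questionnaire data.

```python
# Hedged sketch of the statistical tests named in the abstract, run on
# made-up survey scores (the real interview data are not available here).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 80  # sample size reported in the abstract

# Hypothetical 1-5 scores for research effort at each buying stage
pre_purchase = rng.integers(1, 6, n).astype(float)
purchase = rng.integers(1, 6, n).astype(float)
post_purchase = rng.integers(1, 6, n).astype(float)
# Hypothetical organizational-performance score
performance = 0.5 * pre_purchase + 0.3 * post_purchase + rng.normal(0, 1, n)

# Correlation analysis
r, p_corr = stats.pearsonr(pre_purchase, performance)
# Simple linear regression
slope, intercept, r_value, p_reg, stderr = stats.linregress(pre_purchase, performance)
# One-way ANOVA across the three stages
f_stat, p_anova = stats.f_oneway(pre_purchase, purchase, post_purchase)

print(f"correlation r={r:.2f} (p={p_corr:.3f}); "
      f"regression slope={slope:.2f} (p={p_reg:.3f}); "
      f"ANOVA F={f_stat:.2f} (p={p_anova:.3f})")
```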
Procedia PDF Downloads 112
596 Modeling of IN 738 LC Alloy Mechanical Properties Based on Microstructural Evolution Simulations for Different Heat Treatment Conditions
Authors: M. Tarik Boyraz, M. Bilge Imer
Abstract:
Conventionally cast nickel-based super alloys, such as commercial alloy IN 738 LC, are widely used in manufacturing of industrial gas turbine blades. With carefully designed microstructure and the existence of alloying elements, the blades show improved mechanical properties at high operating temperatures and corrosive environment. The aim of this work is to model and estimate these mechanical properties of IN 738 LC alloy solely based on simulations for projected heat treatment conditions or service conditions. The microstructure (size, fraction and frequency of gamma prime- γ′ and carbide phases in gamma- γ matrix, and grain size) of IN 738 LC needs to be optimized to improve the high temperature mechanical properties by heat treatment process. This process can be performed at different soaking temperature, time and cooling rates. In this work, micro-structural evolution studies were performed experimentally at various heat treatment process conditions, and these findings were used as input for further simulation studies. The operation time, soaking temperature and cooling rate provided by experimental heat treatment procedures were used as micro-structural simulation input. The results of this simulation were compared with the size, fraction and frequency of γ′ and carbide phases, and grain size provided by SEM (EDS module and mapping), EPMA (WDS module) and optical microscope for before and after heat treatment. After iterative comparison of experimental findings and simulations, an offset was determined to fit the real time and theoretical findings. Thereby, it was possible to estimate the final micro-structure without any necessity to carry out the heat treatment experiment. The output of this microstructure simulation based on heat treatment was used as input to estimate yield stress and creep properties. Yield stress was calculated mainly as a function of precipitation, solid solution and grain boundary strengthening contributors in microstructure. Creep rate was calculated as a function of stress, temperature and microstructural factors such as dislocation density, precipitate size, inter-particle spacing of precipitates. The estimated yield stress values were compared with the corresponding experimental hardness and tensile test values. The ability to determine best heat treatment conditions that achieve the desired microstructural and mechanical properties were developed for IN 738 LC based completely on simulations.Keywords: heat treatment, IN738LC, simulations, super-alloys
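The abstract above describes the yield stress as a sum of microstructural strengthening contributions. A commonly used additive form, given here as a generic textbook decomposition and not as the authors' exact model, is:

```latex
\sigma_y \;\approx\; \sigma_0 \;+\; \Delta\sigma_{ss} \;+\; \Delta\sigma_{\gamma'} \;+\; k_y\, d^{-1/2}
```

where σ₀ is the lattice friction stress, Δσ_ss the solid-solution contribution, Δσ_γ' the precipitation (γ′) contribution, and k_y d^{-1/2} the Hall-Petch grain-boundary term with grain size d; all symbols here are generic and not taken from the paper.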
Procedia PDF Downloads 248
595 Design of DNA Origami Structures Using LAMP Products as a Combined System for the Detection of Extended Spectrum B-Lactamases
Authors: Kalaumari Mayoral-Peña, Ana I. Montejano-Montelongo, Josué Reyes-Muñoz, Gonzalo A. Ortiz-Mancilla, Mayrin Rodríguez-Cruz, Víctor Hernández-Villalobos, Jesús A. Guzmán-López, Santiago García-Jacobo, Iván Licona-Vázquez, Grisel Fierros-Romero, Rosario Flores-Vallejo
Abstract:
The group B-lactamic antibiotics include some of the most frequently used small drug molecules against bacterial infections. Nevertheless, an alarming decrease in their efficacy has been reported due to the emergence of antibiotic-resistant bacteria. Infections caused by bacteria expressing extended Spectrum B-lactamases (ESBLs) are difficult to treat and account for higher morbidity and mortality rates, delayed recovery, and high economic burden. According to the Global Report on Antimicrobial Resistance Surveillance, it is estimated that mortality due to resistant bacteria will ascend to 10 million cases per year worldwide. These facts highlight the importance of developing low-cost and readily accessible detection methods of drug-resistant ESBLs bacteria to prevent their spread and promote accurate and fast diagnosis. Bacterial detection is commonly done using molecular diagnostic techniques, where PCR stands out for its high performance. However, this technique requires specialized equipment not available everywhere, is time-consuming, and has a high cost. Loop-Mediated Isothermal Amplification (LAMP) is an alternative technique that works at a constant temperature, significantly decreasing the equipment cost. It yields double-stranded DNA of several lengths with repetitions of the target DNA sequence as a product. Although positive and negative results from LAMP can be discriminated by colorimetry, fluorescence, and turbidity, there is still a large room for improvement in the point-of-care implementation. DNA origami is a technique that allows the formation of 3D nanometric structures by folding a large single-stranded DNA (scaffold) into a determined shape with the help of short DNA sequences (staples), which hybridize with the scaffold. This research aimed to generate DNA origami structures using LAMP products as scaffolds to improve the sensitivity to detect ESBLs in point-of-care diagnosis. For this study, the coding sequence of the CTM-X-15 ESBL of E. coli was used to generate the LAMP products. The set of LAMP primers were designed using PrimerExplorerV5. As a result, a target sequence of 200 nucleotides from CTM-X-15 ESBL was obtained. Afterward, eight different DNA origami structures were designed using the target sequence in the SDCadnano and analyzed with CanDo to evaluate the stability of the 3D structures. The designs were constructed minimizing the total number of staples to reduce costs and complexity for point-of-care applications. After analyzing the DNA origami designs, two structures were selected. The first one was a zig-zag flat structure, while the second one was a wall-like shape. Given the sequence repetitions in the scaffold sequence, both were able to be assembled with only 6 different staples each one, ranging between 18 to 80 nucleotides. Simulations of both structures were performed using scaffolds of different sizes yielding stable structures in all the cases. The generation of the LAMP products were tested by colorimetry and electrophoresis. The formation of the DNA structures was analyzed using electrophoresis and colorimetry. The modeling of novel detection methods through bioinformatics tools allows reliable control and prediction of results. 
To our knowledge, this is the first study that uses LAMP products and DNA origami in combination to detect ESBL-producing bacterial strains, which represents a promising methodology for point-of-care diagnosis.
Keywords: beta-lactamases, antibiotic resistance, DNA origami, isothermal amplification, LAMP technique, molecular diagnosis
Procedia PDF Downloads 222
594 Policies to Reduce the Demand and Supply of Illicit Drugs in the Latin America: 2004 to 2016
Authors: Ana Caroline Ibrahim Lino, Denise Bomtempo Birche de Carvalho
Abstract:
The background of this research is the international process of control and monitoring of illicit psychoactive substances that has commenced in the early 20th century. This process was intensified with the UN Single Convention on Narcotic Drugs of 1961 and had its culmination in the 1970s with the "War on drugs", a doctrine undertaken by the United States of America. Since then, the phenomenon of drug prohibition has been pushing debates around alternatives of public policies to confront their consequences at a global level and in the specific context of Latin America. Previous research has answered the following key questions: a) With what characteristics and models has the international illicit drug control system consolidated in Latin America with the creation of the Organization of American States (OAS) and the Inter-American Drug Abuse Control Commission (CICAD)? b) What drug policies and programs were determined as guidelines for the member states by the OAS and CICAD? The present paper mainly addresses the analysis of the drug strategies developed by the OAS/CICAD for the Americas from 2004 to 2016. The primary sources have been extracted from the OAS/CICAD documents and reports, listed on the Internet sites of these organizations. Secondary sources refer to bibliographic research on the subject with the following descriptors: illicit drugs, public policies, international organizations, OAS, CICAD, and reducing the demand and supply of illicit drugs. The "content analysis" technique was used to organize the collected material and to choose the axes of analysis. The results show that the policies, strategies, and action plans for Latin America had been focused on anti-drug actions since the creation of the Commission until 2010. The discourses and policies to reduce drug demand and supply were of great importance for solving the problem. However, the real focus was on eliminating the substances by controlling the production, marketing, and distribution of illicit drugs. Little attention was given to the users and their families. The research is of great relevance to the Social Work. The guidelines and parameters of the Social Worker's profession are in line with the need for social, ethical, and political strengthening of any dimension that guarantees the rights of users of psychoactive substances. In addition, it contributed to the understanding of the political, economic, social, and cultural factors that structure the prohibitionism, whose matrix anchors the deprivation of rights and violence.Keywords: illicit drug policies, international organizations, latin America, prohibitionism, reduce the demand and supply of illicit drugs
Procedia PDF Downloads 161
593 The First Import of Yellow Fever Cases in China and Its Revealing Suggestions for the Control and Prevention of Imported Emerging Diseases
Authors: Chao Li, Lei Zhou, Ruiqi Ren, Dan Li, Yali Wang, Daxin Ni, Zijian Feng, Qun Li
Abstract:
Background: In 2016, yellow fever was detected in China for the first time, soon after the yellow fever epidemic occurred in Angola. After the discovery, China promptly issued a national protocol for control and prevention and strengthened surveillance of passengers and vectors. In this study, a descriptive analysis was conducted to summarize China's experience of the response to this imported epidemic, in the hope of providing lessons for the prevention and control of yellow fever and other similar imported infectious diseases in the future. Methods: The imported cases were discovered and reported by the General Administration of Quality Supervision, Inspection and Quarantine (AQSIQ) and several hospitals. Each clinically diagnosed yellow fever case was confirmed by real-time reverse transcriptase polymerase chain reaction (RT-PCR). The data on the imported yellow fever cases were collected by local Centers for Disease Control and Prevention (CDC) through field investigations soon after they received the reports. Results: A total of 11 cases imported from Angola were reported in China during Angola's yellow fever outbreak. Six cases were discovered by the AQSIQ, among which two with mild symptoms declared their illness on their own initiative at the time of entry. Except for one death, the remaining 10 cases all recovered after timely and proper treatment. All cases were Chinese nationals who lived in Luanda, the capital of Angola; 73% (8/11) were retailers from Fuqing city in Fujian province, and the other three were laborers sent by companies. Ten cases had sought medical treatment in Luanda after onset, among which 8 visited the same local Chinese medicine hospital (China Railway Four Bureau Hospital). Among the 11 cases, only one had received an effective vaccination. Emergency surveillance of mosquito density showed that only 14 water containers were found positive around the residences of three cases, and the Breteau Index was 15. Conclusions: An effective response was taken to control and prevent an outbreak of yellow fever in China after the imported cases were discovered. However, although the common origin of the Chinese community in Angola provided easy access for disease detection, information sharing, health education and vaccination against yellow fever, these advantages were overlooked in previous disease prevention efforts. Besides, the fact that only one case had an effective vaccination revealed the inadequate capacity of the immunization service in China. These findings will provide suggestions to improve China's capacity to deal with not only yellow fever but also other similar imported diseases.
Keywords: yellow fever, first import, China, suggestion
Procedia PDF Downloads 187
592 Dose Profiler: A Tracking Device for Online Range Monitoring in Particle Therapy
Authors: G. Battistoni, F. Collamati, E. De Lucia, R. Faccini, C. Mancini-Terracciano, M. Marafini, I. Mattei, S. Muraro, V. Patera, A. Sarti, A. Sciubba, E. Solfaroli Camillocci, M. Toppi, G. Traini, S. M. Valle, C. Voena
Abstract:
Accelerated charged particles, mainly protons and carbon ions, are presently used in Particle Therapy (PT) to treat solid tumors. The precision of PT exploiting the charged particle high localized dose deposition in tissues and biological effectiveness in killing cancer cells demands for an online dose monitoring technique, crucial to improve the quality assurance of treatments: possible patient mis-positionings and biological changes with respect to the CT scan could negatively affect the therapy outcome. In PT the beam range confined in the irradiated target can be monitored thanks to the secondary radiation produced by the interaction of the projectiles with the patient tissue. The Dose Profiler (DP) is a novel device designed to track charged secondary particles and reconstruct their longitudinal emission distribution, correlated to the Bragg peak position. The feasibility of this approach has been demonstrated by dedicated experimental measurements. The DP has been developed in the framework of the INSIDE project, MIUR, INFN and Centro Fermi, Museo Storico della Fisica e Centro Studi e Ricerche 'E. Fermi', Roma, Italy and will be tested at the Proton Therapy center of Trento (Italy) within the end of 2017. The DP combines a tracker, made of six layers of two-view scintillating fibers with square cross section (0.5 x 0.5 mm2) with two layers of two-view scintillating bars (section 12.0 x 0.6 mm2). The electronic readout is performed by silicon photomultipliers. The sensitive area of the tracking planes is 20 x 20 cm2. To optimize the detector layout, a Monte Carlo (MC) simulation based on the FLUKA code has been developed. The complete DP geometry and the track reconstruction code have been fully implemented in the MC. In this contribution, the DP hardware will be described. The expected detector performance computed using a dedicated simulation of a 220 MeV/u carbon ion beam impinging on a PMMA target will be presented, and the result will be discussed in the standard clinical application framework. A possible procedure for real-time beam range monitoring is proposed, following the expectations in actual clinical operation.Keywords: online range monitoring, particle therapy, quality assurance, tracking detector
Procedia PDF Downloads 240
591 Method of Complex Estimation of Text Perusal and Indicators of Reading Quality in Different Types of Commercials
Authors: Victor N. Anisimov, Lyubov A. Boyko, Yazgul R. Almukhametova, Natalia V. Galkina, Alexander V. Latanov
Abstract:
Modern commercials presented on billboards, TV and on the Internet contain a lot of information about the product or service in text form. However, this information cannot always be perceived and understood by consumers. Typical sociological focus group studies often cannot reveal important features of the interpretation and understanding information that has been read in text messages. In addition, there is no reliable method to determine the degree of understanding of the information contained in a text. Only the fact of viewing a text does not mean that consumer has perceived and understood the meaning of this text. At the same time, the tools based on marketing analysis allow only to indirectly estimate the process of reading and understanding a text. Therefore, the aim of this work is to develop a valid method of recording objective indicators in real time for assessing the fact of reading and the degree of text comprehension. Psychophysiological parameters recorded during text reading can form the basis for this objective method. We studied the relationship between multimodal psychophysiological parameters and the process of text comprehension during reading using the method of correlation analysis. We used eye-tracking technology to record eye movements parameters to estimate visual attention, electroencephalography (EEG) to assess cognitive load and polygraphic indicators (skin-galvanic reaction, SGR) that reflect the emotional state of the respondent during text reading. We revealed reliable interrelations between perceiving the information and the dynamics of psychophysiological parameters during reading the text in commercials. Eye movement parameters reflected the difficulties arising in respondents during perceiving ambiguous parts of text. EEG dynamics in rate of alpha band were related with cumulative effect of cognitive load. SGR dynamics were related with emotional state of the respondent and with the meaning of text and type of commercial. EEG and polygraph parameters together also reflected the mental difficulties of respondents in understanding text and showed significant differences in cases of low and high text comprehension. We also revealed differences in psychophysiological parameters for different type of commercials (static vs. video, financial vs. cinema vs. pharmaceutics vs. mobile communication, etc.). Conclusions: Our methodology allows to perform multimodal evaluation of text perusal and the quality of text reading in commercials. In general, our results indicate the possibility of designing an integral model to estimate the comprehension of reading the commercial text in percent scale based on all noticed markers.Keywords: reading, commercials, eye movements, EEG, polygraphic indicators
Procedia PDF Downloads 166
590 Experimental Investigation of Hydrogen Addition in the Intake Air of Compressed Engines Running on Biodiesel Blend
Authors: Hendrick Maxil Zárate Rocha, Ricardo da Silva Pereira, Manoel Fernandes Martins Nogueira, Carlos R. Pereira Belchior, Maria Emilia de Lima Tostes
Abstract:
This study investigates experimentally the effects of hydrogen addition in the intake manifold of a diesel generator operating with a 7% biodiesel-diesel oil blend (B7). An experimental apparatus was set up to conduct performance and emissions tests on a single-cylinder, air-cooled diesel engine. This setup consisted of a generator set connected to a wire-wound resistor load bank that was used to vary the engine load. In addition, a flowmeter was used to determine the hydrogen volumetric flow rate, and a digital anemometer coupled with an air box was used to measure the air flow rate. Furthermore, a digital precision electronic scale was used to measure engine fuel consumption, and a gas analyzer was used to determine the exhaust gas composition and exhaust gas temperature. A thermocouple was installed near the exhaust collector to measure cylinder temperature. In-cylinder pressure was measured using an AVL Indumicro data acquisition system with a piezoelectric pressure sensor. An AVL optical encoder was installed on the crankshaft and synchronized with the in-cylinder pressure in real time. The experimental procedure consisted of injecting hydrogen into the engine intake manifold at different mass concentrations of 2, 6, 8 and 10% of the total fuel mass (B7 + hydrogen), which represented energy fractions of 5, 15, 20 and 24% of the total fuel energy, respectively. Due to the hydrogen addition, the total amount of fuel energy introduced increased, and the generator's fuel injection governor prevented any increase in engine speed. Several conclusions can be drawn from the test results. A reduction in specific fuel consumption with increasing hydrogen concentration was noted. Likewise, emissions of carbon dioxide (CO2), carbon monoxide (CO) and unburned hydrocarbons (HC) decreased as the hydrogen concentration increased. On the other hand, nitrogen oxide (NOx) emissions increased because average temperatures inside the cylinder were higher. There was also an increase in peak cylinder pressure and in the heat release rate inside the cylinder, since the fuel ignition delay was shorter due to the increased hydrogen content. All this indicates that hydrogen promotes faster combustion and higher heat release rates and can be an important additive to all kinds of fuels used in diesel generators.
Keywords: diesel engine, hydrogen, dual fuel, combustion analysis, performance, emissions
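As a quick, hedged check of the mass-to-energy conversion quoted above, the sketch below recomputes the hydrogen energy fraction for each mass fraction using assumed lower heating values (about 120 MJ/kg for hydrogen and about 42.5 MJ/kg for the B7 blend); these heating values are illustrative assumptions, not figures reported by the authors.

```python
# Hedged sketch: hydrogen energy fraction for a given hydrogen mass fraction
# in a B7 + H2 dual-fuel mixture. LHV values below are assumed, not measured.
LHV_H2 = 120.0   # MJ/kg, assumed lower heating value of hydrogen
LHV_B7 = 42.5    # MJ/kg, assumed lower heating value of the B7 blend

def energy_fraction(h2_mass_fraction: float) -> float:
    """Fraction of the total fuel energy supplied by hydrogen."""
    e_h2 = h2_mass_fraction * LHV_H2
    e_b7 = (1.0 - h2_mass_fraction) * LHV_B7
    return e_h2 / (e_h2 + e_b7)

for x in (0.02, 0.06, 0.08, 0.10):
    print(f"{x:.0%} H2 by mass -> {energy_fraction(x):.0%} of total fuel energy")
# Prints roughly 5%, 15%, 20%, 24%, matching the fractions given in the abstract.
```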
Procedia PDF Downloads 350
589 An Experimental Study on the Coupled Heat Source and Heat Sink Effects on Solid Rockets
Authors: Vinayak Malhotra, Samanyu Raina, Ajinkya Vajurkar
Abstract:
Enhancing the rocket efficiency by controlling the external factors in solid rockets motors has been an active area of research for most of the terrestrial and extra-terrestrial system operations. Appreciable work has been done, but the complexity of the problem has prevented thorough understanding due to heterogenous heat and mass transfer. On record, severe issues have surfaced amounting to irreplaceable loss of mankind, instruments, facilities, and huge amount of money being invested every year. The coupled effect of an external heat source and external heat sink is an aspect yet to be articulated in combustion. Better understanding of this coupled phenomenon will induce higher safety standards, efficient missions, reduced hazard risks, with better designing, validation, and testing. The experiment will help in understanding the coupled effect of an external heat sink and heat source on the burning process, contributing in better combustion and fire safety, which are very important for efficient and safer rocket flights and space missions. Safety is the most prevalent issue in rockets, which assisted by poor combustion efficiency, emphasizes research efforts to evolve superior rockets. This signifies real, engineering, scientific, practical, systems and applications. One potential application is Solid Rocket Motors (S.R.M). The study may help in: (i) Understanding the effect on efficiency of core engines due to the primary boosters if considered as source, (ii) Choosing suitable heat sink materials for space missions so as to vary the efficiency of the solid rocket depending on the mission, (iii) Giving an idea about how the preheating of the successive stage due to previous stage acting as a source may affect the mission. The present work governs the temperature (resultant) and thus the heat transfer which is expected to be non-linear because of heterogeneous heat and mass transfer. The study will deepen the understanding of controlled inter-energy conversions and the coupled effect of external source/sink(s) surrounding the burning fuel eventually leading to better combustion thus, better propulsion. The work is motivated by the need to have enhanced fire safety and better rocket efficiency. The specific objective of the work is to understand the coupled effect of external heat source and sink on propellant burning and to investigate the role of key controlling parameters. Results as of now indicate that there exists a singularity in the coupled effect. The dominance of the external heat sink and heat source decides the relative rocket flight in Solid Rocket Motors (S.R.M).Keywords: coupled effect, heat transfer, sink, solid rocket motors, source
Procedia PDF Downloads 223
588 Towards Dynamic Estimation of Residential Building Energy Consumption in Germany: Leveraging Machine Learning and Public Data from England and Wales
Authors: Philipp Sommer, Amgad Agoub
Abstract:
The construction sector significantly impacts global CO₂ emissions, particularly through the energy usage of residential buildings. To address this, various governments, including Germany's, are focusing on reducing emissions via sustainable refurbishment initiatives. This study examines the application of machine learning (ML) to estimate energy demands dynamically in residential buildings and enhance the potential for large-scale sustainable refurbishment. A major challenge in Germany is the lack of extensive publicly labeled datasets for energy performance, as energy performance certificates, which provide critical data on building-specific energy requirements and consumption, are not available for all buildings or require on-site inspections. Conversely, England and other countries in the European Union (EU) have rich public datasets, providing a viable alternative for analysis. This research adapts insights from these English datasets to the German context by developing a comprehensive data schema and calibration dataset capable of predicting building energy demand effectively. The study proposes a minimal feature set, determined through feature importance analysis, to optimize the ML model. Findings indicate that ML significantly improves the scalability and accuracy of energy demand forecasts, supporting more effective emissions reduction strategies in the construction industry. Integrating energy performance certificates into municipal heat planning in Germany highlights the transformative impact of data-driven approaches on environmental sustainability. The goal is to identify and utilize key features from open data sources that significantly influence energy demand, creating an efficient forecasting model. Using Extreme Gradient Boosting (XGB) and data from energy performance certificates, effective features such as building type, year of construction, living space, insulation level, and building materials were incorporated. These were supplemented by data derived from descriptions of roofs, walls, windows, and floors, integrated into three datasets. The emphasis was on features accessible via remote sensing, which, along with other correlated characteristics, greatly improved the model's accuracy. The model was further validated using SHapley Additive exPlanations (SHAP) values and aggregated feature importance, which quantified the effects of individual features on the predictions. The refined model using remote sensing data showed a coefficient of determination (R²) of 0.64 and a mean absolute error (MAE) of 4.12, indicating predictions based on efficiency class 1-100 (G-A) may deviate by 4.12 points. This R² increased to 0.84 with the inclusion of more samples, with wall type emerging as the most predictive feature. After optimizing and incorporating related features like estimated primary energy consumption, the R² score for the training and test set reached 0.94, demonstrating good generalization. The study concludes that ML models significantly improve prediction accuracy over traditional methods, illustrating the potential of ML in enhancing energy efficiency analysis and planning. This supports better decision-making for energy optimization and highlights the benefits of developing and refining data schemas using open data to bolster sustainability in the building sector. 
The study underscores the importance of supporting open data initiatives to collect similar features and support the creation of comparable models in Germany, enhancing the outlook for environmental sustainability.
Keywords: machine learning, remote sensing, residential building, energy performance certificates, data-driven, heat planning
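As a rough illustration of the modelling pipeline described in this abstract (gradient-boosted regression on certificate features, explained with SHAP values), here is a minimal, hedged sketch; the feature names and the synthetic data are placeholders, not the actual English or Welsh certificate schema or the study's dataset.

```python
# Hedged sketch of the XGB + SHAP workflow outlined in the abstract,
# using synthetic stand-ins for energy-performance-certificate features.
import numpy as np
import pandas as pd
import xgboost as xgb
import shap
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score, mean_absolute_error

rng = np.random.default_rng(42)
n = 5000
X = pd.DataFrame({
    "year_of_construction": rng.integers(1900, 2020, n),
    "living_space_m2": rng.uniform(40, 250, n),
    "wall_type": rng.integers(0, 4, n),          # encoded categorical, hypothetical
    "insulation_level": rng.integers(0, 3, n),   # encoded categorical, hypothetical
})
# Synthetic efficiency score on a 1-100 scale, loosely tied to the features
y = (40 + 0.15 * (X["year_of_construction"] - 1900) + 8 * X["insulation_level"]
     - 4 * X["wall_type"] - 0.02 * X["living_space_m2"]
     + rng.normal(0, 5, n)).clip(1, 100)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = xgb.XGBRegressor(n_estimators=300, max_depth=4, learning_rate=0.05)
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
print(f"R2={r2_score(y_te, pred):.2f}  MAE={mean_absolute_error(y_te, pred):.2f}")

# SHAP values quantify each feature's contribution to individual predictions
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_te)
print("mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))
```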
Procedia PDF Downloads 57
587 Questioning the Predominant Feminism in Ahalya, a Short Film by Sujoy Ghosh
Authors: Somya Sharma
Abstract:
Ahalya, the critically acclaimed short film, is known to demolish the gender constructs of the age old myth of Ahalya. The paper tries to crack the overt meaning of the short film by reading between the dialogues and deconstructing the idea of the pseudo feminism in the short film Ahalya by Sujoy Ghosh. The film, by subverting the role of male character by making it seem submissive as compared to the female character's role seems to be just a surface level reading of the text. It seems that Sujoy Ghosh has played not just with changing the paradigm, but also trying to alter the history by doing so. The age old myth of putting Ahalya as a part of the five virgins (panchkanya) of Hindu mythology is explored in the paper. God's manoeuvre cannot be questioned and the two male characters tend to again shape the deed and the life of the female character, Ahalya. It is of importance to note that even in the 21st century, progressive actors like Radhika Apte fail to acknowledge the politics of altering history, not in a progressive way. The film blinds the viewer in the first watch to fall for the female strength and ownership of her sexuality, which is reflected in the opening scene itself where she opens the gate for the police man Indra Sen (representing God Indra who seduced her) who is charmed by her white dress. White, in Hindu mythology, stands for mourning, and this can be a hint towards the prophecy of what is about to come. Ahalya, bold, strong, and confident in this scene seems to be in total ownership of her sexual identity. Further, as the film progresses, control of Ahalya over her acts becomes even more dominant. In the myth of Ahalya, Gautama Maharishi, her husband, who wins her by Brahma's courtesy, curses her for her infidelity. She is then turned into a stone because of the curse and is redeemed when Lord Rama's foot brushes the stone. In the film, it is with the help of Ahalya that Goutam Sadhu turns Indra Sen into a stone doll. Ahalya is seen as a seductress who bewitches Indra Sen, and because the latter falls for the trap laid by the husband wife duo, he is turned into a doll. The attempt made by the paper is to read Ahalya as a character of the stand in wife who is yet again a pawn in the play of Goutama's revenge from Indra (who in the myth is able to escape from any curse or punishment for the act). The paper, therefore, reverts the idea which has till now been signified by the film and attempts to study the feminism this film appropriates. It is essential to break down the structure formed by such overt transgressing films in order to provide a real outlook of how feminism is twisted and moulded according to a man’s wishes.Keywords: deconstructing, Hindu mythology, Panchkanya, predominant feminism, seductress, stone doll
Procedia PDF Downloads 253
586 Photovoltaic Modules Fault Diagnosis Using Low-Cost Integrated Sensors
Authors: Marjila Burhanzoi, Kenta Onohara, Tomoaki Ikegami
Abstract:
Faults in photovoltaic (PV) modules should be detected as early and as comprehensively as possible. Conventional fault detection methods such as electrical characterization, visual inspection, infrared (IR) imaging, ultraviolet fluorescence and electroluminescence (EL) imaging are used for this purpose, but they either fail to detect the location or category of the fault, or they require expensive equipment and are not convenient for on-site application. Hence, these methods are not convenient for monitoring small-scale PV systems, and low-cost, efficient inspection techniques suitable for on-site application are indispensable for PV modules. In this study, in order to establish such an efficient inspection technique, the correlation between faults and the magnetic flux density on the surface of crystalline PV modules is investigated. The magnetic flux on the surface of normal and faulted PV modules is measured under short-circuit and illuminated conditions using two different sensor devices. One device is made of small integrated sensors, namely a 9-axis motion tracking sensor with an embedded 3-axis electronic compass, an IR temperature sensor, an optical laser position sensor and a microcontroller. This device measures the X, Y and Z components of the magnetic flux density (Bx, By and Bz) a few mm above the surface of a PV module and outputs the data as line graphs in a LabVIEW program. The second device is made of a laser optical sensor and two magnetic line sensor modules consisting of 16 magnetic sensors each. This device scans the magnetic field on the surface of the PV module and outputs the data as a 3D surface plot of the magnetic flux intensity in a LabVIEW program. A PC equipped with LabVIEW software is used for data acquisition and analysis for both devices. To show the effectiveness of this method, the measured results are compared to those of a normal reference module and to the corresponding EL images. Through the experiments it was confirmed that the magnetic fields in the faulted areas have different profiles, which can be clearly identified in the measured plots. The measurement results showed a perfect correlation with the EL images, and using the position sensors the exact location of faults was identified. This method was applied to different modules, and various faults were detected using it. The proposed method offers on-site measurement and real-time diagnosis. Since simple sensors are used to make the device, it is low cost and convenient to use for small-scale or residential PV system owners.
Keywords: fault diagnosis, fault location, integrated sensors, PV modules
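A minimal, hedged sketch of the comparison step described above (flagging positions where a module's scanned flux profile deviates from a healthy reference profile) might look as follows; the threshold and the synthetic profiles are illustrative assumptions, not values from the study.

```python
# Hedged sketch: flag candidate fault locations by comparing a scanned
# magnetic-flux profile against a healthy reference profile.
import numpy as np

def flag_fault_positions(b_test, b_ref, rel_threshold=0.2):
    """Return indices where the test profile deviates from the reference
    by more than rel_threshold times the reference peak amplitude."""
    b_test = np.asarray(b_test, dtype=float)
    b_ref = np.asarray(b_ref, dtype=float)
    deviation = np.abs(b_test - b_ref)
    return np.flatnonzero(deviation > rel_threshold * np.max(np.abs(b_ref)))

# Synthetic example: a smooth reference Bz profile and a test profile
# with a local disturbance (e.g. around a damaged cell interconnect).
x = np.linspace(0.0, 1.0, 200)              # normalized scan position
b_ref = 50e-6 * np.sin(2 * np.pi * x)       # tesla, illustrative only
b_test = b_ref.copy()
b_test[90:110] += 30e-6                     # injected local anomaly

faulty = flag_fault_positions(b_test, b_ref)
print(f"suspected fault between positions {x[faulty.min()]:.2f} and {x[faulty.max()]:.2f}")
```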
Procedia PDF Downloads 224
585 Evaluation of Sequential Polymer Flooding in Multi-Layered Heterogeneous Reservoir
Authors: Panupong Lohrattanarungrot, Falan Srisuriyachai
Abstract:
Polymer flooding is a well-known technique used for controlling mobility ratio in heterogeneous reservoirs, leading to improvement of sweep efficiency as well as wellbore profile. However, low injectivity of viscous polymer solution attenuates oil recovery rate and consecutively adds extra operating cost. An attempt of this study is to improve injectivity of polymer solution while maintaining recovery factor, enhancing effectiveness of polymer flooding method. This study is performed by using reservoir simulation program to modify conventional single polymer slug into sequential polymer flooding, emphasizing on increasing of injectivity and also reduction of polymer amount. Selection of operating conditions for single slug polymer including pre-injected water, polymer concentration and polymer slug size is firstly performed for a layered-heterogeneous reservoir with Lorenz coefficient (Lk) of 0.32. A selected single slug polymer flooding scheme is modified into sequential polymer flooding with reduction of polymer concentration in two different modes: Constant polymer mass and reduction of polymer mass. Effects of Residual Resistance Factor (RRF) is also evaluated. From simulation results, it is observed that first polymer slug with the highest concentration has the main function to buffer between displacing phase and reservoir oil. Moreover, part of polymer from this slug is also sacrificed for adsorption. Reduction of polymer concentration in the following slug prevents bypassing due to unfavorable mobility ratio. At the same time, following slugs with lower viscosity can be injected easily through formation, improving injectivity of the whole process. A sequential polymer flooding with reduction of polymer mass shows great benefit by reducing total production time and amount of polymer consumed up to 10% without any downside effect. The only advantage of using constant polymer mass is slightly increment of recovery factor (up to 1.4%) while total production time is almost the same. Increasing of residual resistance factor of polymer solution yields a benefit on mobility control by reducing effective permeability to water. Nevertheless, higher adsorption results in low injectivity, extending total production time. Modifying single polymer slug into sequence of reduced polymer concentration yields major benefits on reducing production time as well as polymer mass. With certain design of polymer flooding scheme, recovery factor can even be further increased. This study shows that application of sequential polymer flooding can be certainly applied to reservoir with high value of heterogeneity since it requires nothing complex for real implementation but just a proper design of polymer slug size and concentration.Keywords: polymer flooding, sequential, heterogeneous reservoir, residual resistance factor
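For context on the mobility control discussed above, the standard definition of the water/oil mobility ratio that polymer flooding is designed to reduce is shown below; this is a textbook relation, not a formula taken from the paper.

```latex
M \;=\; \frac{\lambda_w}{\lambda_o} \;=\; \frac{k_{rw}/\mu_w}{k_{ro}/\mu_o}
```

Adding polymer raises the water-phase viscosity μ_w (and, through the residual resistance factor, lowers the effective permeability to water k_rw), which brings M toward or below unity and improves sweep efficiency.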
Procedia PDF Downloads 476584 Informed Urban Design: Minimizing Urban Heat Island Intensity via Stochastic Optimization
Authors: Luis Guilherme Resende Santos, Ido Nevat, Leslie Norford
Abstract:
The Urban Heat Island (UHI) is characterized by increased air temperatures in urban areas compared to the undeveloped rural surrounding environment. With urbanization and densification, the intensity of the UHI increases, bringing negative impacts on livability, health and the economy. In order to reduce these effects, design factors must be taken into consideration when planning future developments. Given design constraints such as population size and the availability of area for development, non-trivial decisions regarding the buildings' dimensions and their spatial distribution are required. We develop a framework for the optimization of urban design in order to jointly minimize UHI intensity and buildings' energy consumption. First, the design constraints are defined according to spatial and population limits in order to establish realistic boundaries that would apply to real-life decisions. Second, the tools Urban Weather Generator (UWG) and EnergyPlus are used to generate outputs of UHI intensity and total buildings' energy consumption, respectively. These outputs vary with a set of inputs related to urban morphology, such as building height, urban canyon width and population density. Lastly, an optimization problem is cast in which a utility function quantifies the performance of each design candidate (e.g. minimizing a linear combination of UHI intensity and energy consumption) and a set of constraints to be met is defined. Solving this optimization problem is difficult, since there is no simple analytic form that represents the UWG and EnergyPlus models. We therefore cannot use direct optimization techniques and instead develop an indirect 'black box' optimization algorithm. To this end, we develop a solution based on a stochastic optimization method known as the Cross-Entropy method (CEM). The CEM translates the deterministic optimization problem into an associated stochastic optimization problem that is simple to solve analytically. We illustrate our model on a typical residential area in Singapore. Due to fast growth in population and built-up area, and to the land made available by reclamation, urban planning decisions are of the utmost importance for the country. Furthermore, the hot and humid climate raises concern about the impact of the UHI. The problem presented is highly relevant to early urban design stages, and the objective of the framework is to guide decision-makers and assist them in including and evaluating urban microclimate and energy aspects in the urban planning process.Keywords: building energy consumption, stochastic optimization, urban design, urban heat island, urban weather generator
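A compact sketch of the Cross-Entropy loop described above, with a toy quadratic objective standing in for the UWG/EnergyPlus-based utility function; the variable bounds, sample counts and the placeholder cost are assumptions for illustration only.

```python
import numpy as np

def cem_minimize(cost, bounds, n_samples=50, n_elite=10, n_iter=30, seed=0):
    """Cross-Entropy method for black-box minimization.
    cost   : callable mapping a design vector to a scalar (e.g. a*UHI + b*energy).
    bounds : (low, high) per design variable (height, canyon width, density, ...).
    """
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, float)
    mu = bounds.mean(axis=1)                       # initial sampling mean
    sigma = (bounds[:, 1] - bounds[:, 0]) / 4.0    # initial sampling spread
    for _ in range(n_iter):
        samples = rng.normal(mu, sigma, size=(n_samples, len(mu)))
        samples = np.clip(samples, bounds[:, 0], bounds[:, 1])
        scores = np.array([cost(s) for s in samples])
        elite = samples[np.argsort(scores)[:n_elite]]      # best candidates
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mu

# Placeholder objective standing in for the simulator-based utility function
def toy_cost(x):
    building_height, canyon_width, density = x
    return (building_height - 25) ** 2 + (canyon_width - 12) ** 2 + 5 * density

best = cem_minimize(toy_cost, bounds=[(10, 80), (5, 40), (0.1, 0.9)])
print("candidate design (height, canyon width, density):", np.round(best, 2))
```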
Procedia PDF Downloads 131583 Standardized Testing of Filter Systems regarding Their Separation Efficiency in Terms of Allergenic Particles and Airborne Germs
Authors: Johannes Mertl
Abstract:
Our surrounding air contains various particles. Besides typical representatives of inorganic dust, such as soot and ash, particles originating from animals, microorganisms or plants, so-called bioaerosols, also float through the air. The group of bioaerosols comprises a broad spectrum of particles of different sizes, including fungi, bacteria, viruses, spores, and tree, flower and grass pollen, which are of high relevance for allergy sufferers. Depending on the environmental climate and the season, these allergenic particles can be found in enormous numbers in the air and are inhaled via the respiratory tract, with the potential to cause inflammatory diseases of the airways, such as asthma or allergic rhinitis. As a consequence, air filter systems in ventilation and air conditioning devices are required to meet very high standards to prevent, or at least lower, the number of allergens and airborne germs entering the indoor air. Still, filter systems are classified only by their separation rates for well-defined mineral test dust, while no sufficiently standardized test methods for bioaerosols exist. However, separation rates determined for mineral test particles of a certain size cannot simply be transferred to bioaerosols, as the separation efficiency for particularly fine and respirable particles (< 10 microns) depends not only on their shape and diameter but also on their density and physicochemical properties. For this reason, the OFI developed a test method that directly enables the testing of filters and filter media for their separation rates on bioaerosols, as well as a classification of filters. Besides allergens from intact or fractured tree or grass pollen, allergenic proteins bound to particulates, allergenic fungal spores (e.g. Cladosporium cladosporioides) or bacteria can be used to classify filters with respect to their separation rates. Allergens passing through the filter can then be detected by highly sensitive immunological assays (ELISA) or, in the case of fungal spores, by microbiological methods, which allow the detection of even a single spore passing the filter. The laboratory-scale test procedure was furthermore validated against real-life situations by upscaling to air conditioning devices, which showed great conformity in terms of separation rates. Additionally, a clinical study with allergy sufferers was performed to verify the analytical results. Several different air conditioning filters from the car industry have been tested, showing significant differences in their separation rates.Keywords: airborne germs, allergens, classification of filters, fine dust
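As a simple illustration of the separation-rate arithmetic behind such a classification, efficiency can be expressed as one minus the downstream-to-upstream ratio of the measured allergen signal; the quantities below are assumed placeholder values, not OFI test data.

```python
def separation_efficiency(upstream, downstream):
    """Fractional separation efficiency from challenge (upstream) and
    penetration (downstream) quantities, e.g. ng/m3 of allergen or spore counts."""
    if upstream <= 0:
        raise ValueError("upstream challenge must be positive")
    return 1.0 - downstream / upstream

# Illustrative filter test results (assumed values)
tests = {
    "grass pollen allergen (ELISA, ng/m3)": (120.0, 6.0),
    "Cladosporium spores (count)":          (500.0, 2.0),
}
for name, (up, down) in tests.items():
    eff = separation_efficiency(up, down)
    print(f"{name}: {eff:.1%} separated")
```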
Procedia PDF Downloads 252582 The Biological Function and Clinical Significance of Long Non-coding RNA LINC AC008063 in Head and Neck Squamous Carcinoma
Authors: Maierhaba Mijiti
Abstract:
Objective: The aim is to understand the relationship between the expression level of the long non-coding RNA LINC AC008063 and the clinicopathological parameters of patients with head and neck squamous cell carcinoma (HNSCC), and to clarify the biological function of LINC AC008063 in HNSCC cells. Moreover, the study provides a potential biomarker for the diagnosis, treatment and prognosis evaluation of HNSCC. Methods: The expression level of LINC AC008063 in HNSCC was analyzed using transcriptome sequencing data from the TCGA (The Cancer Genome Atlas) database. The expression levels of LINC AC008063 in human embryonic lung diploid cells 2BS, human immortalized keratinocytes HACAT, and the HNSCC cell lines CAL-27, Detroit562, AMC-HN-8, FD-LSC-1, FaDu and WSU-HN30 were determined by real-time quantitative PCR (qPCR). RNAi (RNA interference) was used for LINC AC008063 knockdown in HNSCC cell lines. The localization and abundance of LINC AC008063 were determined by RT-qPCR, and the biological functions were examined by CCK-8, clone formation, flow cytometry, transwell invasion and migration assays, and Seahorse assay. Results: LINC AC008063 was upregulated in HNSCC tissue (P<0.001), as verified by qPCR in HNSCC cell lines. The survival analysis revealed that the overall survival (OS) of patients in the high LINC AC008063 expression group was significantly lower than that of patients in the low expression group; the median survival times for the two groups were 33.10 months and 61.27 months, respectively (P=0.002). The clinical correlation analysis revealed that its expression was positively correlated with the age of patients with HNSCC (P<0.001) and positively correlated with pathological stage (T3+T4>T1+T2, P=0.03). The RT-qPCR results showed that LINC AC008063 was mainly enriched in the cytoplasm (P=0.01). Knockdown of LINC AC008063 inhibited proliferation, colony formation, migration and invasion, and the glycolytic capacity was significantly decreased in HNSCC cell lines (P<0.05). Conclusion: A high level of LINC AC008063 was associated with the malignant progression of HNSCC and with the promotion of proliferation, colony formation, migration and invasion; in particular, the glycolytic capacity was decreased upon its knockdown in HNSCC cells. Therefore, LINC AC008063 may serve as a potential biomarker for HNSCC and a distinct molecular target to inhibit glycolysis.Keywords: head and neck squamous cell carcinoma, oncogene, long non-coding RNA, LINC AC008063, invasion and metastasis
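A minimal sketch of the 2^-ΔΔCt calculation routinely used to convert RT-qPCR cycle-threshold values into relative expression of a transcript such as LINC AC008063 against a reference gene; the Ct values and the choice of control sample below are hypothetical, not data from this study.

```python
def relative_expression(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl):
    """2^-ddCt fold change of a target transcript versus a control sample,
    normalized to a reference (housekeeping) gene."""
    d_ct_sample = ct_target - ct_ref            # normalize target in the sample
    d_ct_control = ct_target_ctrl - ct_ref_ctrl # normalize target in the control
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: tumor cell line vs. normal keratinocyte control
fold = relative_expression(ct_target=24.1, ct_ref=17.8,            # e.g. an HNSCC line
                           ct_target_ctrl=27.5, ct_ref_ctrl=17.6)  # e.g. HACAT control
print(f"LINC AC008063 fold change vs. control: {fold:.2f}x")
```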
Procedia PDF Downloads 10581 Measuring Systems Interoperability: A Focal Point for Standardized Assessment of Regional Disaster Resilience
Authors: Joel Thomas, Alexa Squirini
Abstract:
The key argument of this research is that every element of systems interoperability is an enabler of regional disaster resilience, and arguably should become a focal point for standardized measurement of communities’ ability to work together. Few resilience research efforts have focused on the development and application of solutions that measurably improve communities’ ability to work together at a regional level, yet a majority of the most devastating and disruptive disasters are those that have had a regional impact. The key findings of the research include a unique theoretical, mathematical, and operational approach to tangibly and defensibly measure and assess systems interoperability required to support crisis information management activities performed by governments, the private sector, and humanitarian organizations. A most effective way for communities to measurably improve regional disaster resilience is through deliberately executed disaster preparedness activities. Developing interoperable crisis information management capabilities is a crosscutting preparedness activity that greatly affects a community’s readiness and ability to work together in times of crisis. Thus, improving communities’ human and technical posture to work together in advance of a crisis, with the ultimate goal of enabling information sharing to support coordination and the careful management of available resources, is a primary means by which communities may improve regional disaster resilience. This model describes how systems interoperability can be qualitatively and quantitatively assessed when characterized as five forms of capital: governance; standard operating procedures; technology; training and exercises; and usage. The unique measurement framework presented defines the relationships between systems interoperability, information sharing and safeguarding, operational coordination, community preparedness and regional disaster resilience, and offers a means by which to implement real-world solutions and measure progress over the course of a multi-year program. The model is being developed and piloted in partnership with the U.S. Department of Homeland Security (DHS) Science and Technology Directorate (S&T) and the North Atlantic Treaty Organization (NATO) Advanced Regional Civil Emergency Coordination Pilot (ARCECP) with twenty-three organizations in Bosnia and Herzegovina, Croatia, Macedonia, and Montenegro. The intended effect of the model implementation is to enable communities to answer two key questions: 'Have we measurably improved crisis information management capabilities as a result of this effort?' and, 'As a result, are we more resilient?'Keywords: disaster, interoperability, measurement, resilience
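One way to make the five-capital characterization concrete is a weighted composite score per community; the 0-4 maturity scale, the equal weights and the example ratings in the sketch below are assumptions for illustration, not the ARCECP scoring rubric.

```python
CAPITALS = ["governance", "standard operating procedures", "technology",
            "training and exercises", "usage"]

def interoperability_score(ratings, weights=None, scale_max=4):
    """Weighted composite of per-capital maturity ratings, normalized to 0-1."""
    weights = weights or {c: 1.0 for c in CAPITALS}
    total_w = sum(weights[c] for c in CAPITALS)
    return sum(weights[c] * ratings[c] for c in CAPITALS) / (total_w * scale_max)

# Hypothetical baseline and follow-up ratings for one community
baseline = {"governance": 1, "standard operating procedures": 2, "technology": 3,
            "training and exercises": 1, "usage": 1}
follow_up = {"governance": 3, "standard operating procedures": 3, "technology": 3,
             "training and exercises": 3, "usage": 2}
print(f"baseline score : {interoperability_score(baseline):.2f}")
print(f"follow-up score: {interoperability_score(follow_up):.2f}")
```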
Procedia PDF Downloads 143580 Multimodal Integration of EEG, fMRI and Positron Emission Tomography Data Using Principal Component Analysis for Prognosis in Coma Patients
Authors: Denis Jordan, Daniel Golkowski, Mathias Lukas, Katharina Merz, Caroline Mlynarcik, Max Maurer, Valentin Riedl, Stefan Foerster, Eberhard F. Kochs, Andreas Bender, Ruediger Ilg
Abstract:
Introduction: So far, clinical assessments that rely on behavioral responses to differentiate coma states or even predict outcome in coma patients are unreliable, e.g. because of some patients' motor disabilities. The present study aimed to provide prognosis in coma patients using markers from the electroencephalogram (EEG), blood oxygen level dependent (BOLD) functional magnetic resonance imaging (fMRI) and [18F]-fluorodeoxyglucose (FDG) positron emission tomography (PET). Unsupervised principal component analysis (PCA) was used for multimodal integration of the markers. Methods: With approval from the local ethics committee of the Technical University of Munich (Germany), 20 patients (aged 18-89) with severe brain damage were recruited through the intensive care units of the Klinikum rechts der Isar in Munich and the Therapiezentrum Burgau (Germany). On the day of the EEG/fMRI/PET measurement (date I), patients (<3.5 months in coma) were grouped into the minimally conscious state (MCS) or the vegetative state (VS) on the basis of their clinical presentation (coma recovery scale-revised, CRS-R). Follow-up assessment (date II) was also based on the CRS-R, 8 to 24 months after date I. At date I, 63-channel EEG (Brain Products, Gilching, Germany) was recorded outside the scanner, and subsequently simultaneous FDG-PET/fMRI was acquired on an integrated Siemens Biograph mMR 3T scanner (Siemens Healthineers, Erlangen, Germany). Power spectral densities, permutation entropy (PE) and symbolic transfer entropy (STE) were calculated in and between frontal, temporal, parietal and occipital EEG channels. PE and STE are based on symbolic time series analysis and have already been introduced as robust markers separating wakefulness from unconsciousness in EEG during general anesthesia. While PE quantifies the regularity structure of the neighboring order of signal values (a surrogate of cortical information processing), STE reflects the information transfer between two signals (a surrogate of directed connectivity in cortical networks). fMRI analysis was carried out using SPM12 (Wellcome Trust Center for Neuroimaging, University of London, UK). Functional images were realigned, segmented, normalized and smoothed. PET was acquired for 45 minutes in list mode. For absolute quantification of the brain's glucose consumption rate in FDG-PET, kinetic modelling was performed with Patlak's plot method. BOLD signal intensity in fMRI and glucose uptake in PET were calculated in 8 distinct cortical areas. PCA was performed over all markers from EEG/fMRI/PET. Prognosis (persistent VS and deceased patients vs. recovery to MCS/awake from date I to date II) was evaluated using the area under the curve (AUC), including bootstrap confidence intervals (CI, *: p<0.05). Results: Prognosis was reliably indicated by the first component of the PCA (AUC=0.99*, CI=0.92-1.00), showing a higher AUC than the best single markers (EEG: AUC<0.96*, fMRI: AUC<0.86*, PET: AUC<0.60). The CRS-R did not show predictive value (AUC=0.51, CI=0.29-0.78). Conclusion: In a multimodal analysis of EEG/fMRI/PET in coma patients, PCA led to a reliable prognosis. The impact of this result is evident, as clinical estimates of prognosis are inadequate at times and could be supported by quantitative biomarkers from EEG, fMRI and PET. Due to the small sample size, further investigations are required, in particular allowing supervised learning instead of the basic approach of unsupervised PCA.Keywords: coma states and prognosis, electroencephalogram, entropy, functional magnetic resonance imaging, machine learning, positron emission tomography, principal component analysis
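A short sketch of the permutation entropy marker used in the EEG analysis, computed from the distribution of ordinal patterns of consecutive samples; the embedding order, lag and synthetic test signals are assumed values, not the parameters used in the study.

```python
import math
from itertools import permutations
import numpy as np

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy (0..1): Shannon entropy of the ordinal
    patterns formed by `order` consecutive (lagged) samples of a 1-D signal."""
    x = np.asarray(x, float)
    counts = {p: 0 for p in permutations(range(order))}
    n = len(x) - (order - 1) * delay
    for i in range(n):
        window = x[i:i + order * delay:delay]
        counts[tuple(int(k) for k in np.argsort(window))] += 1
    freqs = np.array([c for c in counts.values() if c > 0], float)
    p = freqs / freqs.sum()
    return float(-(p * np.log2(p)).sum() / np.log2(math.factorial(order)))

rng = np.random.default_rng(1)
t = np.arange(0, 10, 1 / 250)            # 250 Hz, 10 s of synthetic signal
regular = np.sin(2 * np.pi * 10 * t)     # highly regular oscillation
irregular = rng.normal(size=t.size)      # noise-like signal
print(f"PE regular  : {permutation_entropy(regular):.3f}")
print(f"PE irregular: {permutation_entropy(irregular):.3f}")
```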
Procedia PDF Downloads 339579 Static Charge Control Plan for High-Density Electronics Centers
Authors: Clara Oliver, Oibar Martinez, Jose Miguel Miranda
Abstract:
Ensuring a safe environment for sensitive electronic boards in places with strong size limitations poses two major difficulties: the control of charge accumulation in floating floors and the prevention of excess charge generation due to air cooling flows. In this paper, we discuss these mechanisms and possible solutions to prevent them. An experiment was carried out in the control room of a Cherenkov Telescope, where six racks of 2x1x1 m size with independent cooling units are located. The room is 10x4x2.5 m, and the electronics include high-speed digitizers, trigger circuits, etc. The floor used in this room was antistatic, but it was a raised floor mounted in a floating design to facilitate cable handling and maintenance. The tests were made by measuring the contact voltage acquired by a person walking across the room wearing footwear of different qualities. In addition, we took measurements of the voltage accumulated on a person in other situations, such as running or repeatedly sitting down on and standing up from an office chair. The voltages were recorded in real time with an electrostatic voltmeter and dedicated control software. It is shown that peak voltages as high as 5 kV were measured at ambient humidity of more than 30%, which is within the range of class 3A according to the HBM standard. To complete the results, we performed the same experiment in different spaces with alternative floor types, such as synthetic and earthenware floors, obtaining peak voltages much lower than those measured on the floating synthetic floor. The grounding quality achievable with such floating floors can hardly match that typically encountered with standard floors glued directly onto a solid substrate. On the other hand, the air ventilation used to prevent overheating of the boards probably contributed significantly to the charge accumulated in the room. When assessing the quality of static charge control, it is necessary to guarantee that the tests are made under repeatable conditions. One of the major difficulties encountered during these assessments is the fact that electrostatic voltmeters might provide different values depending on humidity conditions and ground resistance quality. In addition, the use of certified antistatic footwear might mask deficiencies in the charge control. In this paper, we show how we defined protocols to guarantee that electrostatic readings are reliable. We believe that this can be helpful not only for qualifying static charge control in a laboratory but also for assessing any procedure oriented to minimizing the risk of electrostatic discharge events.Keywords: electrostatics, ESD protocols, HBM, static charge control
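A small sketch of how measured peak body voltages such as those reported above could be mapped onto HBM sensitivity classes; the voltage ladder below follows commonly cited HBM class boundaries but should be treated as an assumption and checked against the edition of the standard actually applied.

```python
# Commonly cited HBM withstand-voltage classes (assumed ladder; verify against
# the edition of the standard in use): upper bound in volts -> class label.
HBM_CLASSES = [
    (250, "Class 0"), (500, "Class 1A"), (1000, "Class 1B"),
    (2000, "Class 1C"), (4000, "Class 2"), (8000, "Class 3A"),
]

def hbm_class(peak_voltage_v):
    """Map a measured peak body voltage (V) onto an HBM class band."""
    for upper, label in HBM_CLASSES:
        if peak_voltage_v < upper:
            return label
    return "Class 3B"

for v in (300, 1500, 5000):   # 5 kV matches the peak reported for the floating floor
    print(f"{v:>5} V -> {hbm_class(v)}")
```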
Procedia PDF Downloads 129578 Simulation of Solar Assisted Absorption Cooling and Electricity Generation along with Thermal Storage
Authors: Faezeh Mosallat, Eric L. Bibeau, Tarek El Mekkawy
Abstract:
The availability of a wide variety of renewable resources in Canada, such as large reserves of hydro, biomass, solar and wind, provides significant potential to improve the sustainability of energy use. As buildings represent a considerable portion of energy use in Canada, the application of distributed solar energy systems for heating and cooling may increase the share of renewable energy. Parabolic solar trough systems have seen limited deployment in cold northern climates, as they are more suitable for electricity production at southern latitudes. Heat production by concentrating solar rays using parabolic troughs can, however, overcome the poor efficiencies of flat panels and evacuated tubes in cold climates. A numerical dynamic model is developed to simulate an installed parabolic solar trough facility in Winnipeg. The results of the numerical model are validated using experimental data obtained from this system. The model is developed in Simulink and will be utilized to simulate a tri-generation system for heating, cooling and electricity generation in remote northern communities. The main objective of this simulation is to obtain operational data for solar troughs in cold climates, as such data are lacking in the literature. In this paper, the validated Simulink model is applied to simulate a solar assisted absorption cooling system along with electricity generation using an organic Rankine cycle (ORC) and thermal storage. A control strategy is employed to distribute the heated oil from the solar collectors among the above three systems, considering their temperature requirements. The modeling provides dynamic performance results using real-time, minute-resolution meteorological data collected at the same location where the solar system is installed. This is a significant step beyond current models, as it accurately calculates the available solar energy at each time step, accounting for solar radiation fluctuations due to passing clouds. The solar absorption cooling is modeled to use the heat generated by the solar trough system and provide summer cooling for a greenhouse located next to the solar field. A natural gas water heater provides the required supplementary heat for the absorption cooling during periods of low or no solar radiation. The results of the simulation are presented for a summer month in Winnipeg and include the amount of electric power generated by the ORC and the contribution of solar energy to the cooling load provision.Keywords: absorption cooling, parabolic solar trough, remote community, validated model
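The temperature-based distribution of heated oil among the three subsystems can be pictured as a simple dispatch rule; the sketch below is not the authors' Simulink control strategy, and the threshold temperatures, flow splits and priority order are assumptions for illustration.

```python
def dispatch_heated_oil(oil_temp_c, cooling_demand_kw, storage_soc):
    """Return flow fractions (cooling, orc, storage) for the heated oil stream.
    oil_temp_c       : collector outlet temperature (deg C)
    cooling_demand_kw: current absorption-chiller heat demand
    storage_soc      : thermal storage state of charge, 0..1
    """
    ABSORPTION_MIN_T = 90    # assumed minimum driving temperature for the chiller
    ORC_MIN_T = 150          # assumed minimum temperature for efficient ORC operation

    if oil_temp_c >= ORC_MIN_T:
        if cooling_demand_kw > 0:
            return {"cooling": 0.6, "orc": 0.4, "storage": 0.0}
        if storage_soc < 0.9:
            return {"cooling": 0.0, "orc": 0.5, "storage": 0.5}
        return {"cooling": 0.0, "orc": 1.0, "storage": 0.0}
    if oil_temp_c >= ABSORPTION_MIN_T and cooling_demand_kw > 0:
        return {"cooling": 1.0, "orc": 0.0, "storage": 0.0}
    return {"cooling": 0.0, "orc": 0.0, "storage": 1.0}   # too cold: charge storage

print(dispatch_heated_oil(oil_temp_c=165, cooling_demand_kw=40, storage_soc=0.5))
```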
Procedia PDF Downloads 216577 Application of Metaverse Service to Construct Nursing Education Theory and Platform in the Post-pandemic Era
Authors: Chen-Jung Chen, Yi-Chang Chen
Abstract:
While traditional virtual reality and augmented reality allow learning with only limited movement and cannot provide a truly immersive teaching experience with the illusion of movement, the new metaverse technology, combining content creation with immersive interactive simulation, can come very close to a natural teaching situation. However, the theory behind the metaverse mixed-reality virtual classroom has not yet been explored, and it is rarely implemented in situational simulation teaching in nursing education. Therefore, in the first year, the study intends to use grounded theory, case study methods and in-depth interviews with nursing education and information experts. The interview data will be analyzed to investigate the uniqueness of metaverse development. The proposed analysis will lead to alternative theories and methods for the development of nursing education. In the second year, the study plans to integrate metaverse virtual situation simulation technology into an alternative teaching strategy in the pediatric nursing technology course and to explore how nursing students use this teaching method to construct personal skills and experience. By leveraging the unique features of distinct teaching platforms and development processes to deliver alternative teaching strategies in a nursing technology teaching environment, the aim is to increase learning achievement without compromising teaching quality or teacher-student relationships in the post-pandemic era. A descriptive and convergent mixed-methods design will be employed. Sixty third-year nursing students will be recruited to participate in the research and complete a pre-test. The students in the experimental group (N=30) agreed to participate in 4 real-time mixed virtual situation simulation courses as self-practice after class, and qualitative interviews were conducted after every 2 virtual situation courses; the control group (N=30) adopted traditional self-learning practice methods after class. Both groups of students took a post-test after the course. Data analysis will adopt descriptive statistics, paired t-tests, one-way analysis of variance, and qualitative content analysis. This study addresses key issues in the virtual reality environment for teaching and learning within the metaverse, providing valuable lessons and insights for enhancing the quality of education. The findings of this study are expected to contribute useful information for the future development of digital teaching and learning in nursing and other practice-based disciplines.Keywords: metaverse, post-pandemic era, online virtual classroom, immersive teaching
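A brief sketch of the planned pre/post statistical comparison, using a paired t-test for within-group change and an independent-samples test on the gain scores between groups; the score arrays are hypothetical placeholders generated only to show the computation, not study data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Hypothetical pre/post competence scores (0-100) for 30 students per group
pre_exp = rng.normal(62, 8, 30)
post_exp = pre_exp + rng.normal(9, 4, 30)     # assumed gain in the metaverse group
pre_ctrl = rng.normal(61, 8, 30)
post_ctrl = pre_ctrl + rng.normal(4, 4, 30)   # assumed gain in the control group

t_within, p_within = stats.ttest_rel(post_exp, pre_exp)               # paired t-test
t_between, p_between = stats.ttest_ind(post_exp - pre_exp,
                                       post_ctrl - pre_ctrl)          # gain comparison
print(f"within-group change (experimental): t={t_within:.2f}, p={p_within:.4f}")
print(f"between-group gain difference    : t={t_between:.2f}, p={p_between:.4f}")
```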
Procedia PDF Downloads 68576 Efficiency of Maritime Simulator Training in Oil Spill Response Competence Development
Authors: Antti Lanki, Justiina Halonen, Juuso Punnonen, Emmi Rantavuo
Abstract:
Marine oil spill response operations require extensive vessel maneuvering and navigation skills. At-sea oil containment and recovery include both single-vessel and multi-vessel operations. Towing long oil containment booms, several hundred meters in length, is a challenge in itself. Boom deployment and towing in multi-vessel configurations is an added challenge that requires precise coordination and control of the vessels. Efficient communication, as a prerequisite for shared situational awareness, is needed in order to execute the response task effectively. To gain and maintain adequate maritime skills, practical training is needed. Field exercises are the most effective way of learning, but the related vessel operations in particular are resource-intensive and costly. Field exercises may also be affected by environmental limitations such as high sea states or other adverse weather conditions. In Finland, the seasonal ice coverage also limits the training period to the summer season. In addition, the environmental sensitivity of the sea area restricts the use of real oil or other target substances. This paper examines whether maritime simulator training can offer a complementary method to overcome the training challenges related to field exercises. The objective is to assess the efficiency and the learning impact of simulator training, and the specific skills that can be trained most effectively in simulators. This paper provides an overview of learning results from two oil spill response pilot courses in which maritime navigational bridge simulators were used to train oil spill response authorities. The simulators were equipped with an oil spill functionality module. The courses were targeted at the coastal Fire and Rescue Services responsible for near-shore oil spill response in Finland. The competence levels of the participants were surveyed before and after the course in order to measure potential shifts in competencies due to the simulator training. In addition to the quantitative analysis, the efficiency of the simulator training is evaluated qualitatively through feedback from the participants. The results indicate that simulator training is a valid and effective method for developing marine oil spill response competencies and complements traditional field exercises. Simulator training provides a safe environment for assessing various oil containment and recovery tactics. One of the main benefits of the simulator training was found to be the immediate feedback the spill modelling software provides on oil spill behaviour as a reaction to response measures.Keywords: maritime training, oil spill response, simulation, vessel manoeuvring
Procedia PDF Downloads 172575 The Learning Loops in the Public Realm Project in South Verona: Air Quality and Noise Pollution Participatory Data Collection towards Co-Design, Planning and Construction of Mitigation Measures in Urban Areas
Authors: Massimiliano Condotta, Giovanni Borga, Chiara Scanagatta
Abstract:
Urban systems are places where the various actors involved interact and enter into conflict, in particular with reference to topics such as traffic congestion and security. Air and noise pollution are also topics of discussion, and often of conflict, because of their strong complexity. For air pollution, the complexity stems from the fact that atmospheric pollution is due to many factors, but above all from the difficulty of observing and measuring the amount of pollution in a transparent, mobile and ethereal element like air. Often the conditions perceived by the inhabitants do not coincide with the real conditions, because perception is influenced, sometimes positively and sometimes negatively, by many other factors, such as the presence or absence of natural elements like trees or rivers. The same problems are seen with noise pollution, which receives even less attention as an issue although it is just as problematic as air quality. Starting from these opposing positions, it is difficult to identify and implement mitigation solutions for urban air and noise pollution that are both valid and shared. The LOOPER (Learning Loops in the Public Realm) project, described in this paper, aims to build and test a methodology and a platform for a participatory co-design, planning and construction process within a learning loop. This approach introduces several novelties; the three most relevant are the following. The first is that citizen participation starts from the identification of problems and the air quality analysis, through participatory data collection, and continues through all process steps (design and construction). The second is that the methodology is characterized by a learning loop process. This means that after the first cycle of (1) problem identification, (2) planning and definition of design solutions and (3) construction and implementation of mitigation measures, the effectiveness of the implemented solutions is measured and verified through a new participatory data collection campaign. In this way, it is possible to understand whether the policies and design solutions had a positive impact on the territory. As a result of the learning process produced by the first loop, it will be possible to improve the design of the mitigation measures and start the second loop with new and more effective measures. The third relevant aspect is that citizen participation is carried out via Urban Living Labs that involve all stakeholders of the city (citizens, public administrators, associations of urban stakeholders, ...) and that the Urban Living Labs last for the entire cycle of the design, planning and construction process. The paper describes in detail the LOOPER methodology and the technical solutions adopted for the participatory data collection and for the design and construction phases.Keywords: air quality, co-design, learning loops, noise pollution, urban living labs
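The closing step of each learning loop, comparing participatory measurements taken before and after a mitigation measure, can be sketched as follows; the station names, pollutant values and the simple mean-difference metric are assumptions for illustration only.

```python
from statistics import mean

def loop_effectiveness(before, after):
    """Percentage change in the mean measured level (e.g. NO2 in ug/m3 or noise
    in dB) between two participatory data-collection campaigns at the same points."""
    common = sorted(set(before) & set(after))
    b = mean(before[s] for s in common)
    a = mean(after[s] for s in common)
    return common, 100.0 * (a - b) / b

# Hypothetical citizen-collected NO2 averages (ug/m3) around a redesigned street
before = {"school_gate": 48.0, "crossroad": 61.0, "park_edge": 33.0}
after = {"school_gate": 41.0, "crossroad": 52.0, "park_edge": 31.0}

stations, change = loop_effectiveness(before, after)
print(f"stations compared: {stations}")
print(f"change in mean level after the first loop: {change:+.1f}%")
```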
Procedia PDF Downloads 365