Search results for: single walled nanotube (SWNT)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4690

640 Flow Field Optimization for Proton Exchange Membrane Fuel Cells

Authors: Xiao-Dong Wang, Wei-Mon Yan

Abstract:

The flow field design of the bipolar plates affects the performance of a proton exchange membrane (PEM) fuel cell. This work adopted a combined optimization procedure, coupling a simplified conjugate-gradient method with a fully three-dimensional, two-phase, non-isothermal fuel cell model, to search for the optimal flow field design of a single serpentine fuel cell of size 9×9 mm with five channels. For the direct solution, the two-fluid method was adopted, incorporating heat effects through energy equations for the entire cell. The model assumes that the system is steady, the inlet reactants are ideal gases, the flow is laminar, and the porous layers (the gas diffusion layer, catalyst layer and PEM) are isotropic. The model includes continuity, momentum and species equations for the gaseous species; liquid water transport equations in the channels, gas diffusion layers and catalyst layers; a water transport equation in the membrane; and electron and proton transport equations. The Butler-Volmer equation was used to describe the electrochemical reactions in the catalyst layers. The cell output power density Pcell is maximized with respect to the set of channel heights, H1-H5, and channel widths, W2-W5. The basic case, with all channel heights and widths set at 1 mm, yields Pcell = 7260 W m-2. The optimal design displays a tapered height for channels 1, 3 and 4 and a diverging height for channels 2 and 5, producing Pcell = 8894 W m-2, an increase of about 22.5%. The reduced heights of channels 2-4 significantly increase the sub-rib convection, which effectively removes liquid water and enhances oxygen transport in the gas diffusion layer. The final diverging channel minimizes the leakage of fuel to the outlet via sub-rib convection from channel 4 to channel 5. Near-optimal designs that are easier to manufacture, at the cost of a small loss in cell performance, were also tested.
Using a straight final channel of 0.1 mm height leads to a 7.37% power loss, while a design with all channel widths fixed at 1 mm and the optimal channel heights obtained above yields only a 1.68% loss of current density. Under the simulation conditions studied here, the presence of a final diverging channel has a greater impact on cell performance than fine adjustment of the channel widths.
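The optimization loop described above can be sketched in a few lines. Since the real direct problem is a three-dimensional two-phase CFD model, a toy quadratic surrogate for Pcell stands in for the solver here; the target channel heights, step size, and iteration count are all hypothetical illustration values, not the paper's:

```python
import numpy as np

def power_density(h):
    # Toy surrogate for the CFD direct solver: peaks at a hypothetical
    # tapered/diverging height profile (mm) with Pcell = 8894 W/m^2.
    target = np.array([1.2, 0.6, 0.5, 0.6, 1.3])
    return 8894.0 - 1e3 * np.sum((h - target) ** 2)

def grad(f, x, eps=1e-6):
    # Central finite-difference gradient (one solver call per perturbation).
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2 * eps)
    return g

def scg_maximize(f, x0, step=2e-4, iters=200):
    # Simplified conjugate-gradient ascent with a fixed step and
    # Fletcher-Reeves conjugation of the search direction.
    x = x0.copy()
    g = grad(f, x)
    d = g.copy()
    for _ in range(iters):
        x = x + step * d
        g_new = grad(f, x)
        beta = (g_new @ g_new) / max(g @ g, 1e-12)
        d = g_new + beta * d
        g = g_new
    return x

h0 = np.ones(5)                      # basic case: all channel heights 1 mm
h_opt = scg_maximize(power_density, h0)
```

On this smooth surrogate the search recovers the assumed optimum from the uniform 1 mm starting design; in the actual procedure each objective evaluation is a full fuel-cell simulation.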

Keywords: optimization, flow field design, simplified conjugate-gradient method, serpentine flow field, sub-rib convection

Procedia PDF Downloads 271
639 Robust Electrical Segmentation for Zone Coherency Delimitation Based on Multiplex Graph Community Detection

Authors: Noureddine Henka, Sami Tazi, Mohamad Assaad

Abstract:

The electrical grid is a highly intricate system designed to transfer electricity from production areas to consumption areas. The Transmission System Operator (TSO) is responsible for ensuring the efficient distribution of electricity and maintaining the grid's safety and quality. However, the increasing integration of intermittent renewable energy sources introduces a growing level of uncertainty, which demands a faster, more responsive approach. A potential solution is electrical segmentation: creating coherence zones in which electrical disturbances mainly remain within the zone. With coherent electrical zones it becomes possible to focus solely on a sub-zone, reducing the range of possibilities and aiding in managing uncertainty. This allows faster execution of operational processes and easier learning for supervised machine learning algorithms. Electrical segmentation can serve various applications, such as electrical control, minimizing electrical losses, and ensuring voltage stability. Since the electrical grid can be modeled as a graph, where vertices represent electrical buses and edges represent electrical lines, identifying coherent electrical zones can be seen as a clustering task on graphs, generally called community detection. Nevertheless, a critical criterion for the zones is their ability to remain resilient to the electrical evolution of the grid over time. This evolution is driven by constant changes in electricity generation and consumption, which are reflected in variations of the graph structure as well as in line flow changes. One approach to creating a resilient segmentation is to design zones that are robust under various circumstances. This problem can be represented through a multiplex graph, where each layer represents a specific situation that may arise on the grid. Resilient segmentation can then be achieved by conducting community detection on this multiplex graph.
The multiplex graph is composed of multiple graphs, and all layers share the same set of vertices. Our proposal is a model that computes a unified representation by flattening all layers. This unified graph can be penalized to obtain K connected components representing the robust electrical segmentation clusters. We compare our robust segmentation to a segmentation based on a single reference situation. The robust segmentation proves its relevance by producing clusters with high intra-cluster electrical perturbation and low variance of electrical perturbation. The experiments show when, and in which contexts, robust electrical segmentation is beneficial.
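One plausible reading of the flatten-then-penalize scheme can be sketched as follows: average the edge weights across layers, then keep only the strongest flattened edges until exactly K connected components remain. The union-find cut is an illustrative stand-in for the authors' penalization, and the two six-bus layers are invented toy situations:

```python
from collections import defaultdict

def flatten_layers(layers, n):
    """Average edge weights over all layers, which share the same vertices."""
    w = defaultdict(float)
    for layer in layers:
        for u, v, wt in layer:
            w[frozenset((u, v))] += wt / len(layers)
    return w

def segment(w, n, k):
    """Add the strongest flattened edges until exactly k components remain."""
    edges = sorted(w.items(), key=lambda e: -e[1])   # strongest first
    parent = list(range(n))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]            # path halving
            x = parent[x]
        return x
    comps = n
    for pair, _ in edges:
        u, v = tuple(pair)
        ru, rv = find(u), find(v)
        if ru != rv and comps > k:
            parent[ru] = rv
            comps -= 1
    return [find(i) for i in range(n)]

# Two layers (grid situations) over 6 buses; edges are (u, v, weight),
# with weight standing in for electrical coupling strength.
layer_a = [(0, 1, 1.0), (1, 2, 1.0), (3, 4, 1.0), (4, 5, 1.0), (2, 3, 0.1)]
layer_b = [(0, 1, 1.0), (1, 2, 0.8), (3, 4, 1.0), (4, 5, 0.9), (2, 3, 0.2)]
labels = segment(flatten_layers([layer_a, layer_b], 6), 6, 2)
```

With K = 2 the weak 2-3 tie line, weak in both situations, is cut, yielding the two robust zones {0, 1, 2} and {3, 4, 5}.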

Keywords: community detection, electrical segmentation, multiplex graph, power grid

Procedia PDF Downloads 45
638 Development and Validation of a Semi-Quantitative Food Frequency Questionnaire for Use in Urban and Rural Communities of Rwanda

Authors: Phenias Nsabimana, Jérôme W. Some, Hilda Vasanthakaalam, Stefaan De Henauw, Souheila Abbeddou

Abstract:

Tools for dietary assessment in adults are limited in low- and middle-income settings. The objective of this study was to develop and validate a semi-quantitative food frequency questionnaire (FFQ) against the multiple-pass 24-hour recall tool for use in urban and rural Rwanda. A total of 212 adults (154 females and 58 males) aged 18-49, including 105 urban and 107 rural residents from the four regions of Rwanda, were recruited into the present study. The multiple-pass 24-hour recall technique was used to collect dietary data in both urban and rural areas in four rounds on different days (one weekday and one weekend day), separated by periods of three months, from November 2020 to October 2021. Details of all foods and beverages consumed over the 24-hour period preceding the interview day were collected during face-to-face interviews. A list of foods, beverages, and commonly consumed recipes was developed by the study researchers and ten research assistants from the different regions of Rwanda. Non-standard recipes were collected when the information was available. A single semi-quantitative FFQ was also developed by the same group prior to the beginning of data collection. The FFQ was administered at the beginning and the end of the data collection period. Data were collected digitally. The amounts of energy and macronutrients contributed by each food, recipe, and beverage will be computed from the nutrient composition reported in food composition tables and the weight consumed. Median energy and nutrient intakes from the FFQ and 24-hour recalls, and median differences (24-hour recall minus FFQ), will be calculated. Kappa, Spearman, Wilcoxon, and Bland-Altman plot statistics will be used to evaluate the agreement between the nutrient and energy intakes estimated by the two methods. Differences will be tested for significance, and all analyses will be done with STATA 11.
Data collection was completed in November 2021. Data cleaning is ongoing, and data analysis is expected to be completed by July 2022. The developed and validated semi-quantitative FFQ will then be available for dietary assessment. It will help researchers collect reliable data to support policy makers in planning appropriate dietary change interventions in Rwanda.
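Two of the planned agreement statistics can be sketched without statistical software: a Spearman rank correlation between the two instruments and Bland-Altman limits of agreement (bias ± 1.96 SD of the differences). The six energy intakes below are hypothetical, and this pure-Python sketch merely stands in for the STATA analysis the authors describe:

```python
import statistics

def rank(xs):
    """Rank values 1..n (no tie handling needed for this toy data)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for pos, i in enumerate(order):
        r[i] = pos + 1.0
    return r

def spearman(x, y):
    """Spearman rho = Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    mx, my = statistics.mean(rx), statistics.mean(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

def bland_altman_limits(recall, ffq):
    """Bias and 95% limits of agreement for (24-hour recall minus FFQ)."""
    diffs = [a - b for a, b in zip(recall, ffq)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical daily energy intakes (kcal) for six participants.
recall = [1850, 2100, 1600, 2400, 1950, 2200]
ffq    = [1900, 2050, 1700, 2300, 2000, 2150]
rho = spearman(recall, ffq)
bias, lo, hi = bland_altman_limits(recall, ffq)
```

For these toy values the two instruments rank participants identically (rho = 1) with zero mean bias; in the real validation the same quantities would be computed per nutrient across the 212 participants.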

Keywords: food frequency questionnaire, reproducibility, 24-H recall questionnaire, validation

Procedia PDF Downloads 111
637 Cluster-Based Exploration of System Readiness Levels: Mathematical Properties of Interfaces

Authors: Justin Fu, Thomas Mazzuchi, Shahram Sarkani

Abstract:

A key factor in technological immaturity in defense weapons acquisition is a lack of understanding of critical integrations at the subsystem and component levels. To address this shortfall, recent research combines the integration readiness level (IRL) with the technology readiness level (TRL) to form a system readiness level (SRL). The SRL can be enriched with more robust quantitative methods to provide the program manager with a useful tool before committing to major weapons acquisition programs. This research harnesses previous mathematical models based on graph theory, Petri nets, and tropical algebra, and proposes a modification of the desirable SRL mathematical properties such that a tightly integrated subsystem (one with a multitude of interfaces) can display a lower SRL than an inherently less coupled subsystem. The synthesis of these methods informs an improved decision tool for the program manager before committing to expensive technology development. The research also ties the separately developed manufacturing readiness level (MRL) into the network representation of the system and addresses shortfalls in previous frameworks, including the lack of integration weighting and the over-importance of a single extremely immature component. Tropical algebra (based on the minimum of a set of TRLs or IRLs) allows one low IRL or TRL value to diminish the SRL of the entire system, which may not reflect reality if that component is not critical or tightly coupled. Integration connections can therefore be weighted by importance, and readiness levels are converted to a cardinal scale (based on an analytic hierarchy process). The importance of an integration arc depends on the nodes it connects and on the additional integration arcs connected to those nodes. Lack of integration is represented not by zero but by a perfect integration maturity value; naturally, the importance (or weight) of such an arc is zero.
To further explore the impact of grouping subsystems, a multi-objective genetic algorithm is then used to find clusters, or communities, that can be optimized for the most representative subsystem SRL. This novel calculation is benchmarked through simulation and against past defense acquisition program data, focusing on the newly introduced Middle Tier of Acquisition (rapid fielding of prototypes). The model remains a relatively simple, accessible tool, but one of higher fidelity, validated with past data, for the program manager deciding major defense acquisition program milestones.
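The contrast between the plain tropical (min-based) SRL and an importance-weighted variant can be illustrated with a small sketch. The blending formula below, which pulls each interface toward perfect maturity (9 on the usual 1-9 scale) in proportion to its arc weight, is an invented illustration of the weighting idea, not the paper's actual formulation, and all readiness values are hypothetical:

```python
def tropical_srl(trls, irls):
    """Pure min-based (tropical) SRL: a single immature TRL or IRL
    caps the readiness of the whole system."""
    return min(min(trls), min(irls))

def weighted_srl(trls, irl_arcs):
    """Illustrative modification: each arc carries an importance weight
    in [0, 1]; a non-critical immature interface (low weight) is blended
    toward full maturity (9) and so pulls the score down less."""
    node_part = min(trls)
    arc_part = min(9 - w * (9 - irl) for irl, w in irl_arcs)
    return min(node_part, arc_part)

trls = [7, 8, 6]                      # hypothetical component TRLs
critical   = [(3, 1.0), (8, 0.9)]     # (IRL, importance): immature AND critical
peripheral = [(3, 0.1), (8, 0.9)]     # same immature IRL, but non-critical
```

Under the pure tropical rule the IRL of 3 always dominates; under the weighted variant it only dominates when its arc is critical, which is the behavior the proposed modification is meant to capture.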

Keywords: readiness, maturity, system, integration

Procedia PDF Downloads 58
636 Beware the Trolldom: Speculative Interests and Policy Implications behind the Circulation of Damage Claims

Authors: Antonio Davola

Abstract:

Moving from the evaluations made by Richard Posner in his judgment on Carhart v. Halaska, the paper analyses the so-called 'litigation troll' phenomenon and the development of a damage claims market, i.e. a market in which the right to propose claims is voluntarily exchangeable for money and can be asserted by private buyers. The aim of our study is to assess whether the implementation of a damage claims market might represent a resource for victims or whether, on the contrary, it might operate solely as a speculation tool for private investors. The analysis moves from the US experience and then focuses on the EU framework. First, the paper analyses the relation between the litigation troll phenomenon and patent troll activity: even though Posner considers these activities similar, a comparative study shows how the two practices differ significantly in their impact on the market and on consumer protection, even when examined from similar economic perspectives. The second part of the paper focuses on the main concerns specific to litigation trolling: the risk that the circulation of damage claims might spur non-meritorious litigation, and the implications of the misalignment between the victim of a tort and the actual plaintiff in court that arises from the sale of a claim. The third part of the paper focuses on the opportunities and benefits that the introduction and regulation of a claims market might offer both potential claim sellers and buyers, in order to ultimately assess whether such a solution might actually increase individuals' legal empowerment. Through the damage claims market, compensation would be granted more quickly and easily to consumers who have suffered harm: tort victims would be compensated instantly upon the sale of their claims, without any burden of proof.
Claim buyers, on the other hand, would profit from the gap between the amount a consumer would accept for an immediate refund and the compensation awarded in court. The fourth part of the paper analyses the legal legitimacy of litigation trolling in the US and EU frameworks. Even though no express provision forbids the sale of the right to pursue a claim in court, or deems such a right non-transferable, the procedural laws of individual States (especially in the EU panorama) must be taken into account. The fifth and final part of the paper draws the collected material together to evaluate whether, and through which normative solutions, litigation trolling might benefit competition, and what its overall effect on consumer protection would be.

Keywords: competition, claims, consumer's protection, litigation

Procedia PDF Downloads 211
635 Frequency Response of Complex Systems with Localized Nonlinearities

Authors: E. Menga, S. Hernandez

Abstract:

Finite element models (FEMs) are widely used to study and predict the dynamic properties of structures, and the prediction is usually much more accurate for a single component than for assemblies. For structural dynamics studies in the low and middle frequency range in particular, most complex FEMs can be seen as assemblies of linear components joined together at interfaces. From a modelling and computational point of view, these joints are localized sources of stiffness and damping and can be modelled as lumped spring/damper elements, most of the time characterized by nonlinear constitutive laws. On the other hand, most FE programs run nonlinear analyses in the time domain and treat the whole structure as nonlinear even if there is only one nonlinear degree of freedom (DOF) among thousands of linear ones, making the analysis unnecessarily expensive computationally. In this work, a methodology is presented for obtaining the nonlinear frequency response of structures whose nonlinearities can be considered localized sources. The work extends the well-known Structural Dynamic Modification Method (SDMM) to a nonlinear set of modifications, obtaining the nonlinear frequency response functions (NLFRFs) through an 'updating' process of the linear frequency response functions (LFRFs). A brief summary of the analytical concepts is given, starting from the linear formulation and examining the implications of the nonlinear one. The response of the system is formulated in both the time and frequency domains. First, the modal database is extracted and the linear response is calculated. Second, the nonlinear response is obtained through the nonlinear SDMM by updating the underlying linear behavior of the system. The methodology, implemented in MATLAB, has been successfully applied to estimate the nonlinear frequency response of two systems.
The first is a two-DOF spring-mass-damper system, and the second example considers a full aircraft FE model. Despite the different levels of complexity, both examples show the reliability and effectiveness of the method. The results highlight a feasible and robust procedure that allows a quick estimation of the effect of localized nonlinearities on the dynamic behavior. The method is particularly powerful when most of the FE model can be considered to act linearly and the nonlinear behavior is restricted to a few degrees of freedom. The procedure is very attractive computationally because the FEM needs to be run only once, which allows faster nonlinear sensitivity analyses and easier implementation of optimization procedures for the calibration of nonlinear models.
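The core of the updating idea, modifying an already-computed FRF for a localized stiffness change without re-solving the full model, can be sketched on a two-DOF chain. This shows only the linear SDMM step via the Sherman-Morrison identity; the nonlinear extension would iterate such updates with an amplitude-dependent modification. The mass, stiffness, and damping values are arbitrary illustration numbers:

```python
import numpy as np

# Two-DOF spring-mass-damper chain: M x'' + C x' + K x = f.
M = np.diag([1.0, 1.0])
K = np.array([[2000.0, -1000.0],
              [-1000.0,  1000.0]])
C = 0.002 * K                         # light proportional damping

def frf(omega, K=K):
    """Receptance matrix H(w) = (K - w^2 M + i w C)^-1."""
    return np.linalg.inv(K - omega**2 * M + 1j * omega * C)

def sdm_update(H, dk, dof):
    """SDMM-style update: add a grounded spring dk at one DOF using the
    rank-one Sherman-Morrison identity instead of re-inverting the model."""
    col = H[:, [dof]]
    return H - (dk / (1.0 + dk * H[dof, dof])) * (col @ H[[dof], :])

omega = 25.0
H = frf(omega)                                  # underlying linear FRF
H_fast = sdm_update(H, 500.0, dof=1)            # cheap SDMM update
H_ref = frf(omega, K + np.diag([0.0, 500.0]))   # reference: full re-solve
```

The updated FRF matches the full re-solve to machine precision, which is why the underlying model only needs to be run once.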

Keywords: frequency response, nonlinear dynamics, structural dynamic modification, softening effect, rubber

Procedia PDF Downloads 242
634 Consensus Reaching Process and False Consensus Effect in a Problem of Portfolio Selection

Authors: Viviana Ventre, Giacomo Di Tollo, Roberta Martino

Abstract:

The portfolio selection problem involves evaluating many criteria that are difficult to compare directly and is characterized by uncertain elements. It can be modeled as a group decision problem in which several experts are invited to present their assessments. In this context, it is important to study and analyze the process of reaching consensus among group members: given the many differences among experts, consensus is not always simple or easily achievable. Moreover, the concept of consensus is accompanied by the concept of false consensus, which is particularly interesting in the dynamics of group decision-making processes. False consensus can alter the evaluation and selection of alternatives and is the consequence of a decision maker's inability to recognize that his preferences are conditioned by subjective structures. The present work investigates the dynamics of consensus attainment in a group decision problem in which equivalent portfolios are proposed. In particular, the study analyzes the impact of the decision maker's subjective structure during the evaluation and selection of alternatives. The experimental framework is divided into three phases. In the first phase, experts evaluate the characteristics of all portfolios individually, without peer comparison, arriving independently at a preferred portfolio. The experts' evaluations are used to construct individual Analytic Hierarchy Processes that define the weight each expert gives to every criterion with respect to the proposed alternatives. This step provides insight into how the decision maker's process develops, step by step, from goal analysis to alternative selection. The second phase describes the decision maker's state through Markov chains.
The individual weights obtained in the first phase can be reinterpreted as transition weights from one state to another. Thus, by constructing individual transition matrices, the possible next state of each expert is determined from the individual weights at the end of the first phase. Finally, the experts meet, and the consensus reaching process is analyzed by considering each individual state obtained at the previous stage together with the false consensus bias. The work contributes to the study of subjective structures, quantified through the Analytic Hierarchy Process, and of how they combine with the false consensus bias in group decision-making dynamics and in the consensus reaching process for problems involving the selection of equivalent portfolios.
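The two building blocks of the framework, an AHP priority vector and its reuse as Markov transition weights, can be sketched as follows. The pairwise comparison matrix is invented, and the row geometric-mean approximation stands in for the full principal-eigenvector AHP; building every state's transition row from the same weight vector is likewise a deliberate toy simplification:

```python
import numpy as np

def ahp_weights(pairwise):
    """AHP priority vector via the row geometric-mean approximation."""
    gm = np.prod(pairwise, axis=1) ** (1.0 / pairwise.shape[1])
    return gm / gm.sum()

# Hypothetical pairwise comparisons of three equivalent portfolios
# by a single expert (Saaty-style 1-9 judgments).
A = np.array([[1.0,   3.0, 5.0],
              [1/3.0, 1.0, 2.0],
              [1/5.0, 1/2.0, 1.0]])
w = ahp_weights(A)

# Phase two: read the weights as transition probabilities between
# preference states (here, the same row for every state as a toy chain).
P = np.vstack([w, w, w])
state = np.array([1.0, 0.0, 0.0])     # expert currently prefers portfolio 0
next_state = state @ P                # distribution over the next state
```

The priority vector sums to one and orders the portfolios consistently with the judgments, and propagating the current state through the chain yields the expert's likely next preference distribution.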

Keywords: analytical hierarchical process, consensus building, false consensus effect, markov chains, portfolio selection problem

Procedia PDF Downloads 68
633 Affect Associations Analysis in Emergency Situations

Authors: Joanna Grzybowska, Magdalena Igras, Mariusz Ziółko

Abstract:

Association rule learning is an approach for discovering interesting relationships in large databases. The analysis of relations invisible at first glance is a source of new knowledge that can subsequently be used for prediction. We used this data mining technique (an automatic and objective method) to learn about interesting associations between affects in a corpus of emergency phone calls, and we attempted to match the revealed rules with their possible situational context. The corpus was collected and subjectively annotated by two researchers. Each of the 3306 recordings contains information on emotion: (1) type (sadness, weariness, anxiety, surprise, stress, anger, frustration, calm, relief, compassion, contentment, amusement, joy), (2) valence (negative, neutral, or positive), and (3) intensity (low, typical, alternating, high). Additional information that provides clues to the speaker's emotional state was also annotated: speech rate (slow, normal, fast), characteristic vocabulary (filled pauses, repeated words), and conversation style (normal, chaotic). Exponentially many rules can be extracted from a set of items (an item is a single piece of previously annotated information). To generate rules of the form X → Y (where X and Y are frequent k-itemsets), the Apriori algorithm was used, as it avoids needless computations. Two basic measures (support and confidence) and several additional symmetric and asymmetric objective measures (e.g. Laplace, conviction, interest factor, cosine, correlation coefficient) were then calculated for each rule. Each applied interestingness measure revealed different rules, and we selected the top rules for each measure. Owing to the specificity of the corpus (emergency situations), most of the strong rules contain only negative emotions, though there are also strong rules that include neutral or even positive emotions.
Three examples of the strongest rules are: {sadness} → {anxiety}; {sadness, weariness, stress, frustration} → {anger}; and {compassion} → {sadness}. Association rule learning revealed the strongest configurations of affects (as well as configurations of affects with affect-related information) in our emergency phone call corpus. The acquired knowledge can be used to predict and fill in the emotional profile of a new caller. Furthermore, analysis of a rule's possible context may offer a clue to the situation a caller is in.
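The support/confidence machinery behind such rules can be sketched with a minimal Apriori over invented toy annotations (six calls, affect labels only); a production implementation would add full candidate pruning, and the real corpus has 3306 richly annotated recordings:

```python
from itertools import combinations

def apriori_rules(transactions, min_support=0.3, min_conf=0.6):
    """Tiny Apriori: level-wise frequent itemsets, then X -> Y rules."""
    n = len(transactions)
    def support(itemset):
        return sum(itemset <= t for t in transactions) / n
    items = {i for t in transactions for i in t}
    frequent = []
    level = [frozenset([i]) for i in items
             if support(frozenset([i])) >= min_support]
    while level:
        frequent += level
        # Join step: grow candidates by one item, keep the frequent ones.
        cands = {a | b for a in level for b in level
                 if len(a | b) == len(a) + 1}
        level = [c for c in cands if support(c) >= min_support]
    rules = []
    for fs in frequent:
        for r in range(1, len(fs)):
            for lhs in map(frozenset, combinations(fs, r)):
                conf = support(fs) / support(lhs)
                if conf >= min_conf:
                    rules.append((set(lhs), set(fs - lhs), support(fs), conf))
    return rules

calls = [  # hypothetical per-call affect annotations
    {"sadness", "anxiety"}, {"sadness", "anxiety", "stress"},
    {"anger", "frustration"}, {"sadness", "anxiety"},
    {"compassion", "sadness"}, {"anger", "stress"},
]
rules = apriori_rules(calls)
```

On this toy corpus the miner recovers, among others, {sadness} → {anxiety} (confidence 0.75) and {anxiety} → {sadness} (confidence 1.0), mirroring the form of the rules reported above.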

Keywords: data mining, emergency phone calls, emotional profiles, rules

Procedia PDF Downloads 385
632 Deep Reinforcement Learning Approach for Trading Automation in the Stock Market

Authors: Taylan Kabbani, Ekrem Duman

Abstract:

The design of adaptive systems that take advantage of financial markets while reducing risk can bring more stagnant wealth into the global market. However, most efforts to generate successful trades in financial assets rely on supervised learning (SL), which suffers from various limitations. Deep reinforcement learning (DRL) addresses these drawbacks of SL approaches by combining the price "prediction" step and the portfolio "allocation" step in one unified process, producing fully autonomous systems capable of interacting with their environment to make optimal decisions through trial and error. In this paper, a continuous action space approach is adopted to give the trading agent the ability to gradually adjust the portfolio's positions at each time step (dynamically re-allocating investments), resulting in better agent-environment interaction and faster convergence of the learning process. In addition, the approach supports managing a portfolio with several assets instead of a single one. This work presents a novel DRL model for generating profitable trades in the stock market, effectively overcoming the limitations of supervised learning approaches. We formulate the trading problem, i.e. the agent-environment interaction, as a partially observable Markov decision process (POMDP), considering the constraints imposed by the stock market, such as liquidity and transaction costs. More specifically, we design an environment that simulates the real-world trading process by augmenting the state representation with ten different technical indicators and sentiment analysis of news articles for each stock. We then solve the formulated POMDP problem using the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, which can learn policies in high-dimensional and continuous action spaces like those typically found in the stock market environment.
From the standpoint of stock market forecasting and intelligent decision-making, this paper demonstrates the superiority of deep reinforcement learning in financial markets over other types of machine learning, such as supervised learning, and establishes its credibility and advantages for strategic decision-making.
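The environment side of such a formulation can be sketched as a minimal gym-style class: the observation stacks prices with indicator/sentiment features, and the continuous action re-allocates portfolio weights each step, with a turnover penalty for transaction costs. Everything here is a toy stand-in, random features replace the ten technical indicators and the news sentiment, and no TD3 agent is shown:

```python
import numpy as np

class ToyTradingEnv:
    """Minimal sketch of a POMDP-style trading environment with a
    continuous action that re-allocates portfolio weights each step."""
    def __init__(self, prices, cost=0.001, seed=0):
        self.prices = np.asarray(prices, float)   # shape (T, n_assets)
        self.cost = cost                          # transaction cost rate
        self.rng = np.random.default_rng(seed)

    def reset(self):
        self.t = 0
        n = self.prices.shape[1]
        self.weights = np.ones(n) / n             # start equal-weighted
        return self._obs()

    def _obs(self):
        # Placeholder features standing in for indicators and sentiment.
        features = self.rng.normal(size=self.prices.shape[1])
        return np.concatenate([self.prices[self.t], features, self.weights])

    def step(self, action):
        new_w = np.exp(action) / np.exp(action).sum()  # softmax -> valid weights
        turnover = np.abs(new_w - self.weights).sum()
        self.t += 1
        ret = self.prices[self.t] / self.prices[self.t - 1] - 1.0
        reward = float(new_w @ ret - self.cost * turnover)  # net of costs
        self.weights = new_w
        done = self.t == len(self.prices) - 1
        return self._obs(), reward, done

prices = [[100, 50], [101, 49], [103, 48]]
env = ToyTradingEnv(prices)
obs = env.reset()
obs, reward, done = env.step(np.array([2.0, -2.0]))  # tilt toward asset 0
```

A TD3 agent would observe `obs`, emit the continuous `action`, and learn from the reward signal; tilting toward the rising asset here earns a positive one-step reward net of the turnover cost.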

Keywords: the stock market, deep reinforcement learning, MDP, twin delayed deep deterministic policy gradient, sentiment analysis, technical indicators, autonomous agent

Procedia PDF Downloads 149
631 Tuning of Indirect Exchange Coupling in FePt/Al₂O₃/Fe₃Pt System

Authors: Rajan Goyal, S. Lamba, S. Annapoorni

Abstract:

An indirect exchange coupled system consists of two ferromagnetic layers separated by a non-magnetic spacer layer. The exchange coupling may be either ferromagnetic or antiferromagnetic, depending on the thickness of the spacer layer. In the present work, the strength of the exchange coupling in FePt/Al₂O₃/Fe₃Pt has been investigated by varying the thickness of the Al₂O₃ spacer layer. The FePt/Al₂O₃/Fe₃Pt trilayer structure was fabricated on a Si <100> single crystal substrate using sputtering. The thicknesses of FePt and Fe₃Pt were fixed at 60 nm and 2 nm, respectively, while the thickness of the Al₂O₃ spacer layer was varied from 0 to 16 nm. Normalized hysteresis loops recorded at room temperature, in both the in-plane and out-of-plane configurations, reveal that the easy axis lies in the plane of the film. The hysteresis loop for ts = 0 nm does not exhibit any knee around H = 0, indicating that the hard FePt layer and the soft Fe₃Pt layer are strongly exchange coupled. However, inserting an Al₂O₃ spacer layer of thickness ts = 0.7 nm results in the appearance of a minor knee around H = 0, suggesting a weakening of the exchange coupling between FePt and Fe₃Pt. The disappearance of the knee with further increase of the spacer layer thickness, up to 8 nm, suggests the co-existence of ferromagnetic (FM) and antiferromagnetic (AFM) exchange interactions between FePt and Fe₃Pt. In addition, the out-of-plane hysteresis loop shows an asymmetry around H = 0: the exchange field Hex = (Hc↑ - Hc↓)/2, where Hc↑ and Hc↓ are the coercivities estimated from the lower and upper branches of the hysteresis loop, increases from ~150 Oe to ~700 Oe. This behavior may be attributed to uncompensated moments at the interfaces of the hard FePt layer and the soft Fe₃Pt layer. Further insight into the variation of the indirect exchange coupling has been obtained using recoil curves.
Almost closed recoil curves are obtained for ts = 0 nm up to a reverse field of ~5 kOe. On the other hand, the appearance of appreciably open recoil curves at a lower reverse field of ~4 kOe for ts = 0.7 nm indicates that the uncoupled soft phase undergoes irreversible magnetization reversal at lower reverse fields, again suggesting a weakening of the exchange coupling. The openness of the recoil curves decreases as the spacer layer thickness increases up to 8 nm. This behavior may be attributed to the competition between the FM and AFM exchange interactions: the FM coupling between FePt and Fe₃Pt, due to the porous nature of Al₂O₃, decreases much more slowly than the weak AFM coupling arising from the interaction between Fe ions of FePt and Fe₃Pt via the O ions of Al₂O₃. The hysteresis loops have been simulated using Monte Carlo simulations based on the Metropolis algorithm to investigate the variation in the strength of the exchange coupling in the FePt/Al₂O₃/Fe₃Pt trilayer system.
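The flavor of such a Metropolis hysteresis simulation can be conveyed with a deliberately simple stand-in: a 1-D ring of Ising-like spins swept down and back up in field, which already produces an open loop at low temperature. The chain geometry, coupling, temperature, and sweep schedule are all toy choices unrelated to the actual trilayer model:

```python
import math
import random

def metropolis_sweep(spins, h, j=1.0, temp=0.2, rng=random):
    """One Metropolis sweep over a 1-D ring of Ising-like spins in field h."""
    n = len(spins)
    for _ in range(n):
        i = rng.randrange(n)
        nb = spins[(i - 1) % n] + spins[(i + 1) % n]
        dE = 2.0 * spins[i] * (j * nb + h)    # energy cost of flipping spin i
        if dE <= 0 or rng.random() < math.exp(-dE / temp):
            spins[i] = -spins[i]
    return sum(spins) / n

rng = random.Random(42)
spins = [rng.choice((-1, 1)) for _ in range(200)]
fields = [3.0 - 0.25 * k for k in range(25)]   # +3 -> -3 in 0.25 steps
branch_down, branch_up = [], []
for h in fields:                                # decreasing-field branch
    for _ in range(20):
        m = metropolis_sweep(spins, h, rng=rng)
    branch_down.append(m)
for h in reversed(fields):                      # increasing-field branch
    for _ in range(20):
        m = metropolis_sweep(spins, h, rng=rng)
    branch_up.append(m)
loop_opening = sum(d - u for d, u in zip(branch_down, branch_up[::-1]))
```

The magnetization saturates at both field extremes while the two branches stay apart near zero field (a nonzero `loop_opening`); the actual study would replace the chain with coupled hard/soft layers and the competing FM/AFM interlayer terms.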

Keywords: indirect exchange coupling, MH loop, Monte Carlo simulation, recoil curve

Procedia PDF Downloads 165
630 Investigation of Clusters of MRSA Cases in a Hospital in Western Kenya

Authors: Lillian Musila, Valerie Oundo, Daniel Erwin, Willie Sang

Abstract:

Staphylococcus aureus infections are a major cause of nosocomial infections in Kenya. Methicillin-resistant S. aureus (MRSA) infections are a significant burden on public health and are associated with considerable morbidity and mortality. At a hospital in Western Kenya, two clusters of MRSA cases emerged within short periods of time. In this study, we explored whether these clusters represented a nosocomial outbreak by characterizing the isolates using phenotypic and molecular assays and by examining epidemiological data to identify possible transmission patterns. Specimens from the subjects' sites of infection were collected and cultured, and S. aureus isolates were identified phenotypically and confirmed by APIStaph™. MRSA were identified by cefoxitin disk screening per CLSI guidelines and further characterized by their antibiotic susceptibility patterns and spa gene typing. Characteristics of cases with MRSA isolates were compared with those of cases with MSSA isolated around the same time. Two cases of MRSA infection were identified in the two-week period between 21 April and 4 May 2015; a further two MRSA isolates were identified on the same day, 7 September 2015. The antibiotic resistance patterns of the two MRSA isolates in the first cluster differed, suggesting that these were distinct isolates. One isolate had spa type t2029 and the other a novel spa type; the two isolates were obtained from urine and an open skin wound. In the second cluster, the antibiotic susceptibility patterns were similar, but the isolates had different spa types: one was t037 and the other a novel spa type distinct from the novel MRSA spa type in the first cluster. Both cases in the second cluster were admitted to the hospital, but one infection was community-acquired and the other hospital-acquired. Only one of the four MRSA cases was classified as a healthcare-associated infection, acquired post-operatively. When compared to other S. aureus strains isolated within the same period from the same hospital, only one spa type, t2029, was found in both MRSA and non-MRSA strains. None of the cases infected with MRSA in the two clusters shared any common epidemiological characteristic such as age, sex, or known risk factors for MRSA such as prolonged hospitalization or institutionalization. These data suggest that the observed MRSA clusters were multi-strain clusters and not outbreaks of a single strain. There was no clear relationship between the isolates by spa type, suggesting that no transmission was occurring within the hospital between these cluster cases, but rather that the majority of the MRSA strains were circulating in the community. There was high diversity of spa types among the MRSA strains, with none of the isolates sharing spa types. Identification of disease clusters in space and time is critical for immediate infection control action and patient management; spa gene typing is a rapid way of confirming or ruling out MRSA outbreaks so that costly interventions are applied only when necessary.

Keywords: cluster, Kenya, MRSA, spa typing

Procedia PDF Downloads 290
629 Reducing Pressure Drop in Microscale Channel Using Constructal Theory

Authors: K. X. Cheng, A. L. Goh, K. T. Ooi

Abstract:

The effectiveness of microchannels in enhancing heat transfer has been demonstrated in the semiconductor industry. In order to tap the microscale heat transfer effects into macro geometries, overcoming the cost and technological constraints, microscale passages were created in macro geometries machined using conventional fabrication methods. A cylindrical insert was placed within a pipe, and geometrical profiles were created on the outer surface of the insert to enhance heat transfer under steady-state single-phase liquid flow conditions. However, while heat transfer coefficient values of above 10 kW/m²·K were achieved, the heat transfer enhancement was accompanied by undesirable pressure drop increment. Therefore, this study aims to address the high pressure drop issue using Constructal theory, a universal design law for both animate and inanimate systems. Two designs based on Constructal theory were developed to study the effectiveness of Constructal features in reducing the pressure drop increment as compared to parallel channels, which are commonly found in microchannel fabrication. The hydrodynamic and heat transfer performance for the Tree insert and Constructal fin (Cfin) insert were studied using experimental methods, and the underlying mechanisms were substantiated by numerical results. In technical terms, the objective is to achieve at least comparable increment in both heat transfer coefficient and pressure drop, if not higher increment in the former parameter. Results show that the Tree insert improved the heat transfer performance by more than 16 percent at low flow rates, as compared to the Tree-parallel insert. However, the heat transfer enhancement reduced to less than 5 percent at high Reynolds numbers. On the other hand, the pressure drop increment stayed almost constant at 20 percent. This suggests that the Tree insert has better heat transfer performance in the low Reynolds number region.
More importantly, the Cfin insert displayed improved heat transfer performance along with favourable hydrodynamic performance, as compared to the Cfin-parallel insert, at all flow rates in this study. At 2 L/min, the enhancement of heat transfer was more than 30 percent, with 20 percent pressure drop increment, as compared to the Cfin-parallel insert. Furthermore, comparable increment in both heat transfer coefficient and pressure drop was observed at 8 L/min. In other words, the Cfin insert successfully achieved the objective of this study. Analysis of the results suggests that bifurcation of flows is effective in reducing the increment in pressure drop relative to heat transfer enhancement. Optimising the geometries of the Constructal fins is therefore a promising future study for achieving greater strides in energy efficiency at much lower cost.

Keywords: constructal theory, enhanced heat transfer, microchannel, pressure drop

Procedia PDF Downloads 304
628 Automatic Identification of Aquatic Insects Based on Deep Learning and Computer Vision

Authors: Predrag Simović, Katarina Stojanović, Milena Radenković, Dimitrija Savić Zdravković, Aleksandar Milosavljević, Bratislav Predić, Milenka Božanić, Ana Petrović, Djuradj Milošević

Abstract:

Mayflies (Ephemeroptera), stoneflies (Plecoptera), and caddisflies (Trichoptera) (collectively referred to as EPT) are key participants in most freshwater habitats and often exhibit high diversity. Moreover, their presence and relative abundance are used in freshwater ecological and biomonitoring studies. Current methods for freshwater ecosystem biomonitoring follow a traditional approach of taxa monitoring based on morphological characters, which is time-consuming and often generates data sets with low taxonomic resolution and unverifiable identification precision. To assist in solving identification problems and contribute to the knowledge of the distribution of many species, there was a need to develop alternative approaches to macroinvertebrate sample identification. Here, we establish an automatic machine-based identification approach for EPT taxa (Insecta) using deep Convolutional Neural Networks (CNNs) and computer vision to increase the efficiency and taxonomic resolution in biomonitoring. A total of 5,550 specimens were collected from freshwater ecosystems of Serbia, and the deep model was built upon 90 EPT taxa. The protocol for obtaining images included the following stages: taxonomic identification by human experts and DNA barcoding validation, mounting the larvae, and photographing the dorsal side using a stereomicroscope and camera (16,650 individuals). The most informative image regions (the dorsal segments of individuals) for the decision-making process in the deep learning model were visualized using Gradient-Weighted Class Activation Mapping (Grad-CAM). After training the artificial neural network, a CNN model was built that was able to classify the 90 EPT taxa into their respective taxonomic categories automatically with 98.7% accuracy. Our approach offers a straightforward and efficient solution for routine monitoring programs, focusing on key biotic descriptors, such as EPT taxa.
In addition, this application provides a streamlined solution that not only saves time, reduces equipment and expert requirements but also significantly enhances reliability and information content. The identification of the EPT larvae is difficult because of the variation of morphological features even within a single genus or the close resemblance of several species, and therefore, future research should focus on increasing the number of entities (species) in the model.
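The Grad-CAM visualization mentioned above weights each feature map of the last convolutional layer by the global-average-pooled gradient of the class score and applies a ReLU to the weighted sum. A minimal NumPy sketch of that computation; the array shapes and toy data are illustrative, not taken from the study's model:

```python
import numpy as np

def grad_cam(feature_maps: np.ndarray, gradients: np.ndarray) -> np.ndarray:
    """Gradient-Weighted Class Activation Mapping.

    feature_maps: (K, H, W) activations of the last conv layer.
    gradients:    (K, H, W) gradients of the class score w.r.t. those maps.
    Returns an (H, W) heat map highlighting the image regions that drove
    the classification decision.
    """
    # Channel weights: global-average-pool the gradients over each map.
    weights = gradients.mean(axis=(1, 2))              # (K,)
    # Weighted sum of the feature maps, then ReLU.
    cam = np.tensordot(weights, feature_maps, axes=1)  # (H, W)
    cam = np.maximum(cam, 0.0)
    # Normalise to [0, 1] for visualisation (skip if the map is all zero).
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# Toy example: 4 feature maps of size 8x8 with random activations/gradients.
rng = np.random.default_rng(0)
A = rng.random((4, 8, 8))
G = rng.standard_normal((4, 8, 8))
heatmap = grad_cam(A, G)
print(heatmap.shape)  # (8, 8)
```

In practice the heat map is upsampled to the input image size and overlaid on the photograph, which is how the informative dorsal segments can be identified.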

Keywords: convolutional neural networks, DNA barcoding, EPT taxa, biomonitoring

Procedia PDF Downloads 48
627 A Dynamic Cardiac Single Photon Emission Computed Tomography Using a Conventional Gamma Camera to Estimate Coronary Flow Reserve

Authors: Maria Sciammarella, Uttam M. Shrestha, Youngho Seo, Grant T. Gullberg, Elias H. Botvinick

Abstract:

Background: Myocardial perfusion imaging (MPI) is typically performed with static imaging protocols and visually assessed for perfusion defects based on the relative intensity distribution. Dynamic cardiac SPECT, on the other hand, is a new imaging technique that is based on time-varying information of radiotracer distribution, which permits quantification of myocardial blood flow (MBF). In this abstract, we report progress and the current status of dynamic cardiac SPECT using a conventional gamma camera (Infinia Hawkeye 4, GE Healthcare) for estimation of myocardial blood flow and coronary flow reserve. Methods: A group of patients at high risk of coronary artery disease was enrolled to evaluate our methodology. A low-dose/high-dose rest/pharmacologic-induced-stress protocol was implemented. A standard rest and a standard stress radionuclide dose of ⁹⁹ᵐTc-tetrofosmin (140 keV) was administered. The dynamic SPECT data for each patient were reconstructed using the standard 4-dimensional maximum likelihood expectation maximization (ML-EM) algorithm. Acquired data were used to estimate the myocardial blood flow (MBF). The correspondence between flow values in the main coronary vasculature and myocardial segments defined by the standardized myocardial segmentation and nomenclature was derived. The coronary flow reserve, CFR, was defined as the ratio of stress to rest MBF values. CFR values estimated with SPECT were also validated with dynamic PET. Results: The range of territorial MBF in LAD, RCA, and LCX was 0.44 ml/min/g to 3.81 ml/min/g. The MBF values estimated with PET and SPECT in an independent cohort of 7 patients showed a statistically significant correlation, r = 0.71 (p < 0.001), while the corresponding CFR correlation was moderate, r = 0.39, yet statistically significant (p = 0.037). The mean stress MBF value was significantly lower for angiographically abnormal territories than for normal ones (Normal Mean MBF = 2.49 ± 0.61, Abnormal Mean MBF = 1.43 ± 0.62, P < .001). Conclusions: The visually assessed image findings in clinical SPECT are subjective, and may not reflect direct physiologic measures of a coronary lesion. The MBF and CFR measured with dynamic SPECT are fully objective and available only with the data generated from the dynamic SPECT method. A quantitative approach such as measuring CFR using dynamic SPECT imaging is a better mode of diagnosing CAD than visual assessment of stress and rest images from static SPECT.
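The ML-EM reconstruction named above iteratively multiplies the current image estimate by the back-projected ratio of measured to estimated projections, normalised by the per-voxel sensitivity. A minimal NumPy sketch on a toy noiseless system; the system matrix, activities and iteration count are illustrative stand-ins, not the study's 4-dimensional implementation:

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Maximum-likelihood expectation-maximization for emission tomography.

    A: (n_detectors, n_voxels) system matrix; y: measured counts.
    Each iteration multiplies the current image by the back-projected
    ratio of measured to estimated counts, divided by the sensitivity.
    """
    x = np.ones(A.shape[1])        # flat initial image
    sensitivity = A.sum(axis=0)    # per-voxel sensitivity
    for _ in range(n_iter):
        y_est = A @ x              # forward projection
        ratio = y / np.maximum(y_est, 1e-12)
        x *= (A.T @ ratio) / sensitivity
    return x

# Toy 2-voxel, 3-detector problem with known activity.
A = np.array([[1.0, 0.2], [0.3, 1.0], [0.5, 0.5]])
x_true = np.array([2.0, 4.0])
y = A @ x_true                     # noiseless measurements
x_rec = mlem(A, y, n_iter=500)
print(np.round(x_rec, 2))          # close to [2. 4.]
```

With the reconstructed time-activity data in hand, CFR is then simply the ratio of the stress MBF estimate to the rest MBF estimate, as defined in the abstract.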

Keywords: dynamic SPECT, clinical SPECT/CT, selective coronary angiography, ⁹⁹ᵐTc-Tetrofosmin

Procedia PDF Downloads 129
626 Preoperative Smoking Cessation Audit: A Single Centre Experience from Metropolitan Melbourne

Authors: Ya-Chu May Tsai, Ibrahim Yacoub, Eoin Casey

Abstract:

The Australian and New Zealand College of Anaesthetists (ANZCA) advises that smoking should not be permitted within 12 hours of surgery. There is little information in the medical literature regarding patients' awareness of perioperative smoking cessation recommendations or their appreciation of how smoking might negatively impact their perioperative course. The aim of the study is to assess the prevalence of current smokers presenting to Werribee Mercy Hospital (WMH) and to evaluate whether provision of both written and verbal pre-operative advice was, 1: effective in improving patient awareness of the benefits of pre-operative smoking cessation, 2: associated with an increase in the number of elective surgical patients who stopped smoking at least 12 hours pre-operatively. Methods: The initial survey included all patients who presented to WMH for elective surgical procedures from 19 – 30 September 2016, using a standardized questionnaire focused on patients' smoking history and their awareness of smoking cessation advice preoperatively. The intervention consisted of a standard pre-operative phone call to all patients advising them of the increased perioperative risks associated with smoking and advising them to cease smoking at least 12 hours prior to surgery. In addition, written information on smoking cessation strategies was sent out by mail at least 1 week prior to the planned procedure date to all patients. A post-intervention questionnaire study was conducted on the day of the elective procedure from 10 – 21 October 2016 inclusive. Primary outcomes measured were patients' awareness of smoking cessation advice and the proportion of smokers who had quit >12 hours pre-operatively, considered a clinically meaningful duration for reducing anaesthetic complications. Comparisons of pre- and post-intervention results were made using SPSS 21.0. Results: In the pre-intervention group (n=156), 36 (22.4%) patients were current smokers, 46 were ex-smokers (29.5%) and 74 were non-smokers (48.1%).
Of the smokers, 12 (33%) reported having been informed of smoking cessation prior to the operation and 8 (22%) were aware of the increased intra- and perioperative adverse events associated with smoking. In the post-intervention group (n=177), 38 (21.5%) patients were current smokers, 39 were ex-smokers (22.0%) and 100 were non-smokers (56.5%). Of the smokers, 32 (88.9%) reported having been informed of smoking cessation prior to the operation and 35 (97.2%) reported being aware of the increased intra- and perioperative adverse events associated with smoking. The median time since the last smoke in the pre-intervention group was 5.5 hours (Q1-Q3 = 2-14) compared with 13 hours (Q1-Q3 = 5-24) in the post-intervention group. Amongst the smokers, smoking cessation at least 12 hours prior to surgery significantly increased from 27.8% pre-intervention to 52.6% post-intervention (P=0.03). Conclusion: A standard preoperative phone call and written instruction on smoking cessation guidelines at the time of waitlist placement increased preoperative smoking cessation rates almost two-fold.
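The reported pre/post comparison (27.8% vs 52.6% of smokers quitting at least 12 hours pre-operatively, P=0.03) can be reproduced approximately with a Pearson chi-square test on a 2x2 table. The integer counts below (10 of 36 pre, 20 of 38 post) are reconstructed from the reported percentages and are therefore an assumption:

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table
    (no continuity correction)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row = [a + b, c + d]
    col = [a + c, b + d]
    chi2 = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            expected = row[i] * col[j] / n
            chi2 += (obs - expected) ** 2 / expected
    return chi2

# Rows: pre- and post-intervention smokers; columns: quit >=12 h, did not.
quit_table = [[10, 26],
              [20, 18]]
chi2 = chi_square_2x2(quit_table)
print(round(chi2, 2))  # 4.74, above the 3.84 critical value (df=1, p<0.05)
```

The statistic of about 4.74 corresponds to p roughly 0.03 at one degree of freedom, consistent with the P value quoted in the abstract.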

Keywords: anaesthesia, audit, perioperative medicine, smoking cessation

Procedia PDF Downloads 279
625 Barrier Analysis of Sustainable Development of Small Towns: A Perspective of Southwest China

Authors: Yitian Ren, Liyin Shen, Tao Zhou, Xiao Li

Abstract:

The past urbanization process in China has brought about a series of problems, and the Chinese government has positioned small towns in essential roles for implementing the strategy of 'The National New-type Urbanization Plan (2014-2020)'. As the connectors and transfer stations between cities and the countryside, small towns are an important force for narrowing the gap between urban and rural areas and for achieving the mission of new-type urbanization in China. The sustainable development of small towns plays a crucial role because cities are not capable of absorbing all of the surplus rural population. Nevertheless, various types of barriers hinder the sustainable development of small towns, which has limited their development and presented a bottleneck in the Chinese urbanization process. Therefore, this paper develops a deep understanding of these barriers so that effective actions can be taken to address them. The paper adopts the perspective of Southwest China (referring to Sichuan province, Yunnan province, Guizhou province, Chongqing Municipality and the Tibet Autonomous Region), because the urbanization rate in Southwest China is far behind the national average, the number of small towns accounts for a great proportion of those in mainland China, and the characteristics of small towns in Southwest China are distinct. This paper investigates the barriers to sustainable development of small towns located in Southwest China by using the content analysis method, combined with fieldwork and interviews in sample small towns, and identifies 18 barriers grouped into four dimensions, namely, institutional barriers, economic barriers, social barriers and ecological barriers.
Based on the research above, a questionnaire survey and data analysis were implemented, and the key barriers hindering the sustainable development of small towns in Southwest China were identified using fuzzy set theory. Those barriers are: lack of independent financial power, lack of construction land index, limited financing channels, single industrial structure, and topographic variety and complexity, which mainly belong to the institutional and economic dimensions. In the conclusion, policy suggestions are put forward to improve the political and institutional environment of small town development; market mechanisms should also be introduced into the development process of small towns, which can effectively overcome the economic barriers, promote the sustainable development of small towns, accelerate in-situ urbanization by absorbing peasants from nearby villages, and achieve the mission of new-type urbanization in China from a people-oriented perspective.
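One common way to apply fuzzy set theory to barrier identification is fuzzy synthetic evaluation: each barrier's membership vector over rating levels is defuzzified into a criticality score, and barriers above a cut-off are flagged as key. The sketch below is illustrative only, not the authors' exact procedure, and the barrier ratings are hypothetical:

```python
import numpy as np

# Each barrier is rated by respondents on a 5-level scale (very low ..
# very high impact); each row of M gives the fraction of respondents
# choosing each level (the barrier's membership vector, summing to 1).
barriers = ["lack of independent financial power",
            "single industrial structure",
            "topographic variety and complexity"]
M = np.array([[0.05, 0.10, 0.15, 0.40, 0.30],
              [0.10, 0.15, 0.25, 0.30, 0.20],
              [0.20, 0.25, 0.30, 0.15, 0.10]])
levels = np.array([1, 2, 3, 4, 5])   # numeric scores for the 5 levels

# Defuzzify each membership vector into a single criticality score;
# barriers above a chosen cut-off would be reported as "key barriers".
scores = M @ levels
for name, s in sorted(zip(barriers, scores), key=lambda t: -t[1]):
    print(f"{s:.2f}  {name}")
```

With these made-up ratings, the financial-power barrier scores highest, mirroring how the survey data would rank barriers by criticality.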

Keywords: barrier analysis, sustainable development, small town, Southwest China

Procedia PDF Downloads 317
624 Quality of Life of the Elderly and Associated Factors in Bharatpur Metropolitan City, Chitwan: A Mixed-Method Study

Authors: Rubisha Adhikari, Rajani Shah

Abstract:

Introduction: Aging is a natural, global and inevitable phenomenon every single person has to go through, and nobody can escape the process. One of the emerging challenges to public health is to improve the quality of later years of life as life expectancy continues to increase. Quality of life (QoL) has grown to be a key goal for many public health initiatives. Population aging has become a global phenomenon, as older populations are growing more quickly in emerging nations than in industrialized ones, leaving minimal opportunity to regulate the consequences of the demographic shift. Methods: A community-based descriptive analytical approach was used to examine the quality of life and associated factors among elderly people. A mixed method was chosen for the study. For the quantitative data collection, a household survey was conducted using the WHOQOL-OLD tool. In-depth interviews were conducted among twenty participants for qualitative data collection. Data generated through in-depth interviews were transcribed verbatim. In-depth interviews lasted about an hour and were audio recorded. The in-depth interview guide had been developed by the research team and pilot-tested before the actual interviews. Results: This study showed associations between quality of life and socio-demographic variables. Among the socio-demographic variables of this study, age (χ²=14.445, p=0.001), gender (χ²=14.323, p<0.001), marital status (χ²=10.816, p=0.001), education status (χ²=23.948, p<0.001), household income (χ²=13.493, p=0.001), personal income (χ²=14.129, p=0.001), source of personal income (χ²=28.332, p<0.001), social security allowance (χ²=18.005, p<0.001) and alcohol consumption (χ²=9.397, p=0.002) were significantly associated with quality of life of the elderly.
In addition, affordability (χ²=12.088, p=0.001), physical activity (χ²=9.314, p=0.002), emotional support (χ²=9.122, p=0.003), and economic support (χ²=8.104, p=0.004) were associated with quality of life of elderly people. Conclusion: In conclusion, this mixed-method study provides insight into the attributes of the quality of life of elderly people in Nepal and similar settings. As the geriatric population grows rapidly, maintaining a high quality of life has become a major challenge. This study showed that determinants such as age, gender, marital status, education status, household income, personal income, source of personal income, social security allowance, alcohol consumption, economic support, emotional support, affordability and physical activity have an association with quality of life of the elderly.

Keywords: ageing, Chitwan, elderly, health status, quality of life

Procedia PDF Downloads 25
623 Characterization of Aerosol Particles in Ilorin, Nigeria: Ground-Based Measurement Approach

Authors: Razaq A. Olaitan, Ayansina Ayanlade

Abstract:

Understanding aerosol properties is a main goal of global research aimed at lowering the uncertainty in the trends and magnitude of aerosol particles associated with climate change. In order to identify aerosol particle types, optical properties, and the relationship between aerosol properties and particle concentration between 2019 and 2021, a study conducted in Ilorin, Nigeria, examined data from an Aerosol Robotic Network (AERONET) ground-based sun/sky scanning radiometer. The AERONET algorithm version 2 was utilized to retrieve monthly data on aerosol optical depth and Angstrom exponent. The version 3 algorithm, an almucantar level 2 inversion, was employed to retrieve daily data on single scattering albedo and aerosol size distribution. Excel 2016 was used to analyze the data's monthly, seasonal, and annual mean values. The distribution of different types of aerosols was analyzed using scatterplots, and the optical properties of the aerosol were investigated using pertinent mathematical theorems. To comprehend the relationships between particle concentration and properties, correlation statistics were employed. Based on the premise that aerosol characteristics must remain constant in both magnitude and trend across time and space, the study's findings indicate that the types of aerosols identified between 2019 and 2021 are as follows: 29.22% urban industrial (UI) aerosol type, 37.08% desert (D) aerosol type, 10.67% biomass burning (BB), and 23.03% urban mix (Um) aerosol type. The peak columnar aerosol loadings, observed during August of the study period, are attributed to convective wind systems, which frequently carry particles over long distances in the atmosphere. The study has shown that while coarse mode particles dominate, fine particles are increasing in seasonal and annual trends. Burning biomass and human activities in the city are linked to these trends.
The study found that the majority of particles are highly absorbing black carbon, with the fine mode having a volume median radius of 0.08 to 0.12 µm. The investigation also revealed a positive correlation coefficient (r = 0.57) between changes in aerosol particle concentration and changes in aerosol properties. Human activity is rapidly increasing in Ilorin, causing changes in aerosol properties and indicating potential health risks from climate change and human influence on geological and environmental systems.
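The Angstrom exponent retrieved alongside aerosol optical depth (AOD) follows from the spectral dependence of optical depth, α = −ln(τ₁/τ₂)/ln(λ₁/λ₂). A small sketch with hypothetical AOD values at the common AERONET wavelength pair 440/870 nm:

```python
import math

def angstrom_exponent(aod_1, aod_2, wl_1, wl_2):
    """Angstrom exponent from AOD measured at two wavelengths (nm).
    Larger values (roughly > 1.5) indicate fine-mode particles such as
    biomass-burning smoke; values near 0 indicate coarse desert dust."""
    return -math.log(aod_1 / aod_2) / math.log(wl_1 / wl_2)

# Hypothetical AOD pair; a weak spectral slope like this yields a low
# exponent, consistent with a coarse-mode-dominated (dusty) column.
alpha = angstrom_exponent(0.80, 0.55, 440.0, 870.0)
print(round(alpha, 2))  # 0.55
```

A low value like this is the kind of signature used, together with single scattering albedo, to separate the desert type from the biomass-burning and urban types in scatterplot classifications.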

Keywords: aerosol loading, aerosol types, health risks, optical properties

Procedia PDF Downloads 21
622 Study on Adding Story and Seismic Strengthening of Old Masonry Buildings

Authors: Youlu Huang, Huanjun Jiang

Abstract:

A large number of old masonry buildings built in the last century still remain in the city. These buildings raise problems of poor safety, obsolescence, and poor habitability. In recent years, many old buildings have been reconstructed through renovating façades, strengthening, and adding floors. However, most projects only provide a solution for a single problem. It is difficult to comprehensively solve the problems of poor safety and lack of building functions. Therefore, a comprehensive functional renovation program of adding a reinforced concrete frame story at the bottom by integrally lifting the building and then strengthening it was put forward. Based on field measurement and YJK calculation software, the seismic performance of an actual three-story masonry structure in Shanghai was identified. The results show that the material strength of the masonry is low, and the bearing capacity of some masonry walls could not meet the code requirements. The elastoplastic time history analysis of the structure was carried out by using SAP2000 software. The results show that under the rare earthquake of intensity 7, the seismic performance of the structure reaches the 'serious damage' performance level. Based on the code requirements for the stiffness ratio of the bottom frame (the lateral stiffness ratio of the transition masonry story and the frame story), the bottom frame story was designed. The integral lifting process of the masonry building was introduced based on many engineering examples. Strengthening methods for the bottom frame structure using a steel-reinforced mesh mortar surface layer (SRMM) and base isolators, respectively, were proposed. The time history analysis of the two kinds of structures, under the frequent earthquake, the fortification earthquake, and the rare earthquake, was conducted by SAP2000 software.
For the bottom frame structure, the results show that the seismic response of the masonry floors is significantly reduced after strengthening by the two methods, compared to the original masonry structure. Previous earthquake disasters indicated that the bottom frame is vulnerable to serious damage under a strong earthquake. The analysis results showed that under the rare earthquake, the inter-story displacement angle of the bottom frame floor meets the 1/100 limit value of the seismic code. The inter-story drift of the masonry floors for the base-isolated structure under different levels of earthquakes is similar to that of the structure with SRMM, while the base-isolated scheme better protects the bottom frame. Both strengthening methods could significantly improve the seismic performance of the bottom frame structure.

Keywords: old buildings, adding story, seismic strengthening, seismic performance

Procedia PDF Downloads 99
621 Simulation of Wet Scrubbers for Flue Gas Desulfurization

Authors: Anders Schou Simonsen, Kim Sorensen, Thomas Condra

Abstract:

Wet scrubbers are used for flue gas desulfurization by injecting water directly into the flue gas stream from a set of sprayers. The water droplets will flow freely inside the scrubber, and flow down along the scrubber walls as a thin wall film while reacting with the gas phase to remove SO₂. This complex multiphase phenomenon can be divided into three main contributions: the continuous gas phase, the liquid droplet phase, and the liquid wall film phase. This study proposes a complete model, where all three main contributions are taken into account and resolved using OpenFOAM for the continuous gas phase, and MATLAB for the liquid droplet and wall film phases. The 3D continuous gas phase is composed of five species: CO₂, H₂O, O₂, SO₂, and N₂, which are resolved along with momentum, energy, and turbulence. Source terms are present for four species, energy and momentum, which are affecting the steady-state solution. The liquid droplet phase experiences breakup, collisions, dynamics, internal chemistry, evaporation and condensation, species mass transfer, energy transfer and wall film interactions. Numerous sub-models have been implemented and coupled to realise the above-mentioned phenomena. The liquid wall film experiences impingement, acceleration, atomization, separation, internal chemistry, evaporation and condensation, species mass transfer, and energy transfer, which have all been resolved using numerous sub-models as well. The continuous gas phase has been coupled with the liquid phases using source terms through an approach in which the two software packages are coupled using a link structure. The complete CFD model has been verified using 16 experimental tests from an existing scrubber installation, where a gradient-based pattern search optimization algorithm has been used to tune numerous model parameters to match the experimental results.
The CFD model needed to be fast to evaluate in order to apply this optimization routine, for which approximately 1000 simulations were needed. The results show that the complex multiphase phenomena governing wet scrubbers can be resolved in a single model. The optimization routine was able to tune the model to accurately predict the performance of an existing installation. Furthermore, the study shows that a coupling between OpenFOAM and MATLAB is realizable, where the data and source term exchange increases the computational requirements by approximately 5%. This allows the benefits of both software packages to be exploited.
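The parameter tuning described above is a pattern search over model parameters against experimental data. A minimal derivative-free compass-search sketch of that kind of routine; the objective and target values are stand-ins for the scrubber model and the 16 experimental tests, not the authors' implementation:

```python
import numpy as np

def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=1000):
    """Derivative-free compass/pattern search: poll +/- step along each
    coordinate, move to any improving point, otherwise halve the step."""
    x, fx = np.asarray(x0, dtype=float), f(x0)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for s in (+step, -step):
                trial = x.copy()
                trial[i] += s
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= 0.5         # refine the mesh around the incumbent
            if step < tol:
                break
    return x, fx

# Toy "calibration": fit two model parameters to minimise the squared
# mismatch with target values (stand-ins for measured scrubber data).
target = np.array([3.0, -1.5])
mismatch = lambda p: float(np.sum((np.asarray(p) - target) ** 2))
p_opt, err = pattern_search(mismatch, [0.0, 0.0])
print(np.round(p_opt, 3), round(err, 6))
```

Each objective evaluation here is trivial; in the study it would be one full CFD run, which is why a fast-to-evaluate model was essential for the roughly 1000 evaluations required.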

Keywords: desulfurization, discrete phase, scrubber, wall film

Procedia PDF Downloads 230
620 The Antagonistic/Synergistic Effect of Probiotic Yeast Saccharomyces boulardii on Candida glabrata Adhesion

Authors: Zorica Tomičić, Ružica Tomičić, Peter Raspor

Abstract:

Growing resistance of the pathogenic yeast Candida glabrata to many classes of antifungal drugs has stimulated efforts to discover new agents to combat a rising number of invasive C. glabrata infections, which deserve a great deal of concern due to the high mortality rate in immunocompromised populations. One promising strategy is the use of probiotic microorganisms, which, when administered in adequate amounts, confer a health benefit. A selected number of probiotic organisms, Saccharomyces boulardii among them, have been tested as potential biotherapeutic agents. The aim of this study was to investigate the effect of the probiotic yeast S. boulardii on the adhesion of clinical isolates of C. glabrata at different temperatures, pH values, and in the presence of three clinically important antifungal drugs: fluconazole, itraconazole and amphotericin B. The method used to assess adhesion was crystal violet staining. The selection of antimycotic concentrations used in the adhesion assay was based on minimum inhibitory concentrations (MICs) obtained by the preliminarily performed microdilution modification of the Reference method for broth dilution antifungal susceptibility testing of yeast (Clinical and Laboratory Standards Institute (CLSI), standard M27-A2). The results showed that despite the nonadhesiveness of S. boulardii cells, the probiotic yeast significantly suppressed the adhesion of C. glabrata strains. Besides, at specific strain ratios, a slight stimulatory effect was observed in some C. glabrata strains, which highlights the importance of strain specificity and opens up further research interests. When environmental conditions are considered, temperature and pH significantly influenced co-culture adhesion of C. glabrata and S. boulardii. The adhesion of C. glabrata strains was relatively equally reduced across the whole tested temperature range (28°C, 37°C, 39°C and 42°C) in the presence of S. boulardii cells, while the adhesion of a few C. glabrata strains was significantly stimulated at 28°C and suppressed at 42°C. Further, the adhesion was highly dependent on pH, with the highest adherence at pH 4 and the lowest at pH 8.5. It was observed that S. boulardii did not manage to suppress the adhesion of C. glabrata strains at high pH. Antimycotics, on the other hand, showed a greater impact, since S. boulardii failed to affect co-culture adhesion at higher antimycotic concentrations. As expected, exposure to various concentrations of amphotericin B significantly reduced the adherence ability of C. glabrata strains both in single culture and in co-culture with S. boulardii. Therefore, it can be speculated that S. boulardii could substitute the effect of antimycotics over a range of concentrations and with specific types of strains. This would certainly change the view on the treatment of yeast infections in the future.

Keywords: adhesion, antimycotics, Candida glabrata, Saccharomyces boulardii

Procedia PDF Downloads 36
619 Modeling Aerosol Formation in an Electrically Heated Tobacco Product

Authors: Markus Nordlund, Arkadiusz K. Kuczaj

Abstract:

Philip Morris International (PMI) is developing a range of novel tobacco products with the potential to reduce individual risk and population harm in comparison to smoking cigarettes. One of these products is the Tobacco Heating System 2.2 (THS 2.2), (named as the Electrically Heated Tobacco System (EHTS) in this paper), already commercialized in a number of countries (e.g., Japan, Italy, Switzerland, Russia, Portugal and Romania). During use, the patented EHTS heats a specifically designed tobacco product (Electrically Heated Tobacco Product (EHTP)) when inserted into a Holder (heating device). The EHTP contains tobacco material in the form of a porous plug that undergoes a controlled heating process to release chemical compounds into vapors, from which an aerosol is formed during cooling. The aim of this work was to investigate the aerosol formation characteristics for realistic operating conditions of the EHTS as well as for relevant gas mixture compositions measured in the EHTP aerosol consisting mostly of water, glycerol and nicotine, but also other compounds at much lower concentrations. The nucleation process taking place in the EHTP during use when operated in the Holder has therefore been modeled numerically using an extended Classical Nucleation Theory (CNT) for multicomponent gas mixtures. Results from the performed simulations demonstrate that aerosol droplets are formed only in the presence of an aerosol former being mainly glycerol. Minor compounds in the gas mixture were not able to reach a supersaturated state alone and therefore could not generate aerosol droplets from the multicomponent gas mixture at the operating conditions simulated. For the analytically characterized aerosol composition and estimated operating conditions of the EHTS and EHTP, glycerol was shown to be the main aerosol former triggering the nucleation process in the EHTP. 
This implies that according to the CNT, an aerosol former, such as glycerol needs to be present in the gas mixture for an aerosol to form under the tested operating conditions. To assess if these conclusions are sensitive to the initial amount of the minor compounds and to include and represent the total mass of the aerosol collected during the analytical aerosol characterization, simulations were carried out with initial masses of the minor compounds increased by as much as a factor of 500. Despite this extreme condition, no aerosol droplets were generated when glycerol, nicotine and water were treated as inert species and therefore not actively contributing to the nucleation process. This implies that according to the CNT, an aerosol cannot be generated without the help of an aerosol former, from the multicomponent gas mixtures at the compositions and operating conditions estimated for the EHTP, even if all minor compounds are released or generated in a single puff.
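Under CNT, nucleation requires supersaturation: the critical cluster radius r* = 2σv/(k_B T ln S) and the Gibbs energy barrier ΔG* = 16πσ³v²/(3(k_B T ln S)²) are finite only for S > 1, which is why a subsaturated minor compound alone cannot generate droplets. A single-species sketch with rough, assumed glycerol-like property values (the temperature and supersaturations are illustrative, not the EHTP operating conditions):

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def nucleation_barrier(sigma, v_mol, temp, supersaturation):
    """Classical nucleation theory: critical radius r* (m) and Gibbs
    energy barrier dG* (J) for homogeneous nucleation of one species.
    Returns None when S <= 1: an unsaturated vapor has no finite
    critical cluster and cannot nucleate droplets."""
    if supersaturation <= 1.0:
        return None
    ln_s = math.log(supersaturation)
    r_star = 2.0 * sigma * v_mol / (K_B * temp * ln_s)
    dg_star = (16.0 * math.pi * sigma**3 * v_mol**2
               / (3.0 * (K_B * temp * ln_s)**2))
    return r_star, dg_star

# Rough glycerol-like property values (assumed numbers):
sigma = 0.063     # surface tension, N/m
v_mol = 1.2e-28   # molecular volume, m^3
print(nucleation_barrier(sigma, v_mol, 350.0, 5.0))  # finite r*, dG*
print(nucleation_barrier(sigma, v_mol, 350.0, 0.8))  # None: no nucleation
```

The multicomponent theory used in the study generalizes this barrier to mixed clusters, but the qualitative conclusion is the same: without a supersaturated aerosol former such as glycerol, no droplets form.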

Keywords: aerosol, classical nucleation theory (CNT), electrically heated tobacco product (EHTP), electrically heated tobacco system (EHTS), modeling, multicomponent, nucleation

618 Comparative Hematological Analysis of Blood Profiles in Mice Experimentally Infected with Trichinella spiralis, Trichinella britovi and Trichinella pseudospiralis

Authors: Valeria T. Dilcheva, Svetlozara L. Petkova, Ivelin Vladov

Abstract:

Trichinellosis is a food-borne parasitic disease caused by nematodes of the genus Trichinella, zoonotic parasites with cosmopolitan distribution and major socio-economic importance. Human infection is acquired through consumption of undercooked meat from domestic or wild animals. Penetration of Trichinella larvae into striated skeletal muscle cells results in ultrastructural and metabolic changes, and migration of larvae causes the typical symptoms and signs of the disease. The severity of the symptoms depends on the number of ingested Trichinella larvae and the immune response of the host. Eosinophilia is present, with few exceptions, in most cases of human trichinellosis, as it is the earliest and most important host response. Even in asymptomatic human cases, increases in eosinophilia of up to 15% have been observed. Eosinophilia appears at an early stage of infection, between the second and fifth weeks. Until 2005, only two species of the genus Trichinella were considered to occur in the country. Following routine trichinelloscopy, disseminated single muscle larvae in samples from wild boars and a badger were PCR-identified as T. pseudospiralis. The study aimed to observe the hematological changes occurring during experimentally induced infection with Trichinella spiralis, T. britovi and T. pseudospiralis in mice. We performed a hematological blood profile, tracking 15 blood indicators. Two-way ANOVA showed significant differences in HGB, MCHC, PLT, Lymph% and Gran% in all three types of trichinellosis compared to control animals. Capsule-forming T. spiralis showed statistically significant differences in HGB, MCHC, Lymph% and PLT compared to the other two species. Non-capsule-forming T. pseudospiralis showed statistically significant differences in Lymph% and Gran% relative to the control, and in Gran% relative to T. spiralis. 
The process of capsule formation appears substantial for a prolonged immune response; the retention of a high lymphocyte percentage (Lymph%) and a low granulocyte percentage (Gran%) in T. pseudospiralis is contrary to findings for T. spiralis and to the eosinophilia literature. Studies and analyses of specific blood profile parameters can provide additional data in favor of early diagnosis and adequate treatment, as well as a better understanding of acute and chronic trichinellosis.

Keywords: hematological test, T. britovi, T. spiralis, T. pseudospiralis

617 Facilitating Knowledge Transfer for New Product Development in Portfolio Entrepreneurship: A Case Study of a Sodium-Ion Battery Start-up in China

Authors: Guohong Wang, Hao Huang, Rui Xing, Liyan Tang, Yu Wang

Abstract:

Start-ups are consistently under pressure to overcome the liabilities of newness and smallness. They must focus on assembling resources and engaging in constant renewal and repeated entrepreneurial activities to survive and grow. As an important form of resource, knowledge is vital to start-ups: it helps them develop new products and hence form competitive advantage. However, significant knowledge usually has to be identified in, and exploited from, external entities, which makes knowledge transfer difficult to achieve; with limited resources, balancing the exploration and exploitation of knowledge can be quite challenging for start-ups. Research on knowledge transfer has become a relatively well-developed domain, indicating that knowledge transfer can be achieved through many patterns, yet it remains under-explored what processes and organizational practices help start-ups facilitate knowledge transfer for new products in the context of portfolio entrepreneurship. Resource orchestration theory emphasizes the initiative and active management of the company or the manager in explaining how resource utility is realized, which helps in understanding the process of managing knowledge as a resource in start-ups. Drawing on resource orchestration theory, this research explores how knowledge transfer can be facilitated through resource orchestration. A qualitative single-case study of a sodium-ion battery new venture was conducted. The case company was sampled deliberately from representative industrial agglomeration areas in Liaoning Province, China. 
It is found that distinctive resource orchestration sub-processes are leveraged to facilitate knowledge transfer: (i) resource structuring makes knowledge available across the portfolio; (ii) resource bundling combines internal and external knowledge to form new knowledge; and (iii) resource harmonizing balances specific knowledge configurations across the portfolio. Meanwhile, by purposefully reallocating knowledge configurations to new product development in a given new venture (exploration) and gradually adjusting knowledge configurations to be applied to existing products across the portfolio (exploitation), the resource orchestration processes as a whole keep the exploration and exploitation of knowledge balanced. This study contributes to the knowledge management literature by proposing a resource orchestration view and depicting how knowledge transfer can be facilitated through different resource orchestration processes and mechanisms. In addition, by revealing the balancing process of exploration and exploitation of knowledge, and by stressing the significance of keeping exploration and exploitation balanced in the context of portfolio entrepreneurship, this study also adds to the entrepreneurship and strategic management literature.

Keywords: exploration and exploitation, knowledge transfer, new product development, portfolio entrepreneur, resource orchestration

616 The Perception and Integration of Lexical Tone and Vowel in Mandarin-speaking Children with Autism: An Event-Related Potential Study

Authors: Rui Wang, Luodi Yu, Dan Huang, Hsuan-Chih Chen, Yang Zhang, Suiping Wang

Abstract:

Enhanced discrimination of pure tones but diminished discrimination of speech pitch (i.e., lexical tone) has been found in children with autism who speak a tonal language (Mandarin), suggesting a speech-specific impairment of pitch perception in these children. However, in tonal languages, lexical tone and vowel are both phonemic cues and integrally dependent on each other. It is therefore unclear whether the presence of the phonemic vowel dimension contributes to the observed lexical tone deficits in Mandarin-speaking children with autism. The current study employed a multi-feature oddball paradigm to examine how the vowel and tone dimensions contribute to the neural responses for syllable change detection and involuntary attentional orienting in school-age Mandarin-speaking children with autism. In the oddball sequence, the syllable /da1/ served as the standard stimulus. There were three deviant stimulus conditions, representing a tone-only change (TO, /da4/), a vowel-only change (VO, /du1/), and a simultaneous change of tone and vowel (TV, /du4/). EEG data were collected from 25 children with autism and 20 age-matched normal controls during passive listening to the stimuli. For each deviant condition, a difference waveform measuring mismatch negativity (MMN) was derived for each participant by subtracting the ERP waveform elicited by the standard sound from that elicited by the deviant sound. Additionally, the linear summation of the TO and VO difference waveforms was compared to the TV difference waveform, to examine whether neural sensitivity for TV change detection reflects simple summation or nonlinear integration of the two individual dimensions. The MMN results showed that the autism group had smaller amplitudes than the control group in the TO and VO conditions, suggesting impaired discriminative sensitivity for both dimensions. 
In the control group, amplitude of the TV difference waveform approximated the linear summation of the TO and VO waveforms only in the early time window but not in the late window, suggesting a time course from dimensional summation to nonlinear integration. In the autism group, however, the nonlinear TV integration was already present in the early window. These findings suggest that speech perception atypicality in children with autism rests not only in the processing of single phonemic dimensions, but also in the dimensional integration process.
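The MMN derivation and additivity comparison described above can be sketched as follows. The waveforms, window indices and amplitudes below are invented toy values for illustration only, not ERP data from the study:

```python
# Sketch of deriving MMN difference waveforms and testing TO + VO vs. TV additivity.
def difference_wave(deviant, standard):
    """MMN difference waveform: deviant ERP minus standard ERP, sample-wise."""
    return [d - s for d, s in zip(deviant, standard)]

def mean_amplitude(wave, start, stop):
    """Mean amplitude over a sample window (e.g. an early MMN window)."""
    window = wave[start:stop]
    return sum(window) / len(window)

# Toy averaged ERPs (microvolts) for the standard and the three deviant conditions.
standard = [0.0, 0.1, 0.2, 0.1, 0.0, 0.0]
tone_only = [0.0, -0.4, -0.9, -0.5, -0.1, 0.0]   # TO deviant
vowel_only = [0.0, -0.3, -0.7, -0.4, -0.1, 0.0]  # VO deviant
tone_vowel = [0.0, -0.6, -1.4, -0.8, -0.2, 0.0]  # TV deviant

mmn_to = difference_wave(tone_only, standard)
mmn_vo = difference_wave(vowel_only, standard)
mmn_tv = difference_wave(tone_vowel, standard)

# Additivity test: does TV approximate TO + VO within the window?
linear_sum = [a + b for a, b in zip(mmn_to, mmn_vo)]
residual = mean_amplitude(mmn_tv, 1, 4) - mean_amplitude(linear_sum, 1, 4)
print(round(residual, 3))  # near 0 -> simple summation; clearly non-zero -> nonlinear integration
```

In a real analysis the windows would be defined in milliseconds on averaged EEG epochs, and the residual would be tested statistically across participants; the sketch only shows the arithmetic of the comparison.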

Keywords: autism, event-related potentials, mismatch negativity, speech perception

615 Relation of Consumer Satisfaction on Organization by Focusing on the Different Aspects of Buying Behavior

Authors: I. Gupta, N. Setia

Abstract:

Introduction: Buying behavior is a progression of practices or patterns that consumers follow before making a purchase. It begins when the consumer becomes aware of a need or wish for an item and concludes with the purchasing transaction. Businesses cannot always simply shake hands with their target audience and get to know them; research is often necessary, so every organization carries out continuous research to understand and satisfy patterns of consumer needs. Aims and Objectives: The aim of the present study is to examine the different behaviors of the consumer, including pre-purchase, purchase, and post-purchase behavior. Materials and Methods: To obtain results, face-to-face interviews were held with 80 people, the larger part of whom were female with upper- or middle-class status. Primary data were the prime source; however, the study has also drawn on the theoretical contributions of many researchers in their respective fields. Results: The majority of the respondents were females (70%) in the age group of 20-50. The collected data were analyzed through hypothesis-testing statistical techniques such as correlation analysis, single regression analysis, and ANOVA, which rejected the null hypothesis that there is no relation between researching consumer behavior at different stages and organizational performance. The central finding of this study is that focusing only on the buying stage is not enough to gain profits and recognition; understanding the pre-purchase, purchase and post-purchase behavior of consumers plays a large role in organizational success. The outcomes demonstrated that an organization that attends to all three phases of purchasing-behavior research is able to establish a stronger brand image than its competitors, and such enterprises can observe customer conduct in a considerably more proficient manner. 
Conclusion: The analysis of consumer behavior presented in this study is an attempt to understand the factors affecting consumer purchasing behavior. The study revealed that corporations are more successful when they work on understanding buying behavior instead of focusing only on selling products. As a result, such organizations perform well and grow rapidly, because consumers are the ones who can make or break a company. The face-to-face interviews clearly revealed that the organizations that reach the top are those whose consumers are satisfied not just with the product but also with the services of the company. The study does not target a particular class of audience; it brings out benefits for the masses, and for business organizations in particular.

Keywords: consumer behavior, pre-purchase, post-purchase, consumer satisfaction

614 Knowledge Graph Development to Connect Earth Metadata and Standard English Queries

Authors: Gabriel Montague, Max Vilgalys, Catherine H. Crawford, Jorge Ortiz, Dava Newman

Abstract:

There has never been so much publicly accessible atmospheric and environmental data. The possibilities of these data are exciting, but the sheer volume of available datasets represents a new challenge for researchers. The task of identifying and working with a new dataset has become more difficult with the amount and variety of available data. Datasets are often documented in ways that differ substantially from the common English used to describe the same topics. This presents a barrier not only for new scientists, but also for researchers looking for comparisons across multiple datasets and for specialists from other disciplines hoping to collaborate. This paper proposes a method for addressing this obstacle: creating a knowledge graph to bridge the gap between everyday English language and the technical language surrounding these datasets. Knowledge graph generation is already a well-established field, although working with Earth data poses some unique challenges. One is the sheer size of the databases: it would be infeasible to replicate or analyze all the data stored by an organization like the National Aeronautics and Space Administration (NASA) or the European Space Agency. Instead, this approach identifies topics from metadata available for datasets in NASA's Earthdata database, which can then be used to directly request and access the raw data from NASA. By starting with a single metadata standard, this paper establishes an approach that can be generalized to different databases, while leaving the challenge of metadata harmonization for future work. Topics generated from the metadata are then linked to topics from a collection of English queries through a variety of standard and custom natural language processing (NLP) methods. The results from this method are then compared to a baseline of Elasticsearch applied to the metadata. 
This comparison shows the benefits of the proposed knowledge graph system over existing methods, particularly in interpreting natural language queries and topics in metadata. For the research community, this work introduces an application of NLP to the ecological and environmental sciences, expanding the possibilities of how machine learning can be applied in this discipline. Perhaps more importantly, it establishes the foundation for a platform that can let common English access knowledge that previously required considerable effort and experience. By making these public data fully accessible to the public, this work has the potential to transform environmental understanding, engagement, and action.
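The query-to-metadata linking step can be illustrated with a far simpler baseline than the knowledge graph itself: matching query tokens against metadata descriptions. Everything below (dataset names, descriptions, the Jaccard measure) is an assumed toy setup for illustration, not the paper's method or actual Earthdata metadata:

```python
# Minimal sketch: link a plain-English query to the dataset whose metadata
# description it best overlaps, using Jaccard similarity over word sets.
def tokens(text):
    """Lowercased word set, ignoring trailing punctuation."""
    return {w.strip(".,?!").lower() for w in text.split() if w.strip(".,?!")}

def jaccard(a, b):
    union = a | b
    return len(a & b) / len(union) if union else 0.0

# Invented dataset names and descriptions standing in for real metadata records.
metadata = {
    "sea_surface_temp_v4": "Global sea surface temperature from infrared radiometry",
    "aerosol_optical_depth": "Atmospheric aerosol optical depth retrieved from MODIS",
    "soil_moisture_l3": "Daily surface soil moisture from passive microwave sensors",
}

def best_match(query, metadata):
    """Return the dataset whose description best overlaps the query tokens."""
    q = tokens(query)
    return max(metadata, key=lambda name: jaccard(q, tokens(metadata[name])))

print(best_match("how warm is the ocean surface temperature", metadata))
```

This token-overlap baseline fails exactly where the paper's knowledge graph is meant to help: "warm" and "ocean" match nothing in the technical description, so only the shared words "surface" and "temperature" carry the link.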

Keywords: earth metadata, knowledge graphs, natural language processing, question-answer systems

613 Internet Protocol Television: A Research Study of Undergraduate Students to Analyze the Effects

Authors: Sabri Serkan Gulluoglu

Abstract:

The study is aimed at examining the effects of Internet marketing with IPTV on consumers. Internet marketing with IPTV is emerging as an integral part of business strategies in today's technologically advanced world, and business activities all over the world are influenced by the emergence of this modern marketing tool. As the population of Internet and online users increases, new research issues have arisen concerning the demographics and psychographics of the online user and the opportunities for a product or service. In recent years, we have seen various services converging to ubiquitous Internet Protocol based networks. Besides traditional Internet applications such as web browsing, email and file transfer, new applications have been developed to replace old communication networks, and IPTV is one of these solutions. In the future, we expect a single network, the IP network, to provide services that are carried by different networks today. To identify important effects of a video-based technology market website on the Internet, we decided to administer a questionnaire to university students. Recent research shows that in Turkey people aged 20 to 24 use the Internet when they buy electronic devices such as cell phones and computers. The questionnaire contains ten categorized questions to evaluate the effects of IPTV on shopping. Thirty students were selected to fill in the questionnaire after watching an IPTV channel video for 10 minutes. The sample IPTV channel was "buy.com", which looks like an e-commerce site with an integrated IPTV channel. The questionnaire for the survey was constructed using the Likert scale, a bipolar scaling method used to measure either positive or negative response to a statement (Likert, R.); it is a system commonly used in surveys. 
Following the Likert scale, "the respondents are asked to indicate their degree of agreement with the statement or any kind of subjective or objective evaluation of the statement. Traditionally a five-point scale is used under this methodology." This study also uses the five-point scale: the respondents were asked to express their opinion on each statement by picking one of the five given options: "Strongly disagree, Disagree, Neither agree nor disagree, Agree and Strongly agree". These options were rated from 1 to 5 (Strongly disagree = 1 through Strongly agree = 5). On the basis of the data gathered from the questionnaire, results are drawn as figures and graphical representations that clearly demonstrate the outcomes of the research.
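The 1-5 coding of Likert responses described above can be sketched directly; the example responses are invented for illustration:

```python
# Map the five Likert options to numeric scores and average them per item.
LIKERT = {
    "Strongly disagree": 1,
    "Disagree": 2,
    "Neither agree nor disagree": 3,
    "Agree": 4,
    "Strongly agree": 5,
}

def mean_score(responses):
    """Average numeric score for one questionnaire item."""
    scores = [LIKERT[r] for r in responses]
    return sum(scores) / len(scores)

item_1 = ["Agree", "Strongly agree", "Neither agree nor disagree", "Agree"]
print(mean_score(item_1))  # (4 + 5 + 3 + 4) / 4 = 4.0
```

With all 30 respondents coded this way, per-question means and distributions feed directly into the figures and graphical representations the abstract mentions.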

Keywords: IPTV, internet marketing, online, e-commerce, video-based technology

612 TiO₂ Nanoparticles Induce DNA Damage and Expression of Biomarker of Oxidative Stress on Human Spermatozoa

Authors: Elena Maria Scalisi

Abstract:

The increasing production and use of TiO₂ nanoparticles (NPs) have inevitably led to their release into the environment, posing a threat to organisms and also to humans. Human exposure to TiO₂-NPs may occur during both manufacturing and use. TiO₂-NPs are common in consumer products for dermal application, toothpaste, food colorants, and nutritional supplements, so oral exposure may occur during the use of such products. Once in the body, TiO₂-NPs can, thanks to their small size (<100 nm), cross the blood-testis barrier, inducing effects on the testis and hence on male reproductive health. The nanoscale size of TiO₂ increases the surface-to-volume ratio, making the particles more reactive in a cell and increasing their ability to produce reactive oxygen species (ROS). In male germ cells, ROS at physiological levels have important implications in maintaining the normal functions of mature spermatozoa; moreover, in spermatozoa they are important signaling molecules for hyperactivation and the acrosome reaction. Nevertheless, an excess of ROS from external inputs such as NPs can increase oxidative stress (OS), which results in DNA damage and apoptosis. The aim of our study was to investigate the impact of TiO₂ NPs on human spermatozoa, evaluating DNA damage and the expression of proteins involved in cell stress. According to the 2021 WHO guidelines, we exposed human spermatozoa in vitro to TiO₂ NPs at concentrations of 50 ppm, 100 ppm, 250 ppm, and 500 ppm for 1 hour (at 37°C and 5% CO₂). DNA damage was evaluated by the Sperm Chromatin Dispersion (SCD) test and the TUNEL assay; moreover, we evaluated the expression of biomarkers of oxidative stress such as Heat Shock Protein 70 (HSP70) and metallothioneins (MTs). Sperm parameters such as motility and viability were also evaluated. Our results do not report a significant reduction in the motility of spermatozoa at the end of the exposure. 
On the contrary, progressive motility was increased at the highest concentration (500 ppm), a statistically significant difference compared to the control (p < 0.05). Viability was likewise not changed by exposure to TiO₂-NPs (p < 0.05). However, increased DNA damage was observed at all concentrations, and the TUNEL assay highlighted the presence of single-strand breaks in the DNA. The spermatozoa responded to the presence of TiO₂-NPs with the expression of HSP70, which has a protective function because it allows the maintenance of cellular homeostasis under stressful or lethal conditions. Positivity for MTs was observed mainly at the concentration of 4 mg/L. Although the biological and physiological function of metallothioneins (MTs) in the male genital organs is unclear, our results highlight that the MTs expressed by spermatozoa maintain their biological role of detoxification from metals. Our results add information to the data in the literature on the toxicity of TiO₂-NPs to reproduction.

Keywords: human spermatozoa, DNA damage, TiO₂-NPs, biomarkers

611 Adaptation to Climate Change as a Challenge for the Manufacturing Industry: Finding Business Strategies by Game-Based Learning

Authors: Jan Schmitt, Sophie Fischer

Abstract:

After the Corona pandemic, climate change is a further, long-lasting challenge that society must deal with. Ongoing climate change needs to be prevented; nevertheless, adaptation to already changed climate conditions has to be a focus in many sectors. The Corona crisis has recently shown the decisive role of economic sectors with high value added. Hence, the manufacturing industry, as such a sector, needs to be prepared for climate change and adaptation. Several examples from the manufacturing industry show the importance of a strategic effort in this field: outsourcing major parts of the value chain to suppliers in other countries and optimizing procurement logistics in a time-, storage- and cost-efficient manner within a network of global value creation can lead to vulnerability to climate-related disruptions. For example, the total damage costs after the 2011 flood disaster in Thailand, including costs for delivery failures, were estimated at 45 billion US dollars worldwide. German car manufacturers were also affected by supply bottlenecks and had to close their plants in Thailand for a short time; another OEM had to reduce its production output. In this contribution, a game-based learning approach is presented, which should enable manufacturing companies to derive their own strategies for climate adaptation out of a mix of different actions. The approach is designed on the basis of data from a regional study of small, medium and large manufacturing companies in Mainfranken, a strongly industrialized region of northern Bavaria (Germany). From this, the actual state of climate adaptation efforts is evaluated. First, the results are used to collect single actions for manufacturing companies, and second, further actions can be identified. A variety of climate adaptation activities can then be clustered according to the company's scope of activity. The combination of different actions, e.g. 
the renewal of the building envelope with regard to thermal insulation, with its benefits and drawbacks, leads to a specific climate adaptation strategy for each company. Within the game-based approach, the players take on different roles in a fictional company and discuss the order and characteristics of each action taken into their climate adaptation strategy. Indicators such as economic performance, ecological performance and stakeholder satisfaction compare the success of the respective measures in a competitive format with other virtual companies deriving their own strategies. "Playing through" climate change scenarios with targeted adaptation actions illustrates the impact of different actions and their combination on the fictional company.

Keywords: business strategy, climate change, climate adaptation, game-based learning
