Search results for: inquiry-based instruction
694 Moral Decision-Making in the Criminal Justice System: The Influence of Gruesome Descriptions
Authors: Michel Patiño-Sáenz, Martín Haissiner, Jorge Martínez-Cotrina, Daniel Pastor, Hernando Santamaría-García, Maria-Alejandra Tangarife, Agustin Ibáñez, Sandra Baez
Abstract:
It has been shown that gruesome descriptions of harm can increase the punishment given to a transgressor. This biasing effect is mediated by negative emotions, which are elicited upon the presentation of gruesome descriptions. However, there is a lack of studies examining the influence of such descriptions on moral decision-making in people involved in the criminal justice system. Such populations are of special interest, since they have experience dealing with gruesome evidence as well as formal education on how to assess evidence and gauge the appropriate punishment according to the law. Likewise, they are expected to be objective and rational when performing their duty, because their decisions can profoundly impact people's lives. Considering these antecedents, the objective of this study was to explore the influence of gruesome written descriptions on moral decision-making in this group of people. To that end, we recruited attorneys, judges and public prosecutors (Criminal justice group, CJ, n=30) whose field of specialty is criminal law. In addition, we included a control group of people without formal education in law (n=30), matched in age and years of education with the CJ group. All participants completed an online, Spanish-adapted version of a moral decision-making task previously reported in the literature and standardized and validated in the Latin-American context. A series of text-based stories describing two characters, one inflicting harm on the other, were presented to participants. The transgressor's intentionality (accidental vs. intentional harm) and the language (gruesome vs. plain) used to describe harm were manipulated employing a within-subjects and a between-subjects design, respectively. After reading each story, participants were asked to rate (a) the moral adequacy of the harmful action, (b) the amount of punishment the transgressor deserved and (c) how damaging his behavior was.
Results showed main effects of group, intentionality and type of language on all dependent measures. In both groups, intentional harmful actions were rated as significantly less morally adequate, were punished more severely and were deemed more damaging. Moreover, control subjects deemed any type of action more damaging, and punished it more severely, than the CJ group. In addition, there was an interaction between intentionality and group: people in the control group rated harmful actions as less morally adequate than the CJ group, but only when the action was accidental. There was also an interaction between intentionality and language on punishment ratings: controls punished more when harm was described using gruesome language. However, that was not the case for people in the CJ group, who assigned the same amount of punishment in both conditions. In conclusion, participants with job experience in the criminal justice system or criminal law differ in the way they make moral decisions. In particular, they seem to be less sensitive to the biasing effect of gruesome evidence, which is probably explained by their formal education or their experience in dealing with such evidence. Nonetheless, more studies are needed to determine the impact this phenomenon has on the fulfillment of their duty.
Keywords: criminal justice system, emotions, gruesome descriptions, intentionality, moral decision-making
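As a hedged illustration of the 2 x 2 design described in this abstract (intentionality within subjects, language between subjects), the sketch below computes cell means and the interaction contrast on punishment ratings. This is not the authors' analysis or data; all numbers are hypothetical.

```python
# Illustrative sketch (not the study's data or code): cell means and the
# intentionality x language interaction contrast on punishment ratings
# for a 2 x 2 design. All ratings below are hypothetical.
from statistics import mean

ratings = {
    ("accidental", "plain"):     [2, 3, 2, 3],
    ("accidental", "gruesome"):  [3, 4, 3, 4],
    ("intentional", "plain"):    [6, 7, 6, 7],
    ("intentional", "gruesome"): [8, 9, 8, 9],
}

cell_means = {cond: mean(vals) for cond, vals in ratings.items()}

# Interaction contrast: does the gruesome-vs-plain difference change
# with intentionality? A nonzero value indicates an interaction.
gruesome_effect_intentional = (
    cell_means[("intentional", "gruesome")] - cell_means[("intentional", "plain")]
)
gruesome_effect_accidental = (
    cell_means[("accidental", "gruesome")] - cell_means[("accidental", "plain")]
)
interaction = gruesome_effect_intentional - gruesome_effect_accidental
print(interaction)  # 1.0 here: gruesome language amplifies punishment of intentional harm
```

A full analysis would test such contrasts inferentially (e.g., mixed ANOVA), but the contrast itself is what the reported interaction describes.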
Procedia PDF Downloads 187
693 Corrosion Protective Coatings in Machines Design
Authors: Cristina Diaz, Lucia Perez, Simone Visigalli, Giuseppe Di Florio, Gonzalo Fuentes, Roberto Canziani, Paolo Gronchi
Abstract:
During the last 50 years, the selection of materials has been one of the main decisions in machine design for different industrial applications, due to the numerous physical, chemical, mechanical and technological factors that must be considered. Corrosion is related to all of these factors and impacts the life cycle, failure incidence and lifetime costs of the machine. Corrosion is the deterioration or destruction of metals through reaction with the environment, generally a wet one. In the food industry, dewatering industry, concrete industry, paper industry, etc., corrosion is an unsolved problem, and it may alter some characteristics of the final product. Nowadays, depending on the selected metal, its surface and its working environment, corrosion prevention may involve a change of metal, the use of a coating, cathodic protection, the use of corrosion inhibitors, etc. In the vast majority of situations, the solution is a corrosion-resistant material or, failing that, a corrosion protection coating. Stainless steels are widely used in machine design because of their strength, ease of cleaning, corrosion resistance and appearance; AISI 304 and AISI 316 are typical choices. However, their benefits do not fit every application, and coatings against corrosion are required, such as paints, galvanizing, chrome plating, SiO₂, TiO₂ or ZrO₂ coatings, etc. In this work, coatings based on a bilayer of Titanium-Tantalum, Titanium-Niobium, Titanium-Hafnium or Titanium-Zirconium have been developed using a magnetron sputtering configuration of PVD (Physical Vapor Deposition) technology, in an attempt to reduce corrosion effects on AISI 304 and AISI 316 and to compare the results with Titanium alloy substrates. Ti alloys display exceptional corrosion resistance to chlorides, sour and oxidising acidic media, and seawater. In this study, a Ti alloy (99%) has been included for comparison with coated AISI 304 and AISI 316 stainless steel.
Corrosion tests were conducted with a Gamry instrument under the ASTM G5-94 standard, using different electrolytes such as tomato salsa, wine, olive oil, wet compost, and a mix of sand and concrete with water and NaCl, to test corrosion in different industrial environments. In general, in all tested environments, the results showed an improvement in the corrosion resistance of all coated AISI 304 and AISI 316 stainless steel substrates compared to the uncoated ones. Comparing these results with corrosion studies on the uncoated Ti alloy substrate, it was observed that in some cases coated stainless steel substrates reached current densities similar to those of the uncoated Ti alloy. Moreover, the Titanium-Zirconium and Titanium-Tantalum coatings showed, for all substrates under study including coated Ti alloy substrates, a reduction in current density of more than two orders of magnitude. In conclusion, Ti-Ta, Ti-Zr, Ti-Nb and Ti-Hf coatings have been developed to improve the corrosion resistance of AISI 304 and AISI 316 materials. After corrosion tests in several industrial environments, the substrates showed improved corrosion resistance. Similar processes were carried out on Ti alloy (99%) substrates. Coated AISI 304 and AISI 316 stainless steel may reach surface corrosion protection similar to that of the uncoated Ti alloy (99%). Moreover, the coated Ti alloy (99%) may itself gain corrosion resistance from these coatings.
Keywords: coatings, corrosion, PVD, stainless steel
Procedia PDF Downloads 158
692 Development of Biosensor Chip for Detection of Specific Antibodies to HSV-1
Authors: Zatovska T. V., Nesterova N. V., Baranova G. V., Zagorodnya S. D.
Abstract:
In recent years, biosensor technologies based on the phenomenon of surface plasmon resonance (SPR) have become increasingly used in biology and medicine. They make it possible to follow, in real time, the binding of biomolecules and to identify agents that specifically interact with biologically active substances immobilized on the biosensor surface (biochips). Special attention is paid to the use of biosensor analysis of antibody-antigen interactions in the diagnostics of diseases caused by viruses and bacteria. According to the WHO, the diseases caused by the herpes simplex virus (HSV) take second place (15.8%) after influenza as a cause of death from viral infections. Current diagnostics of HSV infection include PCR and ELISA assays; the latter allows determination of the degree of the immune response to viral infection and the respective stages of its progress. In this regard, the search for new and accessible diagnostic methods is very important. This work aimed to develop a biosensor chip for detection of specific antibodies to HSV-1 in human blood serum. The proteins of HSV-1 (strain US) were used as antigens. The viral particles were accumulated in MDBK cell culture and purified by differential centrifugation in a cesium chloride density gradient. Analysis of the HSV-1 proteins was performed by polyacrylamide gel electrophoresis and ELISA. The protein concentration was measured using a DeNovix DS-11 spectrophotometer. The device for detection of antigen-antibody interactions was an optoelectronic two-channel spectrometer, ‘Plasmon-6’, using the SPR phenomenon in the Kretschmann optical configuration; it was developed at the Lashkarev Institute of Semiconductor Physics of NASU. The carrier used was a glass plate covered with a 45 nm gold film. Screening of human blood sera was performed using the test system ‘HSV-1 IgG ELISA’ (GenWay, USA).
Development of the biosensor chip included optimization of the conditions of viral antigen sorption and of the analysis steps. For immobilization of viral proteins, a 0.2% solution of Dextran 17,200 (Sigma, USA) was used. Sorption of the antigen took place at 4-8°C within 18-24 hours. After washing the chip three times with citrate buffer (pH 5.0), a 1% solution of BSA was applied to block the sites not occupied by viral antigen. A direct dependence was found between the amount of immobilized HSV-1 antigen and the SPR response. Using the obtained biochips, a panel of 25 human sera positive and 10 negative for antibodies to HSV-1 was analyzed. The average SPR response was 185 a.s. for negative sera and from 312 to 1264 a.s. for positive sera. The SPR data agreed with the ELISA results in 96% of samples, proving the great potential of SPR in such research. The possibility of biochip regeneration was investigated, and it was shown that application of a 10 mM NaOH solution leads to rupture of the intermolecular bonds, which allows the chip to be reused several times. Thus, in this study a biosensor chip for detection of specific antibodies to HSV-1 was successfully developed, expanding the range of diagnostic methods for this pathogen.
Keywords: biochip, herpes virus, SPR
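To illustrate how the reported SPR responses (negative mean ~185 a.s., positives 312-1264 a.s.) separate the two serum classes, the sketch below applies a simple threshold rule and checks agreement with ELISA labels. The threshold value and the example panel are assumptions for illustration, not taken from the study.

```python
# Illustrative sketch: classifying sera as HSV-1 antibody-positive or negative
# from the SPR response, using an assumed threshold placed between the reported
# negative (~185 a.s.) and positive (312-1264 a.s.) ranges. The threshold and
# the toy panel below are hypothetical, not the study's data.

THRESHOLD_AS = 250  # a.s.; assumed cut-off between the two ranges

def classify_serum(spr_response: float) -> str:
    """Label a serum by its SPR response in angular seconds (a.s.)."""
    return "positive" if spr_response >= THRESHOLD_AS else "negative"

# Hypothetical panel: (SPR response, ELISA reference label).
panel = [(185, "negative"), (190, "negative"), (312, "positive"), (1264, "positive")]
agreement = sum(classify_serum(r) == elisa for r, elisa in panel) / len(panel)
print(agreement)  # 1.0 for this toy panel; the study reports 96% agreement
```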
Procedia PDF Downloads 417
691 Keeping under the Hat or Taking off the Lid: Determinants of Social Enterprise Transparency
Abstract:
Transparency could be defined as the voluntary release of information by institutions that is relevant to their own evaluation. Transparency based on information disclosure is recognised to be vital for the Third Sector, as civil society organisations are under pressure to become more transparent to answer the call for accountability. The growing importance of social enterprises as hybrid organisations emerging from the nexus of the public, the private and the Third Sector makes their transparency a topic worth exploring. However, transparency for social enterprises has not yet been studied: as a new form of organisation that combines non-profit missions with commercial means, it is unclear to both the practical and the academic world if the shift in operational logics from non-profit motives to for-profit pursuits has significantly altered their transparency. This is especially so in China, where informational governance and practices of information disclosure by local governments, industries and civil society are notably different from other countries. This study investigates the transparency-seeking behaviour of social enterprises in Greater China to understand what factors at the organisational level may affect their transparency, measured by their willingness to disclose financial information. We make use of the Survey on the Models and Development Status of Social Enterprises in the Greater China Region (MDSSGCR) conducted in 2015-2016. The sample consists of more than 300 social enterprises from the Mainland, Hong Kong and Taiwan. While most respondents have provided complete answers to most of the questions, there is tremendous variation in the respondents’ demonstrated level of transparency in answering those questions related to the financial aspects of their organisations, such as total revenue, net profit, source of revenue and expense. This has led to a lot of missing data on such variables. In this study, we take missing data as data. 
Specifically, we use missing values as a proxy for an organisation's level of transparency. Our dependent variables are constructed from missing data on total revenue, net profit, source of revenue and cost breakdown. In addition, we take the quality of answers into consideration when coding the dependent variables. For example, to be coded as transparent, an organisation must report the sources of at least 50% of its revenue. We have four groups of predictors of transparency, namely nature of organisation, decision-making body, funding channel and field of concentration. Furthermore, we control for an organisation's stage of development, self-identity and region. The results show that social enterprises that are at later stages of organisational development and are funded by financial means are significantly more transparent than others. There is also some evidence that social enterprises located in the Northeast region of China are less transparent than those located in other regions, probably because of local political economy features. On the other hand, the nature of the organisation, the decision-making body and the field of concentration do not systematically affect the level of transparency. This study provides in-depth empirical insights into the information disclosure behaviour of social enterprises under a specific social context. It not only reveals important characteristics of Third Sector development in China, but also contributes to the general understanding of hybrid institutions.
Keywords: China, information transparency, organisational behaviour, social enterprise
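The coding rule described above (missingness as a transparency proxy, plus the 50%-of-revenue-sources criterion) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the field names are hypothetical.

```python
# Illustrative sketch of the coding rule: an organisation's transparency is
# inferred from whether it answered the financial questions, and source
# transparency additionally requires that reported sources cover at least
# 50% of revenue. Field names are hypothetical, not from the MDSSGCR survey.

def code_transparency(response: dict) -> dict:
    """Code one survey response into binary transparency indicators."""
    coded = {
        "revenue_transparent": response.get("total_revenue") is not None,
        "profit_transparent": response.get("net_profit") is not None,
        "cost_transparent": response.get("cost_breakdown") is not None,
    }
    # Source transparency: reported source shares must sum to >= 50% of revenue.
    sources = response.get("revenue_sources") or {}
    coded["source_transparent"] = sum(sources.values()) >= 0.5
    return coded

# Hypothetical respondent: discloses revenue and 60% of its sources,
# but leaves net profit and cost breakdown blank.
respondent = {
    "total_revenue": 120_000,
    "net_profit": None,
    "cost_breakdown": None,
    "revenue_sources": {"sales": 0.4, "grants": 0.2},
}
print(code_transparency(respondent))
```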
Procedia PDF Downloads 184
690 Analysis of the Evolution of Techniques and Review in Cleft Surgery
Authors: Tomaz Oliveira, Rui Medeiros, André Lacerda
Abstract:
Introduction: Cleft lip and/or palate are the most frequent forms of congenital craniofacial anomalies, affecting mainly the middle third of the face and manifesting as functional and aesthetic changes. Bilateral cleft lip represents a reconstructive surgical challenge, not only for the labial component but also for the associated nasal deformation. Recently, the paradigm of the approach to this pathology has changed, placing the focus on muscle reconstruction and anatomical repositioning of the nasal cartilages in order to obtain the best aesthetic and functional results. The aim of this study is to carry out a systematic review of the surgical approach to bilateral cleft lip, retrospectively analyzing the case series of the Plastic Surgery Service at Hospital Santa Maria (Lisbon, Portugal) regarding this pathology, with a global assessment of the characteristics of the operated patients and a study of the different surgical approaches and their complications over the last 20 years. Methods: The present work is a retrospective, descriptive study of patients who underwent at least one reconstructive surgery for cleft lip and/or palate in the CPRE service of the HSM between January 1, 1997 and December 31, 2017. Data relating to 361 individuals were analyzed; after applying the exclusion criteria, these constituted a sample of 212 participants. The variables analyzed were the year of the first surgery, gender, age, type of orofacial cleft, surgical approach, and its complications. Results: There was a higher overall prevalence in males, with cleft lip and combined cleft lip and palate occurring in greater proportion in males and isolated cleft palate being more common in females. The most frequently recorded malformation was cleft lip and palate, complete in most cases. Regarding laterality, alterations with a unilateral labial component were the most commonly observed, with the left lip described as the most affected.
It was found that the vast majority of patients underwent primary intervention by 12 months of age. The surgical techniques used in the approach to this pathology showed an important chronological variation over the years. Discussion: Cleft lip and/or palate is a medical condition associated with high aesthetic and functional morbidity, which requires early treatment in order to optimize the long-term outcome. The existence of a nasolabial component and its respective surgical correction plays a central role in the treatment of this pathology. High rates of post-surgical complications and unconvincing aesthetic results have motivated an evolution of the surgical technique, increasingly evident in recent years, which today allows satisfactory aesthetic results to be achieved even in bilateral cleft lips of high deformational complexity. The introduction of techniques that favor nasolabial reconstruction based on anatomical principles has been producing increasingly convincing results. The analyzed sample shows that most of the results obtained in this study are, in general, compatible with those published in the literature. Conclusion: This work showed that small variations in surgical technique can bring significant improvements in the functional and aesthetic results of the treatment of bilateral cleft lip.
Keywords: cleft lip, cleft palate, congenital abnormalities, craniofacial malformations
Procedia PDF Downloads 110
689 Risking Injury: Exploring the Relationship between Risk Propensity and Injuries among an Australian Rules Football Team
Authors: Sarah A. Harris, Fleur L. McIntyre, Paola T. Chivers, Benjamin G. Piggott, Fiona H. Farringdon
Abstract:
Australian Rules Football (ARF) is an invasion-based contact field sport with over one million participants. The contact nature of the game increases exposure to all injuries, including head trauma. Evidence suggests that both concussion and sub-concussive traumas such as head knocks may damage the brain, in particular the prefrontal cortex. The prefrontal cortex may not reach full maturity until a person is in their early twenties, with males taking longer to mature than females. Repeated trauma to the prefrontal cortex during maturation may lead to negative social, cognitive and emotional effects. It is also during this period that males exhibit high levels of risk-taking behaviours. The relationship between risk propensity and the incidence of injury is an unexplored area of research: little research has considered whether the level of players' (especially younger players') risk propensity in everyday life places them at an increased risk of injury. Hence the current study investigated whether a relationship exists between risk propensity and self-reported injuries, including diagnosed concussion and head knocks, among male ARF players aged 18 to 31 years. Method: The study was conducted over 22 weeks with one West Australian Football League (WAFL) club during the 2015 competition. Pre-season risk propensity was measured using the 7-item self-report Risk Propensity Scale. Possible scores ranged from 9 to 63, with higher scores indicating higher risk propensity. Players reported their self-perceived injuries (concussion, head knocks, upper body and lower body injuries) fortnightly using the WAFL Injury Report Survey (WIRS). A unique ID code was used to ensure player anonymity, and also enabled linkage of survey responses and injury data tracking over the season. A General Linear Model (GLM) was used to analyse whether there was a relationship between risk propensity score and the total number of injuries for each injury type.
Results: Seventy-one players (N=71) with an age range of 18.40 to 30.48 years and a mean age of 21.92 years (±2.96 years) participated in the study. Players' mean risk propensity score was 32.73, SD ±8.38. Four hundred and ninety-five (495) injuries were reported. The most frequently reported injury was head knocks, representing 39.19% of total reported injuries. The GLM identified a significant relationship between risk propensity and head knocks (F=4.17, p=.046). No other injury types were significantly related to risk propensity. Discussion: A positive relationship between risk propensity and head trauma in contact sports (specifically WAFL) was discovered. Assessing players' risk propensity may therefore identify those more at risk of head injuries, potentially leading to greater monitoring and education of these players throughout the season regarding self-identification of head knocks and symptoms that may indicate trauma to the brain. This is important because many players involved in WAFL are in their late teens or early 20s and hence may be at greater risk of negative outcomes if they experience repeated head trauma. Continued education and research into the risks associated with head injuries has the potential to improve player well-being.
Keywords: football, head injuries, injury identification, risk
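The study fit a General Linear Model relating risk propensity scores to injury counts. As a simplified, hedged stand-in for that analysis, the sketch below computes the ordinary least-squares slope of head-knock counts on risk propensity; the data are hypothetical, not the study's.

```python
# Illustrative sketch: a simplified stand-in for the study's GLM, computing
# the ordinary least-squares slope of head-knock counts on risk propensity
# scores. All data below are hypothetical, not the study's.
from statistics import mean

def ols_slope(x: list, y: list) -> float:
    """Least-squares slope of y on x."""
    mx, my = mean(x), mean(y)
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

# Hypothetical players: risk propensity score and head knocks reported.
risk_scores = [20, 25, 30, 35, 40, 45]
head_knocks = [1, 1, 2, 3, 3, 4]

slope = ols_slope(risk_scores, head_knocks)
print(round(slope, 3))  # positive slope: more risk-prone players report more head knocks
```

A GLM additionally yields an F statistic and p-value for this association (the abstract reports F=4.17, p=.046); the slope alone only shows the direction of the relationship.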
Procedia PDF Downloads 333
688 Multi-Criteria Decision Making Network Optimization for Green Supply Chains
Authors: Bandar A. Alkhayyal
Abstract:
Modern supply chains are typically linear, transforming virgin raw materials into products for end consumers, who then discard them after use to landfills or incinerators. Nowadays, there are major efforts underway to create a circular economy to reduce non-renewable resource use and waste. One important aspect of these efforts is the development of Green Supply Chain (GSC) systems, which enable a reverse flow of used products from consumers back to manufacturers, where they can be refurbished or remanufactured, to both economic and environmental benefit. This paper develops novel multi-objective optimization models to inform GSC system design at multiple levels: (1) strategic planning of facility location and transportation logistics; (2) tactical planning of optimal pricing; and (3) policy planning to account for potential valuation of GSC emissions. First, physical linear programming was applied to evaluate GSC facility placement by determining the quantities of end-of-life products to transport from candidate collection centers to remanufacturing facilities while satisfying cost and capacity criteria. Second, disassembly and remanufacturing processes have received little attention in the industrial engineering and process cost modeling literature; the increasing scale of remanufacturing operations, worth nearly $50 billion annually in the United States alone, has made GSC pricing an important subject of research. A non-linear physical programming model for optimization of the pricing policy for remanufactured products, maximizing total profit and minimizing product recovery costs, was examined and solved. Finally, a deterministic equilibrium model was used to determine the effects of internalizing a cost of GSC greenhouse gas (GHG) emissions into the optimization models.
Changes in optimal facility use, transportation logistics, and pricing/profit margins were all investigated against a variable cost of carbon, using a case study system created from actual data for sites in the Boston area. As carbon costs increase, the optimal GSC system undergoes several distinct shifts in topology as it seeks new cost-minimal configurations. A comprehensive quantitative evaluation of the model's performance has been done using orthogonal arrays. Results were compared to top-down estimates from economic input-output life cycle assessment (EIO-LCA) models, to contrast remanufacturing GHG emission quantities with those from original equipment manufacturing operations. Introducing a carbon cost of $40/t CO2e increases modeled remanufacturing costs by 2.7% but also increases original equipment costs by 2.3%. The assembled work advances the theoretical modeling of optimal GSC systems and presents a rare case study of remanufactured appliances.
Keywords: circular economy, extended producer responsibility, greenhouse gas emissions, industrial ecology, low carbon logistics, green supply chains
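The topology shifts described above can be illustrated with a toy reverse-logistics assignment: as the carbon price rises, the cost-minimal routing of end-of-life products flips toward low-emission legs. This is a hedged sketch, not the paper's model; distances, costs, and emission factors are hypothetical.

```python
# Illustrative sketch (not the paper's model): a toy reverse-logistics
# assignment showing how internalising a carbon cost can shift the optimal
# routing from collection centers to remanufacturing facilities.
# Costs and emission factors below are hypothetical.

CENTERS = ["C1", "C2"]
FACILITIES = ["F1", "F2"]
# (transport cost per unit, kg CO2e per unit) for each center -> facility leg.
LEGS = {
    ("C1", "F1"): (4.0, 10.0), ("C1", "F2"): (5.0, 2.0),
    ("C2", "F1"): (3.0, 8.0),  ("C2", "F2"): (6.0, 3.0),
}

def optimal_assignment(carbon_cost_per_kg: float) -> dict:
    """Assign each center to its cheapest facility, carbon cost included."""
    assignment = {}
    for c in CENTERS:
        assignment[c] = min(
            FACILITIES,
            key=lambda f: LEGS[(c, f)][0] + carbon_cost_per_kg * LEGS[(c, f)][1],
        )
    return assignment

# As the carbon price rises, the optimum shifts toward low-emission routes.
print(optimal_assignment(0.0))   # cost only: both centers route to F1
print(optimal_assignment(0.5))   # with a carbon price: C1 flips to F2
```

The full models in the paper solve this with (non-)linear physical programming over many sites and capacity constraints; the toy version only shows the carbon-driven reconfiguration effect.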
Procedia PDF Downloads 160
687 Li-Ion Batteries vs. Synthetic Natural Gas: A Life Cycle Analysis Study on Sustainable Mobility
Authors: Guido Lorenzi, Massimo Santarelli, Carlos Augusto Santos Silva
Abstract:
The growth of non-dispatchable renewable energy sources in the European electricity generation mix is promoting the search for technically feasible and cost-effective solutions to make use of the excess energy produced when demand is low. The increasing intermittent renewable capacity is becoming a challenge, especially in Europe, where some countries had shares of wind and solar in total electricity produced in 2015 higher than 20%, with Denmark around 40%. However, other consumption sectors (mainly transportation) still rely considerably on fossil fuels, with a slow transition to other forms of energy. Among the opportunities for different mobility concepts, electric vehicles (EV) and biofuel-powered vehicles (BPV) are the options that currently appear most promising. EVs mainly target light-duty users because of their zero (full electric) or reduced (hybrid) local emissions, while BPVs encourage the use of alternative resources with the same technologies (thermal engines) used so far. The batteries applied to EVs are based on lithium ions because of their overall good performance in energy density, safety, cost and temperature behaviour. Biofuels, on the other hand, are diverse, the major difference being their physical state (liquid or gaseous). In this study, gaseous biofuels are considered, more specifically Synthetic Natural Gas (SNG) produced through a Power-to-Gas process consisting of an electrochemical upgrade (with Solid Oxide Electrolyzers) of biogas with CO2 recycling. The latter process combines a first stage of electrolysis, where syngas is produced, and a second stage of methanation, in which the product gas is turned into methane and made available for consumption. A techno-economic comparison between the two alternatives is possible, but it does not capture all the different aspects involved in the two routes for the promotion of a more sustainable mobility.
For this reason, a more comprehensive methodology, Life Cycle Assessment, is adopted to describe the environmental implications of using excess electricity (directly or indirectly) for new vehicle fleets. The functional unit of the study is 1 km, and the two options are compared in terms of overall CO2 emissions, considering both Cradle-to-Gate and Cradle-to-Grave boundaries. Showing how the production and disposal of materials affect the environmental performance of the analyzed routes is useful to broaden the perspective on the impacts that different technologies produce, in addition to what is emitted during the operational life. In particular, this applies to batteries, for which the decommissioning phase has a larger impact on the environmental balance than it does for electrolyzers. The energy density of Li-ion batteries is more than one order of magnitude lower than that of SNG, implying that for the same amount of energy used, more material resources are needed to obtain the same effect. The comparison is performed in an energy system that simulates the Western European one, in order to assess which of the two solutions is more suitable to lead the de-fossilization of the transport sector with the least resource depletion and the mildest consequences for the ecosystem.
Keywords: electrical energy storage, electric vehicles, power-to-gas, life cycle assessment
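The per-kilometre comparison on the 1 km functional unit can be sketched as follows: embodied (production and disposal) emissions are amortised over the vehicle's lifetime and added to use-phase emissions. All figures below are hypothetical placeholders, not the study's LCA results.

```python
# Illustrative sketch of the 1 km functional unit: cradle-to-grave emissions
# per km = embodied emissions amortised over lifetime km + use-phase emissions.
# All numbers are hypothetical placeholders, not the study's LCA results.

def co2_per_km(embodied_kg: float, lifetime_km: float, use_phase_kg_per_km: float) -> float:
    """Cradle-to-grave g CO2e per km."""
    return 1000.0 * (embodied_kg / lifetime_km + use_phase_kg_per_km)

# Hypothetical battery-electric vs SNG-fuelled vehicle over the same lifetime.
ev_g_per_km = co2_per_km(embodied_kg=6000, lifetime_km=200_000, use_phase_kg_per_km=0.05)
sng_g_per_km = co2_per_km(embodied_kg=2000, lifetime_km=200_000, use_phase_kg_per_km=0.09)

print(round(ev_g_per_km))   # 80 g/km: higher embodied, lower use-phase emissions
print(round(sng_g_per_km))  # 100 g/km: lower embodied, higher use-phase emissions
```

The point of the amortisation term is the one made in the abstract: a route with low operational emissions can still lose ground once material production and decommissioning are included.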
Procedia PDF Downloads 178
686 Design and Development of Graphene Oxide Modified by Chitosan Nanosheets Showing pH-Sensitive Surface as a Smart Drug Delivery System for Control Release of Doxorubicin
Authors: Parisa Shirzadeh
Abstract:
Traditional drug delivery systems, in which drugs are taken by patients in multiple stages at specified intervals, do not meet today's drug delivery needs. In today's world, we are dealing with a huge number of recombinant peptide and protein drugs and analogues of hormones in the body, most of which are made with genetic engineering techniques. Most of these drugs are used to treat critical diseases such as cancer. Because of the limitations of the traditional method, researchers sought ways to overcome its problems. Following these efforts, controlled drug release systems were introduced, which have many advantages: with controlled release, the drug concentration in the body is kept at a defined level over time. Graphene is a biodegradable, non-toxic, natural material; compared to carbon nanotubes its price is lower, making it cost-effective for industrialization. On the other hand, the highly reactive and wide surfaces of graphene plates make graphene easier to modify than carbon nanotubes. Graphene oxide is often synthesized using concentrated oxidizers such as sulfuric acid, nitric acid, and potassium permanganate, based on the Hummers method. In comparison with the initial graphene, the resulting graphene oxide is heavier and has carboxyl, hydroxyl, and epoxy groups. It is therefore very hydrophilic, easily dissolves in water and forms a stable solution. Moreover, because the hydroxyl, carboxyl, and epoxy groups created on the surface are highly reactive, they can connect with other functional groups such as amines, esters, and polymers, bringing new features to the surface of graphene.
In fact, the creation of hydroxyl, carboxyl, and epoxy groups, that is, graphene oxidation, is the first step in creating other functional groups on the surface of graphene. Chitosan is a natural polymer and does not cause toxicity in the body. Due to its chemical structure, with OH and NH2 groups, it is suitable for binding to graphene oxide and increasing its solubility in aqueous solutions. Here, graphene oxide (GO) covalently modified by chitosan (CS) was developed for controlled release of doxorubicin (DOX). In this study, GO is produced by the Hummers method under acidic conditions. It is then chlorinated by oxalyl chloride to increase its reactivity toward amines. After that, in the presence of chitosan, an amidation reaction was performed to form amide linkages, and the doxorubicin was attached to the carrier surface by π-π interaction in phosphate buffer. GO, GO-CS, and GO-CS-DOX were characterized by FT-IR, Raman, TGA, and SEM. The ability to load and release the drug was determined by UV-Visible spectroscopy. The loading results showed a high DOX absorption capacity (99%), and pH-dependent release of DOX from the GO-CS nanosheet was identified at pH 5.3 and 7.4, with a faster release rate under acidic conditions.
Keywords: graphene oxide, chitosan, nanosheet, controlled drug release, doxorubicin
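Loading efficiency of the kind reported here (~99%) is commonly estimated by UV-Vis from the drug remaining in the supernatant after loading. The sketch below shows that standard calculation; the concentrations are hypothetical, chosen only to be consistent with the reported figure.

```python
# Illustrative sketch: loading efficiency estimated from the unbound drug
# left in the supernatant after loading (a standard UV-Vis-based calculation).
# The masses below are hypothetical, chosen to match the ~99% reported above.

def loading_efficiency(initial_mg: float, unbound_mg: float) -> float:
    """Fraction of drug adsorbed onto the carrier, as a percentage."""
    return 100.0 * (initial_mg - unbound_mg) / initial_mg

# Hypothetical experiment: 10.0 mg DOX added, 0.1 mg left unbound.
print(loading_efficiency(10.0, 0.1))  # 99.0
```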
Procedia PDF Downloads 120
685 Improved Operating Strategies for the Optimization of Proton Exchange Membrane Fuel Cell System Performance
Authors: Guillaume Soubeyran, Fabrice Micoud, Benoit Morin, Jean-Philippe Poirot-Crouvezier, Magali Reytier
Abstract:
Proton Exchange Membrane Fuel Cell (PEMFC) technology is considered a solution for the reduction of CO2 emissions. However, this technology still faces several challenges on the way to high-scale industrialization. In this context, increased durability remains a critical aspect of the technology's competitiveness. Fortunately, performance degradation in nominal operating conditions is partially reversible: if specific conditions are applied, a partial recovery of fuel cell performance can be achieved, whereas irreversible degradations can only be mitigated. It is therefore worth studying the optimal conditions to rejuvenate these reversible degradations and assessing the long-term impact of such procedures on the performance of the cell. Reversible degradations consist mainly of poisoning of Pt active sites by carbon monoxide at the anode, heterogeneities in water management during use, and oxidation/deactivation of Pt active sites at the cathode. The latter is identified as a major source of reversible performance loss, caused by the presence of oxygen, high temperature, and high cathode potential, which favor platinum oxidation, especially at high-efficiency operating points. Hence, we studied here a recovery procedure aiming at reducing the platinum oxides by decreasing the cathode potential during operation. Indeed, applying a short air starvation phase leads to a drop in cathode potential, and cell performance is temporarily increased afterwards. Nevertheless, local temperature and current heterogeneities within the cells are favored and shall be minimized. The fuel consumed during the recovery phase shall also be considered when evaluating the global efficiency. Consequently, the purpose of this work is to find an optimal compromise between the recovery of reversible degradations by air starvation, the increase of global cell efficiency, and the mitigation of irreversible degradation effects.
Different operating parameters were first studied, such as cell voltage, temperature, and humidity, in a single-cell set-up. Considering the global PEMFC system efficiency, tests showed that reducing the duration of the recovery phase and reducing the cell voltage were the keys to an efficient recovery. The frequency of the recovery phases was a major factor as well, and a specific method was established to find the optimal frequency depending on the duration and voltage of the recovery phase. Long-term degradation was then studied by applying FC-DLC cycles, based on NEDC cycles, to a 4-cell short stack, alternating test sequences with and without recovery phases. Depending on recovery phase timing, cell efficiency during the cycle increased by up to 2%, thanks to a mean voltage increase of 10 mV during the test sequences with recovery phases. However, cyclic voltammetry results suggest that the implementation of recovery phases accelerates the loss of platinum active area, which could be due to the large potential variations applied to the cathode electrode during operation.
Keywords: durability, PEMFC, recovery procedure, reversible degradation
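The trade-off between the mean-voltage benefit and the fuel spent at low voltage during starvation can be sketched with a toy duty-cycle model. All numbers below are illustrative assumptions, not measured values from the study: at constant current, hydrogen consumption is proportional to charge while useful work is proportional to voltage, so the cycle-averaged efficiency scales with the time-averaged cell voltage.

```python
def relative_efficiency_gain(v_nominal, dv_recovered, v_starvation,
                             t_recovery, t_cycle):
    """Toy duty-cycle model: at constant current, fuel use is proportional
    to charge and useful work to voltage, so cycle efficiency scales with
    the time-averaged cell voltage. Returns the gain in percent."""
    f = t_recovery / t_cycle  # fraction of the cycle spent in starvation
    v_mean = (v_nominal + dv_recovered) * (1 - f) + v_starvation * f
    return 100.0 * (v_mean / v_nominal - 1.0)

# Illustrative numbers: 0.70 V nominal, 10 mV recovered, a 1 s starvation
# dip to ~0.2 V every 10 minutes
gain = relative_efficiency_gain(0.70, 0.010, 0.20, 1.0, 600.0)
print(f"net efficiency gain = {gain:.2f} %")  # about +1.3 %
```

With these assumed numbers the recovered 10 mV dominates the short low-voltage dip, which is consistent in order of magnitude with the up-to-2% cycle-efficiency gain reported above.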
Procedia PDF Downloads 134
684 Burial Findings in Prehistory Qatar: Archaeological Perspective
Authors: Sherine El-Menshawy
Abstract:
Death, funerary beliefs, and customs form an essential feature of belief systems and practices in many cultures. It is evident that during prehistoric periods, various techniques of corpse burial and funerary rituals were practiced. Occasionally, corpses were merely buried in the sand, or placed in a grave in a contracted position (knees drawn up under the chin and hands normally lying before the face) with mounds of sand marking the grave, or the bodies were burnt. The common practice demonstrable in the archaeological record, however, was burial. The earliest graves were very simple, consisting of shallow circular or oval pits in the ground. The current study focuses on the material culture of Qatar during the prehistoric period, specifically funerary architecture and burial practices. Since information about burial customs and funerary practices in prehistoric Qatar is both scarce and fragmentary, the importance of such a study lies in answering research questions related to funerary beliefs and burial habits during the early stages of civilizational transformation in prehistoric Qatar, compared with Mesopotamia, since, chronologically, the earliest pottery discovered in the Qatar excavations belongs to the prehistoric Ubaid culture of Mesopotamia. This will lead to a deeper understanding of life and social status in prehistoric Qatar. The research also explores the relationship between prehistoric Qatar's funerary traditions and those of neighboring cultures in Mesopotamia and Ancient Egypt, with the aim of ascertaining the distinctive aspects of prehistoric Qatari culture, the reception of classical culture, and the role it played in the creation of local cultural identities in the Near East. The methodology of this study is based on published books and articles, in addition to unpublished reports of the Danish team that excavated in and around Doha and other Qatari archaeological sites from the 1950s.
The study also builds on comparative material related to burial customs found in Mesopotamia. This research therefore: (i) advances knowledge of the burial customs of the ancient people who inhabited Qatar, a subject still little known to scholars; the study will support a deeper understanding of the history of ancient Qatar and its culture and values, with the aim of sharing this invaluable human heritage; (ii) is of special significance for the field, since the evidence it derives has great value for the study of living conditions, social structure, religious beliefs, and ritual practices; (iii) draws on excavations that brought to light burials of different categories. The graves date to the Bronze and Iron Ages, and their structure varies between mounds above the ground and burials below ground level. Evidence comes from sites such as Al-Da'asa, Ras Abruk, and Al-Khor. Painted Ubaid sherds of Mesopotamian culture have been discovered in Qatar at sites such as Al-Da'asa, Ras Abruk, and Bir Zekrit. In conclusion, no comprehensive study has yet been done, and the lack of a general synthesis of information about funerary practices is problematic; this study will therefore fill the gaps in the area.
Keywords: archaeological, burial, findings, prehistory, Qatar
Procedia PDF Downloads 150
683 Polarization as a Proxy of Misinformation Spreading
Authors: Michela Del Vicario, Walter Quattrociocchi, Antonio Scala, Ana Lucía Schmidt, Fabiana Zollo
Abstract:
Information, rumors, and debates may shape and heavily impact public opinion. In recent years, several concerns have been expressed about social influence on the Internet and the outcome that online debates might have on real-world processes. Indeed, on online social networks users tend to select information that is coherent with their system of beliefs and to form groups of like-minded people, i.e., echo chambers, where they reinforce and polarize their opinions. In this way, the potential benefits coming from exposure to different points of view may be reduced dramatically, and individuals' views may become more and more extreme. Such a context fosters misinformation spreading, which has always represented a socio-political and economic risk. The persistence of unsubstantiated rumors, e.g., the hypothetical and hazardous link between vaccines and autism, suggests that social media do have the power to misinform, manipulate, or control public opinion. As an example, current approaches such as debunking efforts or algorithmic solutions based on the reputation of the source seem to prove ineffective against collective superstition. Indeed, experimental evidence shows that confirmatory information gets accepted even when it contains deliberately false claims, while dissenting information is mainly ignored, influences users' emotions negatively, and may even increase group polarization. Moreover, confirmation bias has been shown to play a pivotal role in information cascades, posing serious warnings about the efficacy of current debunking efforts. Nevertheless, mitigation strategies have to be adopted. To generalize the problem and to better understand the social dynamics behind information spreading, in this work we rely on a tight quantitative analysis to investigate the behavior of more than 300M users with respect to news consumption on Facebook over a time span of six years (2010-2015).
Through a massive analysis of 920 news outlet pages, we are able to characterize the anatomy of news consumption on a global and international scale. We show that users tend to focus on a limited set of pages (selective exposure), eliciting a sharp and polarized community structure among news outlets. Moreover, we find similar patterns around the Brexit debate (the British referendum to leave the European Union), where we observe the spontaneous emergence of two well-segregated and polarized groups of users around news outlets. Our findings provide interesting insights into the determinants of polarization and the evolution of core narratives in online debate. Our main aim is to understand and map the information space on online social media by identifying non-trivial proxies for the early detection of massive informational cascades. Furthermore, by combining users' traces, we are finally able to draft the main concepts and beliefs of the core narrative of an echo chamber and its related perceptions.
Keywords: information spreading, misinformation, narratives, online social networks, polarization
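The selective-exposure measurement described above can be sketched with a minimal per-user polarization score. The data layout, the two-sided page labelling, and the 0.95 threshold below are illustrative assumptions, not the study's actual pipeline: each user's activity is split between two communities of pages, and users whose activity concentrates almost entirely on one side are counted as polarized.

```python
def polarization(user_likes, side_of_page, threshold=0.95):
    """user_likes: {user: {page: like_count}}; side_of_page: {page: 'A'|'B'}.
    Returns per-user scores in [0, 1] (share of activity on side 'A') and
    the fraction of users polarized toward either side."""
    scores = {}
    for user, likes in user_likes.items():
        on_a = sum(n for page, n in likes.items() if side_of_page[page] == 'A')
        scores[user] = on_a / sum(likes.values())
    polarized = [u for u, s in scores.items()
                 if s >= threshold or s <= 1 - threshold]
    return scores, len(polarized) / len(scores)

# toy traces: two pages on opposite sides of a debate
likes = {
    'u1': {'p1': 19, 'p2': 1},   # almost all activity on side A
    'u2': {'p2': 30},            # all activity on side B
    'u3': {'p1': 5, 'p2': 5},    # mixed news diet
}
sides = {'p1': 'A', 'p2': 'B'}
scores, frac = polarization(likes, sides)
print(scores, frac)  # u1 and u2 are polarized, u3 is not -> fraction 2/3
```

A sharply bimodal distribution of such scores, with most mass near 0 and 1, is the signature of the segregated community structure the abstract reports.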
Procedia PDF Downloads 289
682 Sustainable Development Goals and Gender Equality: Impact of Unpaid Labor on Women’s Leadership in India
Authors: Swati Vohra
Abstract:
A genuine economic and social transformation requires equal contribution and participation from both men and women; however, achieving this gender parity is a global concern. In patriarchal societies around the world, women have been silenced, oppressed, and subjugated. Girls and women comprise half of the world's population. This, however, must not be the lone reason for recognizing them and providing them with equal opportunities: every individual has a right to develop through opportunities without the biases of gender, caste, race, or ethnicity. The world today is confronted by pressing issues of climate change, economic crisis, violence against women and children, and escalating conflicts, to name a few. Achieving gender parity is thus an essential component in meeting this wide array of challenges in order to create just, robust, and inclusive societies. In 2015, the United Nations enunciated 17 Sustainable Development Goals to be achieved by 2030, one of which is SDG#5, Gender Equality. It is not merely a stand-alone goal: it is central to the achievement of all 17 SDGs. Without progress on gender equality, the global community will not only fail to achieve SDG#5 but will also lose the impetus toward achieving the broader 2030 agenda. This research is based on a hypothesis that connects the targets laid out by the UN under SDG#5: 5.4 (recognize and value unpaid care and domestic work) and 5.5 (ensure women's participation in leadership at all levels of decision-making). The study evaluates the impact of unpaid household responsibilities on women's leadership in India. In Indian society, women have experienced low social status for centuries, reflected throughout Indian history in the preference for a male child and the occurrence of female infanticide, still prevalent in many parts of the country. Insistence on traditional gender roles builds patriarchal inequalities into the structure of Indian society.
It is argued that a burden of unpaid labor is placed on women, which narrows the opportunities and life chances women are given and the choices they are able to make, thereby shutting them out of shared participation in public and economic leadership. The study investigates the theoretical framework of the social construction of gender, unpaid labor, challenges to women leaders, and the peace theorists' perspective as its core components. The methodology used is qualitative research on the comprehensive literature, accompanied by data collected through interviews with women leaders from various fields within the Delhi-National Capital Region (NCR). The women leaders interviewed had the privilege of a good education and supportive families; after marriage and children, however, this was no longer the case, and social obligations weighed heavily on them. The research concludes by recommending gender-neutral parenting and education, along with government-ratified paternal leave of at least six months and childcare facilities available to both parents at the workplace.
Keywords: gender equality, gender roles, peace studies, sustainable development goals, social construction, unpaid labor, women's leadership
Procedia PDF Downloads 122
681 Exploring the Application of IoT Technology in Lower Limb Assistive Devices for Rehabilitation during the Golden Period of Stroke Patients with Hemiplegia
Authors: Ching-Yu Liao, Ju-Joan Wong
Abstract:
Recent years have shown a trend toward younger stroke patients and an increase in ischemic strokes as stroke incidence rises. This has led to a growing demand for telemedicine, particularly during the COVID-19 pandemic, which made the need for telemedicine even more urgent. This shift in healthcare is also closely related to advancements in Internet of Things (IoT) technology. Stroke-induced hemiparesis is a significant issue for patients. The medical community believes that if intervention occurs within three to six months of stroke onset, 80% of the residual effects can be restored to normal; this window is known as the stroke golden period. During this time, patients undergo treatment and rehabilitation, and neural plasticity is at its best. Lower limb rehabilitation after stroke generally includes exercises such as supported standing and walking posture, typically involving the healthy limb guiding the affected limb to achieve rehabilitation goals. Existing gait training aids in hospitals usually cover balance gait, sitting posture training, and precise muscle control, effectively addressing poor gait, insufficient muscle activity, and the inability to train independently during recovery. However, home training aids, such as braced and wheeled devices, often rely on the healthy limb pulling the affected limb, leading to lower usage of the affected limb and worsening circular walking and compensatory movement issues. IoT technology connects devices via the internet to record and receive data, provide feedback, and adjust equipment for intelligent effects. Therefore, this study aims to explore how IoT can be integrated into existing gait training aids to monitor and sense home rehabilitation movements, improve compensatory issues in gait training through real-time feedback, and enable healthcare professionals to quickly understand patient conditions and enhance medical communication.
To understand the needs of hemiparetic patients, a review of the relevant literature from the past decade will be conducted. From a user experience perspective, participant observation will be used to explore the use of home training aids by stroke patients and therapists, and interviews with physical therapists will be conducted to obtain professional opinions and practical experience. Design specifications for home training aids for hemiparetic patients will then be summarized. Applying IoT technology to lower limb training aids for stroke hemiparesis can help promote the recovery of walking function in hemiparetic patients, reduce muscle atrophy, and allow healthcare professionals to immediately grasp patient conditions and adjust gait training plans based on the collected and analyzed information. Exploring these potential development directions provides a valuable reference for the further application of IoT technology in the field of medical rehabilitation.
Keywords: stroke, hemiplegia, rehabilitation, gait training, internet of things technology
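One concrete form the real-time feedback described above could take is a symmetry check on bilateral sensor streams. The sketch below is only a minimal illustration: the in-sole load sensors, the symmetry-index formula, and the 10% threshold are assumptions for the example, not specifications from the study. Stance durations under each foot are compared, and sessions where the affected side is persistently underused are flagged for cueing.

```python
def symmetry_index(affected, healthy):
    """Percent asymmetry between the mean stance durations of the two limbs.
    0 means perfectly symmetric; negative means the affected limb bears less."""
    a = sum(affected) / len(affected)
    h = sum(healthy) / len(healthy)
    return 100.0 * (a - h) / (0.5 * (a + h))

def needs_feedback(affected, healthy, threshold=-10.0):
    # flag compensatory overuse of the healthy limb for real-time cueing
    return symmetry_index(affected, healthy) < threshold

# stance durations (s) per step from hypothetical in-sole load sensors
affected_steps = [0.52, 0.50, 0.48, 0.51]
healthy_steps = [0.71, 0.69, 0.73, 0.70]
print(round(symmetry_index(affected_steps, healthy_steps), 1))  # -33.9
print(needs_feedback(affected_steps, healthy_steps))            # True
```

In an IoT setting the flag would be pushed both to the device (haptic or audio cue) and to the therapist's dashboard, which is the communication loop the abstract describes.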
Procedia PDF Downloads 29
680 Analyzing the Investment Decision and Financing Method of the French Small and Medium-Sized Enterprises
Authors: Eliane Abdo, Olivier Colot
Abstract:
SMEs are always considered a national priority due to their contribution to job creation, innovation, and growth. Once the start-up phase is crossed with encouraging results, the company enters the growth phase. In order to improve its competitiveness and maintain and increase its market share, the company finds it necessary, even obligatory, to develop its tangible and intangible investments. SMEs are generally closely held companies in a special and critical financial situation, with limited resources and difficulty accessing the capital markets; their shareholders live in a constant conflict between their independence and their need to increase capital, which leads to the entry of new shareholders. Capital structure has always been at the core of research in corporate finance; moreover, the financial crisis and its repercussions on credit availability, especially for SMEs, make SME financing a hot topic. On the other hand, financial theories do not provide answers to capital structure questions; they offer tools and modes of financing that are more accessible to larger companies. Yet SMEs' capital structure cannot be independent of their governance structure. Classic financial theory supposes independence between the investment decision and the financing decision: investment determines the volume of funding, but not the split between internal and external funds. In this context, we find it interesting to test the hypothesis that SMEs respond positively to the financial theories applied to large firms, and to check whether they are constrained by the conventional solutions used by large companies. This research therefore focuses on analyzing the resource structure of SMEs in parallel with their investment structure, in order to highlight a link between their asset and liability structures.
We grounded our conceptual model in two main theoretical frameworks, the Pecking Order theory and the Trade-Off theory, taking into consideration SMEs' characteristics. Our data were drawn from the DIANE database. Five hypotheses were tested via a panel regression to understand the type of dependence between the financing methods of 3,244 French SMEs and the development of their investments over a period of 10 years (2007-2016). The results show dependence between equity and internal financing in the case of intangible investment development. Moreover, this type of business is constrained in financial debt, since the guarantees provided are not sufficient to meet the banks' requirements. For tangible investment development, however, SMEs rely sequentially on internal financing, bank borrowing, and new share issuance or hybrid financing, which is consistent with the Pecking Order theory. We therefore conclude that unlisted SMEs incur more financial debt to finance their tangible investments than their intangible ones, but always prefer internal financing as a first choice. This seems to be confirmed by the finding that the profitability of the company is negatively related to the increase in financial debt. Thus, the Pecking Order theory's predictions seem the most plausible: SMEs primarily rely on self-financing and then go into debt as a priority to finance their financial deficit.
Keywords: capital structure, investments, life cycle, pecking order theory, trade off theory
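The panel regression used above can be sketched with a minimal fixed-effects (within) estimator on synthetic data. The variable names and the data-generating process below are illustrative, not the study's actual DIANE specification: firm-specific effects are removed by demeaning each firm's series, after which OLS recovers the slope linking investment to debt even though the raw regressor is correlated with the firm effect.

```python
import numpy as np

def within_ols(y, x, firm_ids):
    """Fixed-effects (within) estimator for y_it = a_i + b * x_it + e_it:
    demean y and x within each firm, then run OLS on the demeaned data."""
    yd, xd = y.astype(float).copy(), x.astype(float).copy()
    for f in np.unique(firm_ids):
        m = firm_ids == f
        yd[m] -= yd[m].mean()
        xd[m] -= xd[m].mean()
    return float(xd @ yd / (xd @ xd))

rng = np.random.default_rng(0)
n_firms, n_years = 200, 10
firms = np.repeat(np.arange(n_firms), n_years)
alpha = rng.normal(0, 2, n_firms)[firms]       # unobserved firm effects
debt = alpha + rng.normal(0, 1, firms.size)    # regressor correlated with them
invest = alpha + 0.4 * debt + rng.normal(0, 0.5, firms.size)

b = within_ols(invest, debt, firms)
print(round(b, 2))  # close to the true slope 0.4; pooled OLS would be biased
```

The same within transformation underlies standard panel estimators; here it is written out explicitly to make the firm-effect removal visible.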
Procedia PDF Downloads 112
679 A Taxonomy of Professional Engineering Attributes for Tackling Global Humanitarian Challenges
Authors: Georgia Kremmyda, Angelos Georgoulas, Yiannis Koumpouros, James T. Mottram
Abstract:
There is a growing interest in enhancing the creativity and problem-solving ability of engineering students by expanding their engagement with complex, interdisciplinary problems such as environmental issues, resilience to man-made and natural disasters, global health matters, water needs, increased energy demands, and other global humanitarian challenges. Tackling societal challenges requires knowledgeable and erudite engineers who can handle, combine, transform, and create innovative, affordable, and sustainable solutions. This view simultaneously complements and challenges current conceptions of an emerging educational movement that, almost without exception, is underpinned by calls for competitive economic growth and technological development. This article reveals a taxonomy of humanitarian attributes to be instilled in professional engineers through reformed curricula and innovative pedagogies; once implemented and integrated efficiently in higher engineering education, these will provide students and educators with opportunities to explore interdependencies and connections between resources, sustainable design, societal needs, and the natural environment, and to engage critically with implicit and explicit facets of disciplinary identity. The research involves a study on (a) current practices, best practices, and barriers in knowledge organisation, content, and hierarchy in graduate engineering programmes; (b) best practices associated with teaching and research in engineering education around the world; (c) opportunities inherent in general reforms of graduate engineering education and in integrating the humanitarian context throughout engineering education programmes; and (d) an overarching taxonomy of professional attributes for tackling humanitarian challenges.
Research methods involve a state-of-the-art literature review on engineering education and pedagogy, to source thematic findings on the current status of engineering education worldwide, and qualitative research through three practice dialogue workshops run in Asia (Vietnam, Indonesia, and Bangladesh) involving a variety of national, international, and local stakeholders (industries, NGOs, and governmental organisations). Findings from this study provide evidence on: (a) the professional engineering attributes (skills, experience, knowledge) needed for tackling humanitarian challenges; (b) how other disciplines and professions can be integrated into engineering while defining the professional attributes of engineers capable of tackling humanitarian challenges; the attributes will be linked to those disciplines and professions most likely to enforce them (removing the assumption that engineering education as it stands can provide all attributes); and (c) how these attributes shall be supplied, and what kind of pedagogies or training shall take place beyond current practices. Acknowledgment: The study is currently in progress and is being undertaken in the framework of the project ENHANCE - ENabling Humanitarian Attributes for Nurturing Community-based Engineering (project No: 598502-EEP-1-2018-1-UK-EPPKA2-CBHE-JP, 2018-2582/001-001), funded by the Erasmus+ KA2 Cooperation for innovation and the exchange of good practices - Capacity building in the field of Higher Education.
Keywords: professional engineering attributes, engineering education, taxonomy, humanitarian challenges, humanitarian engineering
Procedia PDF Downloads 191
678 Use of End-Of-Life Footwear Polymer EVA (Ethylene Vinyl Acetate) and PU (Polyurethane) for Bitumen Modification
Authors: Lucas Nascimento, Ana Rita, Margarida Soares, André Ribeiro, Zlatina Genisheva, Hugo Silva, Joana Carvalho
Abstract:
The footwear industry is an essential part of the fashion sector, producing various types of footwear such as shoes, boots, sandals, sneakers, and slippers. Global footwear consumption has doubled every 20 years since the 1950s: it is estimated that in 1950 each person consumed one new pair of shoes yearly, and by 2005 over 20 billion pairs of shoes were consumed. To meet global footwear demand, production reached $24.2 billion, equivalent to about $74 per person in the United States, or three new pairs of shoes per person worldwide. The issue of footwear waste stems from the fact that shoe production can generate a large amount of waste, much of which is difficult to recycle or reuse. This waste includes scraps of leather, fabric, rubber, plastics, toxic chemicals, and other materials. The search for alternative solutions for waste treatment and valorization is increasingly relevant in the current context, mainly when focused on utilizing waste as a source of substitute materials. From the perspective of the new circular economy paradigm, this approach is of utmost importance, as it aims to preserve natural resources and minimize the environmental impact associated with sending waste to landfills. In this sense, incorporating waste into industrial sectors that can absorb large volumes, such as road construction, becomes an urgent and necessary solution from an environmental standpoint. This study explores the use of plastic waste from the footwear industry as a substitute for virgin polymers in bitumen modification, a solution that points to a more sustainable future. Replacing conventional polymers with plastic waste in asphalt composition reduces the amount of waste sent to landfills and offers an opportunity to extend the lifespan of road infrastructure.
By incorporating waste into construction materials, it is possible to reduce the consumption of natural resources and the emission of pollutants, promoting a more circular and efficient economy. In the initial phase of this study, waste materials from end-of-life footwear were selected, and the plastic waste with the highest potential for application was separated. Based on a literature review, EVA (ethylene vinyl acetate) and PU (polyurethane) were identified as polymers suitable for modifying 50/70 penetration-grade bitumen. Each polymer was analysed at concentrations of 3% and 5%. The production process involved fragmenting the polymer to a size of 4 millimetres, heating the materials to 180 ºC, and mixing for 10 minutes at low speed, followed by 30 minutes in a high-speed mixer. The tests included penetration, softening point, viscosity, and rheological assessments. Based on the test results, the mixtures with EVA performed better than those with PU: EVA showed more resistance to temperature, a better viscosity curve, and greater elastic recovery in rheology.
Keywords: footwear waste, hot asphalt pavement, modified bitumen, polymers
Procedia PDF Downloads 15
677 Concentration of Droplets in a Transient Gas Flow
Authors: Timur S. Zaripov, Artur K. Gilfanov, Sergei S. Sazhin, Steven M. Begg, Morgan R. Heikal
Abstract:
The calculation of the concentration of inertial droplets in complex flows is encountered in the modelling of numerous engineering and environmental phenomena, for example, fuel droplets in internal combustion engines and airborne pollutant particles. The results of recent research, focused on the development of methods for calculating concentration and their implementation in the commercial CFD code ANSYS Fluent, are presented here. The study is motivated by the investigation of mixture preparation processes in internal combustion engines with direct injection of fuel sprays. Two methods are used in our analysis: the Fully Lagrangian method (also known as the Osiptsov method) and the Eulerian approach. The Osiptsov method predicts droplet concentrations along path lines by solving the equations for the components of the Jacobian of the Eulerian-Lagrangian transformation. This method significantly decreases the computational requirements, as it does not require the tracking of large numbers of droplets, in contrast to the conventional Lagrangian approach. In the Eulerian approach, the average droplet velocity is expressed as a function of the carrier phase velocity via an expansion in the droplet response time, and the transport equation can be solved in Eulerian form. The advantage of this method is that the droplet velocity can be found without solving additional partial differential equations for the droplet velocity field. The predictions of the two approaches were compared in the analysis of a dilute gas-droplet flow around an infinitely long circular cylinder. The concentrations of inertial droplets, with Stokes numbers of 0.05, 0.1, and 0.2, in steady-state and transient laminar flow conditions, were determined at various Reynolds numbers. In the steady-state case, flows with Reynolds numbers of 1, 10, and 100 were investigated.
It has been shown that the results predicted using both methods are almost identical at small Reynolds and Stokes numbers. For larger values of these numbers (Stokes 0.1 and 0.2; Reynolds 10 and 100), the Eulerian approach predicted a wider spread of concentration in the perturbations caused by the cylinder, which can be attributed to the averaged droplet velocity field. The transient droplet flow case was investigated for a Reynolds number of 200. Both methods predicted high droplet concentrations in the zones of high strain rate and low concentrations in zones of high vorticity. The maxima of droplet concentration predicted by the Osiptsov method were up to two orders of magnitude greater than those predicted by the Eulerian method, a significant variation for an approach widely used in engineering applications. Based on these comparisons, the Osiptsov method gives a more precise description of the local properties of the inertial droplet flow. The method has been applied to the analysis of experimental observations of a liquid gasoline spray at representative fuel injection pressure conditions. The preliminary results show good qualitative agreement between the predictions of the model and the experimental data.
Keywords: internal combustion engines, Eulerian approach, fully Lagrangian approach, gasoline fuel sprays, droplets and particle concentrations
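The core of the Fully Lagrangian method can be sketched in a few lines. Along each droplet trajectory one integrates, together with the Stokes-drag equation of motion, the components of the Jacobian J_ij = dx_i/dx0_j of the Eulerian-Lagrangian mapping; continuity then gives the concentration as n = n0 / |det J|. The sketch below is a minimal non-dimensional 2D illustration in an assumed steady cellular carrier flow, not the cylinder flow of the study, and it cross-checks the Jacobian against finite differences of neighbouring trajectories.

```python
import numpy as np

def u(x):  # assumed carrier flow: steady, divergence-free cellular field
    return np.array([np.sin(x[0]) * np.cos(x[1]),
                     -np.cos(x[0]) * np.sin(x[1])])

def grad_u(x):  # velocity gradient du_i/dx_j
    return np.array([[np.cos(x[0]) * np.cos(x[1]), -np.sin(x[0]) * np.sin(x[1])],
                     [np.sin(x[0]) * np.sin(x[1]), -np.cos(x[0]) * np.cos(x[1])]])

def rhs(s, St):
    # state: position x, velocity v, Jacobian J = dx/dx0, and W = dv/dx0
    x, v = s[:2], s[2:4]
    J, W = s[4:8].reshape(2, 2), s[8:12].reshape(2, 2)
    return np.concatenate([v, (u(x) - v) / St, W.ravel(),
                           ((grad_u(x) @ J - W) / St).ravel()])

def track(x0, St=0.1, t_end=2.0, dt=1e-3):
    """Integrate one droplet (Stokes drag, response time St) released at the
    local carrier velocity; return its final position and n/n0 = 1/|det J|."""
    x0 = np.asarray(x0, float)
    s = np.concatenate([x0, u(x0), np.eye(2).ravel(), grad_u(x0).ravel()])
    for _ in range(int(round(t_end / dt))):  # classical RK4
        k1 = rhs(s, St); k2 = rhs(s + 0.5 * dt * k1, St)
        k3 = rhs(s + 0.5 * dt * k2, St); k4 = rhs(s + dt * k3, St)
        s = s + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    J = s[4:8].reshape(2, 2)
    return s[:2], 1.0 / abs(np.linalg.det(J))

# cross-check det J against finite differences of neighbouring trajectories
x_c, n_c = track([1.0, 0.5])
eps = 1e-5
Jfd = np.column_stack([(track([1.0 + eps, 0.5])[0] - x_c) / eps,
                       (track([1.0, 0.5 + eps])[0] - x_c) / eps])
print(n_c, 1.0 / abs(np.linalg.det(Jfd)))  # the two estimates agree closely
```

A single trajectory thus yields the local concentration directly, which is what makes the method cheap compared with counting large ensembles of tracked droplets.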
Procedia PDF Downloads 257
676 Low Cost LiDAR-GNSS-UAV Technology Development for PT Garam’s Three Dimensional Stockpile Modeling Needs
Authors: Mohkammad Nur Cahyadi, Imam Wahyu Farid, Ronny Mardianto, Agung Budi Cahyono, Eko Yuli Handoko, Daud Wahyu Imani, Arizal Bawazir, Luki Adi Triawan
Abstract:
Unmanned aerial vehicle (UAV) technology offers cost efficiency and data retrieval time advantages. Technologies such as UAV, GNSS, and LiDAR can be combined into an integrated system in which each covers the others' deficiencies. This integrated system aims to increase the accuracy of calculating the volume of the land stockpiles of PT. Garam (a salt company). UAV applications are used to obtain geometric data and capture textures that characterize the structure of objects. This study uses the Taror 650 Iron Man drone with four propellers, which can fly for 15 minutes. The processing software can classify data according to the number of image acquisitions, utilizing photogrammetry and Structure from Motion point cloud principles. LiDAR enables data acquisition for the creation of point clouds, three-dimensional models, digital surface models, contours, and orthomosaics with high accuracy. LiDAR has the drawback that its coordinate data positions have only local references. Therefore, the researchers use multi-sensor GNSS, LiDAR, and drone technology to map the salt stockpiles on open land and in warehouses, a task PT. Garam carries out twice a year; the previous process used terrestrial methods and manual calculations with sacks. LiDAR measurements need to be combined with the UAV to overcome data acquisition limitations, because a ground-based scan only passes along the right and left sides of the object, especially when applied to a salt stockpile. The UAV is flown to assist data acquisition with wide coverage, with the help of the integrated 200-gram LiDAR system, so that an optimal flying angle can be maintained during the flight. Using LiDAR for low-cost mapping surveys will make it easier for surveyors and academics to obtain fairly accurate data at a more economical price: as a survey tool, LiDAR is available at a low price, around 999 USD, and can still produce detailed data.
Therefore, to minimize the operational costs of using LiDAR, surveyors can use Low-Cost LiDAR, GNSS, and UAV at a price of around 638 USD. The data generated by this sensor is in the form of a visualization of an object shape made in three dimensions. This study aims to combine Low-Cost GPS measurements with Low-Cost LiDAR, which are processed using free user software. GPS Low Cost generates data in the form of position-determining latitude and longitude coordinates. The data generates X, Y, and Z values to help georeferencing process the detected object. This research will also produce LiDAR, which can detect objects, including the height of the entire environment in that location. The results of the data obtained are calibrated with pitch, roll, and yaw to get the vertical height of the existing contours. This study conducted an experimental process on the roof of a building with a radius of approximately 30 meters.Keywords: LiDAR, unmanned aerial vehicle, low-cost GNSS, contour
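The core computation described above, estimating stockpile volume from a georeferenced point cloud, can be sketched as a simple grid-differencing routine. This is a minimal illustration of the principle, not the authors' processing chain; the cell size, flat base elevation, and function name are assumptions:

```python
from collections import defaultdict

def stockpile_volume(points, base_z, cell=1.0):
    """Estimate stockpile volume from a UAV/LiDAR point cloud.

    points: iterable of (x, y, z) tuples in metres; base_z: ground
    elevation; cell: grid cell size. Each occupied cell contributes
    cell_area * (mean z - base_z), clipped at zero.
    """
    cells = defaultdict(list)
    for x, y, z in points:
        cells[(int(x // cell), int(y // cell))].append(z)
    area = cell * cell
    return sum(max(sum(zs) / len(zs) - base_z, 0.0) * area
               for zs in cells.values())
```

A production workflow would difference a digital surface model against a surveyed base surface rather than assume a flat base, but the cell-wise mean-height-times-area principle is the same.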
Procedia PDF Downloads 94
675 Three-Stage Least Squared Models of a Station-Level Subway Ridership: Incorporating an Analysis on Integrated Transit Network Topology Measures
Authors: Jungyeol Hong, Dongjoo Park
Abstract:
The urban transit system is a critical part of the solution to economic, energy, and environmental challenges, and it ultimately contributes to improving people's quality of life. To realize these advantages, the city of Seoul has constructed an integrated transit system comprising both subway and buses; as a result, approximately 6.9 million citizens use the integrated transit system every day for their trips. Diagnosing the current transit network is a significant task in providing a more convenient and pleasant transit environment. The central objective of this study is therefore to establish a methodological framework for analyzing an integrated bus-subway network and to examine the relationship between subway ridership and parameters such as network topology measures, bus demand, and a variety of commercial business facilities. Regarding statistical approaches to estimating subway ridership at the station level, many previous studies relied on Ordinary Least Squares regression, but few considered the endogeneity issues that can arise in subway ridership prediction models. This study focuses on discovering both the impacts of integrated transit network topology measures and the endogenous effect of bus demand on subway ridership; it can thereby contribute to more accurate subway ridership estimation that accounts for this statistical bias. The spatial scope of the study covers the city of Seoul in South Korea, including 243 subway stations and 10,120 bus stops, while the temporal scope spans twenty-four hours in one-hour interval panels. Detailed subway and bus ridership information was collected from the Seoul Smart Card data for 2015 and 2016. First, integrated subway-bus network topology measures characterizing connectivity, centrality, transitivity, and reciprocity were estimated based on complex network theory.
The results of the integrated transit network topology analysis were compared to those for the subway-only network. A non-recursive approach, Three-Stage Least Squares (3SLS), was then applied to develop the daily subway ridership model, capturing the endogeneity between bus and subway demand. Independent variables included roadway geometry, commercial business characteristics, socio-economic characteristics, a safety index, transit facility attributes, and dummies for seasons and time zones. Consequently, the network topology measures were found to have significant effects. In particular, the centrality measures showed elasticities of 4.88% for closeness centrality and 24.48% for betweenness centrality, while the elasticity of bus ridership was 8.85%. Moreover, bus demand and subway ridership were shown to be endogenous in a non-recursive manner: predicted bus ridership and predicted subway ridership were statistically significant in OLS regression models. The Three-Stage Least Squares model therefore appears to be a plausible model for efficient subway ridership estimation. The proposed approach is expected to provide a reliable guideline that can be used as part of the spectrum of tools for evaluating a city-wide integrated transit network.
Keywords: integrated transit system, network topology measures, three-stage least squares, endogeneity, subway ridership
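The endogeneity logic behind the model can be illustrated with the first two stages of the estimator (full 3SLS adds a third, cross-equation GLS step). The following is a minimal single-regressor sketch on synthetic data, not the authors' specification; the variable names and the instrument are hypothetical:

```python
def simple_ols(x, y):
    """Slope and intercept of y = a + b*x by ordinary least squares."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return my - b * mx, b

def two_stage_ls(z, x, y):
    """2SLS for y = a + b*x with instrument z for endogenous x.

    Stage 1: regress x on z; Stage 2: regress y on the fitted values
    of x. (3SLS adds a third stage: cross-equation GLS using the
    stage-2 residual covariances.)
    """
    a1, b1 = simple_ols(z, x)
    x_hat = [a1 + b1 * zi for zi in z]
    return simple_ols(x_hat, y)
```

On data where x and y share an unobserved shock, naive OLS of y on x is biased upward, while the two-stage estimate recovers the true coefficient.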
Procedia PDF Downloads 177
674 Compromising Quality of Life in Low Income Settlement's: The Case of Ashrayan Prakalpa, Khulna
Authors: Salma Akter, Md. Kamal Uddin
Abstract:
This study aims to demonstrate how a top-down shelter policy and its resultant dwelling environment lead to 'everyday compromise' by the grassroots, according to subjective (satisfaction) and objective (physical design elements and physical environmental elements) indicators measured across three levels of the settlement: macro (community), meso (neighborhood or shelter/built environment), and micro (family). Ashrayan Prakalpa is a resettlement/housing project of the Government of Bangladesh providing shelter and human resource development activities, such as education, microcredit, and training programmes, to landless, homeless, and rootless people. Despite the integrated nature of the shelter policies (comprising poverty alleviation, employment opportunity, secure tenure, and livelihood training), the 'quality of life' achieved at the different levels of the settlements remains questionable. As the dwellers of the shelter units (formally termed 'barracks' rather than shelter or housing) remain on the receiving end of the government's resettlement policies, they often engage in spatial-physical and socio-economic negotiation and assume curious forms of spatial practice that frequently contradict policy planning. Policy-based shelter thus forces dwellers to compromise persistently with their provided built environments, both overtly and covertly. Compromising with the prescribed designed spaces and facilities across their living places articulates their negotiation with the quality of allocated space, built form, and infrastructure, which in turn manifests as a lower quality of life. The top-down shelter project studied here, Dakshin Chandani Mahal Ashrayan Prakalpa at Dighalia Upazila, is located at the eastern fringe of Khulna, Bangladesh, and is still in the process of resettling internally displaced and homeless people.
In terms of methodology, this research is primarily exploratory and adopts a case study method; an analytical framework for evaluating quality of life was developed through a deductive approach. Secondary data were obtained from housing policy analysis and a review of the relevant literature, while key informant interviews, focus group discussions, necessary drawings and photographs, and participant observation at the dwelling, neighborhood, and community levels served as the primary data collection methods. The findings reveal that various shortages, inadequacies, and instances of policymaker negligence force compromises with the allocated designed space, physical infrastructure, and economic opportunities at the dwelling, neighborhood, and, above all, community levels. The outcome of this study can therefore contribute to a global-level understanding of how 'quality of life' is compromised under top-down shelter policy. Locally, in the context of Bangladesh, it can help policymakers and the concerned authorities formulate shelter policies and take initiatives to improve the well-being of the marginalized.
Keywords: Ashrayan Prakalpa, compromise, displaced people, quality of life
Procedia PDF Downloads 151
673 Transport of Reactive Carbo-Iron Composite Particles for in situ Groundwater Remediation Investigated at Laboratory and Field Scale
Authors: Sascha E. Oswald, Jan Busch
Abstract:
The in-situ dechlorination of chlorinated-solvent contamination in groundwater via nanoscale zero-valent iron (nZVI) is a potentially efficient and prompt remediation method. A key requirement is that the nZVI be introduced into the subsurface in such a way that substantial quantities of the contaminants actually come into direct contact with it in the aquifer; it could then be a more flexible and precise alternative to permeable reactive barrier techniques using granular iron. However, nZVI particles are often limited by fast agglomeration and sedimentation in colloidal suspensions, and even more so in aquifer sediments, which is a handicap for treating source zones or contaminant plumes. Colloid-supported nZVI shows promising characteristics for overcoming these limitations, and Carbo-Iron Colloids is a newly developed composite material aiming to do so: the nZVI is built onto finely ground activated carbon about a micrometer in diameter, which acts as its carrier. The Carbo-Iron Colloids are often suspended with a polyanionic stabilizer, and carboxymethyl cellulose has good properties for this purpose. We investigated the transport behavior of Carbo-Iron Colloids (CIC) at different scales and under different conditions to assess its mobility in aquifer sediments, a key property for making its application feasible. The transport properties were tested in one-dimensional laboratory columns, a two-dimensional model aquifer, and an injection experiment in the field. These experiments were accompanied by non-invasive tomographic investigations of the transport and filtration processes of CIC suspensions. The laboratory experiments showed that a large part of the CIC can travel at least meter scales under favorable but realistic conditions, in part even similarly to a dissolved tracer.
Under less favorable conditions this distance can be much smaller, and in all cases a certain fraction of the injected CIC is retained, mainly shortly after entering the porous medium. For the field experiment, a horizontal flow field was established between two wells 5 meters apart in a confined, shallow aquifer at a contaminated site in the North German lowlands. First, a tracer test was performed and a basic model was set up to define the design of the CIC injection experiment. Then the CIC suspension was introduced into the aquifer at the injection well, while the second well was pumped and sampled to observe the breakthrough of CIC, based on direct visual inspection and on total particle and iron concentrations of water samples analyzed later in the laboratory. It could be concluded that at least 12% of the injected CIC reached the extraction well in due course, some of it traveling distances larger than 10 meters in the non-uniform dipole flow field. This demonstrates that the CIC particles have substantial mobility, enough to reach larger volumes of a contaminated aquifer and to react there with dissolved contaminants in the pore space. They therefore seem well suited for groundwater remediation by in-situ formation of reactive barriers for chlorinated-solvent plumes, or even for source removal.
Keywords: carbo-iron colloids, chlorinated solvents, in-situ remediation, particle transport, plume treatment
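The 12% recovery figure reported for the field test is the kind of number obtained by integrating the breakthrough curve at the pumped well and comparing it with the injected mass. A minimal sketch using trapezoidal integration; the function and parameter names are illustrative, not from the study:

```python
def recovered_fraction(times, conc, pump_rate, injected_mass):
    """Fraction of injected particle mass recovered at the extraction well.

    times: sampling times (h); conc: particle concentration at those
    times (g/m^3); pump_rate: extraction rate (m^3/h); injected_mass:
    total injected particle mass (g). Integrates conc over time by the
    trapezoidal rule and multiplies by the pumping rate.
    """
    mass = 0.0
    for (t0, c0), (t1, c1) in zip(zip(times, conc),
                                  zip(times[1:], conc[1:])):
        mass += 0.5 * (c0 + c1) * (t1 - t0)
    return mass * pump_rate / injected_mass
```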
Procedia PDF Downloads 246
672 Hydration Evaluation in a Working Population in Greece
Authors: Aikaterini-Melpomeni Papadopoulou, Kyriaki Apergi, Margarita-Vasiliki Panagopoulou, Olga Malisova
Abstract:
Introduction: Adequate hydration is a vital factor that enhances concentration, memory, and decision-making abilities throughout the workday. Various factors may affect hydration status in workplace settings, and variables such as age, gender, and activity level affect hydration needs. Employees frequently overlook their hydration needs amid busy schedules and demanding tasks, leading to dehydration that can negatively affect cognitive function, productivity, and overall well-being. In addition, dietary habits, including fluid intake and food choices, can either support or hinder optimal hydration. However, the factors that affect hydration balance among workers in Greece have not been adequately studied. Objective: This study aims to evaluate the hydration status of the working population in Greece and to investigate the various factors that impact hydration status in workplace settings, considering demographic, dietary, and occupational influences in a Greek sample of employees from diverse working environments. Materials & Methods: The study included 212 participants (46.2% women) from the working population in Greece. Water intake from both solid and liquid foods was recorded using a semi-quantified drinking frequency questionnaire, and the validated Water Balance Questionnaire was used to evaluate hydration status. The calculation of water from solid and liquid foods was based on data from the USDA National Nutrient Database. Water balance was calculated by subtracting total fluid loss from total fluid intake. Furthermore, the questionnaire included additional questions on drinking habits and work-related factors: volunteers answered questions in different categories, such as (a) demographic and socio-economic characteristics, (b) work style characteristics, (c) health, (d) physical activity, (e) food and fluid intake, (f) fluid excretion, and (g) trends in fluid and water intake.
Individual and multivariate regression analyses were performed to assess the relationships between demographic factors, work-related factors, and hydration balance. Results: The analysis showed that demographic factors such as gender, age, and BMI, as well as certain work-related factors, had a weak and statistically non-significant effect on hydration balance. However, the use of a bottle or water container during work hours (b = 944.93, p < 0.001) and engaging in intense physical activity outside of work (b = -226.28, p < 0.001) were found to have a significant impact. Additionally, the consumption of beverages other than water (b = -416.14, p = 0.059) could negatively affect hydration balance. On average, the sample consumed 3410 ml of water daily, with men consuming approximately 440 ml/day more water (3470 ml/day) than women (3030 ml/day), a difference that was also statistically significant. Finally, the water balance, defined as the difference between water intake and water excretion, was found to be negative on average for the entire sample. Conclusions: This study is among the first to explore hydration status within the Greek working population. The findings indicate that awareness of adequate hydration and individual actions, such as using a water bottle during work, may influence hydration balance.
Keywords: hydration, working population, water balance, workplace behavior
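The water balance computation described above (total intake minus total loss) reduces to a simple per-participant difference. A sketch, with illustrative field names rather than the Water Balance Questionnaire's actual coding:

```python
from statistics import mean

def water_balance(intake_ml, loss_ml):
    """Daily water balance in ml/day: total intake (drinking water,
    other beverages, food moisture) minus total losses (urine, sweat,
    etc.). Negative values indicate a fluid deficit."""
    return sum(intake_ml) - sum(loss_ml)

def mean_intake_by_group(records, key):
    """Mean daily total water intake per subgroup (e.g. by gender).

    records: list of dicts with an 'intake_ml' list and a grouping key.
    """
    groups = {}
    for rec in records:
        groups.setdefault(rec[key], []).append(sum(rec["intake_ml"]))
    return {g: mean(vals) for g, vals in groups.items()}
```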
Procedia PDF Downloads 11
671 Mondoc: Informal Lightweight Ontology for Faceted Semantic Classification of Hypernymy
Authors: M. Regina Carreira-Lopez
Abstract:
Lightweight ontologies seek to make concrete the union relationships between a parent node and a secondary node, also called a 'child node'. This logical relation (L) can be formally defined as a triple ontological relation LO = ⟨LN, LE, LC⟩, where LN represents a finite set of nodes (N); LE is a set of entities (E), each of which represents a relationship between nodes, forming a rooted tree ⟨LN, LE⟩; and LC is a finite set of concepts (C), encoded in a formal language (FL). Mondoc enables more refined searches on semantic and classified facets for retrieving specialized knowledge about Atlantic migrations, from the Declaration of Independence of the United States of America (1776) to the end of the Spanish Civil War (1939). The model aims to increase documentary relevance by applying an inverse frequency of co-occurrent hypernymy phenomena to a concrete dataset of textual corpora, using the RMySQL package. Mondoc profiles archival utilities implementing SQL programming code and allows data export to XML schemas, enabling semantic and faceted analysis of discourse by analyzing keywords in context (KWIC). The methodology applies random and unrestricted sampling techniques with RMySQL to verify the resonance phenomena of inverse documentary relevance between the number of co-occurrences of the same term (t) in more than two documents of a set of texts (D). Secondly, the research also evidences that the co-associations between (t) and its corresponding synonyms and antonyms (synsets) are likewise inverse. The results of grouping facets, or polysemic words with synsets, in more than two textual corpora within their syntagmatic context (nouns, verbs, adjectives, etc.) indicate how to proceed with semantic indexing of hypernymy phenomena for subject-heading lists and authority lists for documentary and archival purposes.
Mondoc contributes to the development of web directories and appears to achieve a proper and more selective search of e-documents (classification ontology). It can also foster the production of online catalogs for semantic authorities, or concepts, through XML schemas, because its applications could be used to implement data models after a prior adaptation of the base ontology to structured meta-languages such as OWL or RDF (descriptive ontology). Mondoc serves the classification of concepts and applies a semantic indexing approach based on facets. It enables information retrieval as well as quantitative and qualitative data interpretation. The model reproduces a tuple ⟨LN, LE, LT, LCFL, BKF⟩, where LN is a set of entities that connect with other nodes to form a rooted tree ⟨LN, LE⟩, LT specifies a set of terms, and LCF acts as a finite set of concepts encoded in a formal language, L. Mondoc resolves only partial problems of linguistic ambiguity (in the cases of synonymy and antonymy); neither the pragmatic dimension of natural language nor the cognitive perspective is addressed. To achieve this goal, forthcoming programming developments should target oriented meta-languages with structured documents in XML.
Keywords: hypernymy, information retrieval, lightweight ontology, resonance
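Two of the building blocks named above, keyword-in-context extraction and an inverse document frequency weight, can be sketched in a few lines. This is a language-agnostic illustration of the concepts only; Mondoc itself works through RMySQL and SQL:

```python
import math

def kwic(tokens, term, window=3):
    """Keyword-in-context: each occurrence of `term` returned with its
    surrounding window of tokens (case-insensitive match)."""
    return [tokens[max(0, i - window):i + window + 1]
            for i, tok in enumerate(tokens)
            if tok.lower() == term.lower()]

def idf(term, documents):
    """Inverse document frequency of a term over a document set:
    log(N / number of documents containing the term). Rarer terms
    get higher weight, i.e. higher documentary relevance."""
    df = sum(1 for doc in documents if term in doc)
    return math.log(len(documents) / df) if df else 0.0
```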
Procedia PDF Downloads 125
670 Neuroanatomical Specificity in Reporting & Diagnosing Neurolinguistic Disorders: A Functional & Ethical Primer
Authors: Ruairi J. McMillan
Abstract:
Introduction: This critical analysis aims to ascertain how well neuroanatomical aetiologies are communicated within 20 case reports of aphasia. Neuroanatomical visualisations based on dissected brain specimens were produced and combined with white-matter-tract and vascular taxonomies of function in order to address the most consistently underreported features found within the aphasic case study reports. Together, these approaches are intended to integrate the aphasiological knowledge of the past 20 years with aphasiological diagnostics and to act as prototypal resources for both researchers and clinical professionals. The medico-legal precedent for aphasia diagnostics under Canadian, US, and UK case law, and the neuroimaging/neurological diagnostics relevant to the functional capacity of aphasic patients, are discussed in relation to the major findings of the literature analysis, neuroimaging protocols in clinical use today, and the neuroanatomical aetiologies of the different aphasias. Basic Methodology: Searches of relevant scientific databases (e.g., Ovid MEDLINE) were carried out using terms such as 'aphasia case study (year)' and 'stroke-induced aphasia case study'. A series of 7 diagnostic reporting criteria were formulated, and the resulting case studies were scored out of 7 alongside clinical stroke criteria. To focus on the diagnostic assessment of the patient's condition, only the case report proper (not the discussion) was used to quantify results. Statistical testing established whether specific reporting criteria were associated with higher overall scores and potentially inferable increases in quality of reporting. Whether criteria scores were associated with an unclear/adjusted diagnosis was also tested, as was the probability of a given criterion deviating from an expected estimate.
Major Findings: The quantitative analysis of neuroanatomically driven diagnostics in case studies of aphasia revealed particularly low scores in the connection of neuroanatomical functions to aphasiological assessment (10%) and in the inclusion of white matter tracts within neuroimaging or assessment diagnostics (30%). Case studies that included clinical mention of white matter tracts within the report itself were distributed among the higher-scoring cases, as were case studies that (as clinically indicated) related the affected vascular region to the brain parenchyma of the language network. Concluding Statement: These findings indicate that certain neuroanatomical functions are integrated within the patient report less often than others, despite a precedent for well-integrated neuroanatomical aphasiology also being found among the case studies sampled, and despite these functions being clinically essential in diagnostic neuroimaging and aphasiological assessment. Ultimately, the integration and specificity of aetiological neuroanatomy may contribute positively to the capacity and autonomy of aphasic patients as well as their clinicians. The integration of a full aetiological neuroanatomy within the reporting of aphasias may improve patient outcomes and sustain autonomy in the event of a medico-ethical investigation.
Keywords: aphasia, language network, functional neuroanatomy, aphasiological diagnostics, medico-legal ethics
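The probability check mentioned above, whether a given criterion deviates from an expected estimate, can be carried out with an exact binomial tail. The sketch below assumes each criterion is modelled as met with some expected rate p; the study's actual test is not specified in the abstract, so this is an illustration only:

```python
from math import comb

def score_report(criteria):
    """criteria: dict mapping each reporting criterion to True/False.
    Returns the report's score (here, out of however many criteria
    are supplied, e.g. 7)."""
    return sum(criteria.values())

def binom_cdf(k, n, p):
    """Exact P(X <= k) for X ~ Binomial(n, p): the probability of
    observing at most k satisfied criteria out of n if each were met
    independently with probability p. A small value flags a criterion
    set scoring well below expectation."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))
```

For example, with p = 0.5, observing at most 2 of 7 criteria met has probability 29/128, roughly 0.23, so it would not by itself indicate a significant deviation.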
Procedia PDF Downloads 67
669 Family Firm Internationalization: Identification of Alternative Success Pathways
Authors: Sascha Kraus, Wolfgang Hora, Philipp Stieg, Thomas Niemand, Ferdinand Thies, Matthias Filser
Abstract:
In most countries, small and medium-sized enterprises (SMEs) are the backbone of the economy due to their impact on job creation, innovation, and wealth creation. Moreover, ongoing globalization makes it inevitable, even for SMEs that traditionally focused on their domestic markets, to internationalize their business activities in order to realize further growth and survive in international markets. Internationalization has thus become one of the most common growth strategies for SMEs and has received increasing scholarly attention over the last two decades. On the downside, internationalization can also be regarded as the most complex strategy a firm can undertake. For family firms in particular, which are often characterized by limited financial capital, a risk-averse nature, and limited growth aspirations, it could be argued that greater challenges arise on the pathway to internationalization. Especially the triangulation of family, ownership, and management (so-called 'familiness') manifests in a unique behavior and decision-making process, often characterized by the importance given to non-economic goals, which distinguishes a family firm from other businesses. Taking this into account, the concept of socio-emotional wealth (SEW) has evolved to describe the behavior of family firms. To investigate how different internal and external firm characteristics shape the internationalization success of family firms, we drew on a sample of 297 small and medium-sized family firms from Germany, Austria, Switzerland, and Liechtenstein. We include SEW as the essential family firm characteristic and add the two major intra-organizational characteristics, entrepreneurial orientation (EO) and absorptive capacity (AC), as well as collaboration intensity (CI) and relational knowledge (RK) as the two major external network characteristics.
Based on previous research, we assume that these characteristics are important for explaining the internationalization success of family-firm SMEs. For the data analysis, we applied fuzzy-set Qualitative Comparative Analysis (fsQCA), an approach that identifies configurations of firm characteristics and is specifically used to study complex causal relationships where traditional regression techniques reach their limits. The results indicate that several combinations of these family firm characteristics can lead to international success, with no single characteristic always required; instead, there are many roads family firms can walk down to achieve internationalization success. Consequently, our data indicate that family-owned SMEs are heterogeneous and that internationalization is a complex and dynamic process. The results further show that network-related characteristics occur in all solution sets and thus represent an essential element in the internationalization process of family-owned SMEs. The contribution of our study is twofold, as we investigate different forms of international expansion for family firms and how to improve them. First, we broaden the understanding of the intersection between family firm and SME internationalization with respect to major intra-organizational and network-related variables. Second, from a practical perspective, we offer family firm owners a basis for building the internal capabilities needed to achieve international success.
Keywords: entrepreneurial orientation, family firm, fsQCA, internationalization, socio-emotional wealth
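The fsQCA machinery referenced above rests on two standard measures over fuzzy-set memberships, consistency and coverage (Ragin's formulas). A minimal sketch of just these two measures, not the full truth-table minimization a study like this would run:

```python
def consistency(x, y):
    """fsQCA consistency of 'condition X is sufficient for outcome Y':
    sum(min(x_i, y_i)) / sum(x_i), over fuzzy memberships in [0, 1].
    Values near 1 mean cases in X are (almost) always also in Y."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(x)

def coverage(x, y):
    """fsQCA coverage: how much of the outcome Y the condition X
    accounts for: sum(min(x_i, y_i)) / sum(y_i)."""
    return sum(min(a, b) for a, b in zip(x, y)) / sum(y)
```

In a full fsQCA, configurations (e.g. combinations of SEW, EO, AC, CI, RK memberships) whose consistency exceeds a chosen cutoff, conventionally around 0.8, are retained as pathways to the outcome.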
Procedia PDF Downloads 241
668 Application of Low Frequency AC Magnetic Field for Controlled Delivery of Drugs by Magnetic Nanoparticles
Authors: K. Yu Vlasova, M. A. Abakumov, H. Wishwarsao, M. Sokolsky, N. V. Nukolova, A. G. Majouga, Y. I. Golovin, N. L. Klyachko, A. V. Kabanov
Abstract:
Introduction: Nowadays, pharmaceutical medicine aims to create systems for combined therapy, diagnostics, drug delivery, and controlled release of active molecules to target cells. Magnetic nanoparticles (MNPs) are used to achieve this aim. MNPs can be applied in molecular diagnostics, magnetic resonance imaging (as T1/T2 contrast agents), drug delivery, and hyperthermia, and could improve the therapeutic effect of drugs. The most common MNP-containing drug carriers are liposomes, micelles, and polymeric molecules bonded to the MNP surface. Usually, superparamagnetic nanoparticles are used (with a typical diameter of about 5-6 nm), and all effects of high-frequency magnetic field (MF) application are based on Néel relaxation, which heats the surrounding medium. In this work, we attempt to develop a new method to improve drug release from MNPs under a super-low-frequency MF. We suppose that under low-frequency MF exposure, Brownian relaxation dominates and MNP rotation can occur, leading to conformational changes and the release of bioactive molecules immobilized on the MNP surface. The aim of this work was to synthesize different systems carrying an active drug (biopolymer-coated MNP nanoclusters with immobilized enzymes, and doxorubicin (Dox)-loaded magnetic liposomes/micelles) and to investigate the effect of a super-low-frequency MF on these drug containers. Methods: We synthesized magnetite MNPs with a magnetic core diameter of 7-12 nm. The MNPs were coated with a block copolymer of polylysine and polyethylene glycol. Superoxide dismutase 1 (SOD1) was electrostatically adsorbed onto the surface of the clusters. Liposomes were prepared as follows: MNPs, phosphatidylcholine, and cholesterol were dispersed in chloroform, dried to a film, then dispersed in distilled water and sonicated. Dox was added to the solution, the pH was adjusted to 7.4, and excess drug was removed by centrifugation through 3 kDa filters.
Results: The polylysine-coated MNPs formed nanosized clusters (as observed by TEM) with an intensity-average diameter of 112±5 nm and a zeta potential of 12±3 mV. After low-frequency AC MF exposure, we observed changes in the activity of the immobilized enzyme and in the hydrodynamic size of the clusters. We suppose that the biomolecules (enzymes) are released from the MNP surface, followed by additional aggregation of the complexes in the medium under the MF. Centrifugation of the nanosuspension after AC MF exposure resulted in an increase in the positive charge of the clusters and a change in enzyme concentration in comparison with the control sample without MF, thus confirming desorption of the negatively charged enzyme from the positively charged MNP surface. The Dox-loaded magnetic liposomes had an average diameter of 160±8 nm and a polydispersity index (PDI) of 0.25±0.07. The liposomes were stable in distilled water and in PBS at pH 7.4 and 37°C for a week. After MF application (10 min of exposure, 50 Hz, 230 mT), the diameter of the liposomes rose to 190±10 nm and the PDI was 0.38±0.05. We explain this by destruction and/or reorganization of the lipid bilayer, which leads to changes in drug release in comparison with the control without MF exposure. Conclusion: A new application of a low-frequency AC MF for drug delivery and controlled drug release was shown. The investigation was supported by grants RSF-14-13-00731 and K1-2014-022.
Keywords: magnetic nanoparticles, low frequency magnetic field, drug delivery, controlled drug release
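The Brownian-vs-Néel argument at the heart of the method can be made quantitative with the standard relaxation-time formulas, tau_B = 3*eta*V_h/(k_B*T) for Brownian rotation and tau_N = tau_0*exp(K*V_c/(k_B*T)) for Néel relaxation. The sketch below uses assumed typical values (water viscosity, a magnetite anisotropy constant of ~1.1e4 J/m^3, tau_0 = 1 ns); it illustrates the size scaling only, not the authors' calculation:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def brown_time(hydro_radius_m, viscosity=1e-3, temp=298.0):
    """Brownian relaxation time tau_B = 3*eta*V_h / (k_B*T),
    where V_h is the hydrodynamic volume of the particle/cluster.
    Grows with the cube of the hydrodynamic radius."""
    v_h = 4.0 / 3.0 * math.pi * hydro_radius_m ** 3
    return 3.0 * viscosity * v_h / (K_B * temp)

def neel_time(core_radius_m, anisotropy=1.1e4, temp=298.0, tau0=1e-9):
    """Neel relaxation time tau_N = tau0 * exp(K*V_c / (k_B*T)),
    where V_c is the magnetic core volume. Grows exponentially with
    core volume, so larger cores lock the moment to the crystal."""
    v_c = 4.0 / 3.0 * math.pi * core_radius_m ** 3
    return tau0 * math.exp(anisotropy * v_c / (K_B * temp))
```

Whichever time is shorter dominates the magnetic response; for large clusters in a slowly varying field, mechanical (Brownian) rotation becomes the relevant channel, which is the premise of the low-frequency release mechanism above.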
Procedia PDF Downloads 481
667 In vitro and in vivo Effects of 'Sonneratia alba' Extract against the Fish Pathogen 'Aphanomyces invadans'
Authors: S. F. Afzali, W. L. Wong
Abstract:
Epizootic ulcerative syndrome (EUS), caused by the oomycete Aphanomyces invadans, is known to be one of the major infectious diseases of farmed and wild fishes in fresh and brackish water across the Asia-Pacific region, America, and Africa. Although EUS has been documented by the Office International des Epizooties (OIE) since 1995, there is hitherto neither a standard chemical agent that can be used for successful treatment of this destructive infection at the time of an outbreak, nor an available vaccine for prevention. Plant-based remedies for controlling fish diseases have recently gained much attention as an alternative to chemical treatments, which have negative effects on the environment and on humans. In the present study, Sonneratia alba, a mangrove plant of the family Sonneratiaceae, was screened in vitro and in vivo for its antifungal activity against A. invadans mycelial growth and for its effects on the fish innate immune system and disease resistance. The in vitro tests were performed using disc diffusion methods, with measurements of the minimum inhibitory concentration (MIC) and the inhibition zone. For the in vivo study, diets supplemented with S. alba extract were administered at 0.0%, 1.0%, 3.0%, and 5.0% to healthy goldfish, Carassius auratus, which were challenged with A. invadans zoospores (100 spores/ml). To compare significant differences in the haematological and immunological parameters obtained from the experiments, the data were analysed using SPSS. The methanol extract of S. alba effectively inhibited the mycelial growth of A. invadans at a minimum concentration of 1000 ppm in both the agar and filter paper diffusion experiments. In the agar diffusion test, 500 ppm of the extract inhibited mycelial growth for up to 96 hours after exposure. The mycelial growth from the edges of the pre-inoculated A. invadans agar discs treated with S. alba extract at concentrations of 100, 500, and 1000 ppm was 15, 8, and 0 mm, respectively.
The results of the filter paper disc test showed that the S. alba extract at its minimum inhibitory concentration (1000 ppm) has a qualitative inhibitory effect similar to that of malachite green at 1 ppm and formalin at 250 ppm. According to the in vivo findings, in the infected fish fed the 3.0% and 5.0% supplemented diets, white blood cell counts and myeloperoxidase activity increased significantly after the second week of treatment, while red blood cell counts decreased significantly in the infected fish fed the 0.0% and 1.0% supplemented diets. After the third week of feeding, significant increases in total protein, albumin level, and lysozyme activity were recorded in the infected fish fed the 3.0% and 5.0% supplemented diets. The enriched diets also increased the survival rate compared to the untreated group, which suffered 90% mortality. The present study indicates that S. alba extract may effectively inhibit the mycelial growth of A. invadans, suggesting an alternative for EUS treatment to other chemotherapeutic agents that raise considerable environmental and public health concerns.
Keywords: fungal pathogen, goldfish, organic extract, treatment
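Reading a minimum inhibitory concentration off a dilution series, as in the diffusion tests above, amounts to picking the lowest tested concentration at which no growth is observed. A sketch; the function name is illustrative, and the example growth flags mirror the reported 100/500/1000 ppm outcome:

```python
def mic(growth_by_concentration):
    """Minimum inhibitory concentration from a dilution series.

    growth_by_concentration: dict {concentration_ppm: growth_observed}.
    Returns the lowest tested concentration with no observed growth,
    or None if growth occurred at every tested concentration
    (assumes the usual monotonic dose-response)."""
    inhibiting = [conc for conc, grew
                  in sorted(growth_by_concentration.items())
                  if not grew]
    return inhibiting[0] if inhibiting else None
```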
Procedia PDF Downloads 288
666 Index of Suitability for Culex pipiens sl. Mosquitoes in Portugal Mainland
Authors: Maria C. Proença, Maria T. Rebelo, Marília Antunes, Maria J. Alves, Hugo Osório, Sofia Cunha, REVIVE team
Abstract:
The habitat of mosquitoes of the Culex pipiens sl. complex in Portugal mainland is evaluated on the basis of their abundance, using a georeferenced data set collected over seven years (2006-2012) from May to October. The suitability of the different regions can be delineated from the relative-abundance areas; the suitability index is directly proportional to the disease-transmission risk and allows mitigation measures to be focused so as to avoid outbreaks of vector-borne diseases. The interest in the Culex pipiens complex is justified by its medical importance: the females bite all warm-blooded vertebrates and are involved in the circulation of several arboviruses of concern to human health, such as West Nile virus, iridoviruses, reoviruses and parvoviruses. The abundance of Culex pipiens mosquitoes was documented systematically over the whole territory by the local health services, in a long-running programme operating since 2006. The environmental factors used to characterize the vector habitat are land use/land cover, distance to mapped water bodies, altitude and latitude. The focus is on the mosquito females, whose gonotrophic cycle of mating, blood meal and oviposition is responsible for virus transmission; their abundance is the key to planning non-aggressive prophylactic countermeasures that may eradicate the transmission risk while avoiding chemical degradation of the environment. Meteorological parameters such as air relative humidity, air temperature (daily minimum, maximum and mean) and daily total rainfall were gathered from the weather-station network for the same dates and crossed with the standardized female abundance in a geographic information system (GIS). The mean capture and the percentage of above-average captures associated with each variable are used as criteria to compute a threshold for each meteorological parameter; the difference in mean capture above and below the threshold was statistically assessed.
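The abstract does not detail how the per-parameter thresholds were computed, but the idea of splitting captures above and below a cut-off and assessing the difference in means can be sketched as follows. This is a hypothetical illustration, not the authors' procedure: it scans candidate cut-offs and keeps the one that maximises a Welch-type t-statistic between the two groups of captures.

```python
import statistics

def capture_threshold(records, param, step=1.0):
    """Scan candidate cut-offs of one meteorological parameter and return the
    one maximising the separation (Welch t-statistic) between the mean
    captures above vs. below the cut-off."""
    values = [r[param] for r in records]
    best_stat, best_cut = 0.0, None
    cut = min(values) + step
    while cut < max(values):
        above = [r["captures"] for r in records if r[param] >= cut]
        below = [r["captures"] for r in records if r[param] < cut]
        if len(above) > 1 and len(below) > 1:
            se = (statistics.variance(above) / len(above)
                  + statistics.variance(below) / len(below)) ** 0.5
            if se > 0:
                stat = abs(statistics.mean(above) - statistics.mean(below)) / se
                if stat > best_stat:
                    best_stat, best_cut = stat, cut
        cut += step
    return best_cut

# Synthetic example: captures jump when temperature reaches 20 °C
records = [{"temp": t, "captures": (5 if t < 20 else 50) + t % 2}
           for t in range(10, 31)]
print(capture_threshold(records, "temp"))  # -> 20.0
```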
The meteorological parameters measured at the network of weather stations across the country are averaged by month and interpolated to produce raster maps, which can be segmented according to the meaningful threshold for each parameter. The intersection of the maps of all parameters obtained for each month shows the evolution of suitable meteorological conditions through the mosquito season, taken as May to October, although the first and last months are less relevant. In parallel, mean and above-average captures were related to the physiographic parameters: the land use/land cover classes most relevant in each month, the preferred altitudes and the most frequent distance to water bodies, a factor closely related to mosquito biology. The maps produced with these results were crossed with the previously segmented meteorological maps in order to obtain an index of suitability for the Culex pipiens complex evaluated over the whole country, and its evolution from the beginning to the end of the mosquito season.
Keywords: suitability index, Culex pipiens, habitat evolution, GIS model
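The segment-then-intersect step described above can be illustrated with a toy example, assuming (purely for illustration) that each raster is a grid of monthly values and that suitability means being at or above the parameter's threshold; the cell-wise AND of the segmented masks marks cells where all conditions are met simultaneously.

```python
def segment(raster, threshold):
    """Boolean mask: True where the parameter is at or above its threshold."""
    return [[cell >= threshold for cell in row] for row in raster]

def intersect(masks):
    """Cell-wise AND over a list of equally sized boolean masks."""
    result = masks[0]
    for mask in masks[1:]:
        result = [[a and b for a, b in zip(r1, r2)]
                  for r1, r2 in zip(result, mask)]
    return result

# Illustrative 2x2 monthly rasters (not real data)
temperature = [[18, 22], [25, 15]]   # mean air temperature, degrees C
humidity    = [[70, 55], [80, 40]]   # relative humidity, %

suitable = intersect([segment(temperature, 20), segment(humidity, 60)])
print(suitable)  # -> [[False, False], [True, False]]
```

In a real GIS workflow the masks would be raster layers and the intersection a map-algebra operation, but the logic is the same.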
Procedia PDF Downloads 576
665 Non-Invasive Characterization of the Mechanical Properties of Arterial Walls
Authors: Bruno Ramaël, Gwenaël Page, Catherine Knopf-Lenoir, Olivier Baledent, Anne-Virginie Salsac
Abstract:
No routine technique currently exists for clinicians to measure the mechanical properties of vascular walls non-invasively. Most of the data available in the literature come from traction or dilatation tests conducted ex vivo on native blood vessels. The objective of the study is to develop a non-invasive characterization technique based on Magnetic Resonance Imaging (MRI) measurements of the deformation of vascular walls under pulsatile blood-flow conditions. The goal is to determine the mechanical properties of the vessels by inverse analysis, coupling the imaging measurements with numerical simulations of the fluid-structure interactions. The hyperelastic properties are identified using SolidWorks and Ansys Workbench (ANSYS Inc.) to solve an optimization problem. The vessel of interest targeted in the study is the common carotid artery. In vivo MRI measurements of the vessel anatomy and inlet velocity profiles were acquired along the facial vascular network in a cohort of 30 healthy volunteers: - The time evolution of the blood-vessel contours, and thus of the cross-sectional area, was measured by 3D phase-contrast MRI angiography sequences. - The blood-flow velocity was measured using a 2D CINE phase-contrast MRI (PC-MRI) method. Reference arterial pressure waveforms were simultaneously measured in the brachial artery using a sphygmomanometer. The three-dimensional (3D) geometry of the arterial network was reconstructed by first creating an STL file from the raw MRI data using the open-source imaging software ITK-SNAP. The resulting geometry was then converted with SolidWorks into volumes compatible with the Ansys software suite. Tetrahedral meshes of the wall and fluid domains were built using the ANSYS Meshing software, with near-wall mesh refinement of the fluid domain to improve the accuracy of the fluid-flow calculations.
Ansys Structural was used for the numerical simulation of the vessel deformation and Ansys CFX for the simulation of the blood flow. The fluid-structure interaction simulations showed that the systolic and diastolic blood pressures of the common carotid artery could be taken as reference pressures to identify the mechanical properties of the different arteries of the network. The coefficients of the hyperelastic law were identified for the common carotid using the Ansys Design model. Under large deformations, a stiffness of 800 kPa is measured, which is of the same order of magnitude as the Young's modulus of collagen fibers. Areas of maximum deformation were highlighted near bifurcations. This study is a first step towards patient-specific characterization of the mechanical properties of the facial vessels. The method is currently being applied to patients suffering from facial vascular malformations and to patients scheduled for facial reconstruction. Information on the blood-flow velocity as well as on the vessel anatomy and deformability will be key to improving surgical planning for such vascular pathologies.
Keywords: identification, mechanical properties, arterial walls, MRI measurements, numerical simulations
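The inverse-analysis idea, matching a simulated wall deformation to the MRI-measured one by adjusting the material stiffness, can be sketched without any FSI solver. The following is a deliberately simplified illustration (not the authors' Ansys workflow): a thin-walled Laplace-law model predicts the radial distension for a candidate stiffness, and a scan keeps the candidate that best matches a measured value. All numbers are illustrative, not patient data.

```python
def predicted_distension(stiffness_kpa, pressure_kpa, radius_mm, thickness_mm):
    """Radial distension of a thin-walled vessel under a pressure step,
    from the linearized Laplace law: dr = dp * r^2 / (E * h)."""
    return pressure_kpa * radius_mm ** 2 / (stiffness_kpa * thickness_mm)

def identify_stiffness(measured_mm, pressure_kpa, radius_mm, thickness_mm,
                       candidates):
    """Return the candidate stiffness (kPa) minimising the squared misfit
    between predicted and measured distension."""
    return min(candidates,
               key=lambda e: (predicted_distension(e, pressure_kpa, radius_mm,
                                                   thickness_mm)
                              - measured_mm) ** 2)

# Pulse pressure ~5.3 kPa (40 mmHg), radius 3 mm, wall 0.7 mm,
# measured distension 0.09 mm; scan stiffness from 200 to 1400 kPa
best = identify_stiffness(0.09, 5.3, 3.0, 0.7,
                          [200 + 10 * k for k in range(121)])
print(best)  # -> 760
```

In the actual study each misfit evaluation involves a full fluid-structure interaction simulation rather than a closed-form model, but the optimization loop has the same structure.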
Procedia PDF Downloads 319