Search results for: practical input
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4801

241 Use of Analytic Hierarchy Process for Plant Site Selection

Authors: Muzaffar Shaikh, Shoaib Shaikh, Mark Moyou, Gaby Hawat

Abstract:

This paper presents the use of the Analytic Hierarchy Process (AHP) in evaluating the site selection of a new plant by a corporation. Due to intense competition at a global level, multinational corporations are continuously striving to minimize the production and shipping costs of their products. One key factor that plays a significant role in cost minimization is where the production plant is located. In the U.S., for example, labor and land costs continue to be very high, while they are much lower in countries such as India, China, and Indonesia. This is why many multinational U.S. corporations (e.g., General Electric, Caterpillar Inc., Ford, General Motors) have shifted their manufacturing plants abroad. The continued expansion and availability of the Internet, along with technological advances in computer hardware and software around the globe, have made it easier for U.S. corporations to expand abroad as they seek to reduce production costs. In particular, the management of multinational corporations is constantly evaluating countries at a broad level, or cities within specific countries, where some or all parts of their end products, or the end products themselves, can be manufactured more cheaply than in the U.S. AHP is based on the preference ratings of a specific decision maker, who can be the Chief Operating Officer of a company or his/her designated data analytics engineer. It serves as a tool to evaluate, first, the plant site selection criteria and, second, the alternative plant sites themselves against these criteria in a systematic manner. Examples of site selection criteria are: transportation modes, taxes, energy modes, labor force availability, labor rates, raw material availability, political stability, and land costs. As a necessary first step under AHP, the evaluation criteria and alternative plant site countries are identified. Depending upon the fidelity of the analysis, specific cities within a country can also be chosen as alternative facility locations. AHP experience in this type of analysis indicates that the initial analysis can be performed at the country level. Once a specific country is chosen via AHP, secondary analyses can be performed by selecting specific cities or counties within that country. AHP analysis is usually based on the preference ratings of a decision-maker (e.g., 1 to 5, 1 to 7, or 1 to 9, where 1 means least preferred and the highest value means most preferred). The decision-maker first assigns preference ratings criterion vs. criterion and creates a Criteria Matrix. Next, he/she assigns preference ratings alternative vs. alternative against each criterion. Once these data are collected, AHP is applied to first obtain the rank-ordering of the criteria. Next, rank-ordering of the alternatives is done against each criterion, resulting in an Alternative Matrix. Finally, the overall rank-ordering of alternative facility locations is obtained by matrix multiplication of the Alternative Matrix and the Criteria Matrix. The most practical aspect of AHP is the 'what if' analysis that the decision-maker can conduct after the initial results, providing valuable information on the sensitivity of the outcome to changes in specific criteria and alternatives.
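
As an illustration of the ranking computation described above, the following minimal Python sketch works through a toy example with three hypothetical criteria and three hypothetical candidate countries; the pairwise ratings and names are invented for illustration and do not come from the paper.

```python
import numpy as np

def ahp_priorities(pairwise):
    """Approximate AHP priority vector via the row geometric-mean method."""
    geo_mean = np.prod(pairwise, axis=1) ** (1.0 / pairwise.shape[0])
    return geo_mean / geo_mean.sum()

# Hypothetical 1-9 criterion-vs-criterion ratings (labor rates, land costs, political stability)
criteria_matrix = np.array([[1.0, 3.0, 5.0],
                            [1/3, 1.0, 2.0],
                            [1/5, 1/2, 1.0]])
w_criteria = ahp_priorities(criteria_matrix)

# Hypothetical alternative-vs-alternative ratings for three candidate countries, one matrix per criterion
alt_matrix = np.column_stack([
    ahp_priorities(np.array([[1.0, 2.0, 4.0], [1/2, 1.0, 2.0], [1/4, 1/2, 1.0]])),
    ahp_priorities(np.array([[1.0, 1/3, 1.0], [3.0, 1.0, 3.0], [1.0, 1/3, 1.0]])),
    ahp_priorities(np.array([[1.0, 5.0, 2.0], [1/5, 1.0, 1/3], [1/2, 3.0, 1.0]])),
])

# Overall ranking = Alternative Matrix x criteria weight vector
overall = alt_matrix @ w_criteria
print(dict(zip(["Country A", "Country B", "Country C"], overall.round(3))))
```

A 'what if' analysis then amounts to perturbing individual entries of the criteria matrix and re-running the same two steps to see how the final ranking shifts.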

Keywords: analytic hierarchy process, multinational corporations, plant site selection, preference ratings

Procedia PDF Downloads 265
240 The Effect of Framework Structure on N2O Formation over Cu-Based Zeolites during NH3-SCR Reactions

Authors: Ghodsieh Isapour Toutizad, Aiyong Wang, Joonsoo Han, Derek Creaser, Louise Olsson, Magnus Skoglundh, Hanna Härelind

Abstract:

Nitrous oxide (N2O), which is generally formed as a byproduct of industrial chemical processes and fossil fuel combustion, has attracted considerable attention due to its destructive role in global warming and ozone layer depletion. Among the various technologies developed for lean NOx reduction, the selective catalytic reduction (SCR) of NOx with ammonia is presently the most widely applied method. Therefore, the development of catalysts for efficient lean NOx reduction that form no N2O in the process, or only form it to a very small extent, is of crucial significance. One type of catalyst nowadays used for this purpose is the zeolite-based catalyst, owing to its remarkable catalytic performance under practical reaction conditions, such as high thermal stability and high N2 selectivity. Among all zeolites, copper ion-exchanged zeolites with CHA, MFI, and BEA framework structures (SSZ-13, ZSM-5, and Beta, respectively) exhibit higher hydrothermal stability, high activity, and N2 selectivity. This work aims at investigating the effect of the zeolite framework structure on the formation of N2O under NH3-SCR reaction conditions over three Cu-based zeolites ranging from small-pore to large-pore framework structures. In the zeolite framework, Cu exists in two cationic forms that can catalyze the SCR reaction by activating NO to form NO+ and/or surface nitrate species. The nitrate species can thereafter react with NH3 to form another intermediate, ammonium nitrate, which seems to be one source of N2O formation at low temperatures. The results from in situ diffuse reflectance infrared Fourier transform spectroscopy (DRIFTS) indicate that during the NO oxidation step, mainly NO+ and nitrate species are formed on the surface of the catalysts. The intensity of the absorption peak attributed to NO+ species is higher for the Cu-CHA sample compared to the other two samples, indicating a higher stability of this species in small cages. Furthermore, upon the addition of NH3 under standard SCR reaction conditions, absorption peaks assigned to N-H stretching and bending vibrations build up. At the same time, negative peaks evolve in the O-H stretching region, indicating blocking/replacement of surface OH-groups by NH3 and NH4+. On removing NH3 and adding NO2 to the inlet gas composition, the peaks in the N-H stretching and bending vibration regions decrease in intensity, with the decrease being more pronounced with increasing pore size. This is probably due to the higher accumulation of ammonia species in the small-pore zeolite compared to the other two samples. Furthermore, it is worth noting that the ammonia surface species are strongly bonded to the CHA zeolite structure, which makes them more difficult to react with NO2. To conclude, the framework structure of the zeolite seems to play an important role in the formation and reactivity of surface species relevant for the SCR process. Here we intend to discuss the connection between the zeolite structure, the surface species, and the formation of N2O during ammonia-SCR.

Keywords: fast SCR, nitrous oxide, NOx, standard SCR, zeolites

Procedia PDF Downloads 207
239 Small and Medium-Sized Enterprises, Flash Flooding and Organisational Resilience Capacity: Qualitative Findings on Implications of the Catastrophic 2017 Flash Flood Event in Mandra, Greece

Authors: Antonis Skouloudis, Georgios Deligiannakis, Panagiotis Vouros, Konstantinos Evangelinos, Ioannis Nikolaou

Abstract:

On November 15th, 2017, a catastrophic flash flood devastated the city of Mandra in Central Greece, resulting in 24 fatalities and extensive damage to the built environment and infrastructure. It was Greece's deadliest and most destructive flood event of the past 40 years. In this paper, we examine the consequences of this event for small and medium-sized enterprises (SMEs) operating in Mandra during the flood event, which were affected by the floodwaters to varying extents. In this context, we conducted semi-structured interviews with business owners-managers of 45 SMEs located in flood-inundated areas and still active today, based on an interview guide that spanned 27 topics. The topics pertained to the disaster experience of the business and the business owners-managers, knowledge of and attitudes towards climate change and extreme weather, and aspects of disaster preparedness and related assistance needs. Our findings reveal that the vast majority of the affected businesses experienced heavy damage to equipment and infrastructure or total destruction, which resulted in business interruption lasting from several weeks up to several months. Assistance from relatives or friends helped with the damage repairs and business recovery, while state compensations were deemed insufficient compared to the extent of the damage. Most interviewees pinpoint flooding as one of the most critical risks, and many connect it with the climate crisis. However, they are either unwilling or unable to apply property-level prevention measures in their businesses due to cost considerations or complex and cumbersome bureaucratic processes. In all cases, the business owners are fully aware of the flood hazard implications, and since the recovery from the event, they have engaged in basic mitigation measures and contingency plans in case of future flood events. Such plans include insurance contracts whenever possible (as the vast majority of the affected SMEs were uninsured at the time of the 2017 event) as well as simple relocations of critical equipment within their property. The study offers fruitful insights into latent drivers and barriers of SMEs' resilience capacity to flash flooding. In this respect, findings such as ours, highlighting tensions that underpin behavioral responses and experiences, can feed into a) bottom-up approaches for devising actionable and practical guidelines, manuals and/or standards on business preparedness for flooding, and, ultimately, b) policy-making for an enabling environment towards a flood-resilient SME sector.

Keywords: flash flood, small and medium-sized enterprises, organizational resilience capacity, disaster preparedness, qualitative study

Procedia PDF Downloads 114
238 Numerical Modeling of Timber Structures under Varying Humidity Conditions

Authors: Sabina Huč, Staffan Svensson, Tomaž Hozjan

Abstract:

Timber structures may be exposed to various environmental conditions during their service life. Often, the structures have to resist extreme changes in the relative humidity of the surrounding air while simultaneously carrying loads. The wood material response to this load case is seen as increasing deformation of the timber structure. Relative humidity variations cause moisture changes in timber and, consequently, shrinkage and swelling of the material. Moisture changes and loads acting together result in mechano-sorptive creep, while sustained load gives viscoelastic creep. In some cases, the magnitude of the mechano-sorptive strain can be about five times the elastic strain already at low stress levels. Therefore, analyzing mechano-sorptive creep and its influence on the long-term behavior of timber structures is of high importance. Relatively many one-dimensional rheological models for the rheological behavior of wood can be found in the literature, while the number of models coupling the creep response in each material direction is limited. In this study, the mathematical formulation of a coupled two-dimensional mechano-sorptive model and its application to experimental results are presented. The mechano-sorptive model consists of a moisture transport model and a mechanical model. The variation of the moisture content in wood is modelled by a multi-Fickian moisture transport model. The model accounts for the processes of bound-water and water-vapor diffusion in wood, which are coupled through sorption hysteresis. Sorption defines a nonlinear relation between moisture content and relative humidity. The multi-Fickian moisture transport model is able to accurately predict the non-uniform moisture content field within the timber member over time. The calculated moisture content in timber members is used as an input to the mechanical analysis. In the mechanical analysis, the total strain is assumed to be a sum of the elastic strain, viscoelastic strain, mechano-sorptive strain, and strain due to shrinkage and swelling. The mechano-sorptive response is modelled by a so-called spring-dashpot type of model, which has proved suitable for describing the creep of wood. The mechano-sorptive strain depends on the change of moisture content. The model includes mechano-sorptive material parameters that have to be calibrated to the experimental results. The calibration is made to experiments carried out on wooden blocks subjected to uniaxial compressive loading in the tangential direction under varying humidity conditions. The moisture and mechanical models are implemented in finite element software. The calibration procedure gives the required, distinctive set of mechano-sorptive material parameters. The analysis shows that mechano-sorptive strain in the transverse direction is present, though its magnitude and variation are substantially lower than those of the mechano-sorptive strain in the direction of loading. The presented mechano-sorptive model enables observing the real temporal and spatial distribution of the moisture-induced strains and stresses in timber members. Since the model's suitability for predicting mechano-sorptive strains is shown and the required material parameters are obtained, a comprehensive advanced analysis of the stress-strain state in timber structures, including connections subjected to constant load and varying humidity, is possible.
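
For readers unfamiliar with the strain decomposition used in the mechanical analysis, the additive split described above can be written, in a generic one-dimensional form rather than the authors' exact coupled two-dimensional formulation, as

```latex
\varepsilon_{\mathrm{tot}} = \varepsilon_{\mathrm{e}} + \varepsilon_{\mathrm{ve}} + \varepsilon_{\mathrm{ms}} + \varepsilon_{\mathrm{s}},
\qquad
\dot{\varepsilon}_{\mathrm{ms}} \sim m\,\sigma\,\lvert \dot{u} \rvert ,
```

where the mechano-sorptive strain rate is commonly taken to scale with the current stress and the rate of moisture-content change, with m a material parameter of the kind calibrated in this work.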

Keywords: mechanical analysis, mechano-sorptive creep, moisture transport model, timber

Procedia PDF Downloads 221
237 Management of Mycotoxin Production and Fungicide Resistance by Targeting Stress Response System in Fungal Pathogens

Authors: Jong H. Kim, Kathleen L. Chan, Luisa W. Cheng

Abstract:

Control of fungal pathogens, such as foodborne mycotoxin producers, is problematic as effective antimycotic agents are often very limited. Mycotoxin contamination significantly interferes with the safe production of foods and crops worldwide. Moreover, the expansion of fungal resistance to commercial drugs or fungicides is a global human health concern. Therefore, there is a persistent need to enhance the efficacy of commercial antimycotic agents or to develop new intervention strategies. Disruption of the cellular antioxidant system should be an effective method for pathogen control. Such disruption can be achieved with safe, redox-active compounds. Natural phenolic derivatives are potent redox cyclers that inhibit fungal growth through destabilization of the cellular antioxidant system. The goal of this study is to identify novel, redox-active compounds that disrupt the fungal antioxidant system. The identified compounds could also function as sensitizing agents to conventional antimycotics (i.e., chemosensitization) to improve antifungal efficacy. Various benzo derivatives were tested against fungal pathogens. Gene deletion mutants of the yeast Saccharomyces cerevisiae were used as model systems for identifying molecular targets of the benzo analogs. The efficacy of identified compounds as potent antifungal agents or as chemosensitizing agents to commercial drugs or fungicides was examined with methods outlined by the Clinical and Laboratory Standards Institute or the European Committee on Antimicrobial Susceptibility Testing. Selected benzo derivatives possessed potent antifungal or antimycotoxigenic activity. Molecular analyses using S. cerevisiae mutants indicated that the antifungal activity of the benzo derivatives was through disruption of the cellular antioxidant or cell wall integrity systems. Certain benzo analogs screened overcame the tolerance of Aspergillus signaling mutants, namely mitogen-activated protein kinase mutants, to the fungicide fludioxonil. Synergistic antifungal chemosensitization greatly lowered the minimum inhibitory or fungicidal concentrations of the test compounds, including inhibitors of mitochondrial respiration. Of note, salicylaldehyde is a potent antimycotic volatile that has some practical application as a fumigant. Altogether, benzo derivatives targeting the cellular antioxidant system of fungi (along with the cell wall integrity system) effectively suppress fungal growth. Candidate compounds possess the antifungal, antimycotoxigenic, or chemosensitizing capacity to augment the efficacy of commercial antifungals. Therefore, chemogenetic approaches can lead to the development of novel antifungal intervention strategies, which enhance the efficacy of established microbe intervention practices and overcome drug/fungicide resistance. Chemosensitization further reduces costs and alleviates the negative side effects associated with current antifungal treatments.

Keywords: antifungals, antioxidant system, benzo derivatives, chemosensitization

Procedia PDF Downloads 229
236 Single Pass Design of Genetic Circuits Using Absolute Binding Free Energy Measurements and Dimensionless Analysis

Authors: Iman Farasat, Howard M. Salis

Abstract:

Engineered genetic circuits reprogram cellular behavior to act as living computers, with applications in detecting cancer, creating self-controlling artificial tissues, and dynamically regulating metabolic pathways. Phenomenological models are often used to simulate genetic circuit behavior and to design circuits towards a desired behavior. While such models assume that each circuit component's function is modular and independent, even small changes in a circuit (e.g., a new promoter, a change in transcription factor expression level, or even a new medium) can have significant effects on the circuit's function. Here, we use statistical thermodynamics to account for the several factors that control transcriptional regulation in bacteria, and experimentally demonstrate the model's accuracy across 825 measurements in several genetic contexts and hosts. We then employ our first-principles model to design, experimentally construct, and characterize a family of signal-amplifying genetic circuits (genetic OpAmps) that expand the dynamic range of cell sensors. To develop these models, we needed a new approach to measuring the in vivo binding free energies of transcription factors (TFs), a key ingredient of statistical thermodynamic models of gene regulation. We developed a new high-throughput assay to measure RNA polymerase and TF binding free energies, requiring the construction and characterization of only a few constructs and data analysis (Figure 1A). We experimentally verified the assay on 6 TetR-homolog repressors and a CRISPR/dCas9 guide RNA. We found that our binding free energy measurements quantitatively explain why changing TF expression levels alters circuit function. Altogether, by combining these measurements with our biophysical model of translation (the RBS Calculator) as well as other measurements (Figure 1B), our model can account for changes in TF binding sites, TF expression levels, circuit copy number, host genome size, and host growth rate (Figure 1C). Model predictions correctly accounted for how these 8 factors control a promoter's transcription rate (Figure 1D). Using the model, we developed a design framework for engineering multi-promoter genetic circuits that greatly reduces the number of degrees of freedom (8 factors per promoter) to a single dimensionless unit. We propose the Ptashne (Pt) number to encapsulate the 8 co-dependent factors that control transcriptional regulation into a single number. Therefore, a single number controls a promoter's output rather than these 8 co-dependent factors, and designing a genetic circuit with N promoters requires specification of only N Pt numbers. We demonstrate how to design genetic circuits in Pt number space by constructing and characterizing 15 2-repressor OpAmp circuits that act as signal amplifiers when within an optimal Pt region. We experimentally show that OpAmp circuits using different TFs and TF expression levels will only amplify the dynamic range of input signals when their corresponding Pt numbers are within the optimal region. Thus, the use of the Pt number greatly simplifies genetic circuit design, which is particularly important as circuits employ more TFs to perform increasingly complex functions.
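
To give a concrete flavor of the statistical thermodynamic modeling referred to above, the sketch below evaluates the textbook simple-repression fold-change expression; it is an illustrative limiting case with hypothetical numbers, not the authors' full model of the eight co-dependent factors or of the Pt number.

```python
import numpy as np

def fold_change(repressors, d_eps_rd, n_ns=4.6e6):
    """Simple-repression thermodynamic fold-change:
    expression with repressor present / expression without repressor.
    repressors: repressor copy number per cell (hypothetical)
    d_eps_rd:   repressor-operator binding free energy in units of k_B*T (negative = tight binding)
    n_ns:       number of non-specific genomic binding sites (~ E. coli genome size)
    """
    return 1.0 / (1.0 + (repressors / n_ns) * np.exp(-d_eps_rd))

# Hypothetical TF expression levels and binding free energy
for r in (10, 100, 1000):
    print(r, round(fold_change(r, d_eps_rd=-14.0), 4))
```

In this simplified picture, changing the TF expression level or the binding free energy moves the promoter's output along the same curve, which is the intuition behind collapsing several co-dependent factors into a single dimensionless quantity.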

Keywords: transcription factor, synthetic biology, genetic circuit, biophysical model, binding energy measurement

Procedia PDF Downloads 448
235 The Ethics of Physical Restraints in Geriatric Care

Authors: Bei Shan Lin, Chun Mei Lu, Ya Ping Chen, Li Chen Lu

Abstract:

This study explores the ethical issues concerning the use of physical restraint in geriatric care. Physical restraint use in medical care settings has been a controversial form of treatment for decades. There is no doubt that people nowadays are living longer than previous generations, and the ageing process is inevitable. Common impairments such as reduced comprehension, memory loss, and trouble expressing oneself contribute to the difficulty that older patients have in adapting to medical institutions. For these reasons, physical restraint is often used to reduce the risk of falling, manage wandering behaviour, prevent agitation, and promote patient compliance in geriatric care. As a result, physical restraint is considered a common practice in the care of older patients. It is most commonly used for three specific purposes: procedural restraint, restraint to prevent falls, and behavioural restraint. Although there are well-documented instances of morbidity and mortality recognised as potential risks associated with physical restraint use, it continues to be permitted and used in healthcare, often in the name of safety. However, there is insufficient evidence supporting the effectiveness of physical restraint use in reducing injuries from falls and controlling challenging behaviour in geriatric care settings. There is barely any empirical evidence of a scientific basis, and hardly any clinical trials have evaluated the improvement in patient safety following physical restraint. In difficult clinical situations, guidelines and practical suggestions that help healthcare professionals comply with requirements can support appropriate decisions and facilitate better judgement regarding physical restraint use. The following recommendations are given for physical restraint use in long-term care settings: an interdisciplinary team approach to assess, evaluate, and treat underlying diseases to determine if treatment can ease issues precipitating physical restraint use; a clearly stated purpose of the treatment plan, formulated after weighing the risk of physical restraint use against the risk of not using physical restraint; a care plan for physical restraint that includes individualised treatment planning, informed consent, identification of and remedial action against negative consequences, regular assessment and modification, and reduction and removal of risks; the opportunity for patients and their families to consider and give voluntary informed consent prior to physical restraint utilisation; education of patients, family members, and healthcare professionals on the use and adverse consequences of physical restraints, in order to raise awareness of potential risks and to take appropriate steps to prevent unnecessary harm; and, after physical restraint removal, discussion between healthcare professionals, patients, and family members about their experience, feelings, and any anxieties regarding the treatment. Physical restraint should always be considered a last resort, as it deprives patients of freedom, control, and individuality. Healthcare professionals should emphasise providing individualised care, an interdisciplinary decision-making process, and creative and collaborative alternatives to promote older patients' rights, dignity, and overall well-being as much as possible.

Keywords: ethics healthcare, geriatric care, healthcare, physical restraint

Procedia PDF Downloads 118
234 Rapid Sexual and Reproductive Health Pathways for Women Accessing Drug and Alcohol Treatment

Authors: Molly Parker

Abstract:

Unintended pregnancy rates in Australia are amongst the highest in the developed world. Women with Substance Use Disorder often engage in riskier sexual behavior with no contraceptive use and face disproportionately higher rates of unintended pregnancy and Sexually Transmitted Infections (STIs), while Substance Use in Pregnancy (SUP) is climbing at an alarming rate. In an inner-city Drug and Alcohol (D&A) service, significant barriers to sexual and reproductive health services have been identified, in line with the research literature. Rapid pathways were created for women seeking D&A treatment to be referred to sexual and reproductive health services for the administration of Long-Acting Reversible Contraception (LARC) and sexual health screening. For clients attending a D&A service, this is an opportunistic time to offer sexual and reproductive health services. Collaboration and multidisciplinary team input between D&A and sexual and reproductive health services are paramount, with rapid referral pathways identified as the main strategy to improve access to sexual and reproductive health support for this population. With this evidence, a rapid referral pathway was created for women using the D&A service to access LARC, particularly in view of fertility often returning once clients are stable on D&A treatment. A closed-ended survey was used for D&A staff to identify gaps in reproductive health knowledge and views of referral accessibility. Results demonstrated a lack of knowledge of contraception and of appropriate referral processes. A closed-ended survey for clients was created to establish the need for and access to services and to quantify data. Follow-up data collection will be reviewed to assess uptake of and satisfaction with the intervention among clients. Sexual health screening access was also identified as a deficit, which is particularly concerning given the higher rates of STIs in this cohort. A rapid referral pathway is undergoing implementation, reducing the risks of untreated STIs both pre- and post-conception. Similarly, pre- and post-intervention structured surveys will be used to gauge client satisfaction with the pathway. Although currently in progress, the research and pathway are expected to be completed by December 2023. This research and the implementation of sexual and reproductive health pathways from the D&A service have significant health and well-being benefits for clients and the wider community, including possible fetal/infancy outcomes. Women now have rapid access to sexual and reproductive health services, with the aim of reducing unplanned pregnancies, the poor outcomes associated with SUP, client/staff trauma from termination of pregnancy, client/staff trauma following the assumption of care of the child due to substance use, the financial cost of out-of-home care as required, the poor outcomes of untreated STIs for the fetus in pregnancy, and the spread of STIs in the wider community. As the evidence suggests, a streamlined referral process between D&A and sexual and reproductive health services is required, and it has received positive feedback from both clinicians and clients as improving care.

Keywords: substance use in pregnancy, drug and alcohol, substance use disorder, sexual health, reproductive health, contraception, long-acting reversible contraception, neonatal abstinence syndrome, FASD, sexually transmitted infections, sexually transmitted infections pregnancy

Procedia PDF Downloads 33
233 A Multipurpose Inertial Electrostatic Magnetic Confinement Fusion for Medical Isotopes Production

Authors: Yasser R. Shaban

Abstract:

A practical multipurpose device for medical isotope production is much needed by clinical centers and researchers. Unfortunately, the major supply of these radioisotopes currently comes from aging sources, and there is a great deal of uneasiness in the domestic market. There are also many cases where the cost of certain radioisotopes is too high for their introduction on a commercial scale, even though the isotopes might have great benefits for society. Medical isotopes such as PET (Positron Emission Tomography) radiotracers, Technetium-99m, Iodine-131, and Lutetium-177 can feasibly be generated by a single unit named the IEMC (Inertial Electrostatic Magnetic Confinement) device. The IEMC fusion vessel is an upgraded version of the Inertial Electrostatic Confinement (IEC) fusion vessel. Comprehensive experimental work on the IEC was carried out earlier with promising results. The principle of inertial electrostatic magnetic confinement (IEMC) fusion is based on forcing the binary fuel ions to interact in opposite directions in ion cyclotron orbits, with different kinetic energies in order to have equal compression (forces) and with different ion cyclotron frequencies ω in order to increase the rate of intersection. The IEMC features a fusion volume greater than that of the IEC by several orders of magnitude. The particle rates from the IEMC approach are projected to be 8.5 x 10¹¹ (p/s), ~0.2 microampere of protons, for the D/He-3 fusion reaction and 4.2 x 10¹² (n/s) for the D/T fusion reaction. The projected values of particle yield (neutrons and protons) are suitable for on-site medical isotope production by a single unit without any change in the fusion vessel, only in the fuel gas. PET radiotracers are usually produced on-site by a medical ion accelerator, whereas Technetium-99m (Tc-99m) is usually produced off-site at the irradiation facilities of nuclear power plants. Typically, hospitals receive a molybdenum-99 isotope container; the isotope decays to Tc-99m with a half-life of 2.75 days. Even though the projected current from the IEMC is lower than the proton current from a medical ion accelerator, the IEMC vessel is simpler and reduced in components and power consumption, which adds the new value of making PET radiotracers available in most clinical centers. On the other hand, the projected neutron flux from the IEMC is lower than the thermal neutron flux at the irradiation facilities of nuclear power plants, but in the IEMC case the production of Technetium-99m is suggested to take place in the resonance region, where the resonance integral cross section is two orders of magnitude higher than the thermal one. Thus it can be said that the net activity from both is evened out. Besides, a particle accelerator cannot be considered a multipurpose particle production unit unless a significant change is made to the accelerator to switch from neutron mode to proton mode or vice versa. In conclusion, achieving the projected fusion yield from the IEMC is straightforward, since only slight changes to the original IEC and the ion source are required.
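
For reference, the ion cyclotron quantities invoked above are the standard ones; for an ion of charge q and mass m in a uniform magnetic field B, the cyclotron frequency and gyroradius are

```latex
\omega_c = \frac{qB}{m}, \qquad r_c = \frac{m\, v_{\perp}}{qB},
```

so deuterium and helium-3 ions in the same field gyrate at different frequencies because of their different mass-to-charge ratios, which is the mismatch the IEMC scheme exploits to increase the rate at which the counter-rotating fuel ions intersect.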

Keywords: electrostatic versus magnetic confinement fusion vessel, ion source, medical isotopes productions, neutron activation

Procedia PDF Downloads 326
232 Feasibility of Applying a Hydrodynamic Cavitation Generator as a Method for Intensification of Methane Fermentation Process of Virginia Fanpetals (Sida hermaphrodita) Biomass

Authors: Marcin Zieliński, Marcin Dębowski, Mirosław Krzemieniewski

Abstract:

The anaerobic degradation of substrates is limited especially by the rate and effectiveness of the first (hydrolytic) stage of fermentation. This stage may be intensified through pre-treatment of the substrate aimed at disintegration of the solid phase and destruction of substrate tissues and cells. The most frequently applied criterion for evaluating disintegration outcomes is the increase in biogas recovery, owing to the possibility of its use for energetic purposes and, simultaneously, the recovery of input energy consumed for the pre-treatment of the substrate before fermentation. Hydrodynamic cavitation is one of the methods for organic substrate disintegration that has a high implementation potential. Cavitation is the phenomenon of the formation of discontinuity cavities filled with vapor or gas in a liquid, induced by a pressure drop to a critical value. It is induced by a varying pressure field: a void needs to occur in the flow in which the pressure first drops to a value close to the saturated vapor pressure and then increases. The process of cavitation conducted under controlled conditions was found to significantly improve the effectiveness of anaerobic conversion of organic substrates having various characteristics. This phenomenon allows effective damage and disintegration of cellular and tissue structures. Disintegration of structures and release of organic compounds into the dissolved phase has a direct effect on the intensification of biogas production in the process of anaerobic fermentation, on the reduced dry matter content in the post-fermentation sludge, as well as on a high degree of its hygienization and its increased susceptibility to dehydration. A device whose efficiency has been confirmed both under laboratory conditions and in systems operating at technical scale is the hydrodynamic cavitation generator. Cavitators, agitators, and emulsifiers constructed and tested worldwide so far have been characterized by low efficiency and high energy demand. Many of them proved effective under laboratory conditions but failed under industrial ones. The only task successfully realized by these appliances and utilized on a wider scale is the heating of liquids. For this reason, their usability was limited to the function of heating installations. The design of the presented cavitation generator allows achieving satisfactory energy efficiency and enables its use under industrial conditions in depolymerization processes of biomass with various characteristics. Investigations conducted on the laboratory and industrial scale confirmed the effectiveness of applying cavitation in the process of biomass destruction. The use of the cavitation generator in laboratory studies for the disintegration of sewage sludge allowed increasing biogas production by ca. 30% and shortening the treatment process by ca. 20-25%. The shortening of the technological process and the increase in wastewater treatment plant effectiveness may delay investments aimed at increasing system output. The use of a mechanical cavitator and the application of a repeated cavitation process (4-6 times) enable significant acceleration of the biogassing process. In addition, mechanical cavitation accelerates increases in COD and VFA levels.
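
The onset condition described above (local pressure dropping toward the saturated vapor pressure) is conventionally characterized by the dimensionless cavitation number; this is the standard textbook definition, added here for clarity rather than taken from the paper:

```latex
\sigma = \frac{p - p_{v}}{\tfrac{1}{2}\,\rho\, v^{2}},
```

where p is the local reference pressure, p_v the saturated vapor pressure of the liquid, ρ its density, and v the flow velocity; cavities form when σ falls below a critical value for the given flow geometry.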

Keywords: hydrodynamic cavitation, pretreatment, biomass, methane fermentation, Virginia fanpetals

Procedia PDF Downloads 413
231 Examining the Effects of Ticket Bundling Strategies and Team Identification on Purchase of Hedonic and Utilitarian Options

Authors: Young Ik Suh, Tywan G. Martin

Abstract:

Bundling strategy is a common marketing practice today. In the past decades, both academics and practitioners have increasingly emphasized the strategic importance of bundling in today's markets. The reason for the increased interest in bundling strategy is the belief that it can significantly increase profits on an organization's sales over time and that it is convenient for the customer. However, little effort has been made to examine ticket bundling and purchase considerations for hedonic and utilitarian options in the sport consumer behavior context. Consumers often face choices between utilitarian and hedonic alternatives in decision making. When consumers purchase certain products, they are only interested in the functional dimensions, which are called utilitarian dimensions. Others, on the other hand, focus more on hedonic features such as fun, excitement, and pleasure. Thus, the current research examines how utilitarian and hedonic consumption can vary in the typical ticket purchasing process. The purpose of this research is to address the following two research themes: (1) the differential effect of discount framing on ticket bundling with utilitarian versus hedonic options, and (2) the moderating effect of team identification on ticket bundling. In order to test the research hypotheses, an experimental study using a two-way ANOVA, 3 (team identification: low, medium, and high) x 2 (discount frame: ticket bundle sales with a utilitarian product vs. a hedonic product), with a mixed factorial design, will be conducted to determine whether there is a statistically significant difference between purchase intentions for the two discount frames of ticket bundle sales across the team identification levels. To compare mean differences between the two settings, we will create two conditions of ticket bundles: (1) offering a discount on a ticket ($5 off) if it is purchased along with a utilitarian product (e.g., iPhone 8 case, t-shirt, cap), and (2) offering a discount on a ticket ($5 off) if it is purchased along with a hedonic product (e.g., pizza, drink, fans featured on the big screen). The findings of the current ticket bundling study are expected to make many theoretical and practical contributions by extending the research and literature pertaining to the relationship between team identification and sport consumer behavior. Specifically, this study can provide a reliable and valid framework for understanding the role of team identification as a moderator of behavioral intentions such as purchase intentions. From an academic perspective, the study will be the first known attempt to understand consumer reactions toward different discount frames related to ticket bundling. Even though the game ticket itself is the major commodity of sport event attendance and is significantly related to teams' revenue streams, most recent ticket pricing research has been done in terms of economic or cost-oriented pricing and not from a consumer psychological perspective. For sport practitioners, this study will also provide significant implications. The results will imply that sport marketers may need to develop two different ticketing promotions for loyal and non-loyal fans. Since loyal fans are more concerned with the ticket price than with tie-in products when they see ticket bundle sales, advertising campaigns should focus more on discounting the ticket price.
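
A minimal sketch of the planned 3 x 2 analysis is given below, treating both factors as between-subjects for simplicity and using simulated placeholder ratings; the variable names, cell sizes, and generated data are hypothetical and are not the study's results.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(0)
rows = []
for ident in ("low", "medium", "high"):
    for frame in ("utilitarian", "hedonic"):
        # hypothetical 1-7 purchase-intention ratings, 20 respondents per cell
        ratings = np.clip(rng.normal(4.0, 1.2, 20).round(), 1, 7)
        rows += [{"identification": ident, "frame": frame, "intention": r} for r in ratings]
df = pd.DataFrame(rows)

# 3 x 2 ANOVA: main effects and the identification x frame interaction
model = smf.ols("intention ~ C(identification) * C(frame)", data=df).fit()
print(anova_lm(model, typ=2))
```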

Keywords: ticket bundling, hedonic, utilitarian, team identification

Procedia PDF Downloads 139
230 The Interactive Wearable Toy "+Me", for the Therapy of Children with Autism Spectrum Disorders: Preliminary Results

Authors: Beste Ozcan, Valerio Sperati, Laura Romano, Tania Moretta, Simone Scaffaro, Noemi Faedda, Federica Giovannone, Carla Sogos, Vincenzo Guidetti, Gianluca Baldassarre

Abstract:

+me is an experimental interactive toy with the appearance of a soft, pillow-like panda. Its shape and consistency are designed to elicit emotional attachment in young children: a child can wear it around his/her neck and treat it as a companion (i.e., a transitional object). When caressed on the paws or head, the panda emits appealing, interesting outputs like colored lights or amusing sounds, thanks to embedded electronics. Such sensory patterns can be modified through a wirelessly connected tablet: in this way, an adult caregiver can adapt +me's responses to a child's reactions or requests, for example, changing the light hue or the type of sound. Control of the toy is therefore shared, as it depends on both the child (who handles the panda) and the adult (who manages the tablet and mediates the sensory input-output contingencies). These features make +me a potential tool for therapy with children with Neurodevelopmental Disorders (ND) characterized by impairments in the social area, like Autism Spectrum Disorders (ASD) and Language Disorders (LD): as a proposal, the toy could be used together with a therapist in rehabilitative play activities aimed at encouraging simple social interactions and reinforcing basic relational and communication skills. +me was tested in two pilot experiments, the first one involving 15 Typically Developed (TD) children aged 8-34 months, the second one involving 7 children with ASD and 7 with LD, aged 30-48 months. In both studies, during a one-to-one, ten-minute activity, a researcher/caregiver plays with the panda and encourages the child to do the same. The purpose of both studies was to ascertain the general acceptability of the device as an interesting toy, that is, an object able to capture the child's attention and to maintain a high motivation to interact with it and with the adult. Behavioral indexes for estimating the interplay between the child, +me, and the caregiver were rated from the video recordings of the experimental sessions. Preliminary results show that, on average, participants from the three groups exhibit good engagement: they touch, caress, and explore the panda and show enjoyment when they manage to trigger luminous and sound responses. During the experiments, children tend to imitate the caregiver's actions on +me, often looking (and smiling) at him/her. Interesting behavioral differences between the TD, ASD, and LD groups are scored: for example, ASD participants produce fewer smiles both at the panda and at the caregiver with respect to the TD group, while LD scores stand between those of the ASD and TD subjects. These preliminary observations suggest that the interactive toy +me is able to raise and maintain the interest of toddlers and therefore can reasonably be used as a supporting tool during therapy, to stimulate pivotal social skills such as imitation, turn-taking, eye contact, and social smiles. Interestingly, the young age of participants, along with the behavioral differences between groups, seems to suggest a further potential use of the device: a tool for early differential diagnosis (the average age of a child

Keywords: autism spectrum disorders, interactive toy, social interaction, therapy, transitional wearable companion

Procedia PDF Downloads 95
229 Crafting Robust Business Model Innovation Path with Generative Artificial Intelligence in Start-up SMEs

Authors: Ignitia Motjolopane

Abstract:

Small and medium enterprises (SMEs) play an important role in economies by contributing to economic growth and employment. In the fourth industrial revolution, the convergence of technologies and the changing nature of work have created pressures on economies globally. Generative artificial intelligence (AI) may support SMEs in exploring, exploiting, and transforming business models to align with their growth aspirations. SMEs' growth aspirations fall into four categories: subsistence, income, growth, and speculative. Subsistence-oriented firms focus on meeting basic financial obligations and show less motivation for business model innovation. SMEs focused on income, growth, and speculation are more likely to pursue business model innovation to support growth strategies. SMEs' strategic goals link to distinct business model innovation paths depending on whether SMEs are starting a new business, pursuing growth, or seeking profitability. Integrating generative artificial intelligence in start-up SME business model innovation enhances value creation, user-oriented innovation, and SMEs' ability to adapt to dynamic changes in the business environment. The existing literature may lack comprehensive frameworks and guidelines for effectively integrating generative AI into the start-up reiterative business model innovation path. This paper examines the start-up business model innovation path with generative artificial intelligence. A theoretical approach is used to examine the start-up-focused SME reiterative business model innovation path with generative AI, articulating how generative AI may be used to support SMEs in systematically and cyclically building the business model, covering most or all business model components, and in analysing and testing the business model's viability throughout the process. As such, the paper explores generative AI usage in market exploration. Market exploration poses unique challenges for start-ups compared to established companies due to a lack of extensive customer data, sales history, and market knowledge. Furthermore, the paper examines the use of generative AI in developing and testing viable value propositions and business models. In addition, the paper looks into identifying and selecting partners with generative AI support. Selecting the right partners is crucial for start-ups and may significantly impact success. The paper also examines generative AI usage in choosing the right information technology, the funding process, revenue model determination, and stress testing business models. Stress testing business models validates strong and weak points by applying scenarios and evaluating the robustness of individual business model components and the interrelation between components. Stress testing the business model may thus address these uncertainties, as misalignment between an organisation and its environment has been recognised as the leading cause of company failure. Generative AI may be used to generate business model stress-testing scenarios. The paper is expected to make a theoretical and practical contribution to theory and approaches in crafting a robust business model innovation path with generative artificial intelligence in start-up SMEs.

Keywords: business models, innovation, generative AI, small medium enterprises

Procedia PDF Downloads 47
228 Protonic Conductivity Highlighted by Impedance Measurement of Y-Doped BaZrO3 Synthesized by Supercritical Hydrothermal Process

Authors: Melanie Francois, Gilles Caboche, Frederic Demoisson, Francois Maeght, Maria Paola Carpanese, Lionel Combemale, Pascal Briois

Abstract:

Finding new, clean, and efficient ways of producing energy is one of today's global challenges. Over the past few years, the Protonic Ceramic Fuel Cell (PCFC) has attracted much attention in the field of new hydrogen energy thanks to its lower working temperature, possibly higher efficiency, and better durability than the classical SOFC. In contrast to the SOFC, where the O²⁻ oxygen ion is the charge carrier, the PCFC works with the H⁺ proton as the charge carrier. Consequently, the lower activation energy of proton diffusion compared to that of the oxygen ion explains these benefits and allows the PCFC to work in the 400-600°C temperature range. Doped BaCeO₃ is currently the most frequently chosen material for this application because of its high protonic conductivity; for example, BaCe₀.₉Y₀.₁O₃₋δ exhibits a total conductivity of 1.5×10⁻² S.cm⁻¹ at 600°C in wet H₂. However, BaCeO₃-based perovskites have low stability in H₂O- and/or CO₂-containing atmospheres, which limits their practical application. In contrast, BaZrO₃-based perovskites exhibit good chemical stability but lower total conductivity than BaCeO₃ due to their larger grain boundary resistance. By substituting zirconium with 20% yttrium, it is possible to achieve a total conductivity of 2.5×10⁻² S.cm⁻¹ at 600°C in wet H₂. However, the highly refractory nature of BaZr₀.₈Y₀.₂O₃₋δ (denoted BZY20) makes it difficult to obtain a dense membrane with large grains. Thereby, using a synthesis process that gives fine particles could allow better sinterability and thus decrease the number of grain boundaries, leading to a higher total conductivity. In this work, BaZr₀.₈Y₀.₂O₃₋δ has been synthesized by a classical batch hydrothermal device and by a continuous hydrothermal device developed at the ICB laboratory. The two variants of this process are able to work under supercritical conditions, leading to the formation of nanoparticles, which can be sintered at a lower temperature. The as-synthesized powder exhibits the right composition for the perovskite phase; impurities such as BaCO₃ and YO-OH were detected at very low concentrations. Microstructural investigation and densification measurements showed that the addition of 1 wt% ZnO as a sintering aid and sintering at 1550°C for 5 hours give a highly densified electrolyte material. Furthermore, it is necessary to heat the synthesized powder prior to sintering to prevent the formation of secondary phases. It is assumed that this thermal treatment homogenizes the crystal structure of the powder and reduces the number of defects in the bulk grains. Electrochemical impedance spectroscopy investigations in various atmospheres and over a large temperature range (200-700°C) were then performed on sintered samples, and the protonic conductivity of BZY20 was highlighted. Further experiments on a half-cell, with NiO-BZY20 as anode and BZY20 as electrolyte, are in progress.
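
For context, impedance data of the kind collected here over 200-700°C are typically reduced by separating the bulk and grain-boundary arcs of the spectrum and fitting each conductivity contribution to the usual Arrhenius form for thermally activated proton transport; this is the standard analysis, stated generically rather than as the authors' specific procedure:

```latex
\sigma\,T = A \,\exp\!\left(-\frac{E_{a}}{k_{B}T}\right),
```

so that a plot of ln(σT) versus 1/T yields the activation energy E_a of each contribution.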

Keywords: hydrothermal synthesis, impedance measurement, Y-doped BaZrO₃, proton conductor

Procedia PDF Downloads 112
227 Satisfaction Among Preclinical Medical Students with Low-Fidelity Simulation-Based Learning

Authors: Shilpa Murthy, Hazlina Binti Abu Bakar, Juliet Mathew, Chandrashekhar Thummala Hlly Sreerama Reddy, Pathiyil Ravi Shankar

Abstract:

Simulation is defined as a technique that replaces or expands real experiences with guided experiences that interactively imitate real-world processes or systems. Simulation enables learners to train in a safe and non-threatening environment. For decades, simulation has been considered an integral part of clinical teaching and learning strategy in medical education. Several types of simulation are used in medical education and the clinical environment, including full-body mannequins, task trainers, standardized simulated patients, virtual or computer-generated simulation, and hybrid simulation, all of which can be used to facilitate learning. Simulation allows healthcare practitioners to acquire skills and experience while safeguarding patient safety. The recent COVID pandemic also led to an increase in simulation use, as there were limitations on medical student placements in hospitals and clinics. The learning is tailored to the educational needs of students to make the learning experience more valuable. Simulation in the pre-clinical years has challenges with resource constraints, effective curricular integration, student engagement and motivation, and evidence of educational impact, to mention a few. As instructors, we may rely more on the use of simulation for pre-clinical students, while the students' confidence levels and perceived competence remain to be evaluated. Our research question was whether the implementation of simulation-based learning positively influences preclinical medical students' confidence levels and perceived competence. This study was done to align the teaching activities with the students' learning experience, to introduce more low-fidelity simulation-based teaching sessions in the pre-clinical years, and to obtain students' input into curriculum development as part of inclusivity. The study was carried out at the International Medical University, involving pre-clinical year (medical) students who started with low-fidelity simulation-based medical education in their first semester and were gradually introduced to medium fidelity as well. The Student Satisfaction and Self-Confidence in Learning Scale questionnaire from the National League for Nursing was employed to collect the responses. The internal consistency reliability of the survey items was tested with Cronbach's alpha using an Excel file. IBM SPSS for Windows version 28.0 was used to analyze the data. Spearman's rank correlation was used to analyze the correlation between students' satisfaction and self-confidence in learning. The significance level was set at a p value of less than 0.05. The results from this study have prompted the researchers to undertake a larger-scale evaluation, which is currently underway. The current results show that 70% of students agreed that the teaching methods used in the simulation were helpful and effective. The sessions depend on the learning materials provided and on how the facilitators engage the students and make the session more enjoyable. The feedback provided input on the following areas to focus on while designing simulations for pre-clinical students: quality learning materials, an interactive environment, motivating content, the skills and knowledge of the facilitator, and effective feedback.
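
A minimal sketch of the reliability and correlation analysis described above is shown below with randomly generated placeholder Likert responses; the item counts and numbers are hypothetical and simply illustrate the Cronbach's alpha formula and Spearman's rank correlation.

```python
import numpy as np
from scipy.stats import spearmanr

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of Likert responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_variances / total_variance)

rng = np.random.default_rng(1)
satisfaction = rng.integers(3, 6, size=(70, 5))   # hypothetical 5 satisfaction items
confidence = rng.integers(3, 6, size=(70, 8))     # hypothetical 8 self-confidence items

print("Cronbach's alpha (satisfaction):", round(cronbach_alpha(satisfaction), 2))
rho, p = spearmanr(satisfaction.mean(axis=1), confidence.mean(axis=1))
print("Spearman's rho:", round(rho, 2), " p-value:", round(p, 3))
```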

Keywords: low-fidelity simulation, pre-clinical simulation, students satisfaction, self-confidence

Procedia PDF Downloads 40
226 Lean Comic GAN (LC-GAN): a Light-Weight GAN Architecture Leveraging Factorized Convolution and Teacher Forcing Distillation Style Loss Aimed to Capture Two Dimensional Animated Filtered Still Shots Using Mobile Phone Camera and Edge Devices

Authors: Kaustav Mukherjee

Abstract:

In this paper, we propose a neural style transfer solution whereby we have created a lightweight separable-convolution-kernel-based GAN architecture (SC-GAN), which will be very useful for designing filters for mobile phone cameras and also for edge devices, converting any image to the 2D animated comic style of movies like He-Man, Superman, or The Jungle Book. This will help 2D animation artists create new characters from images of real people without having to spend endless hours of manual labour drawing each and every pose of a cartoon. It can even be used to create scenes from real-life images. This will greatly reduce the turnaround time for making 2D animated movies and decrease cost in terms of manpower and time. In addition, being extremely lightweight, it can be used as a camera filter capable of taking comic-style shots using a mobile phone camera or edge-device cameras such as the Raspberry Pi 4 or NVIDIA Jetson Nano. Existing methods like CartoonGAN, with a model size close to 170 MB, are too heavyweight for mobile phones and edge devices due to their scarcity of resources. Compared to the current state of the art, our proposed method has a total model size of 31 MB, which clearly makes it ideal and ultra-efficient for designing camera filters on low-resource devices like mobile phones, tablets, and edge devices running an OS or RTOS. Owing to the use of high-resolution input and a bigger convolution kernel size, it produces richer-resolution comic-style pictures with 6 times fewer parameters and with just 25 extra epochs, trained on a dataset of fewer than 1000 images, which breaks the myth that all GANs need a mammoth amount of data. Our network reduces the density of the GAN architecture by using depthwise separable convolution, which performs the convolution operation on each of the RGB channels separately; a point-wise convolution then brings the network back to the required channel number using a 1-by-1 kernel. This reduces the number of parameters substantially and makes the network extremely lightweight and suitable for mobile phones and edge devices. The architecture presented in this paper makes use of parameterised batch normalization (Goodfellow et al., Deep Learning, 'Optimization for Training Deep Models', page 320), which lets the network benefit from batch norm for easier training while maintaining non-linear feature capture through the learnable parameters.
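
The factorized convolution described above can be sketched in a few lines of PyTorch; this is a generic depthwise-plus-pointwise block illustrating the parameter reduction, not the paper's actual generator, and the channel sizes are arbitrary.

```python
import torch
import torch.nn as nn

class SeparableConvBlock(nn.Module):
    """Depthwise conv (per-channel spatial filtering) followed by a 1x1
    pointwise conv that mixes channels, as described in the abstract above."""
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)
        self.norm = nn.BatchNorm2d(out_ch)   # learnable scale/shift parameters
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.norm(self.pointwise(self.depthwise(x))))

x = torch.randn(1, 3, 256, 256)           # e.g., a phone-camera frame
y = SeparableConvBlock(3, 64)(x)
print(y.shape)                            # torch.Size([1, 64, 256, 256])
```

For a k x k kernel, the factorization needs roughly in_ch*k*k + in_ch*out_ch weights instead of in_ch*out_ch*k*k for a standard convolution, which is where most of the model-size saving comes from.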

Keywords: comic stylisation from camera image using GAN, creating 2D animated movie style custom stickers from images, depth-wise separable convolutional neural network for light-weight GAN architecture for EDGE devices, GAN architecture for 2D animated cartoonizing neural style, neural style transfer for edge, model distilation, perceptual loss

Procedia PDF Downloads 106
225 Application of Harris Hawks Optimization Metaheuristic Algorithm and Random Forest Machine Learning Method for Long-Term Production Scheduling Problem under Uncertainty in Open-Pit Mines

Authors: Kamyar Tolouei, Ehsan Moosavi

Abstract:

In open-pit mines, the long-term production scheduling optimization problem (LTPSOP) is a complicated problem that involves constraints, large datasets, and uncertainties. Uncertainty in the output is caused by several geological, economic, or technical factors. Due to its dimensions and NP-hard nature, it is usually difficult to find an ideal solution to the LTPSOP. The optimal schedule generally restricts the ore, metal, and waste tonnages, average grades, and cash flows of each period. Past decades have witnessed important advances in long-term production scheduling and optimization algorithms as researchers have become highly cognizant of the issue. In fact, the LTPSOP cannot be considered a well-solved problem. Traditional production scheduling methods in open-pit mines apply an estimated orebody model to produce optimal schedules. The smoothing effect of some geostatistical estimation procedures causes most mine schedules and production predictions to be unrealistic and imperfect. With the expansion of simulation procedures, the risks from grade uncertainty in ore reserves can be evaluated and organized through a set of equally probable orebody realizations. In this paper, to synthesize grade uncertainty into the strategic mine schedule, a stochastic integer programming framework is presented for the LTPSOP. The objective function of the model is to maximize the net present value and minimize the risk of deviation from the production targets, considering grade uncertainty, while simultaneously satisfying all technical constraints and operational requirements. Instead of applying one estimated orebody model as input to optimize the production schedule, a set of equally probable orebody realizations is applied to synthesize grade uncertainty into the strategic mine schedule and to produce a more profitable and risk-based production schedule. A mixture of metaheuristic procedures and mathematical methods paves the way to an appropriate solution. This paper introduces a hybrid model between the augmented Lagrangian relaxation (ALR) method and a metaheuristic algorithm, the Harris Hawks optimization (HHO), to solve the LTPSOP under grade uncertainty conditions. In this study, the HHO is employed to update the Lagrange coefficients. Besides, a machine learning method called Random Forest is applied to estimate the gold grade in the mineral deposit. The Monte Carlo method is used as the simulation method, with 20 realizations. The results show that the proposed versions are considerably improved in comparison with the traditional methods. The outcomes were also compared with the ALR-genetic algorithm and ALR-sub-gradient approaches. To indicate the applicability of the model, a case study of an open-pit gold mining operation is implemented. The framework displays the capability to minimize risk and improve the expected net present value and financial profitability for the LTPSOP. The framework could control geological risk more effectively than the traditional procedure by considering grade uncertainty in the hybrid model framework.
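
In generic form (a sketch of standard augmented Lagrangian relaxation rather than the exact objective used in the paper), the relaxed problem searched by the metaheuristic can be written as

```latex
\max_{x}\; \mathcal{L}_{\rho}(x,\lambda) \;=\; \mathrm{NPV}(x) \;-\; \sum_{i} \lambda_{i}\, g_{i}(x) \;-\; \frac{\rho}{2} \sum_{i} g_{i}(x)^{2},
```

where the g_i(x) measure violations of the relaxed production and capacity constraints; the search over schedules x is carried out by the metaheuristic (here HHO), and after each round the multipliers are updated as λ_i ← λ_i + ρ g_i(x).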

Keywords: grade uncertainty, metaheuristic algorithms, open-pit mine, production scheduling optimization

Procedia PDF Downloads 77
224 A Sociological Qualitative Study: Intimate Relationships as a Social Pressure Around HIV-Related Issues Among Young South African Women and Girls (16-28)

Authors: Sunha Ahn

Abstract:

Intimate relationships shape our embodied experiences and emotional memories, which can become grounded as practical knowledge and play a critical role in social medicine, particularly in well-being and mental health. In South Africa, such relational factors are especially significant for young women and girls during their period of emotional development, acting as social and relational pressures on their sexual health and choices. This, in turn, produces an absence or lack of communication in intimate relationships, especially with parents, which leads to a vicious cycle in sexual health behaviour choices. Drawing upon sociological and socio-anthropological understandings of HIV-related issues, this study provides narrative threads of evidence about South African teenage mothers, from early dating debuts to HIV infection. Their stories are presented as a visualised figure in chronological order, illustrating embodied journeys of sexual health choices shaped by uncommunicative relationships and socially suppressive environments. Methodologically, this qualitative study explored data from mixed online methods: 1) a case study analysing online comments (N = 12,763) on the South African Springster website, run by the UK-based NGO Girl Effect; and 2) in-depth online interviews (N = 21) conducted with young South African women and girls (aged 16-28) recruited in Cape Town, Pretoria, and Johannesburg. Participants include both those living with HIV and those without. Ethical approval was gained via the College of Social Sciences Ethical Committee at the University of Glasgow, and informed consent was obtained verbally and in writing from participants in due course. Data were coded against an iteratively developed codebook and analysed thematically. Three typical relational pressures emerged: peer pressure, partners or boyfriends, and parents' reactions. Under patriarchal and religiously devout social atmospheres, these relationships work as a source of fear among young women and girls who cannot talk about their sexual health concerns and rights. Such an inability to communicate within intimate relationships eventually becomes a perpetuated, taken-for-granted social environment in South Africa, persistently contributing to unwanted pregnancies and new HIV infections among young South African women and girls. In this sense, this study reveals the pressing need for open communication between generations with accurate information about HIV/AIDS. It also implies that sociological feminist praxes in South Africa would help eliminate HIV-related stigma and construct open spaces to reduce gender-based violence and sexually transmitted infections. Ultimately, this points a way toward supporting sexually healthy decisions and well-being across South African generations.

Keywords: HIV, young women, South Africa, intimate relationships, communication, social medicine

Procedia PDF Downloads 41
223 Superoleophobic Nanocellulose Aerogel Membrane as Bioinspired Cargo Carrier on Oil by Sol-Gel Method

Authors: Zulkifli, I. W. Eltara, Anawati

Abstract:

Understanding the complementary roles of surface energy and roughness on natural nonwetting surfaces has led to the development of a number of biomimetic superhydrophobic surfaces, which exhibit apparent contact angles with water greater than 150 degrees and low contact angle hysteresis. However, superoleophobic surfaces, those that display contact angles greater than 150 degrees with organic liquids having appreciably lower surface tensions than that of water, are extremely rare. In addition to chemical composition and roughened texture, a third parameter is essential to achieve superoleophobicity, namely re-entrant surface curvature in the form of overhang structures. The overhangs can be realized as fibers. Superoleophobic surfaces are appealing for applications such as antifouling, since purely superhydrophobic surfaces are easily contaminated by oily substances in practical use, which in turn impairs the liquid repellency. On the other hand, studies have demonstrated that aqueous nanofibrillar gels are unexpectedly robust, allowing the formation of highly porous aerogels by direct water removal through freeze-drying; they are flexible, unlike most aerogels, which suffer from brittleness, and they provide flexible, hierarchically porous templates for functionalities such as electrical conductivity. No crosslinking, solvent exchange, or supercritical drying is required to suppress collapse during preparation, unlike in typical aerogel processing. The aerogel used in the current work is an ultralightweight solid material composed of native cellulose nanofibers. The native cellulose nanofibers are cleaved from the self-assembled hierarchy of macroscopic cellulose fibers. They have become highly topical, as they are proposed to show extraordinary mechanical properties due to their parallel and strongly hydrogen-bonded polysaccharide chains. We demonstrate that, when coated by the sol-gel method, the superoleophobic nanocellulose aerogel is capable of supporting a weight nearly three orders of magnitude larger than its own. The load support is achieved by surface tension acting at different length scales: at the macroscopic scale along the perimeter of the carrier, and at the microscopic scale along the cellulose nanofibers, preventing soaking of the aerogel and thus ensuring buoyancy. Superoleophobic nanocellulose aerogels have recently been achieved using unmodified cellulose nanofibers and using carboxymethylated, negatively charged cellulose nanofibers as starting materials. In this work, the aerogels made from unmodified cellulose nanofibers were subsequently treated with fluorosilanes. To complement previous work on superoleophobic aerogels, we demonstrate their application as cargo carriers on oil, their gas permeability, plastrons, and drag reduction, and we show that fluorinated nanocellulose aerogels are high-adhesion superoleophobic surfaces. We foresee applications including buoyant, gas-permeable, dirt-repellent coatings for miniature sensors and other devices floating on generic liquid surfaces.

Keywords: superoleophobic, nanocellulose, aerogel, sol-gel

Procedia PDF Downloads 321
222 Self-Organizing Maps for Exploration of Partially Observed Data and Imputation of Missing Values in the Context of the Manufacture of Aircraft Engines

Authors: Sara Rejeb, Catherine Duveau, Tabea Rebafka

Abstract:

To monitor the production process of turbofan aircraft engines, multiple measurements of various geometrical parameters are systematically recorded on manufactured parts. Engine parts are subject to extremely high standards as they can impact the performance of the engine. Therefore, it is essential to analyze these databases to better understand the influence of the different parameters on the engine's performance. Self-organizing maps are unsupervised neural networks which achieve two tasks simultaneously: they visualize high-dimensional data by projection onto a 2-dimensional map and provide clustering of the data. This technique has become very popular for data exploration since it provides easily interpretable results and a meaningful global view of the data. As such, self-organizing maps are usually applied to aircraft engine condition monitoring. As databases in this field are huge and complex, they naturally contain multiple missing entries for various reasons. The classical Kohonen algorithm to compute self-organizing maps is conceived for complete data only. A naive approach to deal with partially observed data consists in deleting items or variables with missing entries. However, this requires a sufficient number of complete individuals to be fairly representative of the population; otherwise, deletion leads to a considerable loss of information. Moreover, deletion can also induce bias in the analysis results. Alternatively, one can first apply a common imputation method to create a complete dataset and then apply the Kohonen algorithm. However, the choice of the imputation method may have a strong impact on the resulting self-organizing map. Our approach is to address simultaneously the two problems of computing a self-organizing map and imputing missing values, as these tasks are not independent. In this work, we propose an extension of self-organizing maps for partially observed data, referred to as missSOM. First, we introduce a criterion to be optimized, that aims at defining simultaneously the best self-organizing map and the best imputations for the missing entries. As such, missSOM is also an imputation method for missing values. To minimize the criterion, we propose an iterative algorithm that alternates the learning of a self-organizing map and the imputation of missing values. Moreover, we develop an accelerated version of the algorithm by entwining the iterations of the Kohonen algorithm with the updates of the imputed values. This method is efficiently implemented in R and will soon be released on CRAN. Compared to the standard Kohonen algorithm, it does not come with any additional cost in terms of computing time. Numerical experiments illustrate that missSOM performs well in terms of both clustering and imputation compared to the state of the art. In particular, it turns out that missSOM is robust to the missingness mechanism, which is in contrast to many imputation methods that are appropriate for only a single mechanism. This is an important property of missSOM as, in practice, the missingness mechanism is often unknown. An application to measurements on one type of part is also provided and shows the practical interest of missSOM.
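
A minimal sketch of the alternating idea described above (not the authors' missSOM package): one Kohonen pass on the currently completed data, followed by re-imputation of the missing entries from each observation's best-matching prototype. The grid size, learning schedules, and data below are placeholders.

```python
# Illustrative sketch only: alternating SOM learning and imputation of
# missing values on toy data; parameters and data are assumptions.
import numpy as np

rng = np.random.default_rng(1)

X = rng.normal(size=(200, 4))                 # toy complete data (for error checking)
mask = rng.random(X.shape) < 0.15             # True where a value is "missing"
X_obs = np.where(mask, np.nan, X)

col_means = np.nanmean(X_obs, axis=0)
X_imp = np.where(mask, col_means, X_obs)      # initial imputation by column means

grid = np.array([(i, j) for i in range(3) for j in range(3)], dtype=float)  # 3x3 map
W = rng.normal(size=(len(grid), X.shape[1]))  # prototype vectors

for epoch in range(30):
    lr = 0.5 * (1 - epoch / 30)                # decaying learning rate
    sigma = 1.5 * (1 - epoch / 30) + 0.3       # decaying neighbourhood width
    # (a) one pass of Kohonen updates on the currently completed data
    for x in X_imp:
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))
        dist2 = ((grid - grid[bmu]) ** 2).sum(axis=1)
        h = np.exp(-dist2 / (2 * sigma ** 2))[:, None]   # neighbourhood kernel
        W += lr * h * (x - W)
    # (b) re-impute missing entries from each observation's best-matching unit
    bmus = np.argmin(((X_imp[:, None, :] - W[None, :, :]) ** 2).sum(axis=2), axis=1)
    X_imp[mask] = W[bmus][mask]

print("mean absolute imputation error:", np.abs(X_imp[mask] - X[mask]).mean())
```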

Keywords: imputation method of missing data, partially observed data, robustness to missingness mechanism, self-organizing maps

Procedia PDF Downloads 128
221 Efficacy of Deep Learning for Below-Canopy Reconstruction of Satellite and Aerial Sensing Point Clouds through Fractal Tree Symmetry

Authors: Dhanuj M. Gandikota

Abstract:

Sensor-derived three-dimensional (3D) point clouds of trees are invaluable in remote sensing analysis for the accurate measurement of key structural metrics, bio-inventory values, spatial planning and visualization, and ecological modeling. Machine learning (ML) holds potential for addressing the restrictive tradeoffs in cost, spatial coverage, resolution, and information gain that exist in current point cloud sensing methods. Terrestrial laser scanning (TLS) remains the highest-fidelity source of both canopy and below-canopy structural features, but its use is limited in coverage and cost, requiring manual deployment to map large forested areas. While aerial laser scanning (ALS) remains a reliable avenue of active LIDAR remote sensing, ALS is also cost-restrictive in its deployment. Space-borne photogrammetry from high-resolution satellite constellations is an avenue of passive remote sensing with promising viability for the accurate construction of vegetation 3D point clouds. It provides both the lowest comparative cost and the largest spatial coverage across remote sensing methods. However, both space-borne photogrammetry and ALS have technical limitations in capturing valuable below-canopy point cloud data. Looking to minimize these tradeoffs, we explored deep learning (DL), a class of powerful ML algorithms that shows promise in recent research on 3D point cloud reconstruction and interpolation. Our research details the efficacy of applying these DL techniques to reconstruct accurate below-canopy point clouds from space-borne and aerial remote sensing through learned patterns of tree species fractal symmetry properties and the supplementation of locally sourced bio-inventory metrics. From our dataset, consisting of tree point clouds obtained from TLS, we deconstructed the point clouds of each tree into those that would be obtained through ALS and satellite photogrammetry of varying resolutions. We fed this ALS/satellite point cloud dataset, along with the simulated local bio-inventory metrics, into the DL point cloud reconstruction architectures to generate the full 3D tree point clouds (ground truth is given by the full TLS tree point clouds containing the below-canopy information). Point cloud reconstruction accuracy was validated both by measuring error against the original TLS point clouds and by the error in extracting key structural metrics, such as crown base height, diameter above root crown, and leaf/wood volume. The results additionally demonstrate the supplemental performance gain of using minimal locally sourced bio-inventory metric information as an input to ML systems to reach specified accuracy thresholds of tree point cloud reconstruction. This research provides insight into methods for the rapid, cost-effective, and accurate construction of below-canopy tree 3D point clouds, as well as the potential of ML and DL to learn complex, unmodeled patterns of fractal tree growth symmetry.
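
The abstract does not name the reconstruction error metric used; a common choice for comparing a reconstructed cloud against its TLS reference is the symmetric Chamfer distance, sketched below on toy data. The point clouds here are random stand-ins, not the study's trees.

```python
# Illustrative sketch only: scoring a reconstructed point cloud against a TLS
# reference with the symmetric Chamfer distance (an assumed metric).
import numpy as np

def chamfer_distance(reconstructed: np.ndarray, reference: np.ndarray) -> float:
    """Mean nearest-neighbour distance in both directions (N x 3 arrays)."""
    # Pairwise squared distances between the two clouds.
    d2 = ((reconstructed[:, None, :] - reference[None, :, :]) ** 2).sum(axis=2)
    forward = np.sqrt(d2.min(axis=1)).mean()    # reconstruction -> reference
    backward = np.sqrt(d2.min(axis=0)).mean()   # reference -> reconstruction
    return forward + backward

rng = np.random.default_rng(2)
tls_cloud = rng.normal(size=(500, 3))                          # stand-in for a TLS tree
predicted = tls_cloud + rng.normal(scale=0.05, size=tls_cloud.shape)
print("Chamfer distance:", chamfer_distance(predicted, tls_cloud))
```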

Keywords: deep learning, machine learning, satellite, photogrammetry, aerial laser scanning, terrestrial laser scanning, point cloud, fractal symmetry

Procedia PDF Downloads 68
220 Thermoluminescence Investigations of Tl2Ga2Se3S Layered Single Crystals

Authors: Serdar Delice, Mehmet Isik, Nizami Hasanli, Kadir Goksen

Abstract:

Researchers have devoted great interest to ternary and quaternary semiconductor compounds, especially with the advancement of optoelectronic technology. The quaternary compound Tl2Ga2Se3S, grown by the Bridgman method, carries the properties of the ternary thallium chalcogenide group of layered semiconductors. This compound can be formed from TlGaSe2 crystals by replacing one quarter of the selenium atoms with sulfur atoms. Although Tl2Ga2Se3S crystals are not intentionally doped, unintended defect types such as point defects, dislocations, and stacking faults can occur during crystal growth. These defects can cause undesirable problems in semiconductor materials, especially those produced for optoelectronic technology. Defects of various types in semiconductor devices such as LEDs and field-effect transistors may act as non-radiative or scattering centers in electron transport. Also, quick recombination of holes with electrons without any energy transfer between charge carriers can occur due to the existence of defects. Therefore, the characterization of defects may help researchers working in this field to produce high-quality devices. Thermoluminescence (TL) is an effective experimental method to determine the kinetic parameters of trap centers due to defects in crystals. In this method, the sample is illuminated at low temperature by light whose energy is greater than the band gap of the studied sample. Thus, charge carriers in the valence band are excited to delocalized bands, and the carriers excited into the conduction band are then trapped. The trapped charge carriers are released by heating the sample gradually, and these carriers then recombine with the opposite carriers at the recombination centers. In this way, luminescence is emitted from the sample. The emitted luminescence is converted to pulses by a computer-controlled experimental setup, and the TL spectrum is obtained. Defect characterization of Tl2Ga2Se3S single crystals has been performed by TL measurements at temperatures between 10 and 300 K with various heating rates ranging from 0.6 to 1.0 K/s. The TL signal due to luminescence from trap centers revealed one glow peak with a maximum at 36 K. Curve fitting and various heating rates methods were used for the analysis of the glow curve. An activation energy of 13 meV was found by applying the curve fitting method. This practical method also established that the trap center exhibits mixed (general) order kinetics. In addition, the various heating rates analysis gave a result (13 meV) compatible with curve fitting when the temperature lag effect was taken into consideration. Since the studied crystals were not intentionally doped, these centers are thought to originate from stacking faults, which are quite possible in Tl2Ga2Se3S due to the weakness of the van der Waals forces between the layers. The distribution of traps was also investigated using an experimental method, and a quasi-continuous distribution was attributed to the determined trap centers.
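
For the various heating rates analysis mentioned above, a first-order treatment uses the linearity of ln(Tm^2/beta) in 1/Tm, whose slope equals Ea/k. The sketch below illustrates that fit; the heating rates span the reported 0.6-1.0 K/s range, but the peak temperatures are hypothetical numbers, not the authors' measurements.

```python
# Illustrative sketch only: estimating the trap activation energy from the
# shift of the glow-peak maximum with heating rate (hypothetical data).
import numpy as np

k_B = 8.617e-5                                    # Boltzmann constant, eV/K
beta = np.array([0.6, 0.7, 0.8, 0.9, 1.0])        # heating rates, K/s
T_max = np.array([35.2, 35.6, 35.9, 36.2, 36.5])  # hypothetical peak maxima, K

y = np.log(T_max ** 2 / beta)
x = 1.0 / T_max
slope, intercept = np.polyfit(x, y, 1)            # linear fit: y = (Ea / k_B) * x + const

E_a = slope * k_B                                 # activation energy in eV
print(f"estimated activation energy: {E_a * 1000:.1f} meV")
```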

Keywords: chalcogenides, defects, thermoluminescence, trap centers

Procedia PDF Downloads 262
219 Superparamagnetic Sensor with Lateral Flow Immunoassays as Platforms for Biomarker Quantification

Authors: M. Salvador, J. C. Martinez-Garcia, A. Moyano, M. C. Blanco-Lopez, M. Rivas

Abstract:

Biosensors play a crucial role in the detection of molecules nowadays due to their advantages of user-friendliness, high selectivity, real-time analysis, and in-situ application. Among them, lateral flow immunoassays (LFIAs) stand out among point-of-care bioassay technologies for characteristics such as affordability, portability, and low cost. They have been widely used for the detection of a vast range of biomarkers, which include not only proteins but also nucleic acids and even whole cells. Although the LFIA has traditionally been a positive/negative test, tremendous efforts are being made to add quantification capability to the method based on the combination of suitable labels and a proper sensor. Bringing together the required characteristics mentioned before, our research group has developed a biosensor to detect biomolecules, in which superparamagnetic nanoparticles (SPNPs) together with LFIAs play the fundamental roles. SPNPs are detected by their interaction with a high-frequency current flowing on a printed micro track. By means of the instant and proportional variation of the impedance of this track caused by the presence of the SPNPs, a quantitative and rapid measurement of the number of particles can be obtained. This mode of detection requires no external magnetic field, which reduces the device complexity. On the other hand, the major limitation of LFIAs is that they are only qualitative or semiquantitative when traditional gold or latex nanoparticles are used as color labels. Moreover, the need for constant ambient conditions to obtain reproducible results, the exclusive detection of the nanoparticles on the surface of the membrane, and the short durability of the signal are drawbacks that can be advantageously overcome with the design of magnetically labeled LFIAs. The approach followed was to coat the SPNPs with a specific monoclonal antibody that targets the protein under consideration via chemical bonds. Then, a sandwich-type immunoassay was prepared by printing onto the nitrocellulose membrane strip a second antibody against a different epitope of the protein (test line) and an IgG antibody (control line). When the sample flows along the strip, the SPNP-labeled proteins are immobilized at the test line, which provides the magnetic signal described before. Preliminary results using this practical combination for the detection and quantification of prostate-specific antigen (PSA) show the validity and consistency of the technique in the clinical range, where a PSA level of 4.0 ng/mL is the established upper normal limit. Moreover, an LOD of 0.25 ng/mL was calculated with a confidence factor of 3 according to the IUPAC Gold Book definition. Its versatility has also been proven with the detection of other biomolecules such as troponin I (a cardiac injury biomarker) and histamine.
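
As a rough illustration of the IUPAC-style LOD estimate mentioned above (blank standard deviation multiplied by a factor of 3 and divided by the calibration slope), the sketch below uses hypothetical calibration data, not the authors' measurements.

```python
# Illustrative sketch only: limit of detection from blank noise and a linear
# calibration curve; all readings below are hypothetical placeholders.
import numpy as np

blank_signals = np.array([0.52, 0.49, 0.55, 0.51, 0.50, 0.53])   # repeated blank readings
concentrations = np.array([0.5, 1.0, 2.0, 4.0, 8.0])             # PSA standards, ng/mL
signals = np.array([1.1, 1.8, 3.2, 6.0, 11.7])                    # sensor response (a.u.)

slope, intercept = np.polyfit(concentrations, signals, 1)          # linear calibration
lod = 3 * blank_signals.std(ddof=1) / slope                        # factor k = 3 per IUPAC

print(f"calibration slope: {slope:.2f} a.u. per ng/mL")
print(f"limit of detection: {lod:.2f} ng/mL")
```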

Keywords: biosensor, lateral flow immunoassays, point-of-care devices, superparamagnetic nanoparticles

Procedia PDF Downloads 212
218 Exploration of Barriers and Challenges to Innovation Process for SMEs: Possibilities to Promote Cooperation Between Scientific and Business Institutions to Address it

Authors: Indre Brazauskaite, Vilte Auruskeviciene

Abstract:

The significance of the study is outlined through the current strategic management challenges faced by SMEs. First, innovation is recognized as a competitive advantage under ever-changing market conditions. Both practitioners and academics have a constant interest in capturing and capitalizing on business opportunities and mitigating foreseen risks. Secondly, it is recognized that an integrated system is needed for proper implementation of the innovation process, especially during the business incubation period, which is associated with relatively high risks of new product failure. Finally, the ability to successfully commercialize innovations leads to tangible business results that allow organizations to grow further. This is particularly relevant to SMEs due to limited structures, resources, or capabilities. Cooperation between scientific and business institutions could be a tool of mutual interest to observe, address, and further develop innovations during the incubation period, which is the most demanding and challenging part of the innovation process. This paper aims to address the following problems: i) identifying the major barriers and challenges in the innovation process that SMEs face, and ii) outlining how these barriers and challenges could be addressed through cooperation between scientific and business institutions. The basis for this research is a stage-by-stage integrated innovation management process, which exposes existing challenges and the aid needed in operational decision making. The stage-by-stage exploration of the innovation management process highlights relevant research opportunities with high practical relevance in the field. It is expected to reveal the possibility of business incubation programs that could combine the interests of both practice and academia. Methodology: a meta-analysis of the scientific literature to date on the innovation process. The research model is built on a combination of the stage-gate model and the lean six sigma approach. It outlines the following steps: i) pre-incubation (discovery and screening), ii) incubation (scoping, planning, development, and testing), and iii) post-incubation (launch and commercialization) periods. Empirical quantitative research is conducted to address the barriers and challenges in the innovation process that prevent SMEs' innovations from being successfully launched and commercialized, and to identify potential areas for cooperation between scientific and business institutions. The research sample, high-level decision makers representing trading SMEs, is approached with a structured survey based on the research model to investigate the challenges associated with each innovation management step. Expected findings: first, the current business challenges in the innovation process are revealed, outlining the strengths and weaknesses of innovation management practices and systems across SMEs. Secondly, the study will present material for relevant business case investigation for scholars, serving as future research directions, and will contribute to a better understanding of quality innovation management systems. Third, it will contribute to understanding the need for business incubation systems that enable mutual contributions from practice and academia, and it can increase the relevance and adoption of business research.

Keywords: cooperation between scientific and business institutions, innovation barriers and challenges, innovation measure, innovation process, SMEs

Procedia PDF Downloads 128
217 An Engineer-Oriented Life Cycle Assessment Tool for Building Carbon Footprint: The Building Carbon Footprint Evaluation System in Taiwan

Authors: Hsien-Te Lin

Abstract:

The purpose of this paper is to introduce the BCFES (building carbon footprint evaluation system), a LCA (life cycle assessment) tool developed by the Low Carbon Building Alliance (LCBA) in Taiwan. A qualified BCFES for the building industry should be able to evaluate the carbon footprint throughout all stages of the life cycle of building projects, including the production, transportation, and manufacturing of materials, construction, daily energy usage, renovation, and demolition. However, many existing BCFESs are too complicated and not very designer-friendly, creating obstacles to the implementation of carbon reduction policies. One of the greatest obstacles is the misapplication of the carbon footprint inventory standards PAS 2050 or ISO 14067, which are designed for mass-produced goods rather than building projects. When these product-oriented rules are applied to building projects, one must compute a tremendous amount of data for raw materials and the transportation of construction equipment throughout the construction period based on purchasing lists and construction logs. This verification method is cumbersome by nature and unhelpful for the promotion of low carbon design. With a view to providing an engineer-oriented BCFES with pre-diagnosis functions, a component input/output (I/O) database system and a scenario simulation method for building energy are proposed herein. Most existing BCFESs base their calculations on a product-oriented carbon database for raw materials like cement, steel, glass, and wood. However, data on raw materials are meaningless for the purpose of encouraging carbon reduction design without a feedback mechanism, because an engineering project is designed not from raw materials but from building components, such as flooring, walls, roofs, ceilings, roads, or cabinets. The LCBA Database has been compiled from existing carbon footprint databases for raw materials and architectural graphic standards. Project designers can now use the LCBA Database to conduct low carbon design in a much simpler and more efficient way. Daily energy usage throughout a building's life cycle, including air conditioning, lighting, and electric equipment, is very difficult for the building designer to predict. A good BCFES should provide a simplified and designer-friendly method to overcome this obstacle in predicting energy consumption. In this paper, the author has developed a simplified tool, the dynamic Energy Use Intensity (EUI) method, to predict energy usage accurately with simple multiplications and additions using EUI data and the designed efficiency levels for the building envelope, AC, lighting, and electrical equipment. Remarkably simple to use, it can help designers pre-diagnose hotspots in the building carbon footprint and further enhance low carbon designs. The BCFES-LCBA offers the advantages of an engineer-friendly component I/O database, simplified energy prediction methods, pre-diagnosis of carbon hotspots, and sensitivity to good low carbon designs, making it an increasingly popular carbon management tool in Taiwan. To date, about thirty projects have been awarded BCFES-LCBA certification, and the assessment has become mandatory in some cities.
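
The abstract does not give the exact form of the dynamic EUI calculation; the sketch below assumes a simple multiplicative form (end-use shares of a baseline EUI scaled by designed efficiency levels) purely to illustrate the "simple multiplications and additions" idea. Every number below is a hypothetical placeholder, not an LCBA value.

```python
# Illustrative sketch only: a hypothetical EUI-style estimate of use-stage
# energy and carbon; all baselines, shares, and factors are assumptions.
floor_area_m2 = 12_000                 # conditioned floor area
baseline_eui = 120.0                   # kWh per m2 per year for this building type (assumed)
service_life_years = 60                # assumed assessment period
emission_factor = 0.5                  # kg CO2e per kWh of electricity (assumed)

# Designed efficiency levels relative to the baseline (1.0 = baseline).
efficiency = {
    "envelope": 0.90,                  # a better envelope reduces AC demand
    "air_conditioning": 0.85,
    "lighting": 0.80,
    "equipment": 1.00,
}

# Assumed share of the baseline EUI attributable to each end use.
share = {"air_conditioning": 0.45, "lighting": 0.25, "equipment": 0.30}

annual_eui = (
    share["air_conditioning"] * baseline_eui * efficiency["envelope"] * efficiency["air_conditioning"]
    + share["lighting"] * baseline_eui * efficiency["lighting"]
    + share["equipment"] * baseline_eui * efficiency["equipment"]
)
lifetime_energy_kwh = annual_eui * floor_area_m2 * service_life_years
lifetime_carbon_t = lifetime_energy_kwh * emission_factor / 1000.0

print(f"annual EUI: {annual_eui:.1f} kWh/m2/yr")
print(f"use-stage carbon footprint: {lifetime_carbon_t:,.0f} t CO2e")
```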

Keywords: building carbon footprint, life cycle assessment, energy use intensity, building energy

Procedia PDF Downloads 121
216 Strength Properties of Ca-Based Alkali Activated Fly Ash System

Authors: Jung-Il Suh, Hong-Gun Park, Jae-Eun Oh

Abstract:

Recently, the use of long-span precast concrete (PC) construction has increased in modular construction such as storage buildings and parking facilities. When long-span PC members are used, their weight must be reduced in view of the crane lifting capacity and the member self-weight, and the use of structural lightweight concrete made with lightweight aggregate (LWA) can be considered. In lightweight concrete production, segregation and bleeding can occur due to the difference in specific gravity between cement (3.3) and lightweight aggregate (1.2-1.8), so reducing the weight of the binder is needed to prevent segregation between binder and aggregate. Lightweight precast concrete made with cementitious materials such as fly ash and ground granulated blast furnace slag (GGBFS), whose specific gravities are lower than that of cement, has also been studied as a substitute for cement. When only fly ash is used as a cementless binder, alkali activation of the fly ash is the most important chemical process, in which the original fly ash is dissolved by a strong alkaline medium during high-temperature steam curing. Because this curing condition is similar to the environment of precast member production, no additional process is needed. Conventional Na-based activators, generally used as strong alkali activators, have practical problems such as high pH toxicity and high manufacturing cost. Instead of Na-based alkali activators, calcium hydroxide [Ca(OH)2] and sodium carbonate [Na2CO3] might be used, because they have a lower pH and are less expensive. This study explored the microstructural aspects, strength, and permeability of the Ca(OH)2-Na2CO3-activated fly ash system using powder X-ray diffraction (XRD), thermogravimetry (TGA), and mercury intrusion porosimetry (MIP). On the basis of the microstructural analysis, the following conclusions are drawn. Increasing the Ca(OH)2/FA wt.% did not improve the compressive strength. Also, the Ca(OH)2/FA and Na2CO3/FA wt.% had little effect on the specific gravities in the saturated surface dry (SSD) and absolute dry (AD) conditions used to calculate water absorption. In particular, the binder is appropriate for structural lightweight concrete because the specific gravity of the hardened paste does not differ from that of the lightweight aggregate. The XRD and TGA/DTG results showed no considerable difference in the types and quantities of hydration products with respect to the w/b ratio, Ca(OH)2 wt.%, and Na2CO3 wt.%. In the case of a higher molar quantity of Ca(OH)2 relative to Na2CO3, the XRD peaks indicated unreacted Ca(OH)2, while no DTG peak appeared because of its small quantity; thus, the unreacted Ca(OH)2 is present in too small a quantity to affect the mechanical performance. According to the MIP results, the pore volume related to capillary pores depends on the w/b ratio. At the same w/b ratio, the quantities of Ca(OH)2 and Na2CO3 influence the pore size distribution more than the total porosity. While the average pore size decreased as the Na2CO3/FA wt.% increased, it increased beyond 20 nm as the Ca(OH)2/FA wt.% increased; pore size is inversely related to mechanical properties such as compressive strength and water permeability.
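
As a small illustration of how water absorption is obtained from the SSD and AD measurements mentioned above, the sketch below applies the standard mass-ratio relation to hypothetical specimen masses (not values from this study).

```python
# Illustrative sketch only: water absorption from saturated-surface-dry (SSD)
# and absolute-dry (AD) masses of a hardened paste specimen (hypothetical data).
mass_ssd_g = 152.4     # saturated surface dry mass
mass_ad_g = 138.7      # absolute (oven) dry mass of the same specimen

water_absorption_pct = (mass_ssd_g - mass_ad_g) / mass_ad_g * 100.0
print(f"water absorption: {water_absorption_pct:.1f} %")
```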

Keywords: Ca(OH)2, compressive strength, microstructure, fly ash, Na2CO3, water absorption

Procedia PDF Downloads 200
215 Cyber-Med: Practical Detection Methodology of Cyber-Attacks Aimed at Medical Devices Eco-Systems

Authors: Nir Nissim, Erez Shalom, Tomer Lancewiki, Yuval Elovici, Yuval Shahar

Abstract:

Background: A medical device (MD) is an instrument, machine, implant, or similar device that includes a component intended for the diagnosis, cure, treatment, or prevention of disease in humans or animals. Medical devices play increasingly important roles in health services eco-systems, including (1) patient diagnostics and monitoring; (2) medical treatment and surgery; and (3) patient life support devices and stabilizers. MDs are part of the medical device eco-system and are connected to the network, sending vital information to the internal medical information systems of the medical centers that manage this data. Wireless components (e.g., Wi-Fi) are often embedded within medical devices, enabling doctors and technicians to control and configure them remotely. All these functionalities, roles, and uses of MDs make them attractive targets of cyber-attacks launched for many malicious goals; this trend is likely to increase significantly over the next several years, with increased awareness regarding MD vulnerabilities, the enhancement of potential attackers' skills, and the expanded use of medical devices. Significance: We propose to develop and implement Cyber-Med, a unique collaborative project of Ben-Gurion University of the Negev and the Clalit Health Services Health Maintenance Organization. Cyber-Med focuses on the development of a comprehensive detection framework that relies on a critical attack repository that we aim to create. Cyber-Med will allow researchers and companies to better understand the vulnerabilities and attacks associated with medical devices, as well as provide a comprehensive platform for developing detection solutions. Methodology: The Cyber-Med detection framework will consist of two independent but complementary detection approaches: one for known attacks and the other for unknown attacks. These modules incorporate novel ideas and algorithms inspired by our team's domains of expertise, including cyber security, biomedical informatics, advanced machine learning, and temporal data mining techniques. The establishment and maintenance of Cyber-Med's up-to-date attack repository will strengthen the capabilities of Cyber-Med's detection framework. Major findings: Based on our initial survey, we have already found more than 15 types of vulnerabilities and possible attacks aimed at MDs and their eco-system. Many of these attacks target individual patients who use devices such as pacemakers and insulin pumps. In addition, such attacks are also aimed at MDs that are widely used by medical centers, such as MRIs, CTs, and dialysis engines; the information systems that store patient information; protocols such as DICOM; standards such as HL7; and medical information systems such as PACS. However, current detection tools, techniques, and solutions generally fail to detect both the known and unknown attacks launched against MDs. Very little research has been conducted to protect these devices from cyber-attacks, since most of the development and engineering efforts are aimed at the devices' core medical functionality, their contribution to patients' healthcare, and the business aspects associated with the medical device.
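
As a rough illustration of the two complementary modules described above, the sketch below combines simple signature rules for known attacks with an anomaly detector trained on normal device traffic for unknown ones. The traffic features, the signature rule, and the choice of IsolationForest are assumptions for illustration, not the Cyber-Med design.

```python
# Illustrative sketch only: toy known-attack signatures plus an anomaly
# detector for unknown attacks on medical-device network traffic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(3)

# Toy per-connection features: [packets per second, mean payload bytes, distinct ports]
normal_traffic = rng.normal(loc=[40, 300, 2], scale=[5, 40, 0.5], size=(500, 3))
new_traffic = np.vstack([rng.normal([40, 300, 2], [5, 40, 0.5], (10, 3)),
                         [[400, 20, 35]]])           # last row: suspicious burst

known_signatures = {
    "dicom_flood": lambda f: f[0] > 300 and f[2] > 20,   # hypothetical rule
}

def match_signature(features):
    # Module 1: known-attack signatures.
    for name, rule in known_signatures.items():
        if rule(features):
            return f"known attack: {name}"
    return None

# Module 2: unknown-attack detection trained on normal device traffic only.
anomaly_model = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

for row in new_traffic:
    verdict = match_signature(row) or (
        "anomaly" if anomaly_model.predict([row])[0] == -1 else "normal")
    print(verdict)
```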

Keywords: medical device, cyber security, attack, detection, machine learning

Procedia PDF Downloads 338
214 The Digital Transformation of Life Insurance Sales in Iran With the Emergence of Personal Financial Planning Robots: Opportunities and Challenges

Authors: Pedram Saadati, Zahra Nazari

Abstract:

Anticipating and identifying the future opportunities and challenges that industry players face with the emergence of new personal financial planning knowledge and technologies, and providing practical solutions, are among the goals of this research. For this purpose, a futures research tool based on the opinions of the main players in the insurance industry has been used. The research method consisted of four stages: 1) a survey of the specialist life insurance salesforce to identify the variables; 2) ranking of the variables by selected experts through a researcher-made questionnaire; 3) an expert panel aimed at understanding the mutual effects of the variables; and 4) statistical analysis of the cross-impact matrix in MICMAC software. The integrated analysis of the influencing variables was done with the structural analysis method, one of the efficient and innovative methods of futures research. A list of opportunities and challenges was identified through a survey of best-selling life insurance representatives who were selected by snowball sampling. To prioritize and identify the most important issues, all the issues raised were sent, via a researcher-made questionnaire, to experts selected through theoretical sampling. The respondents scored the importance of 36 variables so that the opportunity and challenge variables could be prioritized. Eight of the variables identified in the first stage were removed by the experts, leaving 28 variables for the third stage; to facilitate the examination, these were divided into six categories: organization and management (11 variables), marketing and sales (7), social and cultural (6), technological (2), rebranding (1), and insurance (1). The reliability of the researcher-made questionnaire was confirmed with a Cronbach's alpha of 0.96. In the third stage, a panel of five insurance industry experts was formed, and the consensus of their opinions about the factors' influence on each other and the ranking of the variables was entered into the matrix. The matrix included the interrelationships of the 28 variables, which were investigated using the structural analysis method. Analysis of the matrix in MICMAC software indicates that "correct training in the use of the software, the weakness of insurance companies' technology in personalizing products, using the customer-equipping approach, and honesty in declaring that a customer does not need insurance" are the most influential challenges, while "the salesforce-equipping approach, product personalization based on customer needs assessment, the customer's pleasant experience of being advised by consulting robots, business improvement of the insurance company due to the use of these tools, increased efficiency of the issuance process, and optimal customer purchase" were identified as the most influential opportunities.
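
As a small illustration of the cross-impact (MICMAC-style) computation described above, the sketch below derives each variable's influence (row sum) and dependence (column sum) from a toy matrix and classifies it into the usual quadrants. The variables and scores are hypothetical, not the study's 28-variable matrix.

```python
# Illustrative sketch only: influence/dependence from a toy cross-impact matrix.
import numpy as np

variables = ["training", "personalization", "customer equipping", "robot advice UX"]
# Entry [i, j] scores how strongly variable i influences variable j (0-3).
impact = np.array([
    [0, 2, 3, 1],
    [1, 0, 2, 3],
    [2, 1, 0, 2],
    [0, 3, 1, 0],
])

influence = impact.sum(axis=1)    # driving power of each variable
dependence = impact.sum(axis=0)   # how much each variable is driven by the others

for name, drv, dep in zip(variables, influence, dependence):
    quadrant = ("key/linkage" if drv >= influence.mean() and dep >= dependence.mean()
                else "driver" if drv >= influence.mean()
                else "dependent" if dep >= dependence.mean()
                else "autonomous")
    print(f"{name:>20}: influence={drv}, dependence={dep}, quadrant={quadrant}")
```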

Keywords: personal financial planning, wealth management, advisor robots, life insurance, digital transformation

Procedia PDF Downloads 22
213 Development and Evaluation of Economical Self-cleaning Cement

Authors: Anil Saini, Jatinder Kumar Ratan

Abstract:

Nowadays, a key issue for the scientific community is to devise innovative technologies for the sustainable control of urban pollution. In cities, a large surface area of masonry structures, buildings, and pavements is exposed to the open environment; this surface could be utilized for the control of air pollution if it were built from photocatalytically active cement-based construction materials such as concrete, mortars, paints, and blocks. Photocatalytically active cement is formulated by incorporating a photocatalyst in the cement matrix, and such cement is generally known as self-cleaning cement. In the literature, self-cleaning cement has been synthesized by incorporating nanosized TiO₂ (n-TiO₂) as a photocatalyst in the formulation of the cement. However, the utilization of n-TiO₂ in the formulation of self-cleaning cement has the drawbacks of nano-toxicity, higher cost, and agglomeration as far as commercial production and applications are concerned. The use of microsized TiO₂ (m-TiO₂) in place of n-TiO₂ for the commercial manufacture of self-cleaning cement could avoid the above-mentioned problems. However, m-TiO₂ is less photocatalytically active compared to n-TiO₂ due to its smaller surface area, higher band gap, and increased recombination rate. As such, the use of m-TiO₂ in the formulation of self-cleaning cement may lead to a reduction in photocatalytic activity, thus reducing the self-cleaning, depolluting, and antimicrobial abilities of the resultant cement material. Improving the photoactivity of m-TiO₂-based self-cleaning cement is therefore the key issue for its practical application. The current work proposes the use of surface-fluorinated m-TiO₂ for the formulation of self-cleaning cement to enhance its photocatalytic activity. Calcined dolomite, a construction material, has also been utilized as a co-adsorbent along with the surface-fluorinated m-TiO₂ in the formulation of the self-cleaning cement to enhance the photocatalytic performance. The surface-fluorinated m-TiO₂, calcined dolomite, and the formulated self-cleaning cement were characterized using diffuse reflectance spectroscopy (DRS), X-ray diffraction analysis (XRD), field emission scanning electron microscopy (FE-SEM), energy dispersive X-ray spectroscopy (EDS), X-ray photoelectron spectroscopy (XPS), scanning electron microscopy (SEM), BET (Brunauer-Emmett-Teller) surface area analysis, and energy dispersive X-ray fluorescence spectrometry (EDXRF). The self-cleaning property of the as-prepared self-cleaning cement was evaluated using the methylene blue (MB) test. The depolluting ability of the formulated self-cleaning cement was assessed through a continuous NOx removal test. The antimicrobial activity of the self-cleaning cement was appraised using the zone of inhibition method. The as-prepared self-cleaning cement, obtained by uniform mixing of 87% clinker, 10% calcined dolomite, and 3% surface-fluorinated m-TiO₂, showed a remarkable self-cleaning property by providing 53.9% degradation of the coated MB dye. The self-cleaning cement also showed a noteworthy depolluting ability by removing 5.5% of NOx from the air. The inactivation of B. subtilis bacteria in the presence of light confirmed the significant antimicrobial property of the formulated self-cleaning cement. The self-cleaning, depolluting, and antimicrobial results are attributed to the synergetic effect of the surface-fluorinated m-TiO₂ and calcined dolomite in the cement matrix. The present study opens a route for further research on the facile and economical formulation of self-cleaning cement.

Keywords: microsized-titanium dioxide (m-TiO₂), self-cleaning cement, photocatalysis, surface-fluorination

Procedia PDF Downloads 139
212 Radiation Stability of Structural Steel in the Presence of Hydrogen

Authors: E. A. Krasikov

Abstract:

As the service life of an operating nuclear power plant (NPP) increases, the potential for misunderstanding the degradation of aging components must receive more attention. Integrity assurance analysis contributes to the effective maintenance of adequate plant safety margins. In essence, the reactor pressure vessel (RPV) is the key structural component determining the NPP lifetime. Environmentally induced cracking in the stainless steel corrosion-preventing cladding of RPVs has been recognized as one of the technical problems in the maintenance and development of light-water reactors. Extensive cracking leading to failure of the cladding was found after 13,000 net hours of operation in the JPDR (Japan Power Demonstration Reactor). Some of the cracks reached the base metal and further penetrated into the RPV in the form of localized corrosion. Failures of reactor internal components in both boiling water reactors and pressurized water reactors have increased after the accumulation of relatively high neutron fluences (5×10²⁰ cm⁻², E > 0.5 MeV). Therefore, in the case of cladding failure, the problem arises of embrittlement of irradiated RPV steel by hydrogen (a corrosion product) because of exposure to the coolant. At present, when notable progress in plasma physics has been achieved, practical energy utilization from fusion reactors (FRs) is determined by the state of materials science problems. The latter include not only the routine problems of nuclear engineering but also a number of entirely new problems connected with extreme operating conditions of materials: irradiation environment, hydrogenation, thermocycling, etc. The limited data available suggest that the combined effect of these factors is more severe than any one of them alone. To clarify the possible influence of in-service synergistic phenomena on the properties of FR structural materials, we have studied the interaction of hydrogen with irradiated steel, including alternating hydrogenation and heat treatment (annealing). Available information indicates that the life of the first wall could be extended by means of periodic in-place annealing. The effects of neutron fluence and irradiation temperature on steel/hydrogen interactions (adsorption, desorption, diffusion, mechanical properties at different loading velocities, post-irradiation annealing) were studied. The experiments clearly reveal that the higher the neutron fluence and the lower the irradiation temperature, the more hydrogen-radiation defects occur, with corresponding effects on the steel's mechanical properties. Hydrogen accumulation analyses and thermal desorption investigations were performed to provide evidence of hydrogen trapping at irradiation defects. Extremely high susceptibility to hydrogen embrittlement was observed in specimens that had been irradiated at relatively low temperature; however, the susceptibility decreases with increasing irradiation temperature. To evaluate methods for RPV residual lifetime assessment and prediction, more work should be done on the irradiated metal-hydrogen interaction in order to monitor the status of irradiated materials more reliably.

Keywords: hydrogen, radiation, stability, structural steel

Procedia PDF Downloads 241