Search results for: delayed alternate form
5190 Integrating Inference, Simulation and Deduction in Molecular Domain Analysis and Synthesis with Particular Attention to Drug Discovery
Authors: Diego Liberati
Abstract:
Standard molecular modeling is traditionally done through the Schrödinger equation, with the help of powerful tools that manage it atom by atom and often require high-performance computing. Here, a full portfolio of new tools, combining statistical inference in the so-called eXplainable Artificial Intelligence framework (in the form of machine learning of understandable rules) with the more traditional modeling, simulation, and control theory of mixed logical-dynamical hybrid processes, is offered as a quite general-purpose approach, even if illustrated on a popular set of chemical-physics problems.
Keywords: understandable rules ML, k-means, PCA, PieceWise Affine Auto Regression with eXogenous input
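The keywords above name PCA and k-means as the inference side of the portfolio. As a minimal, illustrative sketch (the descriptor matrix, the number of clusters, and all numeric values below are hypothetical, not taken from the paper), dimensionality reduction followed by clustering might look like:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical molecular-descriptor matrix: 60 molecules x 8 descriptors,
# drawn from two synthetic groups (illustrative data only).
X = np.vstack([rng.normal(0.0, 1.0, size=(30, 8)),
               rng.normal(4.0, 1.0, size=(30, 8))])

# PCA via SVD of the mean-centered data; keep the first two components.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt[:2].T

# Plain k-means with k = 2, seeded with the first and last points so
# neither cluster starts empty.
centers = scores[[0, -1]].copy()
for _ in range(50):
    dists = np.linalg.norm(scores[:, None, :] - centers[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    new_centers = np.array([scores[labels == j].mean(axis=0) for j in range(2)])
    if np.allclose(new_centers, centers):
        break
    centers = new_centers

# Cluster boundaries on the principal components can then be read off
# as understandable "if PC1 < t then cluster 0" style rules.
print(np.bincount(labels))
```

The PCA step is what keeps the downstream rules understandable: thresholds are stated on two components rather than eight raw descriptors.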
Procedia PDF Downloads 28
5189 The Need for Automation in the Domestic Food Processing Sector and its Impact
Authors: Shantam Gupta
Abstract:
The objective of this study is to address the critical need for automation in the domestic food processing sector and to study its impact. Food is one of the most basic physiological needs, essential for the survival of a living being. Some organisms have the capacity to prepare their own food (like most plants) and are hence designated primary food producers; those who depend on these primary producers for food form the primary consumers (herbivores). Organisms that feed on the primary consumers are the secondary consumers (carnivores). A third class, the tertiary or apex consumers, feeds on both primary and secondary consumers. Humans are among the apex predators and are generally at the top of the food chain. Yet a closer look at the food habits of the modern human, i.e., Homo sapiens, reveals a dependence on other individuals for the preparation of food. The old notion of eating raw food is long gone, and food processing has become deeply entrenched in the lives of modern humans. This has increased the dependence on other individuals to 'process' food before it can actually be consumed, and has led to a further shift in humans' position in the consumer classification of the food chain. The effects of this shift are systematically investigated in this paper. The processing of food has a direct impact on the economy of the individual (consumer). Moreover, most individuals depend on others for the preparation of their food. This dependency establishes a vital link in the food web which, when altered, can adversely affect the food web and have dire consequences for the health of the individual. This study investigates the challenges arising from this dependency and the impact of food processing on the economy of the individual.
A comparison of industrial food processing and processing on domestic platforms (households and restaurants) is made to give a picture of the present state of automation in the food processing sector. Considerable time and energy are also consumed in processing food at home for consumption, and the high frequency of meals (more than two a day) makes it even more laborious. Through this study, a pressing need for the development of an automatic cooking machine is identified, with the mission of reducing the interdependency and human effort required for the preparation of food (by automating the food preparation process) and making individuals more self-reliant. The impact of developing this product is also discussed in depth. Assumption used: the individuals who process food also consume the food that they produce (they are also termed 'independent' or 'self-reliant' modern human beings).
Keywords: automation, food processing, impact on economy, processing individual
Procedia PDF Downloads 468
5188 Impulsivity Leads to Compromise Effect
Authors: Sana Maidullah, Ankita Sharma
Abstract:
The present study takes a naturalistic decision-making approach to examine the role of personality in information processing in consumer decision making. In the technological era, most information arrives as HTML or similar markup via the internet, and processing it can be ambiguous, laborious, and painful. The present study explores the role of impulsivity in creating extreme effects in consumer decision making. Specifically, it examines the role of impulsivity in the extreme effects, i.e., extremeness avoidance (the compromise effect) and extremeness seeking, and the role of the demographic variables age and gender in the relation between impulsivity and these effects. The study was conducted with the help of a questionnaire and two experiments. The experiments were designed as two shopping websites with two product types: hotel choice and mobile phone choice. Both experimental interfaces were created with the XAMPP software; the front end used HTML, CSS, and JavaScript, and the back end used PHP and MySQL. The mobile experiment was designed to measure the extreme effect, and the hotel experiment to measure the extreme effect together with attribute alignability. To observe possible combined effects of individual differences and context effects, the price and the numbers of alignable and non-alignable attributes were manipulated. The study was conducted on 100 undergraduate and postgraduate engineering students aged 18-35. Familiarity with and level of use of the internet and shopping websites were assessed and controlled for in the analysis. The analysis was done using t-tests, ANOVA, and regression. The results indicated that impulsivity leads to the compromise effect and, at the same time, strengthens the relationship between attribute alignability among choices and the compromise effect.
The demographic variables were found to play a significant role in this relationship. The subcomponents of impulsivity significantly influenced the compromise effect, but cognitive impulsivity was significant only for women and motor impulsivity only for men. Impulsivity was significantly positively predicted by age, though there were no significant gender differences in impulsivity. The results clearly indicate the importance of individual factors in decision making. The present study, with precise and direct results, offers significant guidance for market analysts and business providers.
Keywords: impulsivity, extreme effect, personality, alignability, consumer decision making
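As an illustration of the kind of group comparison and regression reported above, the following sketch runs Welch's t-test and a simple linear fit on simulated data (the group means, sample sizes, and impulsivity scores are invented for the example, not the study's measurements):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated "share of compromise (middle) option" per participant for a
# low- and a high-impulsivity group; all numbers are invented for the
# illustration, not the study's data.
low_imp = rng.normal(0.40, 0.08, size=50)
high_imp = rng.normal(0.52, 0.08, size=50)

# Welch's two-sample t statistic for the group comparison.
m1, m2 = low_imp.mean(), high_imp.mean()
v1, v2 = low_imp.var(ddof=1), high_imp.var(ddof=1)
n1, n2 = len(low_imp), len(high_imp)
t_stat = (m2 - m1) / np.sqrt(v1 / n1 + v2 / n2)

# Simple linear regression of compromise share on an impulsivity score,
# mirroring the regression analysis mentioned above.
imp_score = np.concatenate([rng.normal(30, 5, size=50),
                            rng.normal(45, 5, size=50)])
share = np.concatenate([low_imp, high_imp])
slope, intercept = np.polyfit(imp_score, share, 1)

print(round(t_stat, 1), round(slope, 4))
```

A t statistic well above ~2 for these sample sizes corresponds to a significant group difference; the positive slope mirrors the reported impulsivity-compromise relationship.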
Procedia PDF Downloads 188
5187 Effect of Modification on the Properties of Blighia sapida (Ackee) Seed Starch
Authors: Olufunmilola A. Abiodun, Adegbola O. Dauda, Ayobami Ojo, Samson A. Oyeyinka
Abstract:
Blighia sapida (ackee) seed is a neglected and under-utilised crop. The fruit is cultivated for the aril, which is used as a meat substitute in soup, while the seed is discarded. The seed is toxic due to the presence of hypoglycin, which causes vomiting and death. The seed is shiny black and bigger than typical legume seeds, and it contains a high starch content that could serve as a cheap source of starch, thereby reducing wastage of the crop during its season. Native starches have limitations in their use; modification has therefore been reported to improve the functional properties of starches. This work determined the effect of modification on the properties of Blighia sapida seed starch. Blighia sapida seed was dehulled manually and milled, and the starch was extracted using a standard method. The starch was subjected to four modification methods (acid, alkaline, oxidation, and acetylation). The morphological structure, form factor, granule size, amylose content, swelling power, hypoglycin content, and pasting properties of the starches were determined. Light microscopy showed that the seed starch granules were oval, round, elliptical, dome-shaped, and also irregular in shape. The form factors of the starch ranged from 0.32 to 0.64. Blighia sapida seed starches had small granule sizes, ranging from 2 to 6 µm. Acid-modified starch had the highest amylose content (24.83%) and was significantly different (p < 0.05) from the other starches. Blighia sapida seed starches showed a progressive increase in swelling power with increasing temperature for the native, acidified, alkalized, oxidized, and acetylated starches, but swelling decreased with increasing temperature for the pregelatinized starch. Hypoglycin A ranged from 3.89 to 5.74 mg/100 g, with pregelatinized starch having the lowest value and alkalized starch the highest. Hypoglycin B ranged from 7.17 to 8.47 mg/100 g.
Alkali-treated starch had the highest peak viscosity (3973 cP), which was not significantly different (p > 0.05) from that of the native starch. Alkali-treated starch was also significantly different (p < 0.05) from the other starches in holding strength, while acetylated starch had the highest breakdown viscosity (1161.50 cP). Native starch was significantly different (p < 0.05) from the other starches in final and setback viscosities. The properties of the modified Blighia sapida starches show that the seed could be used as a source of starch in the food and other non-food industries, and the toxic compound found in the starch was very low compared to the lethal dosage.
Keywords: Blighia sapida seed, modification, starch, hypoglycin
Procedia PDF Downloads 236
5186 Habits for Teenagers to Remain Unruffled by Stress When They Enter the Workforce
Authors: Sandeep Nath
Abstract:
There are good stresses and bad stresses. To tell the difference, recognize early signs of stress, and label stress conditions correctly, we need to understand stress triggers and the mechanism of stress as it arises. By understanding this in our teenage years, we can be prepared to prevent harmful stress from escalating and ruining physical, mental, and emotional health. We can also prepare others and peers to be stress-free. This understanding is available, in a form closest to our natural being, in ancient oriental wisdom, and is brought together as actionable habits in the movement called RENEWALism. The constructs of RENEWALism habits are detailed in this paper, and case studies are presented of teenagers who have been equipped with both the capability and the capacity to handle their situations and environments independently.
Keywords: habits, renewalism, stress, teenagers
Procedia PDF Downloads 77
5185 Collaborative Management Approach for Logistics Flow Management of Cuban Medicine Supply Chain
Authors: Ana Julia Acevedo Urquiaga, Jose A. Acevedo Suarez, Ana Julia Urquiaga Rodriguez, Neyfe Sablon Cossio
Abstract:
Despite the progress made in the logistics and supply chain fields, the development of business models that use information efficiently to facilitate integrated management of logistics flows between partners is indispensable. Collaborative management is an important tool for materializing cooperation between companies, as a way to achieve supply chain efficiency and effectiveness. The first phase of this research was a comprehensive analysis of collaborative planning in Cuban companies. It is evident that they have difficulties in supply chain planning: production, supply, and replenishment planning are independent tasks, as are logistics and distribution operations. Large inventories generate serious financial and organizational problems for entities, demanding increasing levels of working capital that cannot be financed. Problems were also found in the efficient application of information and communication technology to business management. The general objective of this work is to develop a methodology that allows the coordinated deployment of a planning and control system for the medicine logistics system in Cuba. To achieve this objective, several supply chain coordination mechanisms, mathematical programming models, and other management techniques were analyzed against the requirements of collaborative logistics management in Cuba. One of the findings is the practical and theoretical inadequacy of the studied models for the current situation of Cuban logistics systems management. To contribute to the tactical-operative management of logistics, the Collaborative Logistics Flow Management Model (CLFMM) is proposed as a tool for balancing cycles, capacities, and inventories, always meeting final customers' demands in correspondence with the service level they expect. At the center of the CLFMM is the supply chain planning and control system as a single information system, which acts on the process network.
The development of the model is based on the empirical methods of analysis-synthesis and case studies. Another finding is the demonstration that using a single information system to support supply chain logistics management allows the deadlines and quantities required in each process to be determined. This ensures that medications are always available to patients and that there are no failures that put the population's health at risk. The simulation of planning and control with the CLFMM for medicines such as dipyrone and chlordiazepoxide, during 5 months of 2017, made it possible to take measures to adjust the logistic flow, eliminate delayed processes, and avoid shortages of the medicines studied. As a result, logistics cycle efficiency can be increased to 91%, and inventory rotation would increase, resulting in a release of financial resources.
Keywords: collaborative management, medicine logistic system, supply chain planning, tactical-operative planning
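The 91% logistics cycle efficiency quoted above can be illustrated with a common definition of cycle efficiency: value-adding time divided by total cycle time. The stage names and durations below are hypothetical placeholders chosen only to produce a figure of that order, not the study's data:

```python
# Hypothetical stages of a medicine logistics cycle (durations in days).
stages = {
    "production":    {"days": 10, "value_adding": True},
    "quality_check": {"days": 2,  "value_adding": True},
    "warehousing":   {"days": 2,  "value_adding": False},
    "distribution":  {"days": 8,  "value_adding": True},
}

total_days = sum(s["days"] for s in stages.values())
value_adding_days = sum(s["days"] for s in stages.values() if s["value_adding"])

# Cycle efficiency: share of the total cycle spent adding value.
efficiency = value_adding_days / total_days
print(f"cycle efficiency: {efficiency:.0%}")  # → cycle efficiency: 91%
```

Under this definition, raising efficiency means shrinking non-value-adding time (waiting, excess warehousing) relative to the whole cycle, which is exactly what the CLFMM's balancing of cycles and inventories targets.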
Procedia PDF Downloads 174
5184 Instant Data-Driven Robotics Fabrication of Light-Transmitting Ceramics: A Responsive Computational Modeling Workflow
Authors: Shunyi Yang, Jingjing Yan, Siyu Dong, Xiangguo Cui
Abstract:
Current architectural façade design practices incorporate various daylighting and solar radiation analysis methods, which emphasize the impact of geometry on façade design. There is scope to extend this knowledge to methods that address material translucency, porosity, and form, and to achieve these conditions through adaptive robotic manufacturing approaches that exploit material dynamics within the design and alleviate fabrication waste from molds, ultimately accelerating the autonomous manufacturing system. Beyond analyzing solar radiation in building façade design, how lighting effects can be precisely controlled by engaging instant, real-time, data-driven robot control and manipulating material properties remains an open research area. Ceramics offer a wide range of transmittance and deformation potentials for robotic control, grounded in research on their material properties. This paper presents a semi-autonomous system that engages real-time data-driven robotics control, hardware kit design, environmental building studies, human interaction, and exploratory research and experiments. Our objectives are to investigate the relationship between the physio-material properties of different clay bodies or ceramics and their transmittance; to explore a feedback system of instant lighting data in robotic fabrication to achieve precise lighting effects; and to design a suitable end effector and robot behaviors for the different stages of deformation. We experiment with architectural clay, as a façade material that is potentially translucent at a certain stage and can respond to light. Studying the relationship between form, material properties, and porosity can help create different interior and exterior light effects and provide façade solutions for specific architectural functions.
The key idea is to maximize the utilization of in-progress robotic fabrication and ceramic materiality to create a highly integrated autonomous system for lighting façade design and manufacture.
Keywords: light transmittance, data-driven fabrication, computational design, computer vision, gamification for manufacturing
Procedia PDF Downloads 120
5183 Effect of Viscosity on Propagation of MHD Waves in Astrophysical Plasma
Authors: Alemayehu Mengesha, Solomon Belay
Abstract:
We determine the general dispersion relation for the propagation of magnetohydrodynamic (MHD) waves in an astrophysical plasma by considering the effect of viscosity with an anisotropic pressure tensor. Basic MHD equations have been derived and linearized by the perturbation method to develop the general form of the dispersion relation. Our result indicates that an astrophysical plasma with an anisotropic pressure tensor is stable in the presence of viscosity and a strong magnetic field at considerable wavelengths. We are currently carrying out the numerical analysis of this work.
Keywords: astrophysical, magnetic field, instability, MHD, wavelength, viscosity
Procedia PDF Downloads 342
5182 “Laws Drifting Off While Artificial Intelligence Is Thriving” – A Comparative Study with Special Reference to Computer Science and Information Technology
Authors: Amarendar Reddy Addula
Abstract:
Definition of Artificial Intelligence: artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems. Typical applications of AI include expert systems, natural language processing, speech recognition, and machine vision. Artificial Intelligence (AI) is an emerging medium for digital business, according to a new report by Gartner. The last 10 years represent an advance period in AI's development, spurred by a confluence of factors including the rise of big data, advancements in compute infrastructure, new machine learning techniques, the emergence of cloud computing, and the vibrant open-source ecosystem. The extension of AI to a broader set of use cases and users is gaining popularity because it improves AI's versatility, effectiveness, and adaptability. Edge AI will enable digital moments by employing AI for real-time analytics closer to data sources. Gartner predicts that by 2025, more than 50% of all data analysis by deep neural networks will occur at the edge, up from less than 10% in 2021. Responsible AI is an umbrella term for making suitable business and ethical choices when adopting AI. It requires considering business and societal value, risk, trust, transparency, fairness, bias mitigation, explainability, accountability, safety, privacy, and regulatory compliance. Responsible AI is ever more significant amidst growing regulatory oversight, consumer expectations, and rising sustainability goals. Generative AI is the use of AI to generate new artifacts and produce innovative products. To date, generative AI efforts have concentrated on creating media content such as photorealistic images of people and objects, but it can also be used for code generation, creating synthetic data, and designing pharmaceuticals and materials with specific properties. AI is the subject of a wide-ranging debate in which there is growing concern about its ethical and legal aspects.
Frequently, the two are conflated and confused despite being different issues and areas of knowledge. The ethical debate raises two main problems: the first, abstract, relates to the idea and content of ethics; the second, functional, concerns its relationship with the law. Both set up models of social behavior, but they are different in scope and nature. The juridical analysis is grounded in a non-formalistic scientific methodology. This means that it is essential to consider the nature and characteristics of AI as a primary step toward the description of its legal paradigm. In this regard, there are two main issues: the relationship between artificial and human intelligence, and the question of the unitary or diverse nature of AI. From that theoretical and practical base, the study of the legal system is carried out by examining its foundations, the governance model, and the regulatory bases. According to this analysis, throughout the work and in the conclusions, international law is identified as the primary legal framework for the regulation of AI.
Keywords: artificial intelligence, ethics & human rights issues, laws, international laws
Procedia PDF Downloads 93
5181 Faithful Extension of Constant Height and Constant Width between Finite Posets
Authors: Walied Hazim Sharif
Abstract:
The problem of faithful extension under the condition of keeping constant height h and constant width w, i.e., hw-inextensibility, seems more interesting than the brute extension of a finite poset (partially ordered set). We investigate some theorems on hw-inextensive and hw-extensive posets that can be used to formulate the faithful extension problem. A theorem in its general form for hw-inextensive posets is given to implement the presented theorems.
Keywords: faithful extension, poset, extension, inextension, height, width, hw-extensive, hw-inextensive
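For readers less familiar with the invariants held fixed above: the height h of a finite poset is the size of its longest chain, and the width w is the size of its largest antichain. A brute-force check on a small hypothetical poset (the four-element "diamond", not an example from the paper) makes this concrete:

```python
from itertools import combinations

# Elements of the "diamond" poset: a below b and c; b and c below d;
# b and c incomparable. The relation is stored as pairs (x, y) with x <= y.
elements = ["a", "b", "c", "d"]
le = {("a", "a"), ("b", "b"), ("c", "c"), ("d", "d"),
      ("a", "b"), ("a", "c"), ("a", "d"), ("b", "d"), ("c", "d")}

def is_chain(s):
    # Every pair of elements is comparable.
    return all((x, y) in le or (y, x) in le for x, y in combinations(s, 2))

def is_antichain(s):
    # No two distinct elements are comparable.
    return all((x, y) not in le and (y, x) not in le
               for x, y in combinations(s, 2))

subsets = [c for r in range(1, len(elements) + 1)
           for c in combinations(elements, r)]
height = max(len(s) for s in subsets if is_chain(s))      # longest chain
width = max(len(s) for s in subsets if is_antichain(s))   # largest antichain

print(height, width)  # → 3 2
```

An hw-faithful extension of this poset would have to add elements and relations without disturbing h = 3 or w = 2.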
Procedia PDF Downloads 258
5180 An Exploratory Study to Appraise the Current Challenges and Limitations Faced in Applying and Integrating the Historic Building Information Modelling Concept for the Management of Historic Buildings
Authors: Oluwatosin Adewale
Abstract:
The sustainability of built heritage has become a relevant issue in recent years due to the social and economic values associated with these buildings. Heritage buildings provide a means for the human perception of culture and represent a legacy of long-existing history; they define the local character of the social world and provide a vital connection to the past, with associated aesthetic and communal benefits. The identified values of heritage buildings have increased the importance of the conservation and lifecycle management of these buildings. Recent developments in digital design technology in engineering and the built environment have led to the adoption of Building Information Modelling (BIM) by the Architecture, Engineering, Construction, and Operations (AECO) industry. BIM provides a platform for the lifecycle management of a construction project through effective collaboration among stakeholders and the analysis of a digital information model. This growth in digital design technology has also made its way into the field of architectural heritage management in the form of Historic Building Information Modelling (HBIM), a reverse-engineering process for the digital documentation of heritage assets that draws upon information management processes similar to those of BIM. However, despite the several scientific and technical contributions made to the development of the HBIM process, it remains difficult to integrate at the most practical level of heritage asset management. The main objective identified within the scope of the study is to review the limitations and challenges faced by heritage management professionals in adopting an HBIM-based asset management procedure for historic building projects. This paper uses an exploratory study in the form of semi-structured interviews to investigate the research problem.
A purposive sample of heritage industry experts and professionals was selected to take part in semi-structured interviews to appraise some of the limitations and challenges they have faced with the integration of HBIM into their project workflows. The findings from this study will present the challenges and limitations faced in applying and integrating the HBIM concept for the management of historic buildings.
Keywords: building information modelling, built heritage, heritage asset management, historic building information modelling, lifecycle management
Procedia PDF Downloads 95
5179 In vitro and in vivo Anticancer Activity of Nanosize Zinc Oxide Composites of Doxorubicin
Authors: Emma R. Arakelova, Stepan G. Grigoryan, Flora G. Arsenyan, Nelli S. Babayan, Ruzanna M. Grigoryan, Natalia K. Sarkisyan
Abstract:
Novel nanosize zinc oxide composites of doxorubicin, obtained by depositing a 180 nm thick zinc oxide film on the drug surface using DC-magnetron sputtering of a zinc target, in the form of gels (PEO+Dox+ZnO and Starch+NaCMC+Dox+ZnO), were studied for drug delivery applications. Cancer specificity was revealed in both in vitro and in vivo models. The cytotoxicity of the test compounds was analyzed against human cancer (HeLa) and normal (MRC5) cell lines using the MTT colorimetric cell viability assay. IC50 values were determined and compared to reveal the cancer specificity of the test samples. The mechanism of the most active compound was investigated using flow cytometry analysis of DNA content after PI (propidium iodide) staining. Data were analyzed with Tree Star FlowJo software using the Dean-Jett-Fox cell cycle analysis module. The in vivo anticancer activity experiments were carried out on mice with inoculated Ehrlich ascites carcinoma, with intraperitoneal administration of doxorubicin and its zinc oxide compositions. It was shown that nanosize zinc oxide film deposition on the drug surface leads to selective anticancer activity of the composites at the cellular level, with selectivity indices (SI) ranging from 4 (Starch+NaCMC+Dox+ZnO) to 200 (PEO(gel)+Dox+ZnO), the latter higher than that of free Dox (SI = 56). A significant increase in in vivo antitumor activity (by a factor of 2-2.5) and a decrease in the general toxicity of the zinc oxide compositions of doxorubicin in the form of the above-mentioned gels, compared to free doxorubicin, were shown on the model of inoculated Ehrlich's ascites carcinoma. Mechanistic studies of the anticancer activity revealed a cytostatic effect based on a high level of DNA biosynthesis inhibition at considerably low concentrations of the zinc oxide compositions of doxorubicin.
The in vitro and in vivo results for the PEO+Dox+ZnO and Starch+NaCMC+Dox+ZnO composites confirm the high potential of nanosize zinc oxide composites as a vector delivery system for future application in cancer chemotherapy.
Keywords: anticancer activity, cancer specificity, doxorubicin, zinc oxide
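The selectivity indices quoted above (SI from 4 to 200, versus 56 for free Dox) follow from comparing IC50 values in normal and cancer cell lines. A minimal sketch, assuming the common definition SI = IC50(normal) / IC50(cancer), with hypothetical IC50 values (the abstract reports only the SI ratios, not these numbers):

```python
def selectivity_index(ic50_normal_um, ic50_cancer_um):
    """SI > 1 means the compound is more toxic to cancer cells than to
    normal cells; a higher SI means greater cancer selectivity."""
    return ic50_normal_um / ic50_cancer_um

# Hypothetical IC50 values (µM) chosen to reproduce an SI of 200, the
# figure reported for the PEO(gel)+Dox+ZnO composite.
si = selectivity_index(ic50_normal_um=50.0, ic50_cancer_um=0.25)
print(si)  # → 200.0
```

Read this way, the jump from SI = 56 (free Dox) to SI = 200 reflects a composite that spares normal cells far more per unit of anticancer potency.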
Procedia PDF Downloads 409
5178 Acetic Acid Adsorption and Decomposition on Pt(111): Comparisons to Ni(111)
Authors: Lotanna Ezeonu, Jason P. Robbins, Ziyu Tang, Xiaofang Yang, Bruce E. Koel, Simon G. Podkolzin
Abstract:
The interaction of organic molecules with metal surfaces is of interest in numerous technological applications, such as catalysis, bone replacement, and biosensors. Acetic acid is one of the main products of bio-oils produced from the pyrolysis of hemicellulosic feedstocks. However, the high oxygen content of these bio-oils makes them unsuitable for use as fuels. Hydrodeoxygenation is a proven technique for the catalytic deoxygenation of bio-oils. An understanding of the energetics and control of the bond-breaking sequences of biomass-derived oxygenates on metal surfaces will enable a guided optimization of existing catalysts and the development of more active and selective processes for biomass transformations to fuels. Such investigations have been carried out with the aid of ultrahigh vacuum and its concomitant techniques. The high catalytic activity of platinum in biomass-derived oxygenate transformations has sparked much interest. We herein exploit infrared reflection absorption spectroscopy (IRAS), temperature-programmed desorption (TPD), and density functional theory (DFT) to study the adsorption and decomposition of acetic acid on a Pt(111) surface, which was then compared with Ni(111), a model non-noble metal. We found that acetic acid adsorbs molecularly on the Pt(111) surface at 90 K, interacting through the lone pair of electrons of one oxygen atom. At 140 K, the molecular form is still predominant, with some dissociative adsorption (in the form of acetate and hydrogen). Annealing to 193 K led to complete dehydrogenation of the molecular acetic acid species, leaving adsorbed acetate. At 440 K, decomposition of the acetate species occurs via decarbonylation and decarboxylation, as evidenced by desorption peaks for H₂, CO, CO₂, and CHₓ fragments (x = 1, 2) in the TPD. The assignments of the experimental IR peaks were made by visualizing the DFT-calculated vibrational modes. The results showed that acetate adsorbs in a bridged bidentate (μ²η²(O,O)) configuration.
The coexistence of linear and bridge-bonded CO was also predicted by the DFT results. A similar molecular acid adsorption energy was predicted for Ni(111), whereas a significant difference was found for acetate adsorption.
Keywords: acetic acid, platinum, nickel, infrared absorption spectroscopy, temperature-programmed desorption, density functional theory
Procedia PDF Downloads 105
5177 Optimal Price Points in Differential Pricing
Authors: Katerina Kormusheva
Abstract:
Pricing plays a pivotal role in the marketing discipline, as it directly influences consumer perceptions, purchase decisions, and the overall market positioning of a product or service. This paper seeks to expand current knowledge in the area of discriminatory and differential pricing, a major area of marketing research. The methodology includes developing a framework and a model for determining how many price points to implement in differential pricing. We focus on choosing the levels of differentiation, derive a functional form of the proposed model framework, and lastly test it empirically with data from a large-scale pricing experiment on telecommunications services.
Keywords: marketing, differential pricing, price points, optimization
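The trade-off behind choosing the number of price points can be illustrated with a brute-force sketch. The willingness-to-pay distribution, the idealized self-selection rule, and the candidate price grid below are all assumptions made for illustration; they are not the paper's model or data:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)

# Hypothetical willingness-to-pay (WTP) sample for a service.
wtp = rng.gamma(shape=4.0, scale=10.0, size=5_000)

def revenue(prices):
    # Idealized self-selection: each customer pays the highest offered
    # price not exceeding their WTP; customers below every price buy nothing.
    paid = np.zeros_like(wtp)
    for p in sorted(prices):
        paid[wtp >= p] = p
    return paid.sum()

# Exhaustively search the best k price points from a fixed candidate grid.
grid = np.linspace(5, 100, 20)
best = {k: max(revenue(c) for c in combinations(grid, k))
        for k in (1, 2, 3)}
print({k: round(v) for k, v in best.items()})
```

Under these assumptions revenue is non-decreasing in the number of price points but with diminishing marginal gains, which is why an optimal (finite) number of price points exists once menu or administration costs per price point are added.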
Procedia PDF Downloads 91
5176 Preparation of Silver and Silver-Gold, Universal and Repeatable, Surface Enhanced Raman Spectroscopy Platforms from SERSitive
Authors: Pawel Albrycht, Monika Ksiezopolska-Gocalska, Robert Holyst
Abstract:
Surface Enhanced Raman Spectroscopy (SERS) is a technique of growing importance, and not only in purely scientific research related to analytical chemistry. It finds more and more applications in broadly understood testing (medical, forensic, pharmaceutical, food), and everywhere it works perfectly on one condition: that the SERS substrates used for testing give adequate enhancement, repeatability, and homogeneity of the SERS signal. This is a problem that has existed since the invention of the technique. Some laboratories use colloids with silver or gold nanoparticles as SERS amplifiers, others form rough silver or gold surfaces, but the results are generally either weak or unrepeatable. Furthermore, these structures are very often highly specific: they amplify the signal of only a small group of compounds. This means that they work with some kinds of analytes, but only with those that were used at the developer's laboratory. When it comes to research on different compounds, completely new SERS substrates are required. That underlay our decision to develop universal substrates for SERS spectroscopy. Generally, each compound has a different affinity for both silver and gold, which have the best SERS properties, and this determines the signal we get in the SERS spectrum. Our task was to create a platform that gives a characteristic 'fingerprint' of the largest possible number of compounds with very high repeatability, even at the expense of the enhancement factor (EF) (the possibility to repeat research results is of the utmost importance). SERS substrates as specified above are offered by the SERSitive company. The applied method is based on cyclic potentiodynamic electrodeposition of silver or silver-gold nanoparticles on the conductive surface of ITO-coated glass at a controlled temperature of the reaction solution.
Silver nanoparticles are supplied in the form of silver nitrate (AgNO₃, 10 mM), gold nanoparticles are derived from tetrachloroauric acid (10 mM), while sodium sulfite (Na₂SO₃, 5 mM) is used as the reducing agent. To limit and standardize the size of the SERS surface on which the nanoparticles are deposited, photolithography is used: we protect the desired ITO-coated glass surface and then etch away the unprotected ITO layer, which prevents nanoparticles from settling at those sites. On the prepared surface, we carry out the process described above, obtaining SERS surfaces with nanoparticles of 50-400 nm. The SERSitive platforms present high sensitivity (EF = 10⁵-10⁶), homogeneity, and repeatability (70-80%).
Keywords: electrodeposition, nanoparticles, Raman spectroscopy, SERS, SERSitive, SERS platforms, SERS substrates
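The enhancement factor quoted above (EF = 10⁵-10⁶) is conventionally estimated by comparing the per-molecule signal on and off the substrate. The sketch below assumes the standard analytical definition EF = (I_SERS/N_SERS) / (I_ref/N_ref); all intensities and molecule counts are hypothetical placeholders, not SERSitive measurements:

```python
def enhancement_factor(i_sers, n_sers, i_ref, n_ref):
    """Per-molecule SERS signal relative to the normal Raman signal."""
    return (i_sers / n_sers) / (i_ref / n_ref)

# Hypothetical counts: far fewer molecules are probed on the substrate,
# but the signal per molecule is vastly stronger.
ef = enhancement_factor(i_sers=5.0e4, n_sers=1.0e6,
                        i_ref=1.0e3, n_ref=1.0e10)
print(f"EF = {ef:.1e}")  # → EF = 5.0e+05
```

Because both intensity ratios enter the quotient, a platform can trade some raw enhancement for repeatability, as described above, and still report EF in the 10⁵-10⁶ range.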
Procedia PDF Downloads 154
5175 Design and Study of a DC/DC Converter for High Power, 14.4 V and 300 A for Automotive Applications
Authors: Júlio Cesar Lopes de Oliveira, Carlos Henrique Gonçalves Treviso
Abstract:
The shortage of options in the automotive market for high-power car audio power sources led to the development of this work. Thus, we developed a stabilized-voltage source with 4320 W effective power, designed for 14.4 V output and a choice of two currents: a 30 A option for charging battery banks and 300 A at full load. This source can also be considered a general-purpose, dedicated commercial supply with a simple analog control circuit based on discrete components. The power circuit assembly follows a methodology rated for higher power than initially stipulated.
Keywords: DC-DC power converters, converters, power conversion, pulse width modulation converters
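The quoted ratings are self-consistent: 14.4 V at the full-load current of 300 A gives exactly the stated effective power. A quick arithmetic check:

```python
voltage = 14.4               # output voltage, V
full_load_current = 300.0    # A, full-load option
battery_bank_current = 30.0  # A, battery-bank charging option

# P = V * I for each operating mode
print(f"{voltage * full_load_current:.0f} W")     # 4320 W, the stated effective power
print(f"{voltage * battery_bank_current:.0f} W")  # 432 W at the 30 A setting
```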
Procedia PDF Downloads 383
5174 Basics of Gamma Ray Burst and Its Afterglow
Authors: Swapnil Kumar Singh
Abstract:
Gamma-ray bursts (GRBs), short and intense pulses of low-energy γ-rays, have fascinated astronomers and astrophysicists since their unexpected discovery in the late sixties. GRBs are accompanied by long-lasting afterglows, and they are associated with core-collapse supernovae. The detection of delayed emission in X-ray, optical, and radio wavelengths, or "afterglow," following a γ-ray burst can be described as the emission of a relativistic shell decelerating upon collision with the interstellar medium. While it is fair to say that there is strong diversity amongst the afterglow population, probably reflecting diversity in the energy, luminosity, shock efficiency, baryon loading, progenitor properties, circumstellar medium, and more, the afterglows of GRBs do appear more similar than the bursts themselves, and it is possible to identify common features within afterglows that lead to some canonical expectations. After an initial flash of gamma rays, a longer-lived "afterglow" is usually emitted at longer wavelengths (X-ray, ultraviolet, optical, infrared, microwave, and radio). It is a slowly fading emission at longer wavelengths created by collisions between the burst ejecta and interstellar gas. In X-ray wavelengths, the GRB afterglow fades quickly at first, then transitions to a less steep drop-off (later phases follow, but we ignore them here). During these early phases, the X-ray afterglow has a spectrum that looks like a power law: flux F ∝ E^β, where E is energy and β is a number called the spectral index. This kind of spectrum is characteristic of synchrotron emission, which is produced when charged particles spiral around magnetic field lines at close to the speed of light. In addition to the outgoing forward shock that ploughs into the interstellar medium, there is also a so-called reverse shock, which propagates backward through the ejecta.
In many ways, "reverse" shock can be a misleading term: this shock is still moving outward from the rest frame of the star at relativistic velocity, but it is ploughing backward through the ejecta in their frame and is slowing the expansion. The reverse shock can be dynamically important, as it can carry energy comparable to the forward shock. The early phases of the GRB afterglow are still well described even if the GRB is highly collimated, since the individual emitting regions of the outflow are not in causal contact at large angles and so behave as though they are expanding isotropically. The majority of afterglows, at the times typically observed, fall in the slow-cooling regime, and the cooling break lies between the optical and the X-ray. Numerous observations, such as the spectral energy distributions of the afterglows of very bright GRBs, support this broad picture. The bluer light (optical and X-ray) appears to follow the typical synchrotron forward-shock expectation (note that the apparent features in the X-ray and optical spectrum are due to the presence of dust within the host galaxy). More research in GRBs and particle physics is needed in order to unfold the mysteries of the afterglow.
Keywords: GRB, synchrotron, X-ray, isotropic energy
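The power-law spectrum F ∝ E^β described above means the spectral index is simply the slope of flux versus energy in log-log space. A minimal sketch of recovering β by least squares, using synthetic data rather than real afterglow measurements:

```python
import math

def spectral_index(energies, fluxes):
    """Least-squares slope of log(F) vs log(E), i.e. the index beta in F ∝ E^beta."""
    xs = [math.log(e) for e in energies]
    ys = [math.log(f) for f in fluxes]
    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic spectrum with a known index beta = -1.1 (a made-up but
# afterglow-like steep value); the energy grid is also illustrative.
energies = [0.5, 1.0, 2.0, 5.0, 10.0]         # keV
fluxes = [2.0 * e ** -1.1 for e in energies]  # F = A * E^beta
print(round(spectral_index(energies, fluxes), 3))  # -1.1
```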
Procedia PDF Downloads 87
5173 C-Spine Imaging in a Non-trauma Centre: Compliance with NEXUS Criteria Audit
Authors: Andrew White, Abigail Lowe, Kory Watkins, Hamed Akhlaghi, Nicole Winter
Abstract:
The timing and appropriateness of diagnostic imaging are critical to the evaluation and management of traumatic injuries. Within the subclass of trauma patients, the prevalence of c-spine injury is less than 4%. However, the incidence of delayed diagnosis within this cohort has been documented at up to 20%, with inadequate radiological examination the most frequently cited issue. To assess patients in whom c-spine injury cannot be fully excluded on clinical examination alone, and who therefore should undergo diagnostic imaging, a set of criteria is used to provide clinical guidance. The NEXUS (National Emergency X-Radiography Utilisation Study) criteria are a validated clinical decision-making tool used to facilitate selective c-spine radiography. The criteria allow clinicians to determine whether cervical spine imaging can be safely avoided in appropriate patients. The NEXUS criteria are widely used within the Emergency Department setting given their ease of use and relatively straightforward application, and they are used in the Victorian State Trauma System's guidelines. This audit used retrospective data collection to examine the concordance of c-spine imaging in trauma patients with the NEXUS criteria and to assess compliance with state guidance on diagnostic imaging in trauma. Of the 183 patients who presented with trauma to the head, neck, or face (244 were excluded due to incorrect triage), 98 did not undergo imaging of the c-spine. Of those 98, 44% fulfilled at least one of the NEXUS criteria, meaning the c-spine could not be clinically cleared as per the current guidelines. The criterion most often met was intoxication, comprising 42% (18 of 43), with midline spinal tenderness (or absence of documentation of this) the second most common at 23% (10 of 43). Intoxication being the most frequently met criterion is significant but not unexpected, given the cohort of patients seen at St Vincent's and within many emergency departments in general.
Given that these patients will always meet the NEXUS criteria, an element of clinical judgment is likely needed, or concurrent use of the Canadian C-Spine Rule to exclude the need for imaging. Midline tenderness as a met criterion was often recorded in the context of poor or absent documentation, emphasizing the importance of clear and accurate assessments. A distracting injury was identified in 7 of the 43 patients; however, only one of these patients exhibited a thoracic injury (a T11 compression fracture), with the remainder comprising injuries to the extremities - some studies suggest that c-spine imaging may not be required in the evaluable blunt trauma patient despite distracting injuries in body regions that do not involve the upper chest. This emphasises the need for standardised definitions of distracting injury, at least at a departmental/regional level. The data highlight the currently poor application of the NEXUS guidelines, with likely common themes throughout emergency departments, and the need for further education regarding implementation and potential refinement/clarification of the criteria. Of note, there appeared to be no significant differences between levels of experience in inappropriately clearing the c-spine clinically against the guidelines.
Keywords: imaging, guidelines, emergency medicine, audit
Procedia PDF Downloads 70
5172 A Miniaturized Circular Patch Antenna Based on Metamaterial for WI-FI Applications
Authors: Fatima Zahra Moussa, Yamina Belhadef, Souheyla Ferouani
Abstract:
In this work, we present a new form of miniature circular patch antenna based on CSRR metamaterials with an extended bandwidth, proposed for 5 GHz Wi-Fi applications. A reflection coefficient of -35 dB and a gain of 7.47 dB are obtained when simulating the initially proposed antenna with the CST Microwave Studio simulation software. The notch-insertion technique in the radiating element was used to match the antenna to the desired frequency in the [5150-5875] MHz band. An extension of the bandwidth from 332 MHz to 1423 MHz was achieved by the DGS (defected ground structure) technique to meet the user's requirements in the 5 GHz Wi-Fi frequency band.
Keywords: patch antenna, miniaturisation, CSRR, notches, Wi-Fi, DGS
Procedia PDF Downloads 120
5171 Increasing Prevalence of Multi-Allergen Sensitivities in Patients with Allergic Rhinitis and Asthma in Eastern India
Authors: Sujoy Khan
Abstract:
There is rising concern over increasing allergies affecting both adults and children in rural and urban India. A recent report on adults in a densely populated North Indian city showed sensitization rates for house dust mite, parthenium, and cockroach of 60%, 40%, and 18.75%, now comparable to allergy prevalence in cities in the United States. Data from patients residing in the eastern part of India are scarce. A retrospective study (over 2 years) was done on patients with allergic rhinitis and asthma in whom allergen-specific IgE levels were measured, to determine the aeroallergen sensitization pattern in a large metropolitan city of East India. Total IgE and allergen-specific IgE levels were measured using ImmunoCAP (Phadia 100, Thermo Fisher Scientific, Sweden) with region-specific aeroallergens: Dermatophagoides pteronyssinus (d1); Dermatophagoides farinae (d2); cockroach (i206); grass pollen mix (gx2), consisting of Cynodon dactylon, Lolium perenne, Phleum pratense, Poa pratensis, Sorghum halepense, and Paspalum notatum; tree pollen mix (tx3), consisting of Juniperus sabinoides, Quercus alba, Ulmus americana, Populus deltoides, and Prosopis juliflora; food mix 1 (fx1), consisting of peanut, hazelnut, Brazil nut, almond, and coconut; mould mix (mx1), consisting of Penicillium chrysogenum, Cladosporium herbarum, Aspergillus fumigatus, and Alternaria alternata; animal dander mix (ex1), consisting of cat, dog, cow, and horse dander; and weed mix (wx1), consisting of Ambrosia elatior, Artemisia vulgaris, Plantago lanceolata, Chenopodium album, and Salsola kali, following the manufacturer's instructions. As the IgE levels were not uniformly distributed, median values were used to represent the data. Ninety-two patients with allergic rhinitis and asthma (united airways disease) were studied over 2 years, including 21 children (age < 12 years), all of whom had total IgE and allergen-specific IgE levels measured.
The median IgE level was higher in 2016 than in 2015, with 60% of patients (adults and children) sensitized to house dust mite (dual positivity for Dermatophagoides pteronyssinus and farinae). Of the 11 children in 2015, whose total IgE ranged from 16.5 to >5000 kU/L, 36% were polysensitized (≥4 allergens) and 55% were sensitized to dust mites. Of the 10 children in 2016, whose total IgE levels ranged from 37.5 to 2628 kU/L, 20% were polysensitized, with 60% sensitized to dust mites. Mould sensitivity was 10% in both years in the children studied. A consistent finding was that ragweed sensitization (molecular homology to Parthenium hysterophorus) appeared to be increasing across all age groups and throughout the year, as we reported previously with 25% of patients sensitized. In the study sample overall, sensitizations to dust mite, cockroach, and parthenium were important risks in our patients with moderate to severe asthma, which reinforces the importance of controlling indoor exposure to these allergens. Sensitizations to dust mite, cockroach, and parthenium allergens are important predictors of asthma morbidity, not only among children but also among adults, in Eastern India.
Keywords: aeroallergens, asthma, dust mite, parthenium, rhinitis
Procedia PDF Downloads 198
5170 Detailed Sensitive Detection of Impurities in Waste Engine Oils Using Laser Induced Breakdown Spectroscopy, Rotating Disk Electrode Optical Emission Spectroscopy and Surface Plasmon Resonance
Authors: Cherry Dhiman, Ayushi Paliwal, Mohd. Shahid Khan, M. N. Reddy, Vinay Gupta, Monika Tomar
Abstract:
Laser-based high-resolution spectroscopic experimental techniques such as Laser Induced Breakdown Spectroscopy (LIBS), Rotating Disk Electrode Optical Emission Spectroscopy (RDE-OES), and Surface Plasmon Resonance (SPR) have been used for the compositional and degradation analysis of used engine oils. Engine oils are mainly composed of aliphatic and aromatic compounds, and their soot contains hazardous components in the form of fine, coarse, and ultrafine particles consisting of wear metal elements. Such coarse particulate matter (PM) and toxic elements are extremely dangerous for human health and can cause respiratory and genetic disorders in humans. The combustible soot from thermal power plants, industry, aircraft, ships, and vehicles can lead to environmental and climate destabilization; it contributes to global pollution of land, water, and air, and to global warming. The detection of such toxicants through elemental analysis is a very serious issue for the waste management of various organic and inorganic hydrocarbons and radioactive waste elements. In view of these points, the current study on used engine oils was performed. The fundamental characterization of the engine oils was conducted by measuring water content and kinematic viscosity, which provides a crude analysis of the degradation of the used engine oil samples. A microscopic quantitative and qualitative analysis was provided by the RDE-OES technique, which confirmed elemental impurities via Pb, Al, Cu, Si, Fe, Cr, Na, and Ba lines in the used waste engine oil samples at a few ppm. The presence of these elemental impurities was confirmed by LIBS spectral analysis at various transition levels of the atomic lines. The recorded transition lines of Pb confirm that the maximum degradation was found in used engine oil samples no. 3 and 4.
Apart from the basic tests, calculations of the dielectric constants and refractive indices of the engine oils were performed via SPR analysis.
Keywords: surface plasmon resonance, laser-induced breakdown spectroscopy, ICCD spectrometer, engine oil
Procedia PDF Downloads 141
5169 Manufacture and Characterization of Poly (Tri Methylene Terephthalate) Nanofibers by Electrospinning
Authors: Omid Saligheh
Abstract:
Poly(trimethylene terephthalate) (PTT) nanofibers were prepared by electrospinning, being directly deposited in the form of a random fiber web. The effect of changing processing parameters such as solution concentration and electrospinning voltage on the morphology of the electrospun PTT nanofibers was investigated with scanning electron microscopy (SEM). The electrospun fiber diameter increased with rising concentration and decreased with increasing electrospinning voltage. The thermal and mechanical properties of the electrospun fibers were characterized by DSC and tensile testing, respectively.
Keywords: poly trimethylene terephthalate, electrospinning, morphology, thermal behavior, mechanical properties
Procedia PDF Downloads 84
5168 Habituation on Children Mental Retardation through Practice of Behaviour Therapy in Great Aceh, Aceh Province
Authors: Marini Kristina Situmeang, Siti Hazar Sitorus, Mukhammad Fatkhullah, Arfan Fadli
Abstract:
This study aims to identify and explain the forms of treatment and community action, including by parents, toward children with mental retardation while they undergo behavioral therapy leading to habituation processes. Based on the observations made, there is inappropriate treatment, such as labeling, in which children with mental retardation are considered 'crazy' by some people in the Aceh Besar region. Reflecting on this phenomenon of discriminatory treatment, the existence of children with mental retardation should be addressed through concrete actions that can encourage the development of cognitive, language, motor, and social abilities, one of which is behavioral therapy. The purpose of this research is to find out and explain the social practices of children with mental retardation when undergoing behavioral therapy that leads to the habituation process. The study focuses on families or parents who have children with mental retardation and practice behavioral therapy at home or at physiotherapy clinics in Aceh Besar. The research method is qualitative, with a case study approach. Data were collected through in-depth interviews and Focus Group Discussions (FGD). The results showed that the habituation process conducted by parents at home and in physiotherapy clinics has a positive effect on the behavioral development of children with mental retardation, especially when dealing with the community environment around the residence. Habituation processes conducted through behavioral therapy practices are influenced by habitus (gestational and childcare at therapy) and reinforcement (in this case, family and social support). The habituation process is done in the form of habituation, the creation of situations, and the strengthening of character.
For example, when a child with mental retardation commits a wrong act (disgraceful or inappropriate behavior), the child receives a punishment consistent with the form of punishment a typical child would generally receive, and when the child performs a good deed, he or she is given a reward such as praise or a desired object. Through such actions, the child with mental retardation can behave in accordance with the character formed and expected by the community. The process of habituation carried out by parents, accompanied by continuous physiotherapy support, can be an alternative means of boosting the cognitive and social development of children with mental retardation, so that they may eventually escape the 'crazy' label that has been given to them.
Keywords: behaviour therapy, habituation, habitus, mental retardation
Procedia PDF Downloads 256
5167 Estimating Estimators: An Empirical Comparison of Non-Invasive Analysis Methods
Authors: Yan Torres, Fernanda Simoes, Francisco Petrucci-Fonseca, Freddie-Jeanne Richard
Abstract:
Non-invasive samples are an alternative to collecting genetic samples directly. Non-invasive samples are collected without manipulation of the animal (e.g., scats, feathers, and hairs). Nevertheless, the use of non-invasive samples has some limitations. The main issue is degraded DNA, leading to poorer extraction efficiency and genotyping. These errors delayed the widespread use of non-invasive genetic information for some years. Genotyping errors can be limited by using analysis methods that can accommodate the errors and singularities of non-invasive samples. Genotype matching and population estimation algorithms can be highlighted as important analysis tools that have been adapted to deal with these errors. Despite this recent development of analysis methods, there is still a lack of empirical performance comparisons among them. A comparison of methods with datasets differing in size and structure can be useful for future studies, since non-invasive samples are a powerful tool for obtaining information, especially for endangered and rare populations. To compare the analysis methods, four different datasets obtained from the Dryad digital repository were used. Three different matching algorithms (Cervus, Colony, and Error Tolerant Likelihood Matching - ETLM) were used for matching genotypes and two different ones for population estimation (Capwire and BayesN). The three matching algorithms showed different patterns of results. ETLM produced a smaller number of unique individuals and recaptures. A similarity in the matched genotypes between Colony and Cervus was observed; that is no surprise, given the similarity between those methods' pairwise-likelihood and clustering algorithms. The matching of ETLM showed almost no similarity with the genotypes matched by the other methods.
ETLM's different clustering algorithm and error model seem to lead to a more rigorous selection, although its processing time and interface friendliness were the worst among the compared methods. The population estimators performed differently depending on the dataset. There was consensus between the different estimators for only one dataset. BayesN showed both higher and lower estimates compared with Capwire. Unlike Capwire, BayesN does not consider the total number of recaptures, only the recapture events, which makes the estimator sensitive to data heterogeneity; heterogeneity here means different capture rates between individuals. In these examples, the assumption of homogeneity seems to be crucial for BayesN to work properly. Both methods are user-friendly and have reasonable processing times. An expanded analysis with simulated genotype data could clarify the sensitivity of the algorithms. The present comparison of the matching methods indicates that Colony seems to be the most appropriate for general use, considering a time/interface/robustness balance. The heterogeneity of the recaptures strongly affected the BayesN estimates, leading to over- and underestimation of population numbers. Capwire is therefore advisable for general use, since it performs better in a wide range of situations.
Keywords: algorithms, genetics, matching, population
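Capwire and BayesN estimate population size from the distribution of recapture events. As background intuition only (not one of the tools compared in the abstract), the classical Lincoln-Petersen two-sample capture-recapture estimator can be sketched as follows, with made-up counts:

```python
def lincoln_petersen(n1, n2, m2):
    """Classical two-sample capture-recapture estimate of population size:
    n1 = individuals identified in session 1, n2 = individuals in session 2,
    m2 = session-2 individuals that match session-1 genotypes. It assumes a
    homogeneous capture probability - the same assumption whose violation
    (heterogeneity) biases estimators such as BayesN."""
    if m2 == 0:
        raise ValueError("no recaptures: estimate is undefined")
    return n1 * n2 / m2

# Hypothetical survey: 50 unique genotypes in the first sampling session,
# 40 in the second, 10 of which match first-session genotypes.
print(lincoln_petersen(50, 40, 10))  # 200.0
```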
Procedia PDF Downloads 142
5166 Thermodynamic Analyses of Information Dissipation along the Passive Dendritic Trees and Active Action Potential
Authors: Bahar Hazal Yalçınkaya, Bayram Yılmaz, Mustafa Özilgen
Abstract:
Brain information transmission in the neuronal network occurs in the form of electrical signals. Neural networks transmit information between neurons, or between neurons and target cells, by moving charged particles in a voltage field; a fraction of the energy utilized in this process is dissipated via entropy generation. Exergy loss and entropy generation models demonstrate the inefficiencies of communication along the dendritic trees. In this study, neurons of 4 different animals were analyzed with a one-dimensional cable model with N=6 identical dendritic trees and M=3 orders of symmetrical branching. Each branch bifurcates symmetrically in accordance with the 3/2 power law in an infinitely long cylinder with the usual core conductor assumptions, where membrane potential is conserved in the core conductor at all branching points. In the model, exergy loss and entropy generation rates are calculated for each branch of equivalent cylinders of electrotonic length (L) ranging from 0.1 to 1.5 for four different dendritic branches: the input branch (BI), the sister branch (BS), and two cousin branches (BC-1 & BC-2). Thermodynamic analysis with the data coming from two different cat motoneuron studies shows that in both experiments nearly the same amount of exergy is lost while nearly the same amount of entropy is generated. The guinea pig vagal motoneuron loses twofold more exergy compared to the cat models, and the squid exergy loss and entropy generation were nearly tenfold those of the guinea pig vagal motoneuron model. The thermodynamic analysis shows that the energy dissipated in the dendritic trees is directly proportional to the electrotonic length, exergy loss, and entropy generation. Entropy generation and exergy loss show variability not only between vertebrates and invertebrates but also within the same class.
Concurrently, the Na⁺ ion load of a single action potential, the metabolic energy utilization, and their thermodynamic aspects were computed for the squid giant axon and mammalian motoneuron models. Energy demand is supplied to the neurons in the form of adenosine triphosphate (ATP). Exergy destruction and entropy generation upon ATP hydrolysis are calculated. ATP utilization, exergy destruction, and entropy generation showed differences in each model depending on the variations in ion transport along the channels.
Keywords: ATP utilization, entropy generation, exergy loss, neuronal information transmittance
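The 3/2 power law used for the symmetric branching above is Rall's condition d_parent^(3/2) = Σ d_child^(3/2), which is what allows a branched dendritic tree to be collapsed into an equivalent cylinder. A minimal sketch with an illustrative parent diameter (not a value from the study):

```python
def rall_daughter_diameter(d_parent, n_daughters=2):
    """Diameter of each daughter branch in a symmetric bifurcation obeying
    Rall's 3/2 power law: n * d_child**1.5 == d_parent**1.5."""
    return d_parent * n_daughters ** (-2.0 / 3.0)

d_parent = 4.0  # hypothetical parent dendrite diameter, micrometers
d_child = rall_daughter_diameter(d_parent)

# The 3/2-power sums match, so the bifurcation is impedance-matched and the
# tree reduces to an equivalent cylinder.
print(abs(2 * d_child ** 1.5 - d_parent ** 1.5) < 1e-9)  # True
```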
Procedia PDF Downloads 393
5165 Reducing Later Life Loneliness: A Systematic Literature Review of Loneliness Interventions
Authors: Dhruv Sharma, Lynne Blair, Stephen Clune
Abstract:
Later life loneliness is a social issue that is increasing alongside an upward global population trend. As a society, one way that we have responded to this social challenge is through developing non-pharmacological interventions such as befriending services, activity clubs, meet-ups, etc. Through a systematic literature review, this paper suggests that there is currently an underrepresentation of radical innovation, and an underutilization of digital technologies, in developing loneliness interventions for older adults. This paper examines intervention studies that were published in the English language within peer-reviewed journals between January 2005 and December 2014, across 4 electronic databases. In addition to academic databases, interventions found in grey literature in the form of websites, blogs, and Twitter were also included in the overall review. This approach yielded 129 interventions that were included in the study. A systematic approach allowed the minimization of any bias dictating the selection of interventions to study. A coding strategy based on a pattern-analysis approach was devised to compare and contrast the loneliness interventions. Firstly, interventions were categorized on the basis of their objective, to identify whether they were preventative, supportive, or remedial in nature. Secondly, depending on their scope, they were categorized as one-to-one, community-based, or group-based. It was also ascertained whether interventions represented an improvement, an incremental innovation, a major advance, or a radical departure in comparison to the most basic form of a loneliness intervention. Finally, interventions were also assessed on the basis of the extent to which they utilized digital technologies. Individual visualizations representing the four levels of coding were created for each intervention, followed by an aggregated visual to facilitate analysis.
To keep the inquiry within scope and to present a coherent view of the findings, the analysis was primarily concerned with the level of innovation and the use of digital technologies. This analysis highlights a weak but positive correlation between the level of innovation and the use of digital technologies in designing and deploying loneliness interventions, and it also emphasizes how certain existing interventions could be tweaked to enable their migration from incremental to radical innovation, for example. The analysis also points out the value of including grey literature, especially from Twitter, in systematic literature reviews to obtain a contemporary view of the latest work in the area under investigation.
Keywords: ageing, loneliness, innovation, digital
Procedia PDF Downloads 121
5164 KPI and Tool for the Evaluation of Competency in Warehouse Management for Furniture Business
Authors: Kritchakhris Na-Wattanaprasert
Abstract:
The objective of this research is to design and develop a prototype of a key performance indicator system that is suitable for warehouse management, based on a case study and user requirements. In this study, we designed a prototype key performance indicator (KPI) system for a warehouse case study of a furniture business. The methodology proceeded in steps: identifying the scope of the research and studying related papers; gathering the necessary data and user requirements; developing key performance indicators based on the balanced scorecard; designing the program and database for the key performance indicators; coding the program and setting up the database relationships; and finally testing and debugging each module. This study uses the Balanced Scorecard (BSC) for selecting and grouping the key performance indicators. Microsoft SQL Server 2010 is used to create the system database. In regard to the visual programming language, Microsoft Visual C# 2010 is chosen as the graphical user interface development tool. The system consists of six main menus: login, main data, financial perspective, customer perspective, internal process perspective, and learning and growth perspective. Each menu consists of key performance indicator forms, and each form contains a data import section, a data input section, a data search-and-edit section, and a report section. The system generates outputs in 5 main reports: the KPI detail report, KPI summary report, KPI graph report, benchmarking summary report, and benchmarking graph report. The user selects the condition of the report and the time period. As the system has been developed and tested, it was found to be one way of judging the extent to which warehouse objectives have been achieved; moreover, it encourages warehouse functions to proceed more efficiently. In order to be useful for other industries, the system can be adjusted appropriately.
To increase the usefulness of the key performance indicator system, the recommendations for further development are as follows: - The warehouse should review the target values and periodically set more suitable targets as conditions fluctuate in the future. - The warehouse should review the key performance indicators and periodically set more suitable indicators as conditions fluctuate in the future, in order to increase competitiveness and take advantage of new opportunities.
Keywords: key performance indicator, warehouse management, warehouse operation, logistics management
Procedia PDF Downloads 430
5163 Evaluating the Potential of a Fast Growing Indian Marine Cyanobacterium by Reconstructing and Analysis of a Genome Scale Metabolic Model
Authors: Ruchi Pathania, Ahmad Ahmad, Shireesh Srivastava
Abstract:
Cyanobacteria are promising microbes that can capture and convert atmospheric CO₂ and light into valuable industrial bio-products like biofuels, biodegradable plastics, etc. Among their most attractive traits are fast autotrophic growth, year-round cultivation on non-arable land, high photosynthetic activity, much greater biomass and productivity, and ease of genetic manipulation. Cyanobacteria store carbon in the form of glycogen, which can be hydrolyzed to release glucose and fermented to form bioethanol or other valuable products. Marine cyanobacterial species are especially attractive for countries with a scarcity of freshwater. We recently identified a native marine cyanobacterium, Synechococcus sp. BDU 130192, which has a good growth rate and a high level of polyglucan accumulation compared to Synechococcus PCC 7002. In this study, we first sequenced the whole genome, and the sequences were annotated using the RAST server. A genome-scale metabolic model (GSMM) was reconstructed using the COBRA toolbox. A GSMM is a computational representation of the metabolic reactions and metabolites of the target strain. GSMMs are analyzed through flux balance analysis (FBA), which uses external nutrient uptake rates to estimate steady-state intracellular and extracellular reaction fluxes, typically while maximizing cell growth. The model, which we have named iSyn942, includes 942 reactions and 913 metabolites, with 831 metabolic, 78 transport, and 33 exchange reactions. The phylogenetic tree obtained by BLAST search revealed that the strain is a close relative of Synechococcus PCC 7002. FBA was applied to the model iSyn942 to predict the theoretical yields (mol product produced/mol CO₂ consumed) of native and non-native products like acetone, butanol, etc., under phototrophic conditions by applying metabolic engineering strategies.
The reported strain can be a viable strain for biotechnological applications, and the model will be helpful to researchers interested in understanding the metabolism as well as in designing metabolic engineering strategies for enhanced production of various bioproducts.
Keywords: cyanobacteria, flux balance analysis, genome scale metabolic model, metabolic engineering
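Flux balance analysis, as described above, is a linear program: maximize a growth (or product) flux subject to steady state (S·v = 0) and flux bounds. For a toy linear pathway the LP optimum reduces to the tightest stoichiometry-scaled bound, which can be computed directly; a real GSMM such as iSyn942 would be solved with an LP solver (e.g., via the COBRA toolbox). The reactions and bounds below are invented for illustration and are not taken from iSyn942:

```python
# Hypothetical three-reaction network:
#   R_co2:    -> C               (CO2 uptake, upper bound 10 mmol/gDW/h)
#   R_photon: -> E               (light-driven energy supply, upper bound 100)
#   R_growth: C + 2 E -> biomass (objective flux to maximize)
# Steady state (S·v = 0) forces v_co2 = v_growth and v_photon = 2 * v_growth,
# so maximizing growth means taking the tightest bound after scaling by
# stoichiometry - exactly what an LP solver would return for this network.
upper_bounds = {"R_co2": 10.0, "R_photon": 100.0}
demand_per_growth = {"R_co2": 1.0, "R_photon": 2.0}  # units consumed per unit growth flux

max_growth = min(upper_bounds[r] / demand_per_growth[r] for r in upper_bounds)
limiting = min(upper_bounds, key=lambda r: upper_bounds[r] / demand_per_growth[r])

print(max_growth, limiting)  # 10.0 R_co2 - CO2 uptake limits growth here
```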
Procedia PDF Downloads 157
5162 Sharing Personal Information for Connection: The Effect of Social Exclusion on Consumer Self-Disclosure to Brands
Authors: Jiyoung Lee, Andrew D. Gershoff, Jerry Jisang Han
Abstract:
Most extant research on consumer privacy concerns and willingness to share personal data has focused on contextual factors (e.g., the type of information collected, the type of compensation) that lead to consumers’ personal information disclosure. Unfortunately, the literature lacks a clear understanding of how consumers’ incidental psychological needs may influence their decisions to share personal information with companies or brands. In this research, we investigate how social exclusion, an increasing societal problem, especially since the onset of the COVID-19 pandemic, leads to increased information disclosure intentions among consumers. Specifically, we propose and find that when consumers are socially excluded, their desire for social connection increases, and this desire leads to a greater willingness to disclose personal information to firms. The motivation to form and maintain interpersonal relationships is one of the most fundamental human needs, and many researchers have found that deprivation of belongingness has negative consequences. Given the negative effects of social exclusion and the universal need to affiliate with others, people respond to exclusion with a motivation for social reconnection, resulting in various cognitive and behavioral consequences, such as paying greater attention to social cues and conforming to others. Here, we propose personal information disclosure as another form of behavior that can satisfy such social connection needs. As self-disclosure can serve as a strategic tool in creating and developing social relationships, those who have been socially excluded, and thus have greater social connection desires, may be more willing to engage in self-disclosure to satisfy those needs. We conducted four experiments to test how feelings of social exclusion influence the extent to which consumers share their personal information with brands.
Various manipulations and measures were used to demonstrate the robustness of our effects. Through the four studies, we confirmed that (1) consumers who have been socially excluded show greater willingness to share their personal information with brands and that (2) such an effect is driven by the excluded individuals’ desire for social connection. Our findings shed light on how the desire for social connection arising from exclusion influences consumers’ decisions to disclose their personal information to brands. We contribute to the consumer disclosure literature by uncovering a psychological need that influences consumers’ disclosure behavior. We also extend the social exclusion literature by demonstrating that exclusion influences not only consumers’ choice of products but also their decision to disclose personal information to brands.
Keywords: consumer-brand relationship, consumer information disclosure, consumer privacy, social exclusion
Procedia PDF Downloads 121
5161 Decision Making Regarding Spouse Selection and Women's Autonomy in India: Exploring the Linkage
Authors: Nivedita Paul
Abstract:
The changing character of marriage, be it arranged marriage, love marriage, polygamy, or informal unions, signifies different gender relations in everyday life. Marriages in India are part and parcel of kinship and cultural practices. Arranged marriage is still the dominant form, in which spouse selection is initiated and decided by the parents; but its form is changing, as women now actively participate in spouse selection, though with parental consent. Decision making around spouse selection is important because marriage as an institution brings social change and gender inequality, especially in a woman’s life, as marriages in India are mostly patrilocal. Moreover, the amount of say a woman has in spouse selection can affect her reproductive rights, exposure to domestic violence, household resource allocation, communication with the spouse/husband, marital life, etc. The present study uses data from the Indian Human Development Survey II (2011-12), a nationally representative multitopic survey covering 41,554 households. Currently married women aged 15-49 in their first marriage, married between the 1970s and the 2000s, have been taken for the study. Based on spouse selection experiences, the sample has been divided into three marriage categories: self-arranged, semi-arranged, and family-arranged. In a self-arranged or love marriage, the woman is the sole decision maker in choosing the partner; in a semi-arranged marriage (arranged with consent), the parents and the woman take the decision together; in a family-arranged marriage (arranged without consent), only the parents decide. The main aim of the study is to find the relationship between spouse selection experiences and women’s autonomy in India. Decision making in economic matters, child- and health-related decision making, mobility, and access to resources are taken as proxies of autonomy.
Ordinal regression has been used to find the relationship between spouse selection experiences and autonomy after marriage, keeping other independent variables as controls. Results show that women in semi-arranged marriages have more decision-making power regarding household financial matters, health-related matters, mobility, and access to resources than women in family-arranged marriages. For freedom of movement and access to resources, women in self-arranged marriages have the highest say. Therefore, greater participation of women (even though not absolute control) in spouse selection may lead to greater autonomy after marriage.
Keywords: arranged marriage, autonomy, consent, spouse selection
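As a rough illustration of the cumulative-logit (proportional-odds) form of ordinal regression used in such analyses, the sketch below computes predicted category probabilities for a three-level autonomy outcome. The coefficients, cutpoints, and variable names are hypothetical, chosen only to show how a positive marriage-type coefficient shifts probability mass toward higher autonomy; they are not the study's estimates.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def ordinal_probs(x_beta, cutpoints):
    """Proportional-odds model: P(Y <= k) = sigmoid(c_k - x'beta).
    Returns the probability of each ordered category."""
    cum = sigmoid(np.array(cutpoints) - x_beta)   # P(Y<=k) for each cutpoint
    cum = np.concatenate([cum, [1.0]])            # P(Y<=K) = 1 for the top level
    return np.diff(np.concatenate([[0.0], cum]))  # per-category probabilities

# Hypothetical setup (illustration only): autonomy has 3 ordered levels
# (low < medium < high); marriage type is dummy-coded relative to
# family-arranged, so x'beta = 0 for the reference group.
beta = {"semi_arranged": 0.6, "self_arranged": 0.4}
cutpoints = [-0.5, 0.8]

p_family = ordinal_probs(0.0, cutpoints)
p_semi = ordinal_probs(beta["semi_arranged"], cutpoints)

# A positive coefficient moves probability mass toward higher autonomy.
print(p_family.round(3), p_semi.round(3))
```

In an actual fit, the cutpoints and coefficients would be estimated by maximum likelihood with the control variables included in x'beta; the sketch only shows how estimated coefficients translate into the autonomy comparisons reported above.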
Procedia PDF Downloads 146