Corrosion Protective Coatings in Machine Design
Authors: Cristina Diaz, Lucia Perez, Simone Visigalli, Giuseppe Di Florio, Gonzalo Fuentes, Roberto Canziani, Paolo Gronchi
Abstract:
Over the last 50 years, material selection has been one of the main decisions in machine design for different industrial applications, owing to the numerous physical, chemical, mechanical and technological factors involved. Corrosion is related to all of these factors and affects the life cycle, failure incidence and lifetime cost of a machine. Corrosion is the deterioration or destruction of metals through reaction with the environment, generally a wet one. In the food, dewatering, concrete and paper industries, among others, corrosion remains an unsolved problem and may alter some characteristics of the final product. Depending on the selected metal, its surface and its working environment, corrosion prevention may consist of changing the metal, applying a coating, cathodic protection, or the use of corrosion inhibitors. In the vast majority of situations, the solution is a corrosion-resistant material or, failing that, a corrosion protection coating. Stainless steels are widely used in machine design because of their strength, ease of cleaning, corrosion resistance and appearance; AISI 304 and AISI 316 are typical choices. However, their benefits do not suit every application, and coatings such as paints, galvanizing, chrome plating, or SiO₂, TiO₂ or ZrO₂ layers are required against corrosion. In this work, coatings based on a bilayer of Titanium-Tantalum, Titanium-Niobium, Titanium-Hafnium or Titanium-Zirconium were developed using a magnetron sputtering configuration by PVD (Physical Vapor Deposition) technology, in order to reduce corrosion effects on AISI 304 and AISI 316, and the results were compared with Titanium alloy substrates. Ti alloys display exceptional corrosion resistance to chlorides, sour and oxidising acidic media and seawater. In this study, a Ti alloy (99%) was included for comparison with coated AISI 304 and AISI 316 stainless steel.
Corrosion tests were conducted with a Gamry instrument under the ASTM G5-94 standard, using different electrolytes such as tomato sauce, wine, olive oil, wet compost, a mix of sand and concrete with water, and NaCl solution, to test corrosion in different industrial environments. In all tested environments, the results showed improved corrosion resistance for all coated AISI 304 and AISI 316 stainless steel substrates compared with uncoated stainless steel substrates. Comparing these results with corrosion studies on the uncoated Ti alloy substrate showed that, in some cases, coated stainless steel substrates reached current densities similar to those of uncoated Ti alloy. Moreover, the Titanium-Zirconium and Titanium-Tantalum coatings showed, for all substrates in the study including coated Ti alloy substrates, a reduction in current density of more than two orders of magnitude. In conclusion, Ti-Ta, Ti-Zr, Ti-Nb and Ti-Hf coatings were developed to improve the corrosion resistance of AISI 304 and AISI 316 materials. After corrosion tests in several industrial environments, the substrates showed improved corrosion resistance. Similar processes were carried out on Ti alloy (99%) substrates. Coated AISI 304 and AISI 316 stainless steel may reach surface corrosion protection similar to that of uncoated Ti alloy (99%), and coated Ti alloy (99%) may further increase its corrosion resistance with these coatings.
Keywords: coatings, corrosion, PVD, stainless steel
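The practical meaning of a current-density reduction can be illustrated with Faraday's law as codified in ASTM G102 (a standard practice not cited in this abstract; the material constants and the i_corr values below are textbook and hypothetical numbers, not data from this study):

```python
# Illustrative conversion of corrosion current density to a penetration
# rate via Faraday's law (ASTM G102 practice). Material constants are
# textbook values for AISI 316; i_corr values are hypothetical.

def corrosion_rate_mm_per_year(i_corr_A_cm2, eq_weight_g, density_g_cm3):
    """Corrosion rate in mm/year from current density in A/cm^2."""
    K = 3.27e3  # mm*g/(A*cm*year), ASTM G102 conversion constant
    return K * i_corr_A_cm2 * eq_weight_g / density_g_cm3

# Assumed values for AISI 316: equivalent weight ~25.5 g, density ~7.98 g/cm^3,
# and a hypothetical uncoated i_corr of 1e-6 A/cm^2.
rate_uncoated = corrosion_rate_mm_per_year(1e-6, 25.5, 7.98)
# A two-orders-of-magnitude drop in i_corr, as reported for the Ti-Zr and
# Ti-Ta coatings, scales the penetration rate down by the same factor.
rate_coated = corrosion_rate_mm_per_year(1e-8, 25.5, 7.98)
```

This makes explicit why the reported reduction matters: the penetration rate is linear in i_corr, so a hundredfold drop in current density implies a hundredfold slower material loss.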
Empirical Study of Innovative Development of Shenzhen Creative Industries Based on Triple Helix Theory
Authors: Yi Wang, Greg Hearn, Terry Flew
Abstract:
In order to understand how cultural innovation occurs, this paper explores the interaction between universities, creative industries, and government in the creative economy of Shenzhen, China, using the Triple Helix framework. During the past two decades, the Triple Helix has been recognized as a new theory of innovation to inform and guide policy-making in national and regional development. Universities and governments around the world, especially in developing countries, have taken action to strengthen connections with creative industries to develop regional economies. To date, research based on the Triple Helix model has focused primarily on science and technology collaborations, largely ignoring other fields. Hence, there is an opportunity to better understand how the Triple Helix framework might apply to the creative industries and what knowledge might be gleaned from such an undertaking. Since the late 1990s, the concept of 'creative industries' has entered policy and academic discourse. The development of creative industries policy by city agencies has improved city wealth creation and economic capital. It claims to generate a 'new economy' of enterprise dynamics and activities for urban renewal through the arts and digital media, via knowledge transfer in knowledge-based economies. Creative industries also involve commercial inputs to the creative economy, dynamically reshaping the city into an innovative culture. In particular, this paper concentrates on creative spaces (incubators, digital tech parks, maker spaces, art hubs) where academia, industry and government interact. China has sought to enhance the brand of its manufacturing industry through cultural policy. It aims to shift the image of 'Made in China' to 'Created in China' and to give Chinese brands more international competitiveness in a global economy.
Shenzhen is a notable example in China of an international knowledge-based city following this path. In 2009, the Shenzhen Municipal Government proposed the city slogan 'Build a Leading Cultural City' to signal the government's strong will to develop Shenzhen's cultural capacity and creativity. The vision of Shenzhen is to become a cultural innovation center, a regional cultural center and an international cultural city. However, there has been a lack of attention to triple helix interactions in the creative industries in China. In particular, there is limited knowledge about how co-location and interaction in creative spaces within triple helix networks influence city-based innovation; that is, the roles of the participating institutions need to be better understood. Thus, this paper discusses the interplay between university, creative industries and government in Shenzhen. Secondary analysis and documentary analysis are used as methods in an effort to practically ground and illustrate this theoretical framework. Furthermore, this paper explores how creative spaces are being used to implement the Triple Helix in the creative industries, in particular the new combinations of resources generated from consolidation and interaction across the institutions. This study thus provides an innovative lens for understanding the components, relationships and functions that exist within creative spaces by applying the Triple Helix framework to the creative industries.
Keywords: cultural policy, creative industries, creative city, triple helix
All-In-One Universal Cartridge Based Truly Modular Electrolyte Analyzer
Authors: S. Dalvi, N. Sane, V. Patil, D. Bansode, A. Tharakan, V. Mathur
Abstract:
Measurement of routine clinical electrolytes is common in labs worldwide for screening of illness or disease. Current analyzers for electrolyte measurement have separate sensors, reagents, sampler, pump tubing, valves and other tubing that are expensive, require heavy maintenance and have a short shelf life. Moreover, the cost of maintaining such lab instrumentation is high, which limits the use of the devices to highly specialized personnel and sophisticated labs. In order to provide healthcare diagnostics to all at affordable cost, there is a need for an all-in-one universal modular cartridge that contains sensors, reagents, sampler, valve, pump tubing and other tubing in one single integrated module-in-module cartridge that is affordable, reliable, easy to use, requires very low sample volume and is truly modular and maintenance-free. DiaSys India has developed a world-first, patent-pending, versatile all-in-one universal module-in-module cartridge based electrolyte analyzer (QDx InstaLyte) that can perform sodium, potassium, chloride, calcium, pH and lithium tests. QDx InstaLyte incorporates a high-performance, inexpensive all-in-one universal cartridge for rapid quantitative measurement of electrolytes in body fluids. The proposed methodology utilizes advanced, improved long-life ISE sensors to provide a sensitive and accurate result in 120 s with just 100 µl of sample volume. The all-in-one universal cartridge has very low reagent consumption, is capable of a maximum of 1000 tests with a use-life of 3-4 months, and has a long shelf life of 12-18 months at 4-25 °C, making it very cost-effective. Methods: QDx InstaLyte analyzers with all-in-one universal modular cartridges were independently evaluated with three R&D lots for method performance (linearity, precision, method comparison, cartridge stability) in measuring sodium, potassium and chloride.
Method comparison was done against the Medica EasyLyte Plus Na/K/Cl electrolyte analyzer, a mid-size lab-based clinical chemistry analyzer, with N = 100 samples run over 10 days. The within-run precision study was done using modified CLSI guidelines with N = 20 samples, and the day-to-day precision study was done for 7 consecutive days using Trulab N and P quality control samples. Accelerated stability testing was done at 45 °C for 4 weeks with production lots. Results: Data analysis indicates that the CV for within-run precision is ≤1% for Na, ≤2% for K and ≤2% for Cl, with R² ≥ 0.95 for the method comparison. Further, the all-in-one universal cartridge is stable for up to 12-18 months at 4-25 °C storage temperature based on preliminary extrapolated data. Conclusion: The developed technology platform of the all-in-one universal module-in-module cartridge based QDx InstaLyte is reliable, meets all lab performance specifications, and is truly modular and maintenance-free. Hence, it can easily be adopted for low-cost, sensitive and rapid electrolyte testing in low-resource settings such as urban, semi-urban and rural areas in developing countries, and can be used as a point-of-care testing system for worldwide applications.
Keywords: all-in-one modular cartridge, electrolytes, maintenance free, QDx InstaLyte
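The within-run precision acceptance criterion above is a coefficient-of-variation check over replicate measurements. A minimal sketch of that calculation, with invented sodium readings rather than data from the study:

```python
# Within-run precision as CV% = 100 * SD / mean over replicate readings.
# The 20 sodium values below are hypothetical, for illustration only.
import statistics

def cv_percent(values):
    """Coefficient of variation (%) using the sample standard deviation."""
    return 100 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical N = 20 replicate sodium readings (mmol/L) on one control:
na_reps = [140.1, 139.8, 140.3, 140.0, 139.9, 140.2, 140.1, 139.7,
           140.4, 140.0, 139.9, 140.1, 140.2, 139.8, 140.0, 140.3,
           139.9, 140.1, 140.0, 140.2]
na_cv = cv_percent(na_reps)
assert na_cv <= 1.0  # the acceptance limit claimed for Na above
```

The same function applied to K and Cl replicates would be checked against the looser ≤2% limits.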
Statistical Models and Time Series Forecasting on Crime Data in Nepal
Authors: Dila Ram Bhandari
Abstract:
Throughout the 20th century, new governments were created in which identities such as ethnicity, religion, language, caste, community and tribe played a part in the development of constitutions and the legal systems of victim and criminal justice. South Asian nations have recently been plagued by acute issues of extremism, poverty, environmental degradation, cybercrime, human rights violations, and crimes against, and victimization of, both individuals and groups. Every day a massive number of crimes is committed, and these frequent crimes have made the lives of ordinary citizens restless. Crime is one of the major threats to society and to civilization, and a source of contention that can create societal disturbance. Old-style crime-solving practices cannot live up to the requirements of the current crime situation. Crime analysis is one of the most important activities of the majority of intelligence and law enforcement organizations all over the world. The South Asia region lacks a regional coordination mechanism, unlike the Central Asia or Asia-Pacific regions, to facilitate criminal intelligence sharing and operational coordination related to organized crime, including illicit drug trafficking and money laundering. There have been numerous conversations in recent years about using data mining technology to combat crime and terrorism. The Data Detective program from the software company Sentient uses data mining techniques to support the police (Sentient, 2017). The goals of this work were to test several predictive model solutions and choose the most effective and promising one. First, extensive literature reviews on data mining, crime analysis, and crime data mining were conducted. Sentient offered a 7-year archive of crime statistics that was aggregated daily to produce a univariate dataset. Moreover, a daily incidence-type aggregation was performed to produce a multivariate dataset. Each solution's forecast period lasted seven days.
The experiments were split into two main groups: statistical models and neural network models. On the crime data, neural networks fared better than statistical models. This study gives a general review of the applied statistical and neural network models. A detailed picture of each model's performance on the available data and its generalizability is provided by a comparative analysis of all the models on a comparable dataset. The studies demonstrated that, in comparison to the other models, Gated Recurrent Units (GRU) produced the best predictions. The crime records of 2005-2019 were collected from Nepal Police headquarters and analysed in R. In conclusion, a gated recurrent unit implementation could benefit the police in predicting crime. Hence, time series analysis using GRU could be a prospective additional feature in Data Detective.
Keywords: time series analysis, forecasting, ARIMA, machine learning
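The GRU that performed best here is defined by its gate equations. A minimal NumPy sketch of a single GRU step over a toy daily crime series (the study itself used R; the weights below are random placeholders, not a trained model):

```python
# One GRU time step implemented from the standard gate equations.
# Weights are random placeholders, not a trained crime-forecasting model.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_step(x, h_prev, W, U, b):
    """W, U, b each hold (update, reset, candidate) parameters."""
    Wz, Wr, Wh = W
    Uz, Ur, Uh = U
    bz, br, bh = b
    z = sigmoid(x @ Wz + h_prev @ Uz + bz)              # update gate
    r = sigmoid(x @ Wr + h_prev @ Ur + br)              # reset gate
    h_tilde = np.tanh(x @ Wh + (r * h_prev) @ Uh + bh)  # candidate state
    return (1 - z) * h_prev + z * h_tilde               # new hidden state

rng = np.random.default_rng(0)
n_in, n_hid = 1, 8  # one input feature: the daily crime count
W = [rng.normal(scale=0.1, size=(n_in, n_hid)) for _ in range(3)]
U = [rng.normal(scale=0.1, size=(n_hid, n_hid)) for _ in range(3)]
b = [np.zeros(n_hid) for _ in range(3)]

h = np.zeros(n_hid)
for count in [12.0, 15.0, 9.0, 11.0]:  # a toy daily crime series
    h = gru_step(np.array([count]), h, W, U, b)
# h is the hidden state a readout layer would map to a 7-day forecast
```

The update gate z interpolates between the previous state and the candidate, which is what lets the GRU retain long-range structure in the daily counts.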
Stability of Porous SiC Based Materials under Relevant Conditions of Radiation and Temperature
Authors: Marta Malo, Carlota Soto, Carmen García-Rosales, Teresa Hernández
Abstract:
SiC-based composites are candidates for use as structural and functional materials in future fusion reactors, with the main role intended for the blanket modules. In the blanket, the neutrons produced by the fusion reaction slow down, and their energy is transformed into heat in order to finally generate electrical power. In the blanket design named Dual Coolant Lead Lithium (DCLL), a PbLi alloy for power conversion and tritium breeding circulates inside hollow channels called Flow Channel Inserts (FCIs). These FCIs must protect the steel structures against the highly corrosive PbLi liquid and the high temperatures, but also provide electrical insulation in order to minimize magnetohydrodynamic interactions of the flowing liquid metal with the high magnetic field present in a magnetically confined fusion environment. Due to its nominally high temperature and radiation stability as well as its corrosion resistance, SiC is the main choice for the flow channel inserts. Its significantly lower manufacturing cost makes porous SiC (a dense coating is required in order to assure protection against corrosion and act as a tritium barrier) a firm alternative to SiC/SiC composites for this purpose. This application requires the materials to be exposed to high radiation levels and extreme temperatures, conditions under which previous studies have shown noticeable changes in both the microstructure and the electrical properties of different types of silicon carbide. Both the initial properties and the radiation- and temperature-induced damage strongly depend on the crystal structure, polytype, and impurities/additives determined by the fabrication process, so the development of a suitable material requires full control of these variables.
For this work, several SiC samples with different porosities and sintering additives were manufactured by the so-called sacrificial template method at the Ceit-IK4 Technology Center (San Sebastián, Spain) and characterized at Ciemat (Madrid, Spain). Electrical conductivity was measured as a function of temperature before and after irradiation with 1.8 MeV electrons in the Ciemat HVEC Van de Graaff accelerator up to 140 MGy (~2·10⁻⁵ dpa). Radiation-induced conductivity (RIC) was also examined during irradiation at 550 °C for different dose rates (from 0.5 to 5 kGy/s). Although no significant RIC was found for any of the samples in general, the electrical conductivity of some compositions was observed to increase with irradiation dose with a linear tendency. However, first results indicate enhanced radiation resistance for coated samples. Preliminary thermogravimetric tests of selected samples, together with subsequent XRD analysis, made it possible to interpret the radiation-induced modification of the electrical conductivity in terms of changes in the SiC crystalline structure. Further analysis is needed in order to confirm this.
Keywords: DCLL blanket, electrical conductivity, flow channel insert, porous SiC, radiation damage, thermal stability
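The "linear tendency" of conductivity with dose is the kind of claim checked with an ordinary least-squares fit. A minimal sketch, where the dose steps and conductivity values are invented placeholders rather than measured data from this study:

```python
# Least-squares line through (dose, conductivity) points to check the
# linear-tendency claim. All numeric values below are hypothetical.
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    a = sxy / sxx
    return a, my - a * mx

dose_MGy = [0, 35, 70, 105, 140]                   # hypothetical dose steps
sigma = [1.0e-9, 1.6e-9, 2.1e-9, 2.7e-9, 3.2e-9]   # hypothetical S/m values
slope, intercept = linear_fit(dose_MGy, sigma)
# a positive slope indicates a dose-driven conductivity increase
```

The slope per MGy would then be compared across compositions to see which ones show the increase and which (e.g. the coated samples) do not.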
The Real Ambassador: How Hip Hop Culture Connects and Educates across Borders
Authors: Frederick Gooding
Abstract:
This paper explores how many Hip Hop artists have intentionally and strategically invoked the sustainability principles of people, planet and profit as a means to create community and to compensate for and cope with structural inequalities in society. These themes not only create community within one's country; the powerful display and demonstration of these narratives creates community on a global plane. Listeners of Hip Hop are therefore able to learn about political events occurring in another country free of censure, and to establish solidarity worldwide. Hip Hop can thus be an ingenious tool for creating self-worth, recycling positive imagery, and serving as a defense mechanism against the institutional and structural forces that conspire to make an upward economic and social trajectory difficult, if not impossible, for many people of color all across the world. Although the birthplace of Hip Hop, the United States of America, is still predominantly White, it has undoubtedly grown more diverse at a breathtaking pace in recent decades. Yet whether American mainstream media will fully reflect America's newfound diversity remains to be seen. As it stands, American mainstream media is seen and enjoyed by diverse audiences not just in America but all over the world. Thus, it is imperative that further inquiry be conducted into one of the fastest growing genres within one of the world's largest and most influential media industries, generating upwards of $10 billion annually. More importantly, hip hop and its music and associated culture collectively represent a shared social experience of significant value. They are important tools used both to inform and to influence economic, social and political identity. Conversely, principles of American exceptionalism often prioritize American political issues over those of others, thereby rendering a myopic political view within the mainstream.
This paper will therefore engage in an international contextualization of the global phenomenon called Hip Hop by exploring its creative genius and marketing appeal within the global context of information technology, political expression and social change, in addition to taking a critical look at historically racialized imagery within mainstream media. Many artists the world over have been able to freely express themselves and connect with broader communities outside their own borders through the sound practice of the craft of Hip Hop. An empirical understanding of political, social and economic forces within the United States will serve as a bridge for identifying and analyzing transnational themes of commonality for typically marginalized or disaffected communities facing similar struggles for survival and respect. The sharing of commonalities among marginalized cultures not only serves as a source of education outside typically myopic mainstream sources, but also creates transnational bonds, to the extent that practicing artists resonate with many of the original themes of (now mostly underground) Hip Hop, as with many of the African American artists responsible for creating and fostering Hip Hop's powerful outlet of expression. Hip Hop's power of connectivity and culture-sharing across borders provides a key source of education to be taken seriously by academics.
Keywords: culture, education, global, hip hop, mainstream music, transnational
Formation of Science Literacy Based on the Indigenous Science of Mbaru Niang, Manggarai
Authors: Yuliana Wahyu, Ambros Leonangung Edu
Abstract:
The learning praxis proposed by the 2013 Curriculum (K-13) is no longer school-oriented and supply-driven, but demand-driven. This vision is connected with the Jokowi-Kalla Nawacita program to create a competitive nation in the global era. Competition is a social fact that must be faced; therefore, the curriculum designs a process to produce innovators and entrepreneurs. To reach this goal, K-13 implements character education, which aims to form innovators and entrepreneurs from an early age (primary school). One part of strengthening it is the formation of literacies (reading, numeracy, science, ICT, finance, and culture); thus, science literacy is an integral part of character education. The above outputs are only formed through innovative processes in intra-curricular (blended learning), co-curricular (hands-on learning) and extra-curricular (personalized learning) activities. Unlike previous curricula that crammed children with theories dominating the intellectual process, the new breakthrough makes natural, social, and cultural phenomena learning sources. For example, science in primary schools places biology as the platform, and treats natural, social, and cultural phenomena as a learning field so that students can learn, discover, solve concrete problems, and see the prospects of development and application in their everyday lives. Science education concerns not only collections of facts or natural phenomena but also methods and scientific attitudes. In turn, science education will form science literacy. Science literacy comprises critical, creative, logical, and initiative-taking competences in responding to issues of culture, science and technology. This is linked with the nature of science, which includes both hands-on and minds-on elements.
To sustain the effectiveness of science learning, K-13 opens a new way of viewing a contextual learning model in which facts or natural phenomena are drawn closer to the child's learning environment to be studied and analyzed scientifically. Thus, the topics of elementary science discussion are the practical and contextual things that students encounter. This research contextualizes science in primary schools in Manggarai, NTT, by placing local wisdom as a learning source and medium to form science literacy. Explicitly, this study uncovers the concepts of science and mathematics in the Mbaru Niang. The Mbaru Niang is a potential forgotten by the centralistic, theory-oriented mainstream curriculum so far. In fact, the traditional Manggarai community stores and passes down much indigenous scientific and mathematical knowledge: the traditional house structures are full of science and mathematics knowledge, and every detail carries style, sound and mathematical symbolism. Learning this, students are able to collaborate and synergize the content and learning resources in their learning activities. This is constructivist contextual learning that will be applied as meaningful learning. Meaningful learning allows students to learn by doing; students then connect topics to context, and science literacy is constructed from their factual experiences. The research will be conducted in Manggarai through observation, interviews, and literature study.
Keywords: indigenous science, Mbaru Niang, science literacy, science
The Use of Telecare in the Re-design of Overnight Supports for People with Learning Disabilities: Implementing a Cluster-based Approach in North Ayrshire
Authors: Carly Nesvat, Dominic Jarrett, Colin Thomson, Wilma Coltart, Thelma Bowers, Jan Thomson
Abstract:
Introduction: Within Scotland, the Same As You strategy committed to moving people with learning disabilities out of long-stay hospital accommodation into homes in the community. Much of the focus of this movement was on placing people within individual homes. In order to achieve this, potentially excessive supports were put in place, which created dependence and carried significant ongoing cost, primarily for local authorities. The greater focus on empowerment and community participation evident in more recent learning disability strategy, along with the financial pressures being experienced across the public sector, created an imperative to re-examine that provision, particularly the use of expensive sleepover supports for individuals and the potential for these to be appropriately scaled back through the use of telecare. Method: As part of a broader programme of redesigning overnight supports within North Ayrshire, a cluster of individuals living in close proximity was identified who were in receipt of overnight supports but who were judged to have the capacity to benefit from their removal. In their place, a responder service was established (an individual staying overnight in a nearby service user's home), and a variety of telecare solutions were placed within individuals' homes. Active and passive technology was connected to an alarm receiving centre, which would alert the local responder service when necessary. Individuals and their families were prepared for the change and continued to be informed about progress with the pilot. Results: Four individuals, two of whom shared a tenancy, had their sleepover supports removed as part of the pilot. Extensive data collection on alarm activations was combined with feedback from the four individuals, their families, and the staff involved in their support. Varying perspectives emerged within the feedback.
Three of the individuals were clearly described as benefitting from the change and the greater sense of independence it brought, while more concerns were evident in relation to the fourth. Some family members expressed a need for greater preparation for the change and ongoing information provision. Some support staff also expressed a need for more information to help them understand the new support arrangements for an individual, as well as noting concerns about the outcomes for one participant. Conclusion: Developing a telecare response for a cluster of individuals was facilitated by their all being supported by the same care provider. The number of similar clusters being identified within North Ayrshire is limited. Developing other solutions, such as a response service for redesign, will potentially require greater collaboration between different providers of home support, as well as continuing to explore the full range of telecare, including digital options. The pilot has highlighted the need for effective preparatory and ongoing engagement with staff and families, as well as the challenges that can accompany making changes to long-standing packages of support.
Keywords: challenges, change, engagement, telecare
Phycoremediation of Heavy Metals by Marine Macroalgae Collected from Olaikuda, Rameswaram, Southeast Coast of India
Authors: Suparna Roy, Anatharaman Perumal
Abstract:
Industrial effluents with high amounts of heavy metals are known to have adverse effects on the environment. For the removal of heavy metals from the aqueous environment, different conventional treatment technologies have been applied, but these are not economically beneficial and produce huge quantities of toxic chemical sludge. Biosorption of heavy metals by marine plants is therefore an eco-friendly, innovative alternative technology for removing these pollutants from the aqueous environment. The aim of this study is to evaluate the capacity of selected marine macroalgae (seaweeds) to accumulate and remove heavy metals from the marine environment. Methods: The seaweeds Acanthophora spicifera (Vahl.) Boergesen, Codium tomentosum Stackhouse, Halimeda gracilis Harvey ex. J. Agardh, Gracilaria opuntia Durairatnam. nom. inval., Valoniopsis pachynema (Martens) Boergesen, Caulerpa racemosa var. macrophysa (Sonder ex Kutzing) W. R. Taylor and Hydroclathrus clathratus (C. Agardh) Howe were collected from Olaikuda (09°17.526'N, 079°19.662'E), Rameswaram, on the southeast coast of India, during the post-monsoon period (April 2016). The seaweeds were washed repeatedly with sterilized and filtered in-situ seawater to remove all epiphytes and debris, and the clean seaweeds were shade-dried for one week.
The dried seaweeds were ground to powder, and one gram of powdered seaweed was taken in a 250 ml conical flask; 8 ml of 10% HNO₃ (70% pure) was added to each sample, which was kept at room temperature (28 °C) for 24 hours. The samples were then heated on a hotplate at 120 °C and boiled to evaporate to dryness; 20 ml of nitric acid : perchloric acid (4:1) was added, and the samples were again heated on the hotplate at 90 °C and evaporated to dryness. The samples were then left at room temperature for a few minutes to cool, 10 ml of 10% HNO₃ was added, and they were kept for 24 hours in a cool, dark place and filtered with Whatman (589/2) filter paper. The filtrates were collected in clean 250 ml conical flasks and diluted accurately to a 25 ml volume with double-deionised water. Triplicates of each sample were analysed by inductively coupled plasma optical emission spectrometry (ICP-OES) for eleven heavy metals (Ag, Cd, B, Cu, Mn, Co, Ni, Cr, Pb, Zn, and Al), and the data were statistically evaluated for standard deviation. Results: Acanthophora spicifera contained the highest amount of Ag (0.1±0.2 mg/mg), followed by Cu (0.16±0.01 mg/mg), Mn (1.86±0.02 mg/mg) and B (3.59±0.2 mg/mg). Halimeda gracilis showed the highest accumulation of Al (384.75±0.12 mg/mg). Valoniopsis pachynema accumulated the maximum amounts of Co (0.12±0.01 mg/mg) and Zn (0.64±0.02 mg/mg). Caulerpa racemosa var. macrophysa contained Zn (0.63±0.01), Cr (0.26±0.01 mg/mg), Ni (0.21±0.05), Pb (0.16±0.03) and Cd (0.02±0.00). Hydroclathrus clathratus, Codium tomentosum and Gracilaria opuntia also contained adequate amounts of heavy metals. Conclusions: The mentioned seaweed species play an important role in decreasing heavy-metal pollution in the marine environment through bioaccumulation, so these species can be utilised to remove excess heavy metals from polluted areas.
Keywords: heavy metals pollution, seaweeds, bioaccumulation, eco-friendly, phyco-remediation
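The digestion described above (1 g of dry sample made up to a 25 ml final volume) implies a simple back-calculation from the ICP-OES reading to metal content per gram of seaweed. A minimal sketch, where the instrument reading is an invented value, not data from this study:

```python
# Back-calculation from an ICP-OES concentration reading (mg/L of digest)
# to metal content per gram of dry sample. The reading is hypothetical.

def metal_content_mg_per_g(icp_mg_per_L, final_volume_ml, sample_mass_g):
    """Metal mass in the digest (mg) divided by the dry sample mass (g)."""
    metal_mg = icp_mg_per_L * final_volume_ml / 1000.0  # mg/L * L = mg
    return metal_mg / sample_mass_g

# Hypothetical zinc reading of 25.6 mg/L for a 1 g sample in 25 ml:
zn = metal_content_mg_per_g(25.6, 25.0, 1.0)  # 0.64 mg per g dry weight
```

Averaging this quantity over the triplicate digests of each species, with its standard deviation, gives values of the form reported in the Results.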
Physico-Mechanical Behavior of Indian Oil Shales
Authors: K. S. Rao, Ankesh Kumar
Abstract:
The search for alternative energy sources to petroleum has intensified because of increasing demand and the depletion of petroleum reserves. The importance of oil shales as an economically viable substitute has therefore increased manyfold over the last 20 years. Technologies like hydro-fracturing have opened the field of oil extraction from these unconventional rocks. Oil shale is a compact laminated rock of sedimentary origin containing organic matter known as kerogen, which yields oil when distilled. Oil shales are formed from the contemporaneous deposition of fine-grained mineral debris and organic degradation products derived from the breakdown of biota. Conditions required for the formation of oil shales include abundant organic productivity, early development of anaerobic conditions, and a lack of destructive organisms. These rocks have not gone through high-temperature and high-pressure conditions in nature. The most common approach to oil extraction is drastically breaking the bonds of the organics, which involves a retorting process. The two approaches to retorting are surface retorting and in-situ processing; the most environmentally friendly is in-situ processing. The three steps involved in this process are fracturing, injection to achieve communication, and fluid migration at the underground location. Upon heating (retorting) oil shale at temperatures in the range of 300 to 400 °C, the kerogen decomposes into oil, gas and residual carbon in a process referred to as pyrolysis. It is therefore very important to understand the physico-mechanical behavior of such rocks in order to improve the technology for in-situ extraction. It is clear from past research and physical observations that these rocks behave anisotropically, so it is very important to understand their mechanical behavior under high pressure at different orientation angles for the economical use of these resources.
By knowing the engineering behavior under above conditions will allow us to simulate the deep ground retorting conditions numerically and experimentally. Many researchers have investigate the effect of organic content on the engineering behavior of oil shale but the coupled effect of organic and inorganic matrix is yet to be analyzed. The favourable characteristics of Assam coal for conversion to liquid fuels have been known for a long time. Studies have indicated that these coals and carbonaceous shale constitute the principal source rocks that have generated the hydrocarbons produced from the region. Rock cores of the representative samples are collected by performing on site drilling, as coring in laboratory is very difficult due to its highly anisotropic nature. Different tests are performed to understand the petrology of these samples, further the chemical analyses are also done to exactly quantify the organic content in these rocks. The mechanical properties of these rocks are investigated by considering different anisotropic angles. Now the results obtained from petrology and chemical analysis are correlated with the mechanical properties. These properties and correlations will further help in increasing the producibility of these rocks. It is well established that the organic content is negatively correlated to tensile strength, compressive strength and modulus of elasticity.Keywords: oil shale, producibility, hydro-fracturing, kerogen, petrology, mechanical behavior
Procedia PDF Downloads 345
236 Regional Barriers and Opportunities for Developing Innovation Networks in the New Media Industry: A Comparison between Beijing and Bangalore Regional Innovation Systems
Authors: Cristina Chaminade, Mandar Kulkarni, Balaji Parthasarathy, Monica Plechero
Abstract:
The characteristics of a regional innovation system (RIS) and the specificity of an industry's knowledge base may create peculiar paths for innovation and for the development of firms' geographically extended innovation networks. However, the relevant empirical evidence from emerging economies remains underexplored. The paper aims to fill this research gap by means of recent qualitative research conducted in 2016 in Beijing (China) and Bangalore (India). It analyzes case studies of firms in the new media industry, a sector that merges different IT competences with competences from other knowledge domains and that is emerging in those RIS. The results show that while in Beijing the new media sector is more in line with the existing institutional setting and governmental goals aimed at targeting specific social aspects and social problems of the population, in Bangalore it remains a more spontaneous, firm-led process. In Beijing, what matters for the development of innovation networks is the governmental setting and the national and regional strategies to promote science and technology in this sector, the internet and mass innovation. The peculiarities of recent governmental policies aligned to domestic goals may provide good possibilities for start-ups to develop innovation networks. However, because those policies target the Chinese market, networking outside the domestic market is not promoted to the same extent. Moreover, while some institutional peculiarities, such as a culture of collaboration in the region, may be favorable for local networking, regulations related to Internet censorship may limit the use of global networks, particularly when based on virtual spaces. Mainly firms that already have some foreign experience and contacts take advantage of global networks. In Bangalore, the role of government in pushing networking for the new media industry is, at the present stage, largely absent at all geographical levels.
Indeed, there is no particular strategic planning or prioritizing in the region toward the new media industry, although one industrial organization has emerged to represent the animation industry's interests. This results in a lack of initiatives for sustaining the integration of complementary knowledge into the local portfolio of IT specialization. Firms actually involved in the new media industry face institutional constraints related to a poor level of local trust and cooperation, which does not allow for full exploitation of local linkages. Moreover, knowledge-provider organizations in Bangalore still remain a solid base for the IT domain, but not for other domains. Initiatives to link to international networks therefore seem more the result of individual entrepreneurial actions aimed at acquiring complementary knowledge and competencies from different domains and exploiting potential in different markets. From those cases, it emerges that the role of government, soft institutions and organizations in the two RIS differs substantially in the creation of barriers and opportunities for the development of innovation networks and their specific aims.
Keywords: regional innovation system, emerging economies, innovation network, institutions, organizations, Bangalore, Beijing
Procedia PDF Downloads 322
235 Case Study on Innovative Aquatic-Based Bioeconomy for Chlorella sorokiniana
Authors: Iryna Atamaniuk, Hannah Boysen, Nils Wieczorek, Natalia Politaeva, Iuliia Bazarnova, Kerstin Kuchta
Abstract:
Over the last decade, due to climate change and a strategy of natural resources preservation, interest in aquatic biomass has dramatically increased. Along with mitigating environmental pressure and connecting waste streams (including CO2 and heat emissions), the microalgae bioeconomy can supply the food, feed, pharmaceutical and power industries with a number of value-added products. Furthermore, in comparison to conventional biomass, microalgae can be cultivated in a wide range of conditions without compromising food and feed production, thus addressing issues associated with negative social and environmental impacts. This paper presents the state-of-the-art technology for the microalgae bioeconomy, from the cultivation process to the production of valuable components and by-streams. Microalgae Chlorella sorokiniana were cultivated in a pilot-scale innovation concept in Hamburg (Germany) using different systems such as a raceway pond (5000 L) and flat panel reactors (8 x 180 L). In order to achieve optimum growth conditions along with a cellular composition suitable for the subsequent extraction of value-added components, process parameters such as light intensity, temperature and pH are continuously monitored. Metabolic needs in nutrients were met by adding micro- and macro-nutrients to the medium to ensure autotrophic growth conditions of the microalgae. The cultivation was followed by downstream processing and extraction of lipids, proteins and saccharides. Lipid extraction is conducted in repeated-batch semi-automatic mode using the hot extraction method according to Randall. Hexane and ethanol are used as solvents at ratios of 9:1 and 1:9, respectively. Depending on the cell disruption method and the solvent ratio, the total lipid content showed significant variation, between 8.1% and 13.9%.
The highest percentage of extracted biomass was reached with a sample pretreated with microwave digestion, using 90% hexane and 10% ethanol as solvents. The protein content of the microalgae was determined by two different methods: Total Kjeldahl Nitrogen (TKN), which was then converted to protein content, and the Bradford method using Brilliant Blue G-250 dye. The obtained results showed a good correlation between both methods, with the protein content being in the range of 39.8–47.1%. Characterization of neutral and acid saccharides from the microalgae was conducted by the phenol-sulfuric acid method at two wavelengths, 480 nm and 490 nm. The average concentrations of neutral and acid saccharides under the optimal cultivation conditions were 19.5% and 26.1%, respectively. Subsequently, biomass residues are used as substrate for anaerobic digestion at the laboratory scale. The methane concentration, measured on a daily basis, showed some variation across samples after the extraction steps but was in the range of 48% to 55%. The CO2 formed during the fermentation process and after combustion in the Combined Heat and Power unit can potentially be used within the cultivation process as a carbon source for the photoautotrophic synthesis of biomass.
Keywords: bioeconomy, lipids, microalgae, proteins, saccharides
Procedia PDF Downloads 244
234 Challenges in Self-Managing Vitality: A Qualitative Study about Staying Vital at Work among Dutch Office Workers
Authors: Violet Petit-Steeghs, Jochem J. R. Van Roon, Jacqueline E. W. Broerse
Abstract:
In recent decades, the retirement age in Europe has been gradually increasing. As a result, people have to continue working for a longer period of time. Health problems due to increased sedentary behavior and mental conditions like burn-out pose a threat to fulfilling employees' working lives. In order to stimulate the ability and willingness to work in the present and future, it is important to stay vital. Vitality is regarded in the literature as a sense of energy, motivation and resilience. It is assumed that by increasing their vitality, employees will stay healthier and be more satisfied with their jobs, leading to more sustainable employment and less absenteeism in the future. The aim of this project is to obtain insights into the experiences and barriers of employees, specifically office workers, with regard to their vitality. These insights are essential in order to develop appropriate measures in the future. To gain these insights, 8 focus group discussions were organized with 6-10 office workers from 4 different employers (a university, a national construction company and a large juridical and care service organization) in the Netherlands. The discussions were transcribed and analyzed via open coding. This project is part of the larger consortium project Provita2 and was conducted in collaboration with the University of Technology Eindhoven. Results showed that a range of interdependent factors form a complex network that influences office workers' vitality. These factors can be divided into three overarching groups: (1) personal, (2) organizational and (3) environmental factors. Personal intrinsic factors, relating to the office worker, comprise someone's physical health, coping style, lifestyle, needs, and private life. Organizational factors, relating to the employer, are the workload, management style and the structure, vision and culture of the organization.
Lastly, environmental factors consist of the air, light and temperature at the workplace and whether the workplace is inspiring and workable. Office workers experienced barriers to improving their own vitality due to a lack of autonomy: on the one hand, because most factors were not only intrinsic but also extrinsic, like the work atmosphere or the temperature in the room; on the other hand, because office workers were restricted in adapting both intrinsic and extrinsic factors. Restrictions on, for instance, the flexibility of working times and the workload can set limits on improving vitality through personal factors like physical activity and mental relaxation. In conclusion, a large range of interdependent factors influence the vitality of office workers. Office workers are often regarded as having a responsibility to improve their vitality, but have limited autonomy in adapting these factors. Measures to improve vitality should therefore not only focus on increasing awareness among office workers, but also on empowering them to fulfill this responsibility. A holistic approach that takes into account the complex mutual dependencies between the different factors and actors (like managers, employees and HR personnel) is highly recommended.
Keywords: occupational health, perspectives office workers, sustainable employment, vitality at work, work & wellbeing
Procedia PDF Downloads 136
233 Fiber Stiffness Detection of GFRP Using Combined ABAQUS and Genetic Algorithms
Authors: Gyu-Dong Kim, Wuk-Jae Yoo, Sang-Youl Lee
Abstract:
Composite structures offer numerous advantages over conventional structural systems, in the form of higher specific stiffness and strength, lower life-cycle costs, and benefits such as easy installation and improved safety. Recently, there has been a considerable increase in the use of composites in engineering applications and as wraps for seismic upgrading and repairs. However, these composites deteriorate with time because of outdated materials, excessive use, repetitive loading, climatic conditions, manufacturing errors, and deficiencies in inspection methods. In particular, damaged fibers in a composite result in significant degradation of structural performance. In order to reduce the failure probability of composites in service, techniques to assess the condition of the composites and prevent continual growth of fiber damage are required. Condition assessment technology and nondestructive evaluation (NDE) techniques have provided various solutions for the safety of structures by detecting damage or defects from static or dynamic responses induced by external loading. A variety of techniques based on detecting changes in the static or dynamic behavior of isotropic structures has been developed in the last two decades. These methods, based on analytical approaches, are limited in their capability to deal with complex systems, primarily because of their limitations in handling different loading and boundary conditions. Recently, investigators have introduced direct search methods based on metaheuristic techniques and artificial intelligence, such as genetic algorithms (GA), simulated annealing (SA) methods, and neural networks (NN), and have promisingly applied these methods to the field of structural identification.
Among them, GAs attract our attention because they do not require a considerable amount of data in advance when dealing with complex problems, and they make a global solution search possible, as opposed to classical gradient-based optimization techniques. In this study, we propose an alternative damage-detection technique that can determine the degraded stiffness distribution of vibrating laminated composites made of glass fiber-reinforced polymer (GFRP). The proposed method uses a modified form of the bivariate Gaussian distribution function to detect degraded stiffness characteristics. In addition, this study presents a method to detect the fiber property variation of laminated composite plates from the micromechanical point of view. A finite element model is used to study the free vibrations of laminated composite plates with fiber stiffness degradation. In order to solve the inverse problem using the combined method, this study uses only the first mode shapes of the structure for the measured frequency data. In particular, this study focuses on the effect of the interaction among various parameters, such as fiber angles, layup sequences, and damage distributions, on fiber-stiffness damage detection.
Keywords: stiffness detection, fiber damage, genetic algorithm, layup sequences
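The GA-based inverse search described above can be sketched in miniature: a single-degree-of-freedom spring-mass stands in for the finite element model, and the GA recovers a hidden stiffness degradation factor from a "measured" first natural frequency. All numbers are illustrative assumptions, not values from the paper:

```python
import math
import random

random.seed(0)

M = 2.0             # mass (kg), assumed
K_INTACT = 800.0    # intact stiffness (N/m), assumed
TRUE_FACTOR = 0.65  # hidden degradation factor the GA should recover

def first_frequency(factor):
    """First natural frequency (Hz) of the degraded spring-mass system."""
    return math.sqrt(factor * K_INTACT / M) / (2 * math.pi)

MEASURED = first_frequency(TRUE_FACTOR)  # plays the role of measured data

def fitness(factor):
    # Negative squared frequency mismatch; higher is better.
    return -(first_frequency(factor) - MEASURED) ** 2

def genetic_search(pop_size=30, generations=60):
    pop = [random.uniform(0.0, 1.0) for _ in range(pop_size)]
    for _ in range(generations):
        # Tournament selection of size 3
        parents = [max(random.sample(pop, 3), key=fitness) for _ in range(pop_size)]
        children = []
        for a, b in zip(parents[::2], parents[1::2]):
            w = random.random()            # blend crossover
            for child in (w * a + (1 - w) * b, w * b + (1 - w) * a):
                if random.random() < 0.2:  # Gaussian mutation, clamped to [0, 1]
                    child = min(1.0, max(0.0, child + random.gauss(0, 0.05)))
                children.append(child)
        pop = children
    return max(pop, key=fitness)

best = genetic_search()
print(f"recovered degradation factor: {best:.3f}")
```

The real problem replaces the one-line frequency function with an FE eigenvalue solve (ABAQUS in the paper) and a stiffness field parameterized by the bivariate Gaussian; the GA loop itself is structurally the same.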
Procedia PDF Downloads 272
232 On the Limits of Board Diversity: Impact of Network Effect on Director Appointments
Authors: Vijay Marisetty, Poonam Singh
Abstract:
Research on the effect of directors' network connections on investor welfare is inconclusive. Some studies suggest that directors' connections are beneficial, in terms of improving earnings information and firm valuation for new investors. On the other hand, adverse effects of directorial networks are also reported, in terms of higher earnings management, options backdating fraud, reduction in firm performance, and lower board monitoring. From a regulatory perspective, the role of directorial networks in corporate welfare is crucial. Cognizant of the possible ill effects associated with directorial networks, large investors, seeking better representation on boards, are building their own databases of prospective directors who are highly qualified but sourced from outside the highly connected directorial labor market. For instance, following the Dodd-Frank Reform Act, the California Public Employees' Retirement System (CalPERS) initiated a database for registering aspiring and highly qualified directors in order to nominate them for board seats (proxy access). Our paper stems from this background and explores the chances of outside directors who lack established network connections obtaining directorships. The paper identifies such aspiring directors by accessing a unique Indian dataset sourced from an online portal that aims to match the supply of registered aspirants with the growing demand for outside directors in India. The online portal's tie-up with stock exchanges allows firms to access this new pool of directors. Such direct access to the background details of aspiring directors over a period of 10 years allows us to examine the chances of aspiring directors without a corporate network entering the directorial network. Using this resume data of 16,105 aspiring corporate directors in India who have no prior board experience, the paper analyses the entry dynamics of the corporate directors' labor market.
The database also allows us to investigate the value of a corporate network by comparing non-networked new entrants with incumbent networked directors. The study develops measures of network centrality and network degree based on merit, i.e., networks of individuals belonging to elite educational institutions like the Indian Institutes of Management (IIM) or the Indian Institutes of Technology (IIT), and based on job or company, i.e., networks of individuals serving in the same company. The paper then measures the impact of these networks on first-time director appointments and on subsequent appointments. The paper reports the following main results: 1. The likelihood of becoming a corporate director without corporate network strength is only 1 out of 100 aspirants, in spite of comparable educational backgrounds and similar durations of corporate experience; 2. Aspiring non-networked directors' elite educational ties help them to secure directorships; however, for post-board appointments, their newly acquired corporate network strength takes over as the main determinant of subsequent board appointments and compensation. The results thus highlight the limitations in increasing board diversity.
Keywords: aspiring corporate directors, board diversity, director labor market, director networks
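The job-based network degree measure described above amounts to counting distinct co-directors across shared boards. A minimal sketch with invented names and boards (not data from the paper):

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical board memberships: directors serving on the same company's
# board are linked, in the spirit of the paper's company-based network.
boards = {
    "CompanyA": ["Rao", "Mehta", "Iyer"],
    "CompanyB": ["Mehta", "Singh"],
    "CompanyC": ["Iyer", "Mehta", "Das"],
}

# Build the co-directorship graph as an adjacency set.
edges = defaultdict(set)
for members in boards.values():
    for a, b in combinations(members, 2):
        edges[a].add(b)
        edges[b].add(a)

# Network degree = number of distinct co-directors.
degree = {d: len(peers) for d, peers in edges.items()}
print(degree)  # Mehta, sitting on all three boards, has the highest degree
```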
Procedia PDF Downloads 312
231 Determination of Slope of Hilly Terrain by Using Proposed Method of Resolution of Forces
Authors: Reshma Raskar-Phule, Makarand Landge, Saurabh Singh, Vijay Singh, Jash Saparia, Shivam Tripathi
Abstract:
For any construction project, slope calculations are necessary in order to evaluate constructability on the site: the slope of parking lots, sidewalks, and ramps, the slope of sanitary sewer lines, and the slope of roads and highways. When slopes and grades are to be determined, designers are concerned with establishing proper slopes and grades for their projects in order to assess cut and fill volumes and determine pipe inverts. There are several established instruments commonly used to determine slopes, such as the dumpy level, Abney level or hand level, inclinometer, tacheometer, Henry method, etc., and surveyors are very familiar with using these instruments to calculate slopes. However, they have drawbacks that cannot be neglected in major surveying works. Firstly, they require expert surveyors and skilled staff. Accessibility, visibility, and accommodation in remote hilly terrain are difficult for these instruments and surveying teams. Determining gentle slopes for road and sewer drainage construction in congested urban places with these instruments is also not easy. This paper aims to develop a method that requires minimum fieldwork, minimum instruments, no high-end technology, instruments or software, and low cost. Requiring only basic and handy surveying accessories, such as a plane table with a fixed weighing machine, standard weights, an alidade, a tripod, and ranging rods, the method should be able to determine the terrain slope in congested areas as well as in remote hilly terrain. Also, being simple and easy to understand and perform, local people in rural areas can easily be trained in the proposed method. The idea for the proposed method is based on the principle of resolution of weight components: when an object of standard weight 'W' is placed on an inclined surface with a weighing machine below it, the weighing machine measures the cosine component of that weight.
The slope can be determined from the relation between the true (actual) weight and the apparent weight. A proper procedure is to be followed, which includes site location, centering and sighting work, fixing the whole set at the identified station, and finally taking the readings. A set of slope-determination experiments, for mild and moderate slopes, was carried out by the proposed method and by a theodolite, in a controlled environment (on the college campus) and in an uncontrolled environment (an actual site). The slopes determined by the proposed method were compared with those determined by the established instruments. For example, it was observed that, over the same distances on a mild slope, the difference between the slope obtained by the proposed method and by the established method ranged from 4′ for a distance of 8 m to 2°15′20″ for a distance of 16 m in an uncontrolled environment. Thus, for mild slopes, the proposed method is suitable for distances of 8 m to 10 m. The proposed method shows good agreement with the established method, with correlations of 0.91 to 0.99 for various combinations of mild and moderate slopes in controlled and uncontrolled environments.
Keywords: surveying, plane table, weight component, slope determination, hilly terrain, construction
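The resolution-of-forces principle above reduces to inverting the cosine relation between true and apparent weight: the machine reads W·cos(θ), so θ = arccos(W_apparent / W_true). A minimal sketch with assumed readings (not measurements from the experiments):

```python
import math

# Assumed readings for the sketch: a known standard weight and the
# weighing-machine reading on the inclined surface.
W_TRUE = 10.0      # standard weight, kg
W_APPARENT = 9.6   # machine reading on the slope, kg (= W * cos(theta))

# Invert the cosine relation to recover the slope angle.
theta = math.degrees(math.acos(W_APPARENT / W_TRUE))
grade = math.tan(math.radians(theta)) * 100  # slope expressed as a percentage

print(f"slope angle = {theta:.2f} deg, grade = {grade:.1f}%")
```

A reading of 9.6 kg for a 10 kg standard weight corresponds to a slope of roughly 16°, i.e. about a 29% grade.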
Procedia PDF Downloads 94
230 Physiological Effects on Scientist Astronaut Candidates: Hypobaric Training Assessment
Authors: Pedro Llanos, Diego García
Abstract:
This paper aims to expand our understanding of the effects of hypoxia training on the body, in order to better model its dynamics and leverage some of its implications for human health. Hypoxia training is a recommended practice for military and civilian pilots that allows them to recognize their early hypoxia signs and symptoms, and for Scientist Astronaut Candidates (SACs) who underwent hypobaric hypoxia (HH) exposure as part of a training activity for prospective suborbital flight applications. This observational-analytical study describes the physiologic responses and symptoms experienced by a SAC group before, during and after HH exposure, and proposes a model for assessing predicted versus observed physiological responses. A group of individuals with diverse Science, Technology, Engineering and Mathematics (STEM) backgrounds conducted a hypobaric training session at an altitude of up to 22,000 ft (FL220), or 6,705 meters, during which heart rate (HR), breathing rate (BR) and core temperature (Tc) were monitored with a chest strap sensor before and after HH exposure. A pulse oximeter registered oxygen saturation (SpO2) levels and the number and duration of desaturations during the HH chamber flight. Hypoxia symptoms as described by the SACs during the HH training session were also registered. These data allowed us to generate a preliminary predictive model of the oxygen desaturation and O2 pressure curve for each subject, which consists of a sixth-order polynomial fit during exposure and a fifth- or fourth-order polynomial fit during recovery. Data analysis showed that HR and BR exhibited no significant differences between pre- and post-HH exposure in most of the SACs, while Tc measures showed slight but consistent decreases.
All subjects registered SpO2 greater than 94% for the majority of their individual HH exposures, but all of them presented at least one clinically significant desaturation (SpO2 < 85% for more than 5 seconds), and half of the individuals showed SpO2 below 87% for at least 30% of their HH exposure time. Real-time collection of HH symptoms identified temperature somatosensory perceptions (SP) in 65% of individuals and task-focus issues in 52.5% of individuals as the most common HH indications. 95% of the subjects experienced HH onset symptoms below FL180; all participants achieved full recovery from HH symptoms within 1 minute of donning their O2 masks. The current HH study performed on this group suggests a rapid and fully reversible physiologic response after HH exposure, as expected and as obtained in previous studies. Our data showed consistent results between predicted and observed SpO2 curves during HH, suggesting a mathematical function that may be used to model HH performance deficiencies. During the HH study, real-time HH symptoms were registered, providing evidence of SP and task-focus issues as the earliest and most common indicators. Finally, an assessment of HH signs and symptoms in a heterogeneous group of non-pilot individuals showed results similar to previous studies in homogeneous populations of pilots.
Keywords: slow onset hypoxia, hypobaric chamber training, altitude sickness, symptoms and altitude, pressure cabin
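The sixth-order polynomial desaturation model described above can be sketched with a standard least-squares fit. The SpO2 samples below are a synthetic desaturation curve plus noise, stand-ins for the study's pulse-oximeter traces:

```python
import numpy as np

# Synthetic HH exposure: a smooth sigmoidal SpO2 drop from ~97% toward ~87%
# over 10 minutes, plus sensor noise. Not study data.
t = np.linspace(0, 10, 40)                      # minutes of exposure
spo2 = 97 - 10 / (1 + np.exp(-(t - 5)))         # synthetic desaturation curve
spo2 += np.random.default_rng(1).normal(0, 0.2, t.size)  # sensor noise

# Sixth-order polynomial fit during exposure, as in the abstract's model.
coeffs = np.polyfit(t, spo2, deg=6)
model = np.poly1d(coeffs)

residual = np.sqrt(np.mean((model(t) - spo2) ** 2))
print(f"RMS residual of 6th-order fit: {residual:.3f} % SpO2")
```

A separate lower-order fit over the recovery segment, as the abstract describes, would follow the same pattern with `deg=4` or `deg=5`.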
Procedia PDF Downloads 115
229 Monitoring the Production of Large Composite Structures Using Dielectric Tool Embedded Capacitors
Authors: Galatee Levadoux, Trevor Benson, Chris Worrall
Abstract:
With the rise of public awareness of climate change comes an increasing demand for renewable sources of energy. As a result, the wind power sector is striving to manufacture longer, more efficient and more reliable wind turbine blades. Currently, one of the leading causes of blade failure in service is improper cure of the resin during manufacture. The infusion process creating the main part of the composite blade structure remains a critical step that is yet to be monitored in real time. This stage consists of a viscous resin being drawn into a mould under vacuum, then undergoing a curing reaction until solidification. Successful infusion assumes the resin fills all the voids and cures completely. Given that the electrical properties of the resin change significantly during its solidification, both the filling of the mould and the curing reaction can be followed using dielectrometry. However, industrially available dielectric sensors are currently too small to monitor the entire surface of a wind turbine blade. The aim of the present research project is to scale up dielectric sensor technology and develop a device able to monitor the manufacturing process of large composite structures, assessing the conformity of the blade before it even comes out of the mould. An array of flat copper wires acting as electrodes is embedded in a polymer matrix fixed in an infusion mould. A multi-frequency analysis from 1 Hz to 10 kHz is performed during the filling of the mould with an epoxy resin and the hardening of the resin. By following the variations of the complex admittance Y*, the filling of the mould and the curing process are monitored. Results are compared to numerical simulations of the sensor in order to validate a virtual cure-monitoring system. The results obtained by drawing glycerol on top of the copper sensor displayed a linear relation between the wetted length of the sensor and the measured complex admittance.
Drawing epoxy resin on top of the sensor and letting it cure at room temperature for 24 hours provided characteristic curves similar to those obtained when conventional interdigitated sensors are used to follow the same reaction. The response from the developed sensor showed the different stages of the polymerization of the resin, validating the geometry of the prototype. The model created and analysed using COMSOL has shown that the dielectric cure process can be simulated, so long as sufficiently accurate time- and temperature-dependent material properties can be determined. The model can be used to help design larger sensors suitable for use with full-sized blades. The preliminary results obtained with the sensor prototype indicate that the infusion and curing process of an epoxy resin can be followed with the chosen configuration on a scale of several decimeters. Further work will be devoted to studying the influence of the sensor geometry and the infusion parameters on the results obtained. Ultimately, the aim is to develop a larger-scale sensor able to monitor the flow and cure of large composite panels industrially.
Keywords: composite manufacture, dielectrometry, epoxy, resin infusion, wind turbine blades
Procedia PDF Downloads 165
228 Consumers and Voters’ Choice: Two Different Contexts with a Powerful Behavioural Parallel
Authors: Valentina Dolmova
Abstract:
What consumers choose to buy and whom voters select on election days are two questions that have captivated the interest of both academics and practitioners for many decades. The importance of understanding what influences the behavior of these groups, and whether or not we can predict or control it, fuels a steady stream of research in a range of fields. Looking only at the past 40 years, more than 70 thousand scientific papers have been published in each field, consumer behavior and political psychology, respectively. From marketing, economics, and the science of persuasion to political and cognitive psychology, all of these fields have remained heavily engaged. Ever-evolving technology, inevitable socio-cultural shifts, global economic conditions, and much more play an important role in choice equations regardless of context. On one hand, this makes the research efforts always relevant and needed. On the other, the relatively low number of cross-field collaborations, which seem to be picking up only in more recent years, leaves the existing findings isolated in framed bubbles. By performing systematic research across both areas of psychology and building a parallel between theories and factors of influence, however, we find not only that there is a definitive common ground between the behaviors of consumers and voters, but that we are moving towards a global model of choice. This means that the lines between contexts are fading, which has a direct implication for what we should focus on when predicting or navigating buyers' and voters' behavior. Internal and external factors in four main categories determine the choices we make as consumers and as voters. Together, personal, psychological, social, and cultural factors create a holistic framework through which all stimuli relating to a particular product or a political party get filtered. The analogy "consumer-voter" solidifies further.
Leading academics suggest that this fundamental parallel is the key to successfully managing political and consumer brands alike. However, we distinguish four additional key stimuli that relate to those factor categories ((1) opportunity costs; (2) the memory of the past; (3) recognisable figures/faces; and (4) conflict), arguing that the level of expertise a person has determines the prevalence of factors or specific stimuli. Our efforts take into account global trends such as the establishment of "celebrity politics" and the image of "ethically concerned consumer brands", which bridge the gap between contexts to an even greater extent. Scientists and practitioners are pushed to accept the transformative nature of both fields in social psychology. Existing blind spots, as well as the limited number of studies conducted outside American and European societies, open up space for more collaborative efforts in this highly demanding and lucrative field. A mixed-method study tests three main hypotheses: the first two focus on the irrelevance of context when comparing voting and consumer behavior, from the factors and the stimuli lenses respectively; the third on determining whether or not the level of expertise in any field skews the weight of the prism we are more likely to choose when evaluating options.
Keywords: buyers’ behaviour, decision-making, voters’ behaviour, social psychology
Procedia PDF Downloads 153
227 Nuclear Powered UAV for Surveillances and Aerial Photography
Authors: Rajasekar Elangopandian, Anand Shanmugam
Abstract:
Nowadays, unmanned aerial vehicles play a vital role in surveillance, and UAVs are also employed for aerial photography, disaster management, and earth observation. Nuclear power offers strong support for reducing maintenance and fuel requirements. Design considerations are very important for the UAV manufacturing industry and for research and development agencies. The resulting design features a pentagon-shaped fuselage with black rubber-coated paint in order to evade enemy radar and other detection. The pentagon-shaped fuselage has ample space to house the mini nuclear reactor inside, and the material is carbon-carbon fiber, designed with the software COMSOL and HyperMesh 14.2. The weight considerations thus produce a positive result for productivity. The walls of the fuselage are coated with lead and a protective shield. A double layer of W/Bi sheet is proposed for radiation protection in the energy range of 70 keV to 90 keV. The designed W/Bi sheet is only 0.14 mm thick and 36% lighter. The properties of the fillers were determined from zeta potential and particle size measurements. Radiation exposure can be attenuated in three ways: minimizing exposure time, maximizing distance from the radiation source, and shielding the whole vehicle. The onboard reactor is switched on when the UAV starts its cruise. The moderators and control rods can be inserted automatically by newly developed software. The heat generated by the reactor is used to run a mini turbine fixed inside the UAV, with a natural rubber composite shaft radiation shield. The cooling system operates in two modes, liquid-cooled and air-cooled. The liquid coolants for heat regeneration are ordinary water, liquid sodium, and helium, and the walls are made of regenerative and radiation-protective material. 
The other components, such as the camera and arms bay, are located at the bottom of the UAV and are specially made products in order to escape the radiation. They are coated with lead (Pb) and a natural rubber composite material. This technique provides long range and endurance for an eternal flight mission, until any parts or products need to be changed. This UAV has the special advantage of 'land on string', meaning it can land on an electric line to charge its automated electronics. The fuel is enriched uranium (< 5% U-235) containing hundreds of fuel pins. This technique provides eternal duty for surveillance and aerial photography. Landing the vehicle is easy to operate, and likewise the takeoff is easier than with any other present-day mechanism. This UAV offers immense technology for surveillance and for detecting and smashing targets.
Keywords: mini turbine, liquid coolant for heat regeneration, radiation shielding, eternal flight mission, landing on electric lines
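The shielding route mentioned in the abstract follows the standard exponential (Beer-Lambert) attenuation law for a narrow photon beam. A minimal sketch; the attenuation coefficient used here for the W/Bi sheet at ~80 keV is a hypothetical illustrative value, not measured data:

```python
import math

def transmitted_fraction(mu_per_cm: float, thickness_cm: float) -> float:
    """Fraction of a narrow photon beam passing a shield: I/I0 = exp(-mu * x)."""
    return math.exp(-mu_per_cm * thickness_cm)

# Hypothetical linear attenuation coefficient for the W/Bi composite at
# ~80 keV (illustrative only); x is the 0.14 mm sheet quoted in the abstract.
mu = 150.0   # 1/cm, assumed
x = 0.014    # cm
fraction = transmitted_fraction(mu, x)
```

Doubling the sheet thickness squares the transmitted fraction, which is why even thin high-Z layers attenuate strongly at these photon energies.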
Procedia PDF Downloads 408
226 Modeling the Demand for the Healthcare Services Using Data Analysis Techniques
Authors: Elizaveta S. Prokofyeva, Svetlana V. Maltseva, Roman D. Zaitsev
Abstract:
Rapidly evolving modern data analysis technologies in healthcare play a large role in understanding the operation of the system and its characteristics. Nowadays, one of the key tasks in urban healthcare is to optimize resource allocation. Thus, the application of data analysis in medical institutions to solve optimization problems determines the significance of this study. The purpose of this research was to establish the dependence between the indicators of the effectiveness of a medical institution and its resources. Hospital discharges by diagnosis, hospital days of in-patients, and in-patient average length of stay were selected as the performance indicators and the demand of the medical facility. Hospital beds by type of care, medical technology (magnetic resonance tomography, gamma cameras, angiographic complexes, and lithotripters), and physicians characterized the resource provision of medical institutions for the developed models. The data source for the research was an open database of the statistical service Eurostat. The choice of the source is due to the fact that the databases contain complete and open information necessary for research tasks in the field of public health. In addition, the statistical database has a user-friendly interface that allows analytical reports to be built quickly. The study provides information on 28 European countries for the period from 2007 to 2016. For all countries included in the study, with the most accurate and complete data for the period under review, predictive models were developed based on historical panel data. An attempt to improve the quality and interpretability of the models was made by cluster analysis of the investigated set of countries. The main idea was to assess the similarity of the joint behavior of the variables throughout the time period under consideration, to identify groups of similar countries and to construct separate regression models for them. 
Therefore, the original time series were used as the objects of clustering. The k-medoids partitioning algorithm was used. Sampled objects served as the centers of the clusters obtained, since determining a centroid when working with time series involves additional difficulties. The number of clusters was chosen using the silhouette coefficient. After the cluster analysis, it was possible to significantly improve the predictive power of the models: for example, in one of the clusters, the MAPE error was only 0.82%, which makes it possible to conclude that this forecast is highly reliable in the short term. The predicted values of the developed models have a relatively low level of error and can be used to make decisions on the provision of the hospital with medical personnel. The research displays strong dependencies between the demand for medical services and the modern medical equipment variable, which highlights the importance of the technological component for the successful development of a medical facility. Currently, data analysis has huge potential to significantly improve health services. Medical institutions that are the first to introduce these technologies will certainly have a competitive advantage.
Keywords: data analysis, demand modeling, healthcare, medical facilities
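The cluster-then-forecast pipeline described above can be sketched as follows. This is a toy illustration with made-up series: it uses an exhaustive medoid search (feasible only for tiny inputs, whereas a study at this scale would use a heuristic such as PAM) together with the MAPE metric quoted in the abstract:

```python
from itertools import combinations
import numpy as np

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    a = np.asarray(actual, dtype=float)
    p = np.asarray(predicted, dtype=float)
    return float(np.mean(np.abs((a - p) / a)) * 100.0)

def k_medoids_exact(dist, k):
    """Exhaustive k-medoids: choose the medoid set minimising the total
    distance of every object to its nearest medoid. Exact but only
    feasible for tiny inputs; PAM-style heuristics scale further."""
    n = dist.shape[0]
    best = min(combinations(range(n), k),
               key=lambda m: dist[:, list(m)].min(axis=1).sum())
    labels = np.argmin(dist[:, list(best)], axis=1)
    return np.array(best), labels

# Toy "country" time series: two similar trajectories and one outlier.
series = np.array([
    [1.00, 1.10, 1.20, 1.30],
    [1.00, 1.10, 1.25, 1.35],
    [5.00, 4.80, 4.60, 4.40],
])
dist = np.linalg.norm(series[:, None, :] - series[None, :, :], axis=2)
medoids, labels = k_medoids_exact(dist, k=2)
```

The two similar series share a cluster while the outlier stands apart; a separate regression model per cluster would then be fit and scored with `mape()`.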
Procedia PDF Downloads 144
225 Improving a Stagnant River Reach Water Quality by Combining Jet Water Flow and Ultrasonic Irradiation
Authors: A. K. Tekile, I. L. Kim, J. Y. Lee
Abstract:
Human activities put freshwater quality at risk, mainly due to the expansion of agriculture and industry, damming, diversion, and the discharge of inadequately treated wastewater. Rapid human population growth and climate change have escalated the problem. External controls on point and non-point pollution sources are the long-term solution for managing water quality. For a holistic approach, these mechanisms should be coupled with in-water control strategies. The available in-lake or in-river methods are either costly or have adverse effects on the ecological system, so the search for an alternative, effective solution with a reasonable balance is still ongoing. This study aimed at physical and chemical water quality improvement in a stagnant reach of the Yeo-cheon River (Korea), which has recently shown signs of water quality problems such as scum formation and fish deaths. The river water quality was monitored for three months, operating only the water flow generator for the first two weeks and then coupling the ultrasonic irradiation device to the flow unit for the remaining duration of the experiment. In addition to assessing the water quality improvement, the correlation among the parameters was analyzed to explain the contribution of ultrasonication. Generally, the combined strategy showed a localized improvement of water quality in terms of dissolved oxygen, chlorophyll-a, and dissolved reactive phosphate. At locations under limited influence of the system's operation, chlorophyll-a increased greatly, but within 25 m of the operation the low initial value was maintained. The inverse correlation coefficient between dissolved oxygen and chlorophyll-a decreased from 0.51 to 0.37 when the ultrasonic irradiation unit was used with the flow, showing that the ultrasonic treatment reduced the chlorophyll-a concentration and inhibited photosynthesis. 
The relationship between dissolved oxygen and reactive phosphate also indicated that the influence of ultrasonication on the reactive phosphate concentration was greater than that of flow. Even though flow increased turbidity by suspending sediments, the ultrasonic waves canceled out the effect through the agglomeration of suspended particles and their subsequent settling out. There has also been variation of interaction in the water column, as the decrease of pH and dissolved oxygen from surface to bottom played a role in phosphorus release into the water column. The variation of nitrogen and dissolved organic carbon concentrations showed a mixed trend, probably due to the complex chemical reactions following the operation. Besides, the intensive rainfall and strong wind around the end of the field trial had an apparent impact on the result. The combined effect of water flow and ultrasonic irradiation was a cumulative water quality improvement, and it maintained the dissolved oxygen and chlorophyll-a requirements of the river for healthy ecological interaction. However, overall improvement of water quality is not guaranteed, as assessing the effectiveness of ultrasonic technology requires long-term monitoring of water quality before, during, and after treatment. Although the short duration of the study limited the realization of nutrient patterns, the use of ultrasound at field scale to improve water quality is promising.
Keywords: stagnant, ultrasonic irradiation, water flow, water quality
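The correlation analysis referred to above is a standard Pearson coefficient between paired measurement series. A minimal sketch with made-up dissolved oxygen and chlorophyll-a readings (illustrative values only, not the study's data):

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two measurement series."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xm, ym = x - x.mean(), y - y.mean()
    return float((xm * ym).sum() / np.sqrt((xm * xm).sum() * (ym * ym).sum()))

# Illustrative (made-up) paired readings at one station: dissolved
# oxygen (mg/L) and chlorophyll-a (ug/L).
do_mg_l = [8.2, 7.9, 7.4, 7.0, 6.8]
chl_ug_l = [12.0, 14.5, 17.0, 20.5, 22.0]
r = pearson_r(do_mg_l, chl_ug_l)   # strongly negative for these values
```

A weakening of this (negative) coefficient, as reported from 0.51 to 0.37 in magnitude, is how the contribution of ultrasonication was inferred.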
Procedia PDF Downloads 192
224 Spin Rate Decaying Law of Projectile with Hemispherical Head in Exterior Trajectory
Authors: Quan Wen, Tianxiao Chang, Shaolu Shi, Yushi Wang, Guangyu Wang
Abstract:
As part of the working environment of the fuze, the spin rate decay law of a projectile in exterior trajectory is of great value in the design of rotation-count fixed-distance fuzes. It is also significant for devices that simulate the fuze's exterior ballistic environment, for the flight stability and dispersion accuracy of gun projectiles, and for the opening and scattering design of submunitions and illuminating cartridges. Besides, the self-destruction mechanism of the fuze in small-caliber projectiles often works by utilizing the attenuation of centrifugal force. In the theory of projectile aerodynamics and fuze design, there are many formulas describing the change of projectile angular velocity in exterior ballistics, such as the Roggla formula, exponential function formulas, and power function formulas. However, these formulas are mostly semi-empirical, owing to the poor test conditions and insufficient test data at the time they were derived. They are difficult to apply to the design requirements of modern fuzes because they are not accurate enough and have a narrow range of application. In order to provide more accurate ballistic environment parameters for the design of a hemispherical-head projectile fuze, the projectile's spin rate decay law in exterior trajectory under the effect of air resistance was studied. In the analysis, the projectile shape was simplified into a hemispherical head, a cylindrical part, a rotating band part, and an anti-truncated conical tail. 
The main assumptions are as follows: a) the shape and mass are symmetrical about the longitudinal axis; b) there is a smooth transition between the hemispherical head and the cylindrical part; c) the air flow on the outer surface is treated as flat-plate flow over the same area as the developed outer surface of the projectile, with a turbulent boundary layer; d) the polar damping moment attributable to the wrench hole and rifling marks on the projectile is not considered; e) the grooves cut by the rifling into the rotating band are uniform, smooth, and regular. The impacts of the four parts on the aerodynamic moment of projectile rotation were obtained from aerodynamic theory. The surface friction stress of the projectile, the polar damping moment formed by the head, and the surface friction moments formed by the cylindrical part, the rotating band, and the anti-truncated conical tail were obtained by mathematical derivation. After that, the mathematical model of spin rate attenuation was established. Over the whole trajectory at the maximum range angle (38°), the absolute error between the polar damping torque coefficient obtained by simulation and the coefficient calculated by the mathematical model established in this paper is no more than 7%. Therefore, the credibility of the mathematical model was verified. The mathematical model can be described as a first-order nonlinear differential equation, which has no analytical solution. The solution can only be obtained numerically, by coupling the model with the projectile mass motion equations of exterior ballistics.
Keywords: ammunition engineering, fuze technology, spin rate, numerical simulation
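A first-order nonlinear spin decay equation of this kind is integrated numerically. A minimal sketch assuming a quadratic damping form dω/dt = -cω² (an illustrative model, not the paper's derived coefficient), advanced with a classical fourth-order Runge-Kutta step; this particular law has the closed form ω(t) = ω₀/(1 + cω₀t), which makes it a convenient check of the integrator:

```python
def rk4_step(f, t, w, h):
    """One classical fourth-order Runge-Kutta step for dw/dt = f(t, w)."""
    k1 = f(t, w)
    k2 = f(t + h / 2, w + h * k1 / 2)
    k3 = f(t + h / 2, w + h * k2 / 2)
    k4 = f(t + h, w + h * k3)
    return w + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

def spin_decay(omega0, c, t_end, h=0.01):
    """Integrate the assumed damping law d(omega)/dt = -c * omega**2."""
    f = lambda t, w: -c * w * w
    w, t = omega0, 0.0
    while t < t_end - 1e-12:
        w = rk4_step(f, t, w, h)
        t += h
    return w

# Illustrative numbers: 1000 rad/s initial spin, assumed damping c = 1e-4.
final_spin = spin_decay(1000.0, 1e-4, t_end=1.0)
```

In the paper's actual model the damping coefficient comes from the derived friction moments and no analytic solution exists, so exactly this kind of step-by-step integration, coupled with the mass motion equations, supplies the spin history.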
Procedia PDF Downloads 143
223 Developing Effective Strategies to Reduce HIV, AIDS and Sexually Transmitted Infections, Nakuru, Kenya
Authors: Brian Bacia, Esther Githaiga, Teresia Kabucho, Paul Moses Ndegwa, Lucy Gichohi
Abstract:
Purpose: The aim of the study is to ensure an appropriate mix of evidence-based prevention strategies geared towards the reduction of new HIV infections and the incidence of sexually transmitted illnesses. Background: In Nakuru County, more than 90% of all HIV-infected patients are adults and on a single-dose medication: one pill that contains a combination of several different HIV drugs. Nakuru town has been identified as the hardest hit by HIV/AIDS in the county according to the latest statistics from the County AIDS and STI group, with a prevalence rate of 5.7 percent, attributed to the high population and an active urban center. Method: Two key studies were carried out to provide evidence for the effectiveness of antiretroviral therapy (ART), when used optimally, in preventing sexual transmission of HIV. Discussions based on an examination and assessment of successes in planning, program implementation, and the ultimate impact of prevention and treatment were undertaken involving health managers, health workers, community health workers, and people living with HIV/AIDS between February and August 2021. Questionnaires were administered by a duo trained in ethical procedures at 15 HIV treatment clinics, targeting patients on ARVs and caregivers, on ARV prevention and treatment of pediatric HIV infection. Findings: Levels of AIDS awareness are extremely high. Advances in HIV treatment have led to an enhanced understanding of the virus, improved care of patients, and control of the spread of drug-resistant HIV. There has been a tremendous increase in the number of people living with HIV who have access to life-long antiretroviral drugs (ARVs), mostly generic medicines. Healthcare facilities providing treatment are stressed, challenging the administration of the drugs, which require a clinical setting. Some women find it difficult to take a daily pill, which reduces the effectiveness of the medicine. ART adherence can be strengthened largely through the use of innovative digital technology. 
The case management approach is useful in resource-limited settings. The county has made tremendous progress in reducing mother-to-child transmission through enhanced early antenatal care (ANC) attendance and mapping of pregnant women. Recommendations: Treatment reduces the risk of transmission to the child during pregnancy, labor, and delivery. Promote research on medicines through patient and community engagement. Reduce the risk of transmission through breastfeeding. Enhance testing strategies and strengthen health systems for sustainable HIV service delivery. A need exists for improved antenatal care and delivery by skilled birth attendants. Develop a comprehensive maternal reproductive health policy covering equitable, efficient, and effective delivery of services. Put referral systems in place.
Keywords: evidence-based prevention strategies, service delivery, human management, integrated approach
Procedia PDF Downloads 86
222 Treatment Process of Sludge from Leachate with an Activated Sludge System and Extended Aeration System
Authors: A. Chávez, A. Rodríguez, F. Pinzón
Abstract:
Society is concerned about the environmental, economic, and social impacts generated by solid waste disposal. These places of confinement, also known as landfills, are locations where problems of pollution and damage to human health are reduced. They are technically designed and operated using engineering principles: storing the residue in a small area, compacting it to reduce volume, and covering it with soil layers, thereby preventing problems from the liquid (leachate) and gases produced by the decomposition of organic matter. Despite planning and site selection for disposal, and monitoring and control of the selected processes, the dilemma of the leachate remains: its extreme concentration of pollutants devastates soil, flora, and fauna, an aggressive process requiring priority attention. One biological technology is the activated sludge system, used for influents with high pollutant loads, since it transforms biodegradable dissolved and particulate matter into CO2, H2O, and sludge; transforms suspended and non-settleable solids; removes nutrients such as nitrogen and phosphorus; and degrades heavy metals. The microorganisms that remove organic matter in these processes are generally facultative heterotrophic bacteria, forming heterogeneous populations. It is also possible to find unicellular fungi, algae, protozoa, and rotifers, which process the organic carbon source and oxygen, as well as the nitrogen and phosphorus that are vital for cell synthesis. The mixture of the substrate, in this case sludge leachate, molasses, and wastewater, is kept ventilated by mechanical aeration diffusers. The biological processes remove dissolved material (< 45 microns), generating biomass that is easily recovered by decantation. The design consists of an artificial support and aeration pumps, favoring the development of denitrifying microorganisms that use the oxygen (O) in nitrate, resulting in nitrogen (N) in the gas phase. 
Thus, negative effects of the presence of ammonia or phosphorus are avoided. Overall, the activated sludge system uses about 8 hours of hydraulic retention time, which does not meet the demand for nitrification, which occurs on average at an MLSS value of 3,000 mg/L. Extended aeration works with detention times greater than 24 hours, a ratio of organic load to biomass inventory under 0.1, and an average retention time (sludge age) of more than 8 days. This project developed a pilot system with sludge leachate from the Doña Juana landfill (RSDJ), located in Bogota, Colombia, where the leachate was subjected to an activated sludge and extended aeration process in a sequencing batch reactor (SBR), so that it could be discharged into water sources without causing ecological collapse. The system worked with a dwell time of 8 days and a 30 L capacity, removing more than 90% of BOD and COD from initial values of 1720 mg/L and 6500 mg/L, respectively. By promoting deliberate nitrification, the commercial use of diffused aeration systems for sludge leachate from landfills is expected to become possible.
Keywords: sludge, landfill, leachate, SBR
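The >90% removal figures follow the standard removal-efficiency formula. A minimal sketch; the effluent concentrations below are hypothetical illustrative values merely consistent with the reported removal, not measured results:

```python
def removal_efficiency(influent_mg_l: float, effluent_mg_l: float) -> float:
    """Percent removal of a pollutant across the treatment step:
    100 * (C_in - C_out) / C_in."""
    return 100.0 * (influent_mg_l - effluent_mg_l) / influent_mg_l

# Influent values from the abstract; the effluent concentrations are
# assumed for illustration.
bod_removal = removal_efficiency(1720.0, 150.0)   # about 91%
cod_removal = removal_efficiency(6500.0, 600.0)   # about 91%
```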
Procedia PDF Downloads 266
221 Using Business Simulations and Game-Based Learning for Enterprise Resource Planning Implementation Training
Authors: Carin Chuang, Kuan-Chou Chen
Abstract:
An Enterprise Resource Planning (ERP) system is an integrated information system that supports the seamless integration of all the business processes of a company. Implementing an ERP system can increase efficiency and decrease costs while helping improve productivity. Many organizations, including large, medium, and small-sized companies, have adopted ERP systems over the past decades. Although an ERP system can bring competitive advantages to organizations, the lack of a proper training approach for ERP implementation is still a major concern. Organizations understand the importance of ERP training in adequately preparing managers and users. The low return on investment for ERP training, however, makes it difficult for knowledge workers to transfer what is learned in training to their jobs in the workplace. Inadequate and inefficient ERP training limits the value realization and success of an ERP system. Hence the need for profound change and innovation in ERP training, both in industry workplaces and in Information Systems (IS) education in academia. An innovative ERP training approach can improve users' knowledge of business processes and hands-on skills in mastering an ERP system. It can also serve as educational material for IS students in universities. The purpose of the study is to examine the use of ERP simulation games via the ERPsim system to train IS students in learning ERP implementation. ERPsim is a business simulation game developed by the ERPsim Lab at HEC Montréal, and the game runs on a real-life SAP (Systems, Applications and Products) ERP system. The training uses the ERPsim system as the tool for Internet-based simulation games and is designed as online student competitions during class. The competitions involve student teams, with facilitation by the instructor, and put the students' business skills to the test via intensive simulation games on a real-world SAP ERP system. 
The teams run the full business cycle of a manufacturing company while interacting with suppliers, vendors, and customers by sending and receiving orders, delivering products, and completing the entire cash-to-cash cycle. To learn a range of business skills, each student needs to adopt an individual business role and make business decisions around the products and business processes. Based on the training experiences learned from rounds of business simulations, the findings show that learners face reduced risk in making mistakes, which helps them build self-confidence in problem-solving. In addition, learners' reflections on their mistakes help them identify the root causes of problems and further improve the efficiency of the training. ERP instructors teaching with the innovative approach report significant improvements in student evaluations, learner motivation, attendance, and engagement, as well as increased learner technology competency. The findings of the study can provide ERP instructors with guidelines for creating an effective learning environment and can be transferred to a variety of other educational fields in which trainers are migrating towards a more active learning approach.
Keywords: business simulations, ERP implementation training, ERPsim, game-based learning, instructional strategy, training innovation
Procedia PDF Downloads 139
220 Novel Numerical Technique for Dusty Plasma Dynamics (Yukawa Liquids): Microfluidic and Role of Heat Transport
Authors: Aamir Shahzad, Mao-Gang He
Abstract:
Dusty plasmas currently attract researchers' widespread interest. Over the last two decades, substantial efforts have been made by the scientific and technological community to investigate the transport properties, and their nonlinear behavior, of three-dimensional and two-dimensional nonideal complex (dusty plasma) liquids (NICDPLs). Different calculations have been made to sustain and utilize strongly coupled NICDPLs because of their remarkable scientific and industrial applications. Understanding the thermophysical properties of complex liquids under various conditions is of practical interest in science and technology. The determination of thermal conductivity is also a demanding question for thermophysical researchers; for several reasons, very few results are available for this significant property. Lack of information on the thermal conductivity of dense and complex liquids at the parameters relevant to industrial developments is a major barrier to quantitative knowledge of the heat flux flowing from one medium to another medium or surface. The exact numerical investigation of transport properties of complex liquids is a fundamental research task in thermophysics, as various transport data are closely related to the setup and confirmation of equations of state. Reliable knowledge of transport data is also important for the optimized design of processes and apparatus in various engineering and science fields (thermoelectric devices); in particular, the provision of precise data for the parameters of heat, mass, and momentum transport is required. One of the promising computational techniques, homogeneous nonequilibrium molecular dynamics (HNEMD) simulation, is overviewed here, with special emphasis on its application to transport problems of complex liquids. 
This work is particularly motivated by the first-time modification of the heat conduction problem, leading to a polynomial velocity and temperature profile algorithm for the investigation of transport properties and their nonlinear behaviors in NICDPLs. The aim of the proposed work is to implement a NEMD algorithm (Poiseuille flow) and to deepen the understanding of thermal conductivity behaviors in Yukawa liquids. The Yukawa system is equilibrated through the Gaussian thermostat in order to maintain a constant system temperature (canonical ensemble, NVT). The output steps will be developed between 3.0×10⁵/ωp and 1.5×10⁵/ωp simulation time steps for the computation of λ data. The HNEMD algorithm shows that the thermal conductivity depends on the plasma parameters, and the position of the minimum λmin shifts toward higher Γ with an increase in κ, as expected. The new investigations give more reliable simulated data for the plasma conductivity than earlier simulation data, generally differing from the earlier plasma λ0 by 2%-20%, depending on Γ and κ. The results obtained at the normalized force field are in satisfactory agreement with various earlier simulation results. This algorithm shows that the new technique provides more accurate results, with fast convergence and small size effects, over a wide range of plasma states.
Keywords: molecular dynamics simulation, thermal conductivity, nonideal complex plasma, Poiseuille flow
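For reference, the particles in a Yukawa liquid interact through the screened Coulomb (Yukawa) pair potential, with the state set by the coupling Γ and the screening parameter κ. A minimal sketch in reduced units, where distance is measured in Wigner-Seitz radii a and energy in units of Q²/(4πε₀a):

```python
import math

def yukawa_phi(r: float, kappa: float) -> float:
    """Screened Coulomb (Yukawa) pair potential in reduced units:
    phi(r) = exp(-kappa * r) / r, with r in units of the Wigner-Seitz
    radius a and energy in units of Q^2 / (4*pi*eps0*a)."""
    return math.exp(-kappa * r) / r

# Stronger screening (larger kappa) cuts the interaction off faster:
weakly_screened = yukawa_phi(2.0, kappa=1.0)
strongly_screened = yukawa_phi(2.0, kappa=3.0)
```

Setting κ = 0 recovers the bare Coulomb form, which is why the one-component plasma appears as the limiting case of Yukawa-liquid transport studies.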
Procedia PDF Downloads 273
219 Big Data and Health: An Australian Perspective Which Highlights the Importance of Data Linkage to Support Health Research at a National Level
Authors: James Semmens, James Boyd, Anna Ferrante, Katrina Spilsbury, Sean Randall, Adrian Brown
Abstract:
‘Big data’ is a relatively new concept that describes data so large and complex that it exceeds the storage or computing capacity of most systems to perform timely and accurate analyses. Health services generate large amounts of data from a wide variety of sources such as administrative records, electronic health records, health insurance claims, and even smart phone health applications. Health data is viewed in Australia and internationally as highly sensitive. Strict ethical requirements must be met for the use of health data to support health research. These requirements differ markedly from those imposed on data use from industry or other government sectors and may have the impact of reducing the capacity of health data to be incorporated into the real time demands of the Big Data environment. This ‘big data revolution’ is increasingly supported by national governments, who have invested significant funds into initiatives designed to develop and capitalize on big data and methods for data integration using record linkage. The benefits to health following research using linked administrative data are recognised internationally and by the Australian Government through the National Collaborative Research Infrastructure Strategy Roadmap, which outlined a multi-million dollar investment strategy to develop national record linkage capabilities. This led to the establishment of the Population Health Research Network (PHRN) to coordinate and champion this initiative. The purpose of the PHRN was to establish record linkage units in all Australian states, to support the implementation of secure data delivery and remote access laboratories for researchers, and to develop the Centre for Data Linkage for the linkage of national and cross-jurisdictional data. 
The Centre for Data Linkage has been established within Curtin University in Western Australia; it provides the record linkage infrastructure essential for large-scale, cross-jurisdictional linkage of health-related data in Australia and uses a best-practice 'separation principle' to support data privacy and security. Privacy-preserving record linkage technology is also being developed to link records without the use of names, to overcome important legal and privacy constraints. This paper will present the findings of the first 'Proof of Concept' project selected to demonstrate the effectiveness of increased record linkage capacity in supporting nationally significant health research. This project explored how cross-jurisdictional linkage can inform the nature and extent of cross-border hospital use and hospital-related deaths. The technical challenges associated with national record linkage, and the extent of cross-border population movements, were explored as part of this pioneering research project. Access to person-level data linked across jurisdictions identified geographical hot spots of cross-border hospital use and hospital-related deaths in Australia. This has implications for the planning of health service delivery and for longitudinal follow-up studies, particularly those involving mobile populations.
Keywords: data integration, data linkage, health planning, health services research
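One widely used technique for linking records without exchanging clear-text names is Bloom-filter encoding of name n-grams, compared with a Dice similarity. A toy sketch; the filter size m, hash count k, and example names are illustrative assumptions, not the actual protocol used by the linkage units described above:

```python
import hashlib

def bigrams(name: str) -> set:
    """Character bigrams of a padded, lower-cased name."""
    s = f"_{name.lower()}_"
    return {s[i:i + 2] for i in range(len(s) - 1)}

def bloom_encode(name: str, m: int = 256, k: int = 4) -> int:
    """Map a name's bigrams into an m-bit Bloom filter via double hashing."""
    bits = 0
    for g in bigrams(name):
        h1 = int(hashlib.sha1(g.encode()).hexdigest(), 16)
        h2 = int(hashlib.md5(g.encode()).hexdigest(), 16)
        for i in range(k):
            bits |= 1 << ((h1 + i * h2) % m)
    return bits

def dice(a: int, b: int) -> float:
    """Dice similarity of two bit vectors: 2|A & B| / (|A| + |B|)."""
    inter = bin(a & b).count("1")
    return 2.0 * inter / (bin(a).count("1") + bin(b).count("1"))

# Two spelling variants of a surname stay similar after encoding, while an
# unrelated surname does not, so linkage can proceed on the encodings alone.
sim_variant = dice(bloom_encode("smith"), bloom_encode("smyth"))
sim_unrelated = dice(bloom_encode("smith"), bloom_encode("jones"))
```

Because only the bit vectors leave each data custodian, approximate matching on typographical variants remains possible while the names themselves are never shared.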
Procedia PDF Downloads 215
218 Application of Infrared Thermal Imaging, Eye Tracking and Behavioral Analysis for Deception Detection
Authors: Petra Hypšová, Martin Seitl
Abstract:
One of the challenges of forensic psychology is detecting deception during a face-to-face interview. In addition to the classical approaches of monitoring the utterance and its components, detection is also sought by observing the behavioral and physiological changes that result from the increased emotional and cognitive load caused by producing distorted information. Typical markers are changes in facial temperature, eye movements and their fixation, pupil dilation, emotional micro-expressions, and heart rate and its variability. Expanding technological capabilities have opened the way to detecting these psychophysiological changes and behavioral manifestations through non-contact technologies that do not interfere with face-to-face interaction. Non-contact deception detection methodology is still in development, and there is a lack of studies that combine multiple non-contact technologies to investigate their accuracy, as well as studies showing how different types of lies told to different interviewers affect physiological and behavioral changes. The main objective of this study is to apply specific non-contact technologies for deception detection. A further objective is to investigate scenarios in which non-contact deception detection is possible. A series of psychophysiological experiments using infrared thermal imaging, eye tracking, and behavioral analysis with FaceReader 9.0 software was used to achieve these goals. In the laboratory experiment, 16 adults (12 women, 4 men) between 18 and 35 years of age (SD = 4.42) were instructed to produce alternating prepared and spontaneous truths and lies. The baseline of each proband was also measured, and its results were compared to the experimental conditions. Because the personality of the examiner (particularly gender and facial appearance) to whom the subject is lying can influence physiological and behavioral changes, the experiment included four different interviewers. 
The interviewer was represented by a photograph of a face that met the required parameters in terms of gender and facial appearance (i.e., interviewer likability/antipathy) to follow standardized procedures. The subject provided all information to the simulated interviewer. During follow-up analyses, facial temperature (main ROIs: forehead, cheeks, tip of the nose, chin, and corners of the eyes), heart rate, emotional expression, intensity and fixation of eye movements, and pupil dilation were observed. The results showed that the variables studied varied with respect to the production of prepared truths and lies versus the production of spontaneous truths and lies, as well as the variability of the simulated interviewer. The results also supported the assumption of variability in physiological and behavioral values between the subject's resting state, the so-called baseline, and the production of prepared and spontaneous truths and lies. A series of psychophysiological experiments provided evidence of variability in the areas of interest in the production of truths and lies to different interviewers. The combination of technologies used also led to a comprehensive assessment of the physiological and behavioral changes associated with false and true statements. The study presented here opens the space for further research in the field of lie detection with non-contact technologies.
Keywords: emotional expression decoding, eye-tracking, functional infrared thermal imaging, non-contact deception detection, psychophysiological experiment
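The baseline-versus-condition comparison described in the abstract can be sketched as a paired test over per-subject ROI measurements. This is an illustrative sketch only: the variable names, the synthetic temperature values, and the choice of a paired t-test are assumptions for demonstration, not the study's actual data or analysis pipeline.

```python
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(0)

# Illustrative per-subject mean temperatures for one ROI (nose tip, deg C):
# 16 subjects, measured at rest (baseline) and while producing spontaneous lies.
baseline = rng.normal(loc=34.0, scale=0.4, size=16)
# Assume a small cooling effect under cognitive load (synthetic, for illustration).
lie_condition = baseline - rng.normal(loc=0.3, scale=0.1, size=16)

# Paired t-test: does the lie condition differ from each subject's own baseline?
t_stat, p_value = ttest_rel(baseline, lie_condition)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

Pairing each subject with their own baseline, as the abstract describes, controls for between-subject differences in resting physiology; the same structure applies to the other ROIs and to heart rate.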
Procedia PDF Downloads 98
217 “Laws Drifting Off While Artificial Intelligence Thriving” – A Comparative Study with Special Reference to Computer Science and Information Technology
Authors: Amarendar Reddy Addula
Abstract:
Definition of Artificial Intelligence: Artificial intelligence is the simulation of human intelligence processes by machines, especially computer systems. Specific applications of AI include expert systems, natural language processing, speech recognition, and machine vision. Artificial Intelligence (AI) is a new medium for digital business, according to a new report by Gartner. The last 10 years represent a period of advancement in AI's development, spurred by a confluence of factors, including the rise of big data, advancements in compute infrastructure, new machine learning techniques, the emergence of cloud computing, and the vibrant open-source ecosystem. Extending AI to a broader set of use cases and users is gaining popularity because it improves AI's versatility, effectiveness, and adaptability. Edge AI will enable digital moments by employing AI for real-time analytics closer to data sources. Gartner predicts that by 2025, more than 50% of all data analysis by deep neural networks will occur at the edge, up from less than 10% in 2021. Responsible AI is an umbrella term for making suitable business and ethical choices when adopting AI. It requires considering business and societal value, risk, trust, transparency, fairness, bias mitigation, explainability, accountability, safety, privacy, and regulatory compliance. Responsible AI is increasingly significant amid growing regulatory oversight, consumer expectations, and rising sustainability goals. Generative AI is the use of AI to generate new artifacts and produce innovative products. To date, generative AI efforts have concentrated on creating media content such as photorealistic images of people and objects, but it can also be used for code generation, creating synthetic tabular data, and designing pharmaceuticals and materials with specific properties. AI is the subject of a wide-ranging debate in which there is a growing concern about its ethical and legal aspects. 
Frequently, the two are mixed up and confused despite being different issues and areas of knowledge. The ethical debate raises two main problems: the first, conceptual, relates to the idea and content of ethics; the second, functional, concerns its relationship with the law. Both establish models of social behavior, but they are different in scope and nature. The juridical analysis is grounded in a non-formalistic scientific methodology. This means that it is essential to consider the nature and characteristics of AI as a preliminary step toward the description of its legal paradigm. In this regard, there are two main issues: the relationship between artificial and human intelligence, and the question of the unitary or diverse nature of AI. From that theoretical and practical base, the study of the legal system is carried out by examining its foundations, the governance model, and the regulatory bases. According to this analysis, throughout the work and in the conclusions, International Law is identified as the primary legal framework for the regulation of AI.
Keywords: artificial intelligence, ethics & human rights issues, laws, international laws
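One generative-AI application named in the abstract, creating synthetic tabular data, can be illustrated with a deliberately minimal sketch: fit a mean vector and covariance matrix to a real numeric table and sample new rows from that multivariate Gaussian. The function name and data here are hypothetical; real synthetic-data generators (GANs, diffusion models, etc.) are far richer than this toy.

```python
import numpy as np

def synthesize_tabular(data: np.ndarray, n_rows: int, seed: int = 0) -> np.ndarray:
    """Toy synthetic-tabular-data generator: estimate the column means and the
    covariance matrix of the real table, then sample fresh rows from the
    resulting multivariate Gaussian."""
    rng = np.random.default_rng(seed)
    mean = data.mean(axis=0)
    cov = np.cov(data, rowvar=False)
    return rng.multivariate_normal(mean, cov, size=n_rows)

# Illustrative "real" table: 200 rows, 3 numeric columns.
rng = np.random.default_rng(1)
real = rng.normal(loc=[10.0, 5.0, 0.0], scale=[2.0, 1.0, 0.5], size=(200, 3))
synthetic = synthesize_tabular(real, n_rows=500)
print(synthetic.shape)  # (500, 3)
```

The sampled rows preserve the original column means and pairwise correlations without reproducing any original record, which is the basic privacy motivation for synthetic tabular data.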
Procedia PDF Downloads 93