Search results for: rice fields
581 Methodology for the Integration of Object Identification Processes in Handling and Logistic Systems
Authors: L. Kiefer, C. Richter, G. Reinhart
Abstract:
The rising complexity in production systems, driven by an increasing number of variants up to customer-innovated products, leads to requirements that hierarchical control systems are not able to fulfil. Therefore, factory planners can install autonomous manufacturing systems. The fundamental requirement for autonomous control is the identification of objects within production systems. This approach focuses on attribute-based identification to avoid per-unit identification costs. Instead of using an identification mark (ID) such as a radio frequency identification (RFID) tag, an object type is identified directly by its attributes. To facilitate this, it is recommended to integrate the identification and the corresponding sensors within handling processes, which connect all manufacturing processes and therefore ensure a high identification rate and reduce blind spots. The presented methodology reduces the individual effort of integrating identification processes into handling systems. First, suitable object attributes and sensor systems for object identification in a production environment are defined. By categorising these sensor systems as well as handling systems, it is possible to match them universally within a compatibility matrix. Based on that compatibility, further requirements such as identification time are analysed, which decide whether a combination of handling and sensor system is well suited for parallel handling and identification within an autonomous control. By analysing a list of more than a thousand possible attributes, first investigations have shown that five main characteristics (weight, form, colour, and amount and position of sub-attributes such as drillings) are sufficient for an integrable identification. This knowledge limits the variety of identification systems and leads to a manageable complexity within the selection process. Besides the procedure, several tools, for example a sensor pool, are presented.
These tools include the generated specific expert knowledge and simplify the selection. The primary tool is a pool of preconfigured identification processes depending on the chosen combination of sensor and handling device. By following the defined procedure and using the created tools, even laypeople from other scientific fields can choose an appropriate combination of handling devices and sensors that enables parallel handling and identification.
Keywords: agent systems, autonomous control, handling systems, identification
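The attribute-based selection idea above can be sketched in a few lines: an object type is inferred from measured attributes (weight, colour, number of drillings) rather than from an ID tag, and a compatibility matrix records which sensor suits which handling device. The catalogue entries, sensor and handling names, and tolerances below are illustrative assumptions, not the authors' actual tooling.

```python
# Illustrative sketch (not the authors' implementation): an object type is
# inferred from measured attributes instead of reading an ID tag, and a
# compatibility matrix records which sensor suits which handling device.

# Hypothetical type catalogue: (weight in g, colour, number of drillings)
CATALOGUE = {
    "bracket_A": (120.0, "silver", 2),
    "bracket_B": (120.0, "silver", 4),
    "housing_C": (340.0, "black", 2),
}

# Hypothetical compatibility matrix between handling and sensor systems
COMPATIBILITY = {
    ("gripper", "force_sensor"): True,   # weight measurable while gripping
    ("gripper", "camera"): True,         # colour/form visible during the pick
    ("conveyor", "force_sensor"): False, # no defined contact point for weighing
    ("conveyor", "camera"): True,
}

def identify(weight, colour, drillings, weight_tol=5.0):
    """Return the catalogue type matching the measured attributes, or None."""
    for name, (w, c, d) in CATALOGUE.items():
        if abs(weight - w) <= weight_tol and colour == c and drillings == d:
            return name
    return None

print(identify(121.5, "silver", 4))                 # bracket_B
print(COMPATIBILITY[("conveyor", "force_sensor")])  # False
```

In a real system the tolerance per attribute would come from the sensor's measurement uncertainty, which is exactly what the compatibility analysis in the abstract is meant to decide.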
Procedia PDF Downloads 177
580 Everyday Interactions among Imprisoned Sex Offenders: A Qualitative Study within the 'Due Palazzi' Prison in Padua
Authors: Matteo Mazzucato, Elena Faccio, Antonio Iudici
Abstract:
Prison is a social reality constructed through everyday interactions between an inmate, other social actors (cellmates, prison officers, educationalists, psychologists, or other detainees) and the external world, which participates in this complex construction through social discourses on prison reality and its problems. Being a detainee means performing a self that deals with processes of stereotypization, the attribution of a social role, and prejudices assigned by various interlocutors depending on the crime one has been convicted of. Among all inmates, sex offenders are the ones most at risk of being socially condemned beyond their legal sentence, since they have committed one of the most hated and disapproved of crimes. In this regard, prison has to be considered a critical context onto which all community expectations and beliefs converge: for common sense, rapists and child molesters are dangerous people who have to be stigmatized, punished, and isolated. Furthermore, other detainees share a code of conduct by which the 'sex offender' is placed at the lowest level of the prison's social hierarchy. The penitentiary administration, too, defines this kind of detainee as a 'vulnerable person to protect', while prison staff consider him a particular inmate who has to be treated and definitively changed. Considering all the complexities connected with being imprisoned as a sex offender, our research aimed at exploring how people convicted of sex crimes are called upon to manage all these hetero-narrations about themselves. With this goal set, textual data retrieved from this qualitative research show that sex offenders tend not to face the stigma assigned to them. Rather, they tend to minimize the storytelling about themselves and construct alternative biographies to be shared with other inmates.
Managing narrations about themselves in this way allows them to distance themselves from the threats they perceive in living together with other detainees, but it blocks sex offenders' re-signification of their offences during prison treatment. Given these results, prison administrations should develop activities in order to create fields of interaction between detainees where they can experience new versions of themselves that are usable even in external social situations. In this regard, it is important to re-consider prison as part of the community and sex offenders as members of it.
Keywords: interactions, qualitative research, prison reality, sex offender
Procedia PDF Downloads 220
579 Conductivity-Depth Inversion of Large Loop Transient Electromagnetic Sounding Data over Layered Earth Models
Authors: Ravi Ande, Mousumi Hazari
Abstract:
One of the common geophysical techniques for mapping subsurface geo-electrical structures, extensive hydro-geological research, and engineering and environmental geophysics applications is the use of time domain electromagnetic (TDEM)/transient electromagnetic (TEM) soundings. A large loop TEM system consists of a large transmitter loop for energising the ground and a small receiver loop or magnetometer for recording the transient voltage or magnetic field in the air or on the surface of the earth, with the receiver at the center of the loop or at any arbitrary point inside or outside the source loop. In general, one can acquire data using one of the following configurations with a large loop source: with the receiver at the center point of the loop (central-loop method), at an arbitrary in-loop point (in-loop method), coincident with the transmitter loop (coincident-loop method), or at an arbitrary offset point (offset-loop method). Because of the mathematical simplicity of the expressions for the EM fields, as compared to the in-loop and offset-loop systems, the central-loop system (for ground surveys) and the coincident-loop system (for ground as well as airborne surveys) have been developed and used extensively for the exploration of mineral and geothermal resources, for mapping groundwater contaminated by hazardous waste, and for estimating the thickness of permafrost layers. Because a proper analytical expression for the TEM response over a layered earth model for the large loop TEM system does not exist, the forward problem used in this inversion scheme is first formulated in the frequency domain and then transformed into the time domain using Fourier cosine or sine transforms. Using the EMLCLLER algorithm, the forward computation is initially carried out in the frequency domain.
As a result, the EMLCLLER-based forward calculation scheme in NLSTCI computes frequency-domain responses before converting them to the time domain using Fourier cosine and/or sine transforms.
Keywords: time domain electromagnetic (TDEM), TEM system, geoelectrical sounding structure, Fourier cosine
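The frequency-to-time conversion step described above can be illustrated numerically. The sketch below applies a Fourier cosine transform, f(t) = (2/pi) * integral of F(w)cos(wt) dw over w from 0 to infinity, to a stand-in frequency-domain response F(w) = 1/(1+w^2), whose exact time-domain counterpart is exp(-t); it is not an actual TEM response or the EMLCLLER code.

```python
import numpy as np

def cosine_transform(F, t, w_max=2000.0, n=400_001):
    """Numerical Fourier cosine transform (2/pi) * int_0^w_max F(w) cos(w t) dw,
    evaluated with the trapezoid rule on a uniform frequency grid."""
    w = np.linspace(0.0, w_max, n)
    y = F(w) * np.cos(w * t)
    dw = w[1] - w[0]
    return (2.0 / np.pi) * (y.sum() - 0.5 * (y[0] + y[-1])) * dw

# Stand-in frequency-domain response with a known time-domain counterpart:
# F(w) = 1/(1+w^2)  <->  f(t) = exp(-t)
F = lambda w: 1.0 / (1.0 + w ** 2)

for t in (0.5, 1.0, 2.0):
    print(f"t={t}: transform={cosine_transform(F, t):.4f}, exact={np.exp(-t):.4f}")
```

The truncation at w_max introduces an error on the order of 1/w_max here; a production scheme would instead use a tailored digital filter for the oscillatory kernel.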
Procedia PDF Downloads 92
578 Flood Risk Assessment for Agricultural Production in a Tropical River Delta Considering Climate Change
Authors: Chandranath Chatterjee, Amina Khatun, Bhabagrahi Sahoo
Abstract:
With the changing climate, precipitation events are intensifying in tropical river basins. Since these river basins are significantly influenced by the monsoonal rainfall pattern, critical impacts are observed on agricultural practices in the downstream river reaches. This study analyses the crop damage and associated flood risk, in terms of net benefit, in the paddy-dominated tropical Indian delta of the Mahanadi River. The Mahanadi River basin lies in the eastern part of the Indian sub-continent and is greatly affected by the southwest monsoon rainfall extending from June to September. This river delta is highly flood-prone and has suffered recurring high floods, especially after the 2000s. In this study, the lumped conceptual model, Nedbør Afstrømnings Model (NAM) from the suite of MIKE models, is used for rainfall-runoff modeling. The NAM model is laterally integrated with the MIKE11 Hydrodynamic (HD) model to route the runoff up to the head of the delta region. To obtain the precipitation-derived future projected discharges at the head of the delta, nine Global Climate Models (GCMs), namely BCC-CSM1.1(m), GFDL-CM3, GFDL-ESM2G, HadGEM2-AO, IPSL-CM5A-LR, IPSL-CM5A-MR, MIROC5, MIROC-ESM-CHEM, and NorESM1-M, available in the Coupled Model Intercomparison Project Phase 5 (CMIP5) archive, are considered. These nine GCMs were previously found to best capture the Indian Summer Monsoon rainfall. Based on the performance of the nine GCMs in reproducing the historical discharge pattern, three GCMs (HadGEM2-AO, IPSL-CM5A-MR, and MIROC-ESM-CHEM) are selected, a higher Taylor Skill Score being the selection criterion. Thereafter, the 10-year return period design flood is estimated using L-moments-based flood frequency analysis for the historical period and three future projected periods (2010-2039, 2040-2069 and 2070-2099) under Representative Concentration Pathways (RCP) 4.5 and 8.5.
A non-dimensional hydrograph analysis is performed to obtain the hydrographs for the historical/projected 10-year return period design floods. These hydrographs are forced into the calibrated and validated coupled 1D-2D hydrodynamic model, MIKE FLOOD, to simulate flood inundation in the delta region. Historical and projected flood risk is defined based on the flood inundation simulated by the MIKE FLOOD model and the inundation depth-damage-duration relationship of a normal rice variety cultivated in the river delta. In general, flood risk is expected to increase in all future projected time periods as compared to the historical episode. Further, in comparison to the 2010s (2010-2039), an increased flood risk in the 2040s (2040-2069) is shown by all three selected GCMs. However, the flood risk then declines in the 2070s as we move towards the end of the century (2070-2099). The methodology adopted herein for flood risk assessment is one of a kind and may be implemented in any river basin worldwide. The results obtained from this study can support future flood preparedness through suitable flood adaptation strategies.
Keywords: flood frequency analysis, flood risk, global climate models (GCMs), paddy cultivation
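The L-moments flood frequency step can be sketched as follows. A Gumbel (EV1) distribution and made-up annual peak discharges are assumed here for illustration, since the abstract does not state which distribution was actually fitted.

```python
import math

# Illustrative L-moments fit of a Gumbel (EV1) distribution to annual peak
# discharges, then the 10-year return period design flood. The data and the
# distribution choice are assumptions, not the study's actual inputs.

EULER_GAMMA = 0.5772156649

def sample_l_moments(x):
    """First two sample L-moments (lambda1, lambda2) of a data series."""
    x = sorted(x)
    n = len(x)
    b0 = sum(x) / n
    # b1 = (1/n) * sum over ranks of ((rank-1)/(n-1)) * x_(rank), ascending order
    b1 = sum((i / (n - 1)) * xi for i, xi in enumerate(x)) / n
    return b0, 2 * b1 - b0

def gumbel_quantile(l1, l2, T):
    """Design flood for return period T from Gumbel L-moment estimates."""
    alpha = l2 / math.log(2)          # scale parameter
    xi = l1 - EULER_GAMMA * alpha     # location parameter
    return xi - alpha * math.log(-math.log(1 - 1 / T))

# Hypothetical annual peak discharges (m^3/s)
peaks = [8200, 9400, 7100, 12800, 10150, 9900, 11200, 8700, 13500, 9100]
l1, l2 = sample_l_moments(peaks)
q10 = gumbel_quantile(l1, l2, 10)
print(f"10-year design flood ~ {q10:.0f} m^3/s")
```

The same two L-moment estimates would be computed separately for the historical series and for each GCM-projected series, giving one design flood per period and scenario.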
Procedia PDF Downloads 75
577 Healthcare-SignNet: Advanced Video Classification for Medical Sign Language Recognition Using CNN and RNN Models
Authors: Chithra A. V., Somoshree Datta, Sandeep Nithyanandan
Abstract:
Sign Language Recognition (SLR) is the process of interpreting and translating sign language into spoken or written language using technological systems. It involves recognizing the hand gestures, facial expressions, and body movements that make up sign language communication. The primary goal of SLR is to facilitate communication between hearing- and speech-impaired communities and those who do not understand sign language. Due to increased awareness and greater recognition of the rights and needs of the hearing- and speech-impaired community, sign language recognition has gained significant importance over the past 10 years. Technological advancements in the fields of Artificial Intelligence and Machine Learning have made it more practical and feasible to create accurate SLR systems. This paper presents a distinct approach to SLR by framing it as a video classification problem using Deep Learning (DL), whereby a combination of Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) is used. This research targets the integration of sign language recognition into healthcare settings, aiming to improve communication between medical professionals and patients with hearing impairments. The spatial features from each video frame are extracted using a CNN, which captures essential elements such as hand shapes, movements, and facial expressions. These features are then fed into an RNN that learns the temporal dependencies and patterns inherent in sign language sequences. The INCLUDE dataset has been enhanced with additional videos from the healthcare domain, and the model is evaluated on it. Our model achieves 91% accuracy, representing state-of-the-art performance in this domain. The results highlight the effectiveness of treating SLR as a video classification task with the CNN-RNN architecture.
This approach not only improves recognition accuracy but also offers a scalable solution for real-time SLR applications, significantly advancing the field of accessible communication technologies.
Keywords: sign language recognition, deep learning, convolutional neural network, recurrent neural network
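The pipeline shape described above (per-frame spatial feature extraction followed by a recurrent pass over the frame sequence) can be sketched minimally in NumPy. The mean-pooling "CNN" stand-in, the random weights, and all dimensions below are illustrative only, not the trained Healthcare-SignNet model.

```python
import numpy as np

rng = np.random.default_rng(0)

def frame_features(frame, patch=8):
    """Stand-in for the CNN: mean-pool the frame over patches to a feature vector."""
    h, w = frame.shape
    cropped = frame[: h - h % patch, : w - w % patch]
    pooled = cropped.reshape(h // patch, patch, w // patch, patch).mean(axis=(1, 3))
    return pooled.ravel()

def rnn_logits(features, W_h, W_x, W_out):
    """Stand-in for the RNN: a plain tanh recurrence over the frame sequence."""
    h = np.zeros(W_h.shape[0])
    for x in features:
        h = np.tanh(W_h @ h + W_x @ x)
    return W_out @ h                          # class logits from the final state

video = rng.random((16, 64, 64))              # 16 frames of 64x64 grayscale
feats = np.array([frame_features(f) for f in video])   # shape (16, 64)

hidden, n_classes = 32, 10                    # e.g. 10 hypothetical sign classes
W_h = rng.standard_normal((hidden, hidden)) * 0.1
W_x = rng.standard_normal((hidden, feats.shape[1])) * 0.1
W_out = rng.standard_normal((n_classes, hidden)) * 0.1

logits = rnn_logits(feats, W_h, W_x, W_out)
print(logits.shape)   # (10,)
```

In the actual system the pooling function would be a trained convolutional network and the recurrence an LSTM/GRU, but the data flow from frames to features to a per-video class score is the same.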
Procedia PDF Downloads 27
576 Nanotechnology in Construction as a Building Security
Authors: Hanan Fayez Hussein
Abstract:
Due to increasing environmental challenges and security problems in the world, such as global warming, storms, and terrorism, humans have discovered new technologies and new materials in order to program daily life. As providing physical and psychological security is one of the primary functions of architecture, a building must, in order to provide security, prevent unauthorized entry and harm to occupants and reduce the threat of attack by making buildings less attractive targets through new technologies such as nanotechnology, which has emerged as a major science and technology focus of the 21st century and will be the next industrial revolution. Nanotechnology is the control of the properties of matter; it deals with structures of a size of 100 nanometers or smaller in at least one dimension and has wide application in various fields. The construction and architecture sectors were among the first to be identified as promising application areas for nanotechnology. The advantages of using nanomaterials in construction are enormous, promising heightened building security by utilizing the strength of building materials to make our buildings more secure and to enable smart homes. Access barriers such as walls and windows could incorporate stronger materials benefiting from nano-reinforcement, utilizing nanotubes and nanocomposites as a protective cover. Carbon nanotubes, as one nanotechnology application, can be designed to be up to 250 times stronger than steel. Nano-enabled devices and materials offer both enhanced and, in some cases, completely new defence systems. In addition, adding small amounts of carbon nanoparticles to construction materials such as cement, concrete, wood, glass, gypsum, and steel can make these materials act as defence elements. This paper highlights the fact that nanotechnology can impact future global security and how a building's envelope can act as a defensive cover for the building and resist any threats that attack it.
It then focuses on the effect on construction materials: through nano-additives, concrete can obtain excellent mechanical, chemical, and physical properties with less material, acting as a precautionary shield for the building.
Keywords: nanomaterial, global warming, building security, smart homes
Procedia PDF Downloads 82
575 Investigation of Contact Pressure Distribution at Expanded Polystyrene Geofoam Interfaces Using Tactile Sensors
Authors: Chen Liu, Dawit Negussey
Abstract:
EPS (Expanded Polystyrene) geofoam, used as a lightweight material in geotechnical applications, is made of pre-expanded resin beads that form fused cellular micro-structures. The strength and deformation properties of geofoam blocks are determined by unconfined compression of small test samples between rigid loading plates. Applied loads are presumed to be supported uniformly over the entire mating end areas. Predictions of field performance on the basis of such laboratory tests widely over-estimate actual post-construction settlements and exaggerate predictions of long-term creep deformations. This investigation examined the development of contact pressures at a large number of discrete points, at low and high strain levels, for different densities of geofoam. The development of pressure patterns for fine and coarse interface material textures, as well as for molding-skin and hot-wire-cut geofoam surfaces, was examined. The lab testing showed that I-Scan tactile sensors are useful for detailed observation of contact pressures at a large number of discrete points simultaneously. At a low strain level (1%), the lower-density EPS block presents low variation in localized stress distribution compared to higher-density EPS. At a high strain level (10%), the dense geofoam reached the sensor cut-off limit. The imprint and pressure patterns for different interface textures can be distinguished with tactile sensing, and the pressure sensing system can be used in many fields with real-time pressure detection. The research findings provide a better understanding of EPS geofoam behavior, supporting improved design methods and performance prediction, and are anticipated to guide future improvements in the design and rapid construction of critical transportation infrastructure with geofoam in geotechnical applications.
Keywords: geofoam, pressure distribution, tactile pressure sensors, interface
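The kind of non-uniformity the tactile sensors reveal can be summarised with a simple statistic such as the coefficient of variation of the pressure map, comparing the uniform-support assumption against measured local variation. The grid size, nominal pressure, and noise level below are invented for illustration, not I-Scan data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 20x20 tactile pressure map (kPa) under a nominally uniform load
uniform = np.full((20, 20), 50.0)                        # idealized assumption
measured = 50.0 + rng.normal(0.0, 12.0, size=(20, 20))   # localized variation

def cov(p):
    """Coefficient of variation of a pressure map: std / mean."""
    return p.std() / p.mean()

print(f"assumed-uniform CoV: {cov(uniform):.3f}")
print(f"measured CoV:        {cov(measured):.3f}")
```

A near-zero coefficient of variation would support the uniform-support assumption used in standard unconfined compression tests; the larger measured value illustrates why field settlements can differ from laboratory predictions.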
Procedia PDF Downloads 173
574 Sustainability through Resilience: How Emergency Responders Cope with Stressors
Authors: Sophie Kroeling, Agnetha Schuchardt
Abstract:
Striving for sustainability brings many challenges for different fields of interest, e.g., security or health concerns. In Germany, civil protection is predominantly carried out by emergency responders who perform essential tasks of civil protection. Based on theoretical concepts from different psychological stress theories, this contribution focuses on the question of how the resilience of emergency responders can be improved. The goal is to identify resources and successful coping strategies that help to prevent and reduce negative outcomes during or after stressful events. The paper will present results from a qualitative analysis of semi-structured interviews with 20 emergency responders. These results provide insights into the complexity of coping processes (e.g., controlling the situation, downplaying perceived personal threats through humor) and show the diversity of stressors (such as the complexity of the disastrous situation, intrusive press and media, or lack of social support within the organization). Self-efficacy expectation was a very important resource for coping with stressful situations. The results served as a starting point for a quantitative survey (conducted in March 2017), the development of education and training tools for emergency responders, and the improvement of critical incident stress management processes. First results from the quantitative study with more than 700 participants show, e.g., that emergency responders use social coping within their private social networks and also within their aid organization, and that both are correlated with resilience. Moreover, missing information, bureaucratic problems, and social conflicts within the organization are events that the majority of participants considered very onerous. Further results from regression analysis will be presented.
The proposed paper will combine findings from the qualitative study with the quantitative results, illustrating figures and correlations with respective statements from the interviews. Finally, suggestions for improving emergency responders' resilience are given, and it is discussed how this can contribute to striving for civil security and, furthermore, sustainable development.
Keywords: civil security, emergency responders, stress, resilience, resources
Procedia PDF Downloads 144
573 A Readiness Framework for Digital Innovation in Education: The Context of Academics and Policymakers in Higher Institutions of Learning to Assess the Preparedness of Their Institutions to Adopt and Incorporate Digital Innovation
Authors: Lufungula Osembe
Abstract:
The field of education has witnessed advances in technology and digital transformation. Methods of teaching have undergone significant changes in recent years, affecting areas such as pedagogy, curriculum design, personalized teaching, gamification, data analytics, cloud-based learning applications, artificial intelligence tools, advanced plug-ins in LMSs, and the emergence of multimedia creation and design. Like other fields such as engineering, health, science, and technology, education has not been immune to the changes brought about by digital innovation. There is a need to examine the variables and elements that digital innovation brings to education and to develop a framework for higher institutions of learning to assess their readiness to create a viable environment for digital innovation to be successfully adopted. Given the potential benefits of digital innovation in education, it is essential to develop a framework that can assist academics and policymakers in higher institutions of learning to evaluate the effectiveness of adopting and adapting to the evolving landscape of digital innovation in education. The primary research question addressed in this study is to establish the preparedness of higher institutions of learning to adopt and adapt to this evolving landscape. This study follows the Design Science Research (DSR) paradigm, comprising problem awareness, suggestion, development, evaluation, and conclusion, to develop a framework for academics and policymakers in higher institutions of learning to evaluate the readiness of their institutions to adopt digital innovation in education.
One of the major contributions of this study will be the development of the framework for digital innovation in education. Given the various opportunities offered by digital innovation in recent years, a readiness framework will play a crucial role in guiding academics and policymakers in their quest to align with emerging technologies facilitated by digital innovation in education.
Keywords: digital innovation, DSR, education, opportunities, research
Procedia PDF Downloads 68
572 The Heritagisation of the Titanic Culture for Urban Regeneration Use: A Case Study of the Titanic Belfast
Authors: Yu Liang
Abstract:
The study of heritage in different contexts has been discussed over the past decades, and its relationship with other fields such as tourism, museums, and urban regeneration has also interested scholars. Government and policy attention has likewise been drawn to the use of heritage, through a 'heritagisation' process, to achieve certain goals, since with suitable planning the advantages appear in both economic development and social inclusion. In the case of Belfast, the city has been through tough times due to complicated ideological issues in the past; however, the transformation is evident in how Belfast's heritage is represented in tourism. Planners are willing to use this method to attract cultural tourists, investors, and residents, to revive the city, and to restore its confidence. One central topic is the establishment of Titanic Belfast, which explores the culture of the Titanic and the history of the shipbuilding industry in Belfast. Even though the cultural flagship brought economic and social benefits, not everyone agreed with the vision of relaunching a sunken ship or felt proud of it. The aim of this research is to clarify the concept of 'heritagisation', which can achieve certain goals in consolidating areas, increasing local pride and self-identity, and promoting tourism activities if well planned, and to discuss the preferences and the pros and cons of its practice with the Titanic culture in Belfast's regeneration process, especially the Titanic Belfast flagship project. Methodologically, a mixed approach incorporating qualitative interviews, observation, and secondary sources, with different perspectives and approaches, is adopted in this case study. The expected result would show that a great majority of outsiders and planners were pleased with the concept of Titanic Belfast's establishment and agreed that it attracts travel to Belfast.
Nevertheless, a number of locals still disagreed that the Titanic culture and the flagship would be representative of the city or would bring other advantages to them. In other words, some residents doubted, or were less likely to support, the project since they had been left out of the planning process. Hence, opinions are divided among 38 residents, various outsiders, and stakeholders, and their perspectives present an interesting task for sustainable research in the future.
Keywords: Belfast, heritagisation, Titanic, Titanic Belfast, urban regeneration
Procedia PDF Downloads 315
571 Computational Modelling of Epoxy-Graphene Composite Adhesive towards the Development of Cryosorption Pump
Authors: Ravi Verma
Abstract:
The cryosorption pump is the best solution to achieve a clean, vibration-free ultra-high vacuum. Furthermore, its operation is free from the influence of electric and magnetic fields. Due to these attributes, this pump is used in space simulation chambers to create ultra-high vacuum. The cryosorption pump comprises three parts: (a) a panel which is cooled with the help of a cryogen or cryocooler, (b) an adsorbent which is used to adsorb the gas molecules, and (c) an epoxy which holds the adsorbent and the panel together, thereby aiding heat transfer from adsorbent to panel. The performance of the cryosorption pump depends on the temperature of the adsorbent and hence on the thermal conductivity of the epoxy. Therefore, we have made an attempt to increase the thermal conductivity of the epoxy adhesive by mixing in nano-sized graphene filler particles. The thermal conductivity of the epoxy-graphene composite adhesive is measured with the help of an indigenously developed experimental setup in the temperature range from 4.5 K to 7 K, which is generally the operating temperature range of a cryosorption pump for efficient pumping of hydrogen and helium gas. In this article, we present the experimental results for the epoxy-graphene composite adhesive in the temperature range from 4.5 K to 7 K. We also propose an analytical heat conduction model to find the thermal conductivity of the composite. In this case, the filler particles, such as graphene, are randomly distributed in a base matrix of epoxy. The developed model considers the complete spatial random distribution of filler particles, described by a binomial distribution. The results obtained by the model have been compared with the experimental results as well as with other established models. The developed model is able to predict the thermal conductivity in both the isotropic and anisotropic regions over the required temperature range from 4.5 K to 7 K.
Due to the non-empirical nature of the proposed model, it will be useful for predicting other properties of composite materials involving a filler in a base matrix. The present studies will aid the understanding of low-temperature heat transfer, which in turn will be useful for the development of high-performance cryosorption pumps.
Keywords: composite adhesive, computational modelling, cryosorption pump, thermal conductivity
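A crude numerical illustration of the spatially random (binomially placed) filler idea: on a grid of cells that are filler with probability equal to the volume fraction, the effective conductivity of the mixture must lie between the series and parallel mixing bounds. The conductivity values, volume fraction, and the geometric-mean estimate below are illustrative assumptions, not the paper's analytical model or its measured data.

```python
import numpy as np

rng = np.random.default_rng(2)

k_epoxy, k_graphene = 0.05, 50.0   # W/(m K); illustrative values only
phi = 0.10                          # assumed filler volume fraction

# Binomial (spatially random) filler placement on a grid of cells:
# each cell is graphene with probability phi, epoxy otherwise.
cells = rng.random((100, 100)) < phi
k_map = np.where(cells, k_graphene, k_epoxy)

k_parallel = k_map.mean()                    # upper (rule-of-mixtures) bound
k_series = 1.0 / (1.0 / k_map).mean()        # lower (inverse mixing) bound
k_geometric = np.exp(np.log(k_map).mean())   # a simple intermediate estimate

print(k_series, k_geometric, k_parallel)
```

Any physically sensible effective-medium model, including the binomial-distribution model proposed in the abstract, should return a value between the two bounds computed here.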
Procedia PDF Downloads 89
570 Mathematics Bridging Theory and Applications for a Data-Driven World
Authors: Zahid Ullah, Atlas Khan
Abstract:
In today's data-driven world, the role of mathematics in bridging the gap between theory and applications is becoming increasingly vital. This abstract highlights the significance of mathematics as a powerful tool for analyzing, interpreting, and extracting meaningful insights from vast amounts of data. By integrating mathematical principles with real-world applications, researchers can unlock the full potential of data-driven decision-making processes. This abstract delves into the various ways mathematics acts as a bridge connecting theoretical frameworks to practical applications. It explores the utilization of mathematical models, algorithms, and statistical techniques to uncover hidden patterns, trends, and correlations within complex datasets. Furthermore, it investigates the role of mathematics in enhancing predictive modeling, optimization, and risk assessment methodologies for improved decision-making in diverse fields such as finance, healthcare, engineering, and social sciences. The abstract also emphasizes the need for interdisciplinary collaboration between mathematicians, statisticians, computer scientists, and domain experts to tackle the challenges posed by the data-driven landscape. By fostering synergies between these disciplines, novel approaches can be developed to address complex problems and make data-driven insights accessible and actionable. Moreover, this abstract underscores the importance of robust mathematical foundations for ensuring the reliability and validity of data analysis. Rigorous mathematical frameworks not only provide a solid basis for understanding and interpreting results but also contribute to the development of innovative methodologies and techniques. In summary, this abstract advocates for the pivotal role of mathematics in bridging theory and applications in a data-driven world. 
By harnessing mathematical principles, researchers can unlock the transformative potential of data analysis, paving the way for evidence-based decision-making, optimized processes, and innovative solutions to the challenges of our rapidly evolving society.
Keywords: mathematics, bridging theory and applications, data-driven world, mathematical models
Procedia PDF Downloads 75
569 Preparation and Properties of Polylactic Acid/MDI Modified Thermoplastic Starch Blends
Authors: Sukhila Krishnan, Smita Mohanty, Sanjay K. Nayak
Abstract:
Polylactide (PLA) and thermoplastic starch (TPS) are the most promising bio-based materials presently available on the market. Polylactic acid is one of the versatile biodegradable polyester showing wide range of applications in various fields and starch is a biopolymer which is renewable, cheap as well as extensively available. The usual increase in the cost of petroleum-based commodities in the next decades opens bright future for these materials. Their biodegradability and compostability was an added advantage in applications that are difficult to recycle. Currently, thermoplastic starch (TPS) has been used as a substitute for synthetic plastic in several commercial products. But, TPS shows some limitations mainly due to its brittle and hydrophilic nature, which has to be resolved to widen its application.The objective of the work we report here was to initiate chemical modifications on TPS and to build up a process to control its chemical structure using a solution process which can reduce its water sensitive properties and then blended it with PLA to improve compatibility between PLA and TPS. The method involves in cleavage of starch amylose and amylopectin chain backbone to plasticize with glycerol and water in batch mixer and then the prepared TPS was reacted in solution with diisocyanates i.e, 4,4'-Methylenediphenyl Diisocyanate (MDI).This diisocyanate was used before with great success for the chemical modification of TPS surface. The method utilized here will form an urethane-linkages between reactive isocyanate groups (–NCO) and hydroxyl groups (-OH) of starch as well as of glycerol. New polymer synthesised shows a reduced crystallinity, less hydrophilic and enhanced compatibility with other polymers. The TPS was prepared by Haake Rheomix 600 batch mixer with roller rotors operating at 50 rpm. The produced material is then refluxed for 5hrs with MDI in toluene with constant stirring. 
Finally, the modified TPS was melt blended with PLA in different compositions. The blends obtained show improved mechanical properties. The produced materials were characterized by Fourier transform infrared spectroscopy (FTIR), DSC, X-ray diffraction, and mechanical tests. Keywords: polylactic acid, thermoplastic starch, methylenediphenyl diisocyanate, polylactide (PLA)
Procedia PDF Downloads 384
568 Detection of Mustard Traces in Food by an Official Food Safety Laboratory
Authors: Clara Tramuta, Lucia Decastelli, Elisa Barcucci, Sandra Fragassi, Samantha Lupi, Enrico Arletti, Melissa Bizzarri, Daniela Manila Bianchi
Abstract:
Introduction: Food allergies affect, in the Western world, 2% of adults and up to 8% of children. The protection of allergic consumers is guaranteed, in Europe, by Regulation (EU) No 1169/2011 of the European Parliament, which governs the consumer's right to information and identifies 14 food allergens that must be indicated on the label. Among these, mustard is a popular spice added to enhance the flavour and taste of foods. It is frequently present as an ingredient in spice blends, marinades, salad dressings, sausages, and other products. Hypersensitivity to mustard is a public health problem since the ingestion of even low amounts can trigger severe allergic reactions. In order to protect the allergic consumer, high-performance methods are required for the detection of allergenic ingredients. Food safety laboratories rely on validated methods that detect hidden allergens in food to ensure the safety and health of allergic consumers. Here we present the test results for the validation and accreditation of a real-time PCR assay (RT-PCR: SPECIALfinder MC Mustard, Generon) for the detection of mustard traces in food. Materials and Methods: The method was tested on five classes of food matrices: bakery and pastry products (chocolate cookies), meats (ragù), ready-to-eat (mixed salad), dairy products (yogurt), and grains and milling products (rice and barley flour). Blank samples were spiked with mustard (Sinapis alba) samples, lyophilized and stored at -18 °C, at a concentration of 1000 ppm. Serial dilutions were then prepared, down to a final concentration of 0.5 ppm, using the DNA extracted by ION Force FAST (Generon) from the blank samples. The real-time PCR reaction was performed with RT-PCR SPECIALfinder MC Mustard (Generon), using the CFX96 System (BioRad). Results: Real-time PCR showed a limit of detection (LOD) of 0.5 ppm in grains and milling products, ready-to-eat, meats, bakery and pastry products, and dairy products (Ct range 25-34).
To determine the exclusivity parameter of the method, the ragù matrix was contaminated with Prunus dulcis (almond), Arachis hypogaea (peanut), Glycine max (soy), Apium graveolens (celery), Allium cepa (onion), Pisum sativum (pea), Daucus carota (carrot), and Theobroma cacao (cocoa), and no cross-reactions were observed. Discussion: In terms of sensitivity, the real-time PCR confirmed, even in complex matrices, a LOD of 0.5 ppm in the five classes of food matrices tested; these values are compatible with the current regulatory situation, which does not establish, at the international level, a quantitative criterion for the allergen considered in this study. The real-time PCR SPECIALfinder kit for the detection of mustard proved to be easy to use and was particularly appreciated for its rapid response times, considering that the amplification and detection phase lasts less than 50 minutes. Method accuracy was rated satisfactory for sensitivity (100%) and specificity (100%), and the method was fully validated and accredited. It was found adequate for the needs of the laboratory as it met the purpose for which it was applied. This study was funded in part within a project of the Italian Ministry of Health (IZS PLV 02/19 RC). Keywords: allergens, food, mustard, real time PCR
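The serial-dilution arithmetic behind the spiking scheme above is easy to check. A minimal sketch follows; the 1:2 step size is an assumption for illustration only, since the abstract gives just the 1000 ppm starting spike and the 0.5 ppm endpoint:

```python
def serial_dilution(start_ppm, dilution_factor, steps):
    """Concentration series produced by repeated 1:dilution_factor dilutions."""
    series = [start_ppm]
    for _ in range(steps):
        series.append(series[-1] / dilution_factor)
    return series

# Eleven 1:2 dilutions take a 1000 ppm spike just below the
# reported 0.5 ppm limit of detection: 1000 / 2**11 is about 0.49 ppm.
series = serial_dilution(1000.0, 2.0, 11)
```

Under this assumed scheme, the eleventh dilution is the first to fall below the reported LOD.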
Procedia PDF Downloads 166
567 Evaluation of Double Displacement Process via Gas Dumpflood from Multiple Gas Reservoirs
Authors: B. Rakjarit, S. Athichanagorn
Abstract:
The double displacement process is a method in which gas is injected at an updip well to displace the oil bypassed by the waterflooding operation from the downdip water injector. As gas injection is costly and a large amount of gas is needed, gas dump-flood from multiple gas reservoirs is an attractive alternative. The objective of this paper is to demonstrate the benefits of this novel approach of double displacement via gas dump-flood from multiple gas reservoirs. A reservoir simulation model consisting of a dipping oil reservoir and several underlying layered gas reservoirs was constructed in order to investigate the performance of the proposed method. Initially, water was injected via the downdip well to displace oil towards the producer located updip. When the water cut at the producer became high, the updip well was shut in and perforated in the gas zones in order to dump gas into the oil reservoir. At this point, the downdip well was opened for production. In order to optimize oil recovery, the oil production and water injection rates and the perforation strategy for the gas reservoirs were investigated for different numbers of gas reservoirs having various depths and thicknesses. Gas dump-flood from multiple gas reservoirs can help increase the oil recovery after waterflooding by up to 10%. Although the additional oil recovery is slightly lower than that obtained in the conventional double displacement process, the proposed process requires only a small completion cost for the gas zones and no operating cost, while the conventional method incurs high capital investment in gas compression facilities and high-pressure gas pipelines as well as additional operating costs. From the simulation study, oil recovery can be optimized by producing oil at a suitable rate and perforating the gas zones with the right strategy, which depends on the depths, thicknesses, and number of the gas reservoirs.
The conventional double displacement process has been studied and successfully implemented in many fields around the world. However, the method of dumping gas into the oil reservoir instead of injecting it from the surface during the second displacement has not previously been studied. The study of this novel approach will help practicing engineers to understand its benefits and implement it with minimum cost. Keywords: gas dump-flood, multi-gas layers, double displacement process, reservoir simulation
Procedia PDF Downloads 408
566 The Impact of E-commerce to Improve of Banking Services
Authors: Azzi Mohammed Amin
Abstract:
Summary: This note aims to demonstrate the impact of electronic commerce on the quality of banking services and to answer the questions raised in the problem statement; it also aims to identify the methods applied in banks to improve banking quality. A conceptual framework for electronic commerce and electronic banking is defined. In addition, a case study of the Algerian Popular Credit Bank is included to measure the impact of electronic commerce on the quality of banking services. The focus is on electronic banking services as a field of modern knowledge with rapidly evolving content, where bank management has concluded that the electronic delivery of services is the key area in which to compete and improve quality. An exploratory study of some banks operating in Algeria concluded that the majority rely on websites, especially on the Internet, to introduce themselves and their affiliates and to define their traditional and electronic customer coverage, and that they are still at the beginning of the road, offering only some plastic cards, e-banking, mobile banking, ATMs, and fast transfers. The establishment of an electronic network requires an effective banking system for the overall settlement of all economic sectors; it also requires Algerian banks to be ready to receive this technology through the modernization of management and of services (expanding the use of credit cards, electronic money, and the Internet), as well as the development of banking media to contribute to the dissemination of an electronic banking culture in the community.
It was concluded that the communications revolution has made e-banking services inevitable, imposing themselves as a determinant of the future and development of banks. It was also concluded that electronic commerce improves banking services through the provision of an extensive information base, on-site research and development, and the application of marketing strategies, all of which help banks to increase the performance of their services. Some risks remain, but they concern the means of providing the electronic service rather than the nature of the service itself. A clear impact also arises from changing the form or location of a service from traditional to electronic, which reduces the costs of providing a high-quality service and thus gives access to the largest customer segment. Keywords: e-commerce, e-banking, impact of e-commerce, B2C
Procedia PDF Downloads 89
565 Practical Software for Optimum Bore Hole Cleaning Using Drilling Hydraulics Techniques
Authors: Abdulaziz F. Ettir, Ghait Bashir, Tarek S. Duzan
Abstract:
Proper well planning is vital to the success of any drilling program, on the basis of preventing or overcoming drilling problems and minimizing operating costs. The hydraulic system plays an active role during drilling operations and can accelerate the drilling effort and lower the overall well cost. Likewise, an improperly designed hydraulic system can slow the drill rate, fail to clean the hole of cuttings, and cause kicks. In most cases, common sense and commercially available computer programs are the only elements required to design the hydraulic system. Drilling optimization is the logical process of analyzing the effects and interactions of drilling variables through applied drilling and hydraulic equations and mathematical modeling to achieve maximum drilling efficiency at minimum drilling cost. In this paper, practical software is presented that defines drilling optimization models including four different optimum keys, namely Opti-flow, Opti-clean, Opti-slip, and Opti-nozzle, which can help to achieve high drilling efficiency at lower cost. The data used in this research are from vertical and horizontal wells recently drilled in Waha Oil Company fields. The input data are: formation type, geopressures, hole geometry, bottom-hole assembly, and mud rheology. Upon data analysis, the results from all wells show that the proposed program provides higher accuracy than the company's current approach in terms of hole-cleaning efficiency and cost breakdown, taking the actual data as a reference base for all wells. Finally, it is recommended to use the established optimization software during drilling design to obtain correct drilling parameters that provide high drilling efficiency, borehole cleaning, and all other hydraulic parameters, which assist in minimizing hole problems and controlling drilling operation costs. Keywords: optimum keys, opti-flow, opti-clean, opti-slip, opti-nozzle
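Hole cleaning hinges on keeping the annular velocity above the cuttings slip velocity. As a minimal sketch of the standard oilfield-units relation (the flow rate and geometry below are illustrative assumptions, not data from the paper):

```python
def annular_velocity_ft_min(flow_gpm, hole_id_in, pipe_od_in):
    """Annular velocity (ft/min) from the oilfield-units rule of thumb
    AV = 24.51 * Q / (Dh^2 - Dp^2), with Q in gpm and diameters in inches."""
    return 24.51 * flow_gpm / (hole_id_in ** 2 - pipe_od_in ** 2)

# Illustrative case: 600 gpm in an 8.5 in hole around 5 in drill pipe.
av = annular_velocity_ft_min(600.0, 8.5, 5.0)  # roughly 311 ft/min
```

A hole-cleaning check would then compare this velocity against the slip velocity predicted from mud rheology.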
Procedia PDF Downloads 319
564 Thermal Stability and Electrical Conductivity of Ca₅Mg₄₋ₓMₓ(VO₄)₆ (0 ≤ x ≤ 4) where M = Zn, Ni Measured by Impedance Spectroscopy
Authors: Anna S. Tolkacheva, Sergey N. Shkerin, Kirill G. Zemlyanoi, Olga G. Reznitskikh, Pavel D. Khavlyuk
Abstract:
Calcium oxovanadates with garnet-related structure are multifunctional oxides used in various fields such as photoluminescence, microwave dielectrics, and magneto-dielectrics. For example, vanadate garnets are self-luminescent compounds; they attract attention as RE-free broadband excitation and emission phosphors and are candidate materials for UV-based white light-emitting diodes (WLEDs). Ca₅M₄(VO₄)₆ (M = Mg, Zn, Co, Ni, Mn) compounds are also considered promising for application in microwave devices as substrate materials. However, the relation between their structure, composition, and physical/chemical properties remains unclear. Given these observations, the goals of this study are to synthesise Ca₅M₄(VO₄)₆ (M = Mg, Zn, Ni) and to study their thermal and electrical properties. Solid solutions Ca₅Mg₄₋ₓMₓ(VO₄)₆ (0 ≤ x ≤ 4), where M is Zn or Ni, have been synthesized by the sol-gel method. The single-phase character of the final products was checked by powder X-ray diffraction on a Rigaku D/MAX-2200 diffractometer using Cu Kα radiation in the 2θ range from 15° to 70°. The dependence of thermal properties on the chemical composition of the solid solutions was studied using simultaneous thermal analysis (DSC and TG). Thermal analyses were conducted in a Netzsch STA 449C Jupiter simultaneous analyser, in Ar atmosphere, over the temperature range from 25 to 1100 °C at a heating rate of 10 K·min⁻¹. Coefficients of thermal expansion (CTE) were obtained by dilatometry in air up to 800 °C using a Netzsch 402PC dilatometer at a heating rate of 1 K·min⁻¹. Impedance spectra were obtained via the two-probe technique with a Parstat 2273 impedance meter in air up to 700 °C, with pH₂O varied from 0.04 to 3.35 kPa. Rietveld refinement of the crystal structure revealed cation deficiency in the Ca and Mg sublattices upon substitution of MgO with ZnO up to 1/6. The melting point was found to decrease as x changes from 0 to 4 in Ca₅Mg₄₋ₓMₓ(VO₄)₆, where M is Zn or Ni.
It was observed that the electrical conductivity does not depend on air humidity. The reported study was funded by RFBR Grant No. 17–03–01280. Sample characterization was carried out in the Shared Access Centers at the IHTE UB RAS. Keywords: garnet structure, electrical conductivity, thermal expansion, thermal properties
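Impedance spectra are typically reduced to a bulk resistance and then to conductivity via the cell-constant relation σ = L / (R·A). A minimal sketch with purely illustrative pellet dimensions (none of these numbers come from the abstract):

```python
def conductivity_s_cm(resistance_ohm, thickness_cm, area_cm2):
    """Bulk conductivity (S/cm) from a fitted impedance-arc resistance,
    using the cell constant L / A of the measured pellet."""
    return thickness_cm / (resistance_ohm * area_cm2)

# Hypothetical pellet: 0.2 cm thick, 0.8 cm^2 electrode area, R = 2e5 ohm.
sigma = conductivity_s_cm(2.0e5, 0.2, 0.8)  # 1.25e-6 S/cm
```

Repeating this reduction at each temperature yields the conductivity-vs-temperature data discussed above.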
Procedia PDF Downloads 155
563 Calibration of Mini TEPC and Measurement of Lineal Energy in a Mixed Radiation Field Produced by Neutrons
Authors: I. C. Cho, W. H. Wen, H. Y. Tsai, T. C. Chao, C. J. Tung
Abstract:
A tissue-equivalent proportional counter (TEPC) is a useful instrument for measuring single-event energy depositions of radiation in a subcellular target volume. The measured quantity is the microdosimetric lineal energy, which determines the relative biological effectiveness (RBE) for radiation therapy or the radiation weighting factor (WR) for radiation protection. A TEPC is generally used in a mixed radiation field, where each component radiation has its own RBE or WR value. To reduce the pile-up effect during radiotherapy measurements, a miniature TEPC (mini TEPC) with a cavity size on the order of 1 mm may be required. In the present work, a homemade mini TEPC with a cylindrical cavity of 1 mm in both diameter and height was constructed to measure the lineal energy spectrum of a mixed radiation field with high- and low-LET radiations. Instead of using external radiation beams to penetrate the detector wall, mixed radiation fields were produced by the interactions of neutrons with TEPC walls that contained small plugs of different materials, i.e., Li, B, A150, Cd, and N. In all measurements, the mini TEPC was placed at the beam port of the Tsing Hua Open-pool Reactor (THOR). Measurements were performed using the propane-based tissue-equivalent gas mixture, i.e., 55% C3H8, 39.6% CO2, and 5.4% N2 by partial pressure. A gas pressure of 422 torr was applied to simulate a biological site of 1 μm diameter. The calibration of the mini TEPC was performed using two marker points in the lineal energy spectrum, i.e., the proton edge and the electron edge. Measured spectra revealed high lineal energy (> 100 keV/μm) peaks due to neutron-capture products, medium lineal energy (10-100 keV/μm) peaks from hydrogen-recoil protons, and low lineal energy (< 10 keV/μm) peaks from reactor photons. For the Li and B plugs, the high lineal energy peaks were quite prominent. The medium lineal energy peaks were in the decreasing order of Li, Cd, N, A150, and B.
The low lineal energy peaks were small compared to the other peaks. This study demonstrated that internally produced mixed radiations from the interactions of neutrons with different plugs in the TEPC wall provide a useful approach for TEPC measurements of lineal energies. Keywords: TEPC, lineal energy, microdosimetry, radiation quality
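The edge-marker calibration described above amounts to a linear channel-to-lineal-energy scaling. A sketch under assumed numbers (the channel position and the nominal proton-edge value below are placeholders; published edge values depend on the gas and site size):

```python
def lineal_energy_kev_um(channel, edge_channel, edge_value_kev_um):
    """Linear calibration: scale ADC channels so that the observed
    proton (or electron) edge lands at its nominal lineal energy."""
    return channel * edge_value_kev_um / edge_channel

# Assume the proton edge is observed at channel 680 and assigned a
# nominal ~150 keV/um (illustrative figure only, not from the paper).
y = lineal_energy_kev_um(100, 680, 150.0)
```

With two markers (proton and electron edge), the same idea extends to a two-point linear fit that also fixes any offset.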
Procedia PDF Downloads 470
562 Exploring Multimodal Communication: Intersections of Language, Gesture, and Technology
Authors: Rasha Ali Dheyab
Abstract:
In today's increasingly interconnected and technologically-driven world, communication has evolved beyond traditional verbal exchanges. This paper delves into the fascinating realm of multimodal communication, a dynamic field at the intersection of linguistics, gesture studies, and technology. The study of how humans convey meaning through a combination of spoken language, gestures, facial expressions, and digital platforms has gained prominence as our modes of interaction continue to diversify. This exploration begins by examining the foundational theories in linguistics and gesture studies, tracing their historical development and mutual influences. It further investigates the role of nonverbal cues, such as gestures and facial expressions, in augmenting and sometimes even altering the meanings conveyed by spoken language. Additionally, the paper delves into the modern technological landscape, where emojis, GIFs, and other digital symbols have emerged as new linguistic tools, reshaping the ways in which we communicate and express emotions. The interaction between traditional and digital modes of communication is a central focus of this study. The paper investigates how technology has not only introduced new modes of expression but has also influenced the adaptation of existing linguistic and gestural patterns in online discourse. The emergence of virtual reality and augmented reality environments introduces yet another layer of complexity to multimodal communication, offering new avenues for studying how humans navigate and negotiate meaning in immersive digital spaces. Through a combination of literature review, case studies, and theoretical analysis, this paper seeks to shed light on the intricate interplay between language, gesture, and technology in the realm of multimodal communication. 
By understanding how these diverse modes of expression intersect and interact, we gain valuable insights into the ever-evolving nature of human communication and its implications for fields ranging from linguistics and psychology to human-computer interaction and digital anthropology. Keywords: multimodal communication, linguistics, gesture studies, emojis, verbal communication, digital
Procedia PDF Downloads 81
561 Differences in Production of Knowledge between Internationally Mobile versus Nationally Mobile and Non-Mobile Scientists
Authors: Valeria Aman
Abstract:
The presented study examines the impact of international mobility on knowledge production among mobile scientists and within the sending and receiving research groups. Scientists are relevant to the dynamics of knowledge production because scientific knowledge is mainly characterized by embeddedness and tacitness. International mobility enables the dissemination of scientific knowledge to other places and encourages new combinations of knowledge. It can also increase the interdisciplinarity of research by forming synergetic combinations of knowledge. Particularly innovative ideas can have their roots in related research domains and are sometimes transferred only through the physical mobility of scientists. Diversity among scientists with respect to their knowledge base can act as an engine for the creation of knowledge. It is therefore relevant to study how knowledge acquired through international mobility affects the knowledge production process. In certain research domains, international mobility may be essential to contextualize knowledge and to gain access to knowledge located at distant places. The knowledge production process contingent on the type of international mobility and the epistemic culture of a research field is examined. The production of scientific knowledge is a multi-faceted process, the output of which is mainly published in scholarly journals. Therefore, the study builds upon publication and citation data covered in Elsevier’s Scopus database for the period of 1996 to 2015. To analyse these data, bibliometric and social network analysis techniques are used. A basic analysis of scientific output using publication data, citation data and data on co-authored publications is combined with a content map analysis. Abstracts of publications indicate whether a research stay abroad makes an original contribution methodologically, theoretically or empirically. 
Moreover, co-citations are analysed to map linkages among scientists and emerging research domains. Finally, acknowledgements, which can function as channels of formal and informal communication between the actors involved in the process of knowledge production, are studied. The results provide a better understanding of how the international mobility of scientists contributes to the production of knowledge by contrasting the knowledge production dynamics of internationally mobile scientists with those of nationally mobile or non-mobile scientists. The findings also indicate whether international mobility accelerates the production of knowledge and the emergence of new research fields. Keywords: bibliometrics, diversity, interdisciplinarity, international mobility, knowledge production
Procedia PDF Downloads 293
560 A System Architecture for Hand Gesture Control of Robotic Technology: A Case Study Using a Myo™ Arm Band, DJI Spark™ Drone, and a Staubli™ Robotic Manipulator
Authors: Sebastian van Delden, Matthew Anuszkiewicz, Jayse White, Scott Stolarski
Abstract:
Industrial robotic manipulators have been commonplace in the manufacturing world since the early 1960s, and unmanned aerial vehicles (drones) have only begun to realize their full potential in the service industry and the military. The omnipresence of these technologies in their respective fields will only become more potent in coming years. While these technologies have greatly evolved over the years, the typical approach to human interaction with these robots has not. In the industrial robotics realm, a manipulator is typically jogged around using a teach pendant and programmed using a networked computer or the teach pendant itself via a proprietary software development platform. Drones are typically controlled using a two-handed controller equipped with throttles, buttons, and sticks, an app that can be downloaded to one’s mobile device, or a combination of both. This application-oriented work offers a novel approach to human interaction with both unmanned aerial vehicles and industrial robotic manipulators via hand gestures and movements. Two systems have been implemented, both of which use a Myo™ armband to control either a drone (DJI Spark™) or a robotic arm (Stäubli™ TX40). The methodologies developed by this work present a mapping of armband gestures (fist, finger spread, swing hand in, swing hand out, swing arm left/up/down/right, etc.) to either drone or robot arm movements. The findings of this study present the efficacy and limitations (precision and ergonomic) of hand gesture control of two distinct types of robotic technology. All source code associated with this project will be open sourced and placed on GitHub. In conclusion, this study offers a framework that maps hand and arm gestures to drone and robot arm control. 
The system has been implemented using current ubiquitous technologies, and these software artifacts will be open sourced for future researchers and practitioners to use in their work. Keywords: human robot interaction, drones, gestures, robotics
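The gesture-to-command mapping described above can be sketched as a simple lookup with a safe fallback. The gesture names follow the abstract, but the command names and the hover default are hypothetical (the actual DJI/Stäubli command sets are not given here):

```python
# Hypothetical mapping in the spirit of the paper's gesture table.
GESTURE_TO_COMMAND = {
    "fist": "land",
    "finger_spread": "take_off",
    "swing_hand_in": "move_forward",
    "swing_hand_out": "move_backward",
    "swing_arm_left": "yaw_left",
    "swing_arm_right": "yaw_right",
    "swing_arm_up": "ascend",
    "swing_arm_down": "descend",
}

def dispatch(gesture, table=GESTURE_TO_COMMAND):
    """Translate a recognized armband gesture into a vehicle command;
    unrecognized input falls back to a neutral hover."""
    return table.get(gesture, "hover")
```

Mapping robot-arm jog motions instead of drone commands only requires swapping the table, which is the design advantage of keeping recognition and actuation decoupled.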
Procedia PDF Downloads 157
559 Superchaotropicity: Grafted Surface to Probe the Adsorption of Nano-Ions
Authors: Raimoana Frogier, Luc Girard, Pierre Bauduin, Diane Rebiscoul, Olivier Diat
Abstract:
Nano-ions (NIs) are ionic species or clusters of nanometric size. Their low charge density and the delocalization of their charge give special properties to some NIs belonging to the chemical classes of polyoxometalates (POMs) and boron clusters. They have the particularity of interacting non-covalently with neutral hydrated surfaces or interfaces such as assemblies of surface-active molecules (micelles, vesicles, lyotropic liquid crystals), foam bubbles, or emulsion droplets. This makes it possible to classify these NIs in the Hofmeister series as superchaotropic ions. The adsorption mechanism is complex, linked to the simultaneous dehydration of the ion and of the molecule or supramolecular assembly with which it interacts, with an enthalpic gain in the free energy of the system. This interaction process is reversible and is pronounced enough to induce changes in molecular and supramolecular shape or conformation, as well as phase transitions in the liquid phase, all at sub-millimolar ionic concentrations. This new property of some NIs opens up new possibilities for applications in fields as varied as biochemistry (solubilization) and the recovery of metals of interest, in the form of NIs, by foams. In order to better understand the physico-chemical mechanisms at the origin of this interaction, we use silicon wafers functionalized with non-ionic oligomers (polyethylene glycol, or PEG, chains) to study in situ, by X-ray reflectivity, the interaction of NIs with the grafted chains. This study, carried out at the ESRF (European Synchrotron Radiation Facility), has shown that the adsorption of NIs such as POMs has very fast kinetics. Moreover, the distribution of the NIs in the grafted PEG chain layer was quantified. These results are very encouraging and confirm what has been observed on soft interfaces such as micelles or foams.
The possibility of varying the density, length, and chemical nature of the grafted chains makes this system an ideal tool for providing the kinetic and thermodynamic information needed to decipher the complex mechanisms at the origin of this adsorption. Keywords: adsorption, nano-ions, solid-liquid interface, superchaotropicity
Procedia PDF Downloads 67
558 Analysis of Citation Rate and Data Reuse for Openly Accessible Biodiversity Datasets on Global Biodiversity Information Facility
Authors: Nushrat Khan, Mike Thelwall, Kayvan Kousha
Abstract:
Making research data openly accessible has been mandated by most funders over the last five years, as it promotes reproducibility in science and reduces duplication of effort to collect the same data. There is evidence that articles that publicly share research data have higher citation rates in the biological and social sciences. However, how and whether shared data are being reused is not always evident, as such information is not easily accessible from the majority of research data repositories. This study aims to understand the practice of data citation and how data are being reused over the years, focusing on biodiversity, since research data are frequently reused in this field. Metadata of 38,878 datasets, including citation counts, were collected through the Global Biodiversity Information Facility (GBIF) API for this purpose. GBIF was used as a data source since it provides citation counts for datasets, not a commonly available feature for most repositories. Analysis of dataset types, citation counts, and creation and update times suggests that the citation rate varies for different types of datasets: occurrence datasets, which contain more granular information, have higher citation rates than checklist and metadata-only datasets. Another finding is that biodiversity datasets on GBIF are frequently updated, which is unique to this field. The majority of the datasets from the earliest year, 2007, were updated after 11 years, with no dataset left un-updated since creation. For each year between 2007 and 2017, we compared the correlations between update time and citation rate for four different types of datasets. While recent datasets do not show any correlation, 3- to 4-year-old datasets show a weak correlation, in which datasets that were updated more recently received more citations. The results suggest that it takes several years for research datasets to accumulate citations.
However, this investigation found that when the same datasets are searched on Google Scholar or Scopus, the number of citations is often not the same as on GBIF. Hence, a future aim is to further explore the citation count system adopted by GBIF to evaluate its reliability and whether it can be applied to other fields of study as well. Keywords: data citation, data reuse, research data sharing, webometrics
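The update-recency vs. citation analysis above reduces to computing a correlation per dataset cohort. A self-contained sketch on fabricated numbers (the study pulled real metadata from GBIF's public API; the figures below are illustrative only):

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient, no external dependencies."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Fabricated cohort: years since last update vs. citation counts.
years_since_update = [0.5, 1, 2, 3, 4, 6, 8]
citations = [9, 8, 7, 6, 4, 3, 1]
r = pearson(years_since_update, citations)  # strongly negative on this toy data
```

A negative r on such a cohort would mirror the paper's finding that recently updated datasets attract more citations; the real analysis would repeat this per year and per dataset type.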
Procedia PDF Downloads 178
557 Analyzing Industry-University Collaboration Using Complex Networks and Game Theory
Authors: Elnaz Kanani-Kuchesfehani, Andrea Schiffauerova
Abstract:
Due to the novelty of nanotechnology, its highly knowledge-intensive content, and its invaluable applications in almost all technological fields, close interaction between university and industry is essential. A possible gap between the academic strength to generate good nanotechnology ideas and the industrial capacity to receive them can thus have far-reaching consequences. In order to enhance the collaboration between the two parties, a better understanding of knowledge transfer within the university-industry relationship is needed. The objective of this research is to investigate the research collaboration between academia and industry in Canadian nanotechnology and to propose the best cooperative strategy to maximize the quality of the produced knowledge. First, a network of all Canadian academic and industrial nanotechnology inventors is constructed using patent data from the USPTO (United States Patent and Trademark Office) and analyzed with social network analysis software. The actual level of university-industry collaboration in Canadian nanotechnology is determined, and the significance of each group of actors in the network (academic vs. industrial inventors) is assessed. Second, a novel methodology is proposed in which the network of nanotechnology inventors is assessed from a game-theoretic perspective. It involves studying a cooperative game with n players, each having at most n-1 decisions to choose from. The equilibrium leads to a strategy for all players to choose their co-worker in the next period so as to maximize the correlated payoff of the game. The payoffs of the game represent the quality of the produced knowledge, based on the citations of the patents. The best suggestion for the next collaborative relationship is provided for each actor from a game-theoretic point of view in order to maximize the quality of the produced knowledge.
One of the major contributions of this work is the novel approach combining game theory and social network analysis for the case of large networks. This approach can serve as a powerful tool in the analysis of the strategic interactions of network actors within innovation systems and other large-scale networks. Keywords: cooperative strategy, game theory, industry-university collaboration, knowledge production, social network analysis
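The one-step logic of the proposed game can be sketched as a best response: each inventor picks the co-worker whose pairing promises the highest citation-based payoff. The names and payoff values below are fabricated for illustration:

```python
def best_partner(payoffs, actor):
    """Return the co-author choice maximizing the actor's payoff
    (payoffs stand in for citation-based knowledge-quality scores)."""
    options = {p: v for p, v in payoffs[actor].items() if p != actor}
    return max(options, key=options.get)

# Symmetric pairwise payoffs among three hypothetical inventors.
payoffs = {
    "A": {"B": 4, "C": 7},
    "B": {"A": 4, "C": 2},
    "C": {"A": 7, "B": 2},
}
choice = best_partner(payoffs, "A")  # "C": the highest-payoff pairing for A
```

In the full cooperative game, these individual best responses would be reconciled into a correlated strategy over the whole inventor network rather than decided pair by pair.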
Procedia PDF Downloads 258
556 Formula Student Car: Design, Analysis and Lap Time Simulation
Authors: Rachit Ahuja, Ayush Chugh
Abstract:
Aerodynamic forces and moments, as well as tire-road forces, largely affect the maneuverability of a vehicle. Car manufacturers are fascinated and influenced by the aerodynamic improvements made in formula cars, and there is a constant effort to apply these improvements to road vehicles. In motor racing, the key differentiating factor in a high-performance car is its ability to maintain the highest possible acceleration in the appropriate direction. One of the main areas of concern in motor racing is balancing the aerodynamic forces and streamlining the flow of air across the body of the vehicle. At present, formula racing cars are regulated by stringent FIA norms; there are constraints on the dimensions of the vehicle, engine capacity, etc. So one of the fields with large scope for improvement is the aerodynamics of the vehicle. In this project, an attempt has been made to design a formula-student (FS) car, improve its aerodynamic characteristics through steady-state CFD simulations, and simultaneously calculate its lap time. Initially, a CAD model of the car is made using SOLIDWORKS as per the given dimensions, and a steady-state external air-flow simulation is performed on the baseline model, without any add-on devices, to evaluate and analyze the air-flow pattern around the car and the aerodynamic forces using the FLUENT solver. A detailed survey of different add-on devices used in racing applications, such as the front wing, diffuser, shark fin, and T-wing, is made, and geometric models of these add-on devices are created. The add-on devices are assembled with the baseline model, and steady-state CFD simulations are run on the modified car to evaluate their aerodynamic effects. Later, lap time simulations of the formula student car with and without the add-on devices are compared with the help of MATLAB.
Aerodynamic performance measures such as lift, drag, and their coefficients are evaluated for different configurations and designs of the add-on devices at different vehicle speeds. The parametric CFD simulations of the Formula Student car fitted with add-on devices show a considerable reduction in drag and lift forces, besides streamlining the airflow across the car. The best possible configuration of these add-on devices is obtained from the CFD simulations, and the use of these devices shows an improvement in the performance of the car, as confirmed by the lap time simulations. Keywords: aerodynamic performance, front wing, lap time simulation, T-wing
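As a rough illustration of the lap-time comparison described above (the paper uses MATLAB and full CFD data; this is only a point-mass sketch in Python, and every vehicle, tyre, and track parameter below is invented for illustration), corner speeds with and without aerodynamic downforce can be estimated and summed into a lap time:

```python
import math

# Point-mass lap-time sketch. All parameter values are illustrative
# assumptions, not taken from the paper.
G = 9.81    # gravity, m/s^2
RHO = 1.225 # air density, kg/m^3

def corner_speed(radius_m, mass_kg, mu, cl_a):
    """Max steady-state corner speed including downforce.
    Solves mu*(m*g + 0.5*rho*ClA*v^2) = m*v^2/r for v."""
    denom = mass_kg / radius_m - 0.5 * RHO * cl_a * mu
    if denom <= 0:
        return float("inf")  # grip grows faster than demand: aero-unlimited
    return math.sqrt(mu * mass_kg * G / denom)

def lap_time(track, mass_kg, mu, cl_a, v_straight):
    """track: list of ('corner', radius_m, arc_len_m) or ('straight', len_m)."""
    t = 0.0
    for seg in track:
        if seg[0] == "corner":
            _, r, arc = seg
            t += arc / min(corner_speed(r, mass_kg, mu, cl_a), v_straight)
        else:
            t += seg[1] / v_straight
    return t

track = [("straight", 150.0), ("corner", 20.0, 31.4),
         ("straight", 100.0), ("corner", 35.0, 55.0)]
base = lap_time(track, mass_kg=300.0, mu=1.4, cl_a=0.0, v_straight=30.0)
aero = lap_time(track, mass_kg=305.0, mu=1.4, cl_a=3.5, v_straight=30.0)
print(base, aero)  # the aero package adds mass but raises corner speeds
```

Even this crude model reproduces the qualitative trade-off the abstract reports: add-on devices add a little mass and drag but raise cornering speed, so the net lap time drops.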
Procedia PDF Downloads 197
555 Career Guidance System Using Machine Learning
Authors: Mane Darbinyan, Lusine Hayrapetyan, Elen Matevosyan
Abstract:
Artificial Intelligence in Education (AIED) was created to help students get ready for the workforce, and over the past 25 years it has grown significantly, offering a variety of technologies to support academic, institutional, and administrative services. Career guidance remains challenging, however, especially considering the labor market's rapid change. While choosing a career, people face various obstacles because they do not take their own preferences into consideration, which can lead to many other problems such as job switching, work stress, occupational infirmity, reduced productivity, and manual error. Besides preferences, people should properly evaluate their technical and non-technical skills, as well as their personalities. Professional counseling has become a difficult undertaking for counselors due to the wide range of career choices brought on by changing technological trends. It is necessary to close this gap by utilizing technology that makes sophisticated predictions about a person's career goals based on their personality. Hence, there is a need to create an automated model that supports decision-making based on user inputs. Improving career guidance can be achieved by embedding machine learning into the career consulting ecosystem. Various career guidance systems work on the same logic: classifying applicants, matching applications with appropriate departments or jobs, making predictions, and providing suitable recommendations. Methodologies such as k-nearest neighbors (KNN), neural networks, k-means clustering, decision trees, and many other advanced algorithms are applied to user data to predict suitable careers. Besides helping users with their career choice, these systems provide numerous opportunities which are very useful when making this hard decision.
They help candidates recognize where they specifically lack sufficient skills so that they can improve those skills. They are also capable of offering an e-learning platform that takes the user's knowledge gaps into account. Furthermore, users can be provided with details on a particular job, such as the abilities required to excel in that industry. Keywords: career guidance system, machine learning, career prediction, predictive decision, data mining, technical and non-technical skills
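To make the classification logic concrete, here is a minimal KNN-style career recommender in Python. The feature scheme, skill scores, and career labels are all invented for illustration; a real system like the one described would be trained on actual applicant data:

```python
import math
from collections import Counter

# Hypothetical training set: each profile is (technical_skill,
# communication, creativity), each scored 0-10. Labels are illustrative.
training = [
    ((9, 4, 3), "software engineering"),
    ((8, 5, 4), "software engineering"),
    ((3, 9, 5), "counselling"),
    ((4, 8, 6), "counselling"),
    ((5, 5, 9), "design"),
    ((4, 6, 9), "design"),
]

def predict_career(profile, k=3):
    """Return the majority career label among the k nearest profiles."""
    dists = sorted(
        (math.dist(profile, feats), label) for feats, label in training
    )
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

print(predict_career((8, 4, 4)))  # -> software engineering
```

The same nearest-neighbour vote generalizes directly to the skill-gap feedback the abstract mentions: comparing the query profile against its nearest neighbours also reveals which feature scores fall short.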
Procedia PDF Downloads 80
554 Numerical Analysis of Heat Transfer in Water Channels of the Opposed-Piston Diesel Engine
Authors: Michal Bialy, Marcin Szlachetka, Mateusz Paszko
Abstract:
This paper discusses the CFD results of heat transfer in the water channels of the engine body. The research engine was a newly designed Diesel combustion engine with three cylinders, each containing a pair of opposed pistons. The engine will be able to generate 100 kW of mechanical power at a crankshaft speed of 3,800-4,000 rpm. The water channels run in the engine body along the axes of the three cylinders and surround the three combustion chambers; they transfer the combustion heat generated in the cylinders to the external radiator. This CFD research was based on the ANSYS Fluent software and aimed to optimize the geometry of the water channels so that they maximize the flow of heat from the combustion chambers to the external radiator. Based on parallel simulation research, the boundary and initial conditions enabled us to specify average values of key parameters for our numerical analysis. Our simulation used the averaged momentum equations and the two-equation k-epsilon turbulence model; a realizable k-epsilon model with a standard wall function was applied, with a turbulence intensity of 10%. The working fluid mass flow rate was set to a single typical value, specified in line with the flow rates of automotive cooling pumps used in engines of similar power. The research uses a series of geometric models which differ, for instance, in the shape of the channel cross-section along the cylinder axis. The results are presented as colour distribution maps of temperature, velocity fields, and heat flow through the cylinder walls. Due to space limitations, our paper presents the results for the most representative geometric model only. Acknowledgement: This work has been realized in cooperation with The Construction Office of WSK ‘PZL-KALISZ’ S.A. and is part of Grant Agreement No.
POIR.01.02.00-00-0002/15, financed by the Polish National Centre for Research and Development. Keywords: ANSYS Fluent, combustion engine, computational fluid dynamics (CFD), cooling system
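For a sense of the magnitudes involved before running a full CFD model like the one above, the classic Dittus-Boelter correlation gives a back-of-the-envelope estimate of convective heat transfer in a coolant channel. This is not the paper's method (the paper solves the full RANS/k-epsilon problem), and every channel dimension and operating value below is an assumption for illustration:

```python
# Dittus-Boelter estimate: Nu = 0.023 * Re^0.8 * Pr^0.4 (fluid being heated).
# All numbers are illustrative assumptions, not the paper's values.

# Water properties near 90 deg C
rho = 965.0    # density, kg/m^3
mu = 3.15e-4   # dynamic viscosity, Pa*s
k = 0.675      # thermal conductivity, W/(m*K)
cp = 4205.0    # specific heat, J/(kg*K)

d_h = 0.012    # hydraulic diameter of the channel, m (assumed)
v = 2.0        # mean coolant velocity, m/s (assumed)

re = rho * v * d_h / mu          # Reynolds number (turbulent if >~ 4000)
pr = cp * mu / k                 # Prandtl number
nu = 0.023 * re**0.8 * pr**0.4   # Nusselt number
h = nu * k / d_h                 # convective coefficient, W/(m^2*K)

dT = 60.0                        # wall-to-coolant temperature difference, K
q = h * dT                       # wall heat flux, W/m^2
print(f"Re = {re:.0f}, h = {h:.0f} W/m^2K, q = {q/1000:.0f} kW/m^2")
```

Such a one-line correlation only bounds the expected answer; resolving how the heat flux varies with channel cross-section shape, as the paper does, genuinely requires the 3D CFD model.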
Procedia PDF Downloads 219
553 Problem Solving: Process or Product? A Mathematics Approach to Problem Solving in Knowledge Management
Authors: A. Giannakopoulos, S. B. Buckley
Abstract:
Problem solving in any field is recognised as a prerequisite for any advancement in knowledge. In South Africa, for example, it is one of the seven critical outcomes of education, together with critical thinking. Since a systematic approach to problem solving was initiated in mathematics by the great mathematician George Polya (the father of problem solving), more detailed and comprehensive approaches to problem solving have been developed. This paper is based on the findings of the author and subsequent recommendations for further research in problem solving and critical thinking. Although the study was done in mathematics, there is little doubt by now that mathematics is involved to a greater or lesser extent in all fields, from symbols, to variables, to equations, to logic, to critical thinking. It therefore stands to reason that mathematical principles and learning cannot be divorced from any field. In knowledge management situations, the types of problems are similar to mathematics problems, varying from simple to analogical to complex, and from well-structured to ill-structured. While simple problems can be solved by employees adhering to prescribed sequential steps (the process), analogical and complex problems cannot be proceduralised, and this diminishes the organisation's capacity for knowledge creation and innovation. The low efficiency in some organisations and the low pass rates in mathematics prompted the author to view problem solving as a product. The authors argue that using mathematical approaches to problem solving in knowledge management, and treating problem solving as a product, will empower employees through further training to tackle analogical and complex problems.
The question the authors asked was: if it is true that problem solving and critical thinking are indeed basic skills necessary for the advancement of knowledge, why is there so little literature in knowledge management (KM) about them, how they are connected, and how they advance KM? This paper concludes with a conceptual model based on generally accepted principles of knowledge acquisition (developing a learning organisation) and the creation, sharing, dissemination, and storage of knowledge, the five pillars of knowledge management (KM). This model also expands on Gray's framework on KM practices and problem solving, and it opens the door to a new approach to training employees in general and domain-specific problem areas which can be adapted in any type of organisation. Keywords: critical thinking, knowledge management, mathematics, problem solving
Procedia PDF Downloads 596
552 Convergence Results of Two-Dimensional Homogeneous Elastic Plates from Truncation of Potential Energy
Authors: Erick Pruchnicki, Nikhil Padhye
Abstract:
Plates are important engineering structures which have attracted extensive research since the 19th century. The subject of this work is the static analysis of a linearly elastic homogeneous plate under small deformations. A 'thin plate' is a three-dimensional structure comprising a small transverse dimension with respect to a flat mid-surface. The general aim of any plate theory is to deduce a two-dimensional model, in terms of mid-surface quantities, that approximately and accurately describes the plate's deformation. In recent decades, a common starting point for this purpose has been a series expansion of the displacement field across the thickness dimension in terms of the thickness parameter (h). These attempts are mathematically consistent in deriving leading-order plate theories based on a certain a priori scaling between the thickness and the applied loads; for example, asymptotic methods aim to generate leading-order two-dimensional variational problems by postulating a formal asymptotic expansion of the displacement fields. Such methods rigorously generate a hierarchy of two-dimensional models depending on the order of magnitude of the applied load with respect to the plate thickness. However, in practice, applied loads are external and thus not directly linked to or dependent on the geometry/thickness of the plate, rendering any such model (based on a priori scaling) of limited practical utility. In other words, the main limitation of these approaches is that they do not furnish a single plate model for all orders of applied loads.
Following the analogy of recent efforts deploying Fourier-series expansions to study the convergence of reduced models, we propose two-dimensional model(s) resulting from truncation of the potential energy, and we rigorously prove the convergence of these two-dimensional plate models to the parent three-dimensional linear elasticity with increasing truncation order of the potential energy. Keywords: plate theory, Fourier-series expansion, convergence result, Legendre polynomials
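The convergence-with-truncation-order behaviour the abstract proves can be seen numerically on a toy field. The sketch below (not the paper's elasticity derivation; the through-thickness profile is an arbitrary smooth stand-in) fits truncated Legendre expansions of increasing order on the normalized thickness coordinate and watches the approximation error shrink:

```python
import numpy as np
from numpy.polynomial import legendre

# Stand-in for a through-thickness displacement profile on the normalized
# thickness coordinate z in [-1, 1]; chosen only to be smooth.
z = np.linspace(-1.0, 1.0, 401)
field = np.exp(z) * np.sin(2.0 * z)

errors = []
for order in (1, 3, 5, 7):
    coeffs = legendre.legfit(z, field, deg=order)  # least-squares Legendre fit
    approx = legendre.legval(z, coeffs)
    errors.append(float(np.max(np.abs(field - approx))))

print(errors)  # max error decreases as the truncation order grows
```

For smooth fields the Legendre coefficients decay rapidly, which is the discrete analogue of the convergence result stated above: each extra retained term in the energy buys a strictly better two-dimensional approximation of the three-dimensional solution.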
Procedia PDF Downloads 111