Search results for: principals' emotionally intelligent behaviours
98 Tensile Behaviours of Sansevieria Ehrenbergii Fiber Reinforced Polyester Composites with Water Absorption Time
Authors: T. P. Sathishkumar, P. Navaneethakrishnan
Abstract:
The research work investigates the variation of tensile properties of sansevieria ehrenbergii fiber (SEF) and SEF reinforced polyester composites with respect to water absorption time. The experiments were conducted according to the ASTM D3379-75 and ASTM D570 standards. The percentage of water absorption of the composite specimens was measured according to the ASTM D570 standard. The SE fiber was cut into 30 mm lengths for preparation of the composites. The simple hand lay-up method followed by a compression moulding process was adopted to prepare randomly oriented SEF reinforced polyester composites at a constant fiber weight fraction of 40%. Surface treatment of the SEFs was carried out with various chemicals, namely NaOH, KMnO4, benzoyl peroxide, benzoyl chloride and stearic acid, before preparing the composites; NaOH was used as a pre-treatment for all the other chemical treatments. The morphology of the tensile-fractured specimens was studied using scanning electron microscopy. Tensile tests of the SEF and the SEF reinforced polymer composites were carried out after water absorption times of 4, 8, 12, 16, 20 and 24 hours. The results show that the tensile strength dropped off with increasing water absorption time for all composites. The raw fiber showed the highest tensile properties owing to its lowest moisture content; at the lowest moisture content the chemical bonding between the cellulose and cementing materials such as lignin and wax was also strongest. The water-absorbed fibers showed the lowest tensile load and the highest elongation across the range of water absorption times. During absorption, the fiber cellulose takes in water and the primary and secondary fiber walls expand; this increases the moisture content of the fibers and, ultimately, the hydrogen cations and hydroxide anions contributed by the water. In tensile testing, the water-absorbed fibers show the highest elongation through stretching of the expanded cellulose walls, and the bonding strength between the fiber cellulose is low. The load-carrying capability stabilized at 20 hours of water absorption time. This directly affects the interfacial bonding between fiber and matrix and hence the composite strength. The chemically treated fibers carry higher loads with lower elongation, which is due to the removal of lignin, hemicellulose and wax. Water absorption time decreases the tensile strength of the composites. The chemically treated SEF reinforced composites show higher tensile strength than the untreated SEF reinforced composites, owing to the larger bonding area between fiber and matrix; this was confirmed by the morphology of the fracture zone of the composites. Intra-fiber debonding occurred through water encapsulation in the fiber cellulose. Among all the composites, the tensile strength was highest for the KMnO4-treated SEF reinforced composite, owing to better interfacial bonding between fiber and matrix than in the other treated-fiber composites. The percentage of water absorption of the composites increased with water absorption time. The percentage weight gains of the chemically treated SEF composites at 4 hours relative to zero water absorption were 9, 9, 10, 10.8 and 9.5 for NaOH, BP, BC, KMnO4 and SA respectively. The percentage weight gains of the chemically treated SEF composites at 24 hours relative to zero water absorption were 5.2, 7.3, 12.5, 16.7 and 13.5 for NaOH, BP, BC, KMnO4 and SA respectively.
Hence, the lowest weight gain and the lowest water uptake were found for the KMnO4-treated SEF composites. The chemically treated SEF reinforced composites are thus possible materials for automotive applications such as body panels, bumpers and interior parts, and for household applications such as tables and racks. Keywords: fibres, polymer-matrix composites (PMCs), mechanical properties, scanning electron microscopy (SEM)
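For reference, the ASTM D570 percentage of water absorption reported above is simply the weight gained relative to the dry (zero-absorption) weight. Below is a minimal Python sketch of that calculation; the specimen masses used in the example are assumptions chosen only to reproduce the 24-hour figure quoted for the KMnO4-treated composite, not values from the study.

```python
def water_absorption_percent(dry_mass_g: float, wet_mass_g: float) -> float:
    """Percentage water absorption per ASTM D570: weight gain relative to dry weight."""
    return (wet_mass_g - dry_mass_g) / dry_mass_g * 100.0

# Illustrative (assumed) specimen masses for a KMnO4-treated composite coupon:
dry = 10.00            # g, conditioned dry weight
wet_after_24h = 11.67  # g, weight after 24 h immersion
print(f"Water absorption after 24 h: {water_absorption_percent(dry, wet_after_24h):.1f}%")
# -> 16.7%, matching the 24-hour value reported above for the KMnO4-treated composite
```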
Procedia PDF Downloads 410
97 Stress and Distress among Physician Trainees: A Wellbeing Workshop
Authors: Carmen Axisa, Louise Nash, Patrick Kelly, Simon Willcock
Abstract:
Introduction: Doctors experience high levels of burnout, stress and psychiatric morbidity. This can affect the health of the doctor and impact patient care. Study Aims: To evaluate the effectiveness of a workshop intervention to promote wellbeing for Australian Physician Trainees. Methods: A workshop was developed in consultation with specialist clinicians to promote health and wellbeing for physician trainees. The workshop objectives were to improve participant understanding about factors affecting their health and wellbeing, to outline strategies on how to improve health and wellbeing and to encourage participants to apply these strategies in their own lives. There was a focus on building resilience and developing long term healthy behaviours as part of the physician trainee daily lifestyle. Trainees had the opportunity to learn practical strategies for stress management, gain insight into their behaviour and take steps to improve their health and wellbeing. The workshop also identified resources and support systems available to trainees. The workshop duration was four and a half hours including a thirty- minute meal break where a catered meal was provided for the trainees. Workshop evaluations were conducted at the end of the workshop. Sixty-seven physician trainees from Adult Medicine and Paediatric training programs in Sydney Australia were randomised into intervention and control groups. The intervention group attended a workshop facilitated by specialist clinicians and the control group did not. Baseline and post intervention measurements were taken for both groups to evaluate the impact and effectiveness of the workshop. Forty-six participants completed all three measurements (69%). Demographic, personal and self-reported data regarding work/life patterns was collected. Outcome measures include Depression Anxiety Stress Scale (DASS), Professional Quality of Life Scale (ProQOL) and Alcohol Use Disorders Identification Test (AUDIT). Results: The workshop was well received by the physician trainees and workshop evaluations showed that the majority of trainees strongly agree or agree that the training was relevant to their needs (96%) and met their expectations (92%). All trainees strongly agree or agree that they would recommend the workshop to their medical colleagues. In comparison to the control group we observed a reduction in alcohol use, depression and burnout but an increase in stress, anxiety and secondary traumatic stress in the intervention group, at the primary endpoint measured at 6 months. However, none of these differences reached statistical significance (p > 0.05). Discussion: Although the study did not reach statistical significance, the workshop may be beneficial to physician trainees. Trainees had the opportunity to share ideas, gain insight into their own behaviour, learn practical strategies for stress management and discuss approach to work, life and self-care. The workshop discussions enabled trainees to share their experiences in a supported environment where they learned that other trainees experienced stress and burnout and they were not alone in needing to acquire successful coping mechanisms and stress management strategies. Conclusion: These findings suggest that physician trainees are a vulnerable group who may benefit from initiatives that promote wellbeing and from a more supportive work environment.Keywords: doctors' health, physician burnout, physician resilience, wellbeing workshop
Procedia PDF Downloads 192
96 Pervasive Computing: Model to Increase Arable Crop Yield through Detection Intrusion System (IDS)
Authors: Idowu Olugbenga Adewumi, Foluke Iyabo Oluwatoyinbo
Abstract:
Presently, there is considerable discussion of food security and of increasing the yield of arable crops throughout the world. This article briefly presents research efforts to create digital interfaces to nature, in particular in the area of crop production in agriculture, with the aim of increasing yield through pervasive computing. The approach goes beyond the use of sensor networks for environmental monitoring by emphasizing the development of a system architecture that detects intruders (the intrusion process), which reduce the farmer's yield by the end of the planting/harvesting period. The objective of the work is to set out a model for a hand-held or portable device for increasing the quality and quantity of arable crops. The process incorporates an infrared motion image sensor with a security alarm system that can send a noise signal to intruders on the farm. This model of a portable image-sensing device for monitoring or scaring off humans, rodents, birds and even pests will reduce post-harvest loss and thereby increase the yield on the farm; a minimal sketch of the sensing-and-alarm loop is given after this abstract. Nano-intelligence technology is proposed to combat and minimize the intrusion process that usually leads to low quality and quantity of farm produce. An intranet system will be in place, with a wireless radio (WLAN), router, server and client computer system or hand-held device, e.g. a PDA or mobile phone. This approach enables the development of hybrid systems that will be effective as a security measure on the farm, since precision agriculture has developed with the computerization of agricultural production systems and the networking of computerized control systems. In the intelligent plant production systems of controlled greenhouses, information on plant responses, measured by sensors, is used to optimize the system. Further work must be carried out on modelling in a pervasive computing environment to solve problems of agriculture, as the use of electronics in agriculture will attract more youth involvement in the industry. Keywords: pervasive computing, intrusion detection, precision agriculture, security, arable crop
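The sketch below illustrates the intrusion-detection loop described above: poll an infrared motion sensor, raise the noise alarm on detection, and log an event record that a farm-intranet server could forward to a PDA or mobile client. It is a hypothetical illustration only; the sensor and alarm functions are stand-ins for real device drivers, which the abstract does not specify.

```python
# Minimal, hypothetical sketch of the IR-motion -> alarm loop (sensor/alarm drivers are stand-ins).
import random
import time
from datetime import datetime

def read_ir_motion_sensor() -> bool:
    """Stand-in for the infrared motion image sensor driver; motion is simulated at random here."""
    return random.random() < 0.1

def sound_alarm() -> None:
    """Stand-in for the noise/alarm actuator that scares off intruders."""
    print("ALARM: noise signal emitted")

def intrusion_watch(poll_interval_s: float = 1.0, max_polls: int = 30) -> None:
    """Poll the sensor; on motion, raise the alarm and log an event record that a
    server on the farm intranet (WLAN) could push to a PDA/mobile client."""
    for _ in range(max_polls):
        if read_ir_motion_sensor():
            event = {"time": datetime.now().isoformat(), "event": "intrusion detected"}
            sound_alarm()
            print(event)  # in a full system, this record would be sent to the server
        time.sleep(poll_interval_s)

intrusion_watch()
```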
Procedia PDF Downloads 406
95 Systems Intelligence in Management (High Performing Organizations and People Score High in Systems Intelligence)
Authors: Raimo P. Hämäläinen, Juha Törmänen, Esa Saarinen
Abstract:
Systems thinking has been acknowledged as an important approach in the strategy and management literature ever since the seminal works of Ackoff in the 1970s and Senge in the 1990s. The early literature was very much focused on structures and organizational dynamics. Understanding systems is important, but making improvements also needs ways to understand human behavior in systems. Peter Senge's book The Fifth Discipline gave the inspiration for the development of the concept of Systems Intelligence (SI). The concept integrates the concepts of personal mastery and systems thinking. SI refers to intelligent behavior in the context of complex systems involving interaction and feedback. It is a competence related to the skills needed in strategy and in the environment of modern industrial engineering and management, where people skills and systems play an increasingly important role. The eight factors of Systems Intelligence have been identified from extensive surveys, and the factors relate to perceiving, attitude, thinking and acting. The personal self-evaluation test developed consists of 32 items, which can also be applied in a peer-evaluation mode. The concept and test extend to organizations too: one can talk about organizational systems intelligence. This paper reports the results of an extensive survey based on peer evaluation. The results show that systems intelligence correlates positively with professional performance. People in a managerial role score higher in SI than others. Age improves the SI score, but there is no gender difference. Top organizations score higher in all SI factors than lower-ranked ones. The SI tests can also be used as leadership and management development tools that support self-reflection and learning. Finding ways of enhancing organizational learning and development is important, and today gamification is a new and promising approach. The items in the SI test have been used to develop an interactive card game following the Topaasia game approach. It is an easy way of engaging people in a process that helps participants see and approach problems in their organization, and it also helps individuals identify challenges in their own behavior and improve their SI. Keywords: gamification, management competence, organizational learning, systems thinking
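To make the scoring of such a 32-item instrument concrete, the sketch below aggregates item responses into eight factor scores and an overall score. The factor labels, the even grouping of four consecutive items per factor, and the 1-7 response scale are assumptions made for illustration; the actual item-to-factor mapping belongs to the published SI inventory.

```python
import statistics

# Illustrative factor labels (assumed grouping; the abstract only says the factors
# relate to perceiving, attitude, thinking and acting).
FACTORS = ["systemic perception", "attunement", "positive attitude", "spirited discovery",
           "reflection", "wise action", "positive engagement", "effective responsiveness"]

def si_scores(responses: list[int]) -> dict[str, float]:
    """Average 32 item responses (1-7 Likert scale assumed) into 8 factor scores plus an overall score."""
    if len(responses) != 32:
        raise ValueError("Expected 32 item responses.")
    scores = {}
    for i, factor in enumerate(FACTORS):
        items = responses[i * 4:(i + 1) * 4]  # assumed mapping: 4 consecutive items per factor
        scores[factor] = statistics.mean(items)
    scores["overall SI"] = statistics.mean(responses)
    return scores

# Example: one self-rating or one peer-rating of 32 answers
print(si_scores([5, 6, 4, 5] * 8))
```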
Procedia PDF Downloads 98
94 Rapid and Long-term Alien Language Analysis - Forming Frameworks for the Interpretation of Alien Communication for More Intelligent Life
Authors: Samiksha Raviraja, Junaid Arif
Abstract:
One of the most important abilities of any species is the ability to communicate. This paper proposes steps to take if and when aliens come into contact with humans, and how humans would communicate with them. Such a situation would be time-sensitive, meaning that communication would be of the utmost importance if such an event were to happen. First, humans would need to establish mutual peace by conveying that there is no threat to the alien race. Second, the aliens would need to acknowledge this understanding and reciprocate. This would be extremely difficult to do regardless of their intelligence level unless they are very human-like and have similarities to our way of communicating. The first step towards understanding their mind is to analyze their level of intelligence: Level 1, low intelligence; Level 2, human-like intelligence; or Level 3, advanced or high intelligence. These three levels go hand in hand with the Kardashev scale. Further, the Barrow scale will also be used to categorize alien species in the hope of developing a common universal language to communicate in. This paper delves into how the level of intelligence can be used towards achieving communication with aliens by predicting various possible scenarios and outcomes and by proposing an intensive categorization system. This can be achieved by studying their emotional and intelligence quotients (along with their technological and scientific knowledge/intelligence). The limitations and capabilities of their intelligence must also be studied. By observing how they respond and react (expressions and senses) to different kinds of scenarios, items and people, the resulting data will enable good categorisation. It can be hypothesised that the more human-like aliens are, or the more they can relate to humans, the more likely it is that communication is possible. Depending on the situation, either humans teach the aliens a human language, or humans learn an alien language, or both races work together to develop a mutual understanding or mode of communication. There are three possible ways of contact: aliens visit Earth, humans discover aliens during space exploration, or contact occurs through technology in the form of signals. A much rarer case would be humans and aliens running into each other during space expeditions of their own. The first two possibilities allow a more in-depth analysis of the alien life and enhanced results compared with contact through signals alone. Finding a method of talking with aliens is important not only to protect Earth and humans but also for the advancement of science through the knowledge shared between the two species. Keywords: intelligence, Kardashev scale, Barrow scale, alien civilizations, emotional and intelligence quotient
Procedia PDF Downloads 73
93 An Intelligent Text Independent Speaker Identification Using VQ-GMM Model Based Multiple Classifier System
Authors: Ben Soltane Cheima, Ittansa Yonas Kelbesa
Abstract:
Speaker Identification (SI) is the task of establishing the identity of an individual based on his/her voice characteristics. The SI task is typically achieved by two-stage signal processing: training and testing. The training process calculates speaker-specific feature parameters from the speech and generates speaker models accordingly. In the testing phase, speech samples from unknown speakers are compared with the models and classified. Even though the performance of speaker identification systems has improved due to recent advances in speech processing techniques, there is still a need for improvement. In this paper, a Closed-Set Text-Independent Speaker Identification System (CISI) based on a Multiple Classifier System (MCS) is proposed, using the Mel Frequency Cepstrum Coefficient (MFCC) for feature extraction and a suitable combination of Vector Quantization (VQ) and the Gaussian Mixture Model (GMM), together with the Expectation Maximization (EM) algorithm, for speaker modeling. The use of a Voice Activity Detector (VAD) with a hybrid approach based on Short Time Energy (STE) and statistical modeling of background noise in the pre-processing step of feature extraction yields a better and more robust automatic speaker identification system. Investigation of the Linde-Buzo-Gray (LBG) clustering algorithm for initialization of the GMM, for estimating the underlying parameters in the EM step, also improved the convergence rate and system performance. The system further uses a relative index as a confidence measure in case of contradiction between the GMM and VQ identification results. Simulation results carried out on the voxforge.org speech database using MATLAB highlight the efficacy of the proposed method compared to earlier work. Keywords: feature extraction, speaker modeling, feature matching, Mel frequency cepstrum coefficient (MFCC), Gaussian mixture model (GMM), vector quantization (VQ), Linde-Buzo-Gray (LBG), expectation maximization (EM), pre-processing, voice activity detection (VAD), short time energy (STE), background noise statistical modeling, closed-set text-independent speaker identification system (CISI)
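The core MFCC-GMM stage of such a pipeline can be sketched as follows. This is a minimal illustration, not the authors' full multiple-classifier system: it omits the VAD/STE pre-processing, the VQ branch and the LBG initialization, and the file paths and model orders are assumptions.

```python
# Minimal MFCC + GMM closed-set speaker identification sketch (not the paper's full MCS).
import numpy as np
import librosa
from sklearn.mixture import GaussianMixture

def mfcc_features(wav_path: str, sr: int = 16000, n_mfcc: int = 13) -> np.ndarray:
    """Load audio and return an (n_frames, n_mfcc) MFCC matrix."""
    y, sr = librosa.load(wav_path, sr=sr)
    return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

def train_speaker_models(train_files: dict[str, list[str]]) -> dict[str, GaussianMixture]:
    """Fit one GMM per enrolled speaker from his/her training utterances (EM runs inside fit())."""
    models = {}
    for speaker, paths in train_files.items():
        feats = np.vstack([mfcc_features(p) for p in paths])
        gmm = GaussianMixture(n_components=16, covariance_type="diag",
                              max_iter=200, random_state=0)
        models[speaker] = gmm.fit(feats)
    return models

def identify(models: dict[str, GaussianMixture], test_wav: str) -> str:
    """Return the enrolled speaker whose model gives the highest average log-likelihood."""
    feats = mfcc_features(test_wav)
    return max(models, key=lambda spk: models[spk].score(feats))

# Usage (paths are hypothetical):
# models = train_speaker_models({"alice": ["alice_01.wav"], "bob": ["bob_01.wav"]})
# print(identify(models, "unknown.wav"))
```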
Procedia PDF Downloads 310
92 Smart Defect Detection in XLPE Cables Using Convolutional Neural Networks
Authors: Tesfaye Mengistu
Abstract:
Power cables play a crucial role in the transmission and distribution of electrical energy. As the electricity generation, transmission, distribution, and storage systems become smarter, there is a growing emphasis on incorporating intelligent approaches to ensure the reliability of power cables. Various types of electrical cables are employed for transmitting and distributing electrical energy, with cross-linked polyethylene (XLPE) cables being widely utilized due to their exceptional electrical and mechanical properties. However, insulation defects can occur in XLPE cables due to subpar manufacturing techniques during production and cable joint installation. To address this issue, experts have proposed different methods for monitoring XLPE cables. Some suggest the use of interdigital capacitive (IDC) technology for online monitoring, while others propose employing continuous wave (CW) terahertz (THz) imaging systems to detect internal defects in XLPE plates used for power cable insulation. In this study, we have developed models that employ a custom dataset collected locally to classify the physical safety status of individual power cables. Our models aim to replace physical inspections with computer vision and image processing techniques to classify defective power cables from non-defective ones. The implementation of our project utilized the Python programming language along with the TensorFlow package and a convolutional neural network (CNN). The CNN-based algorithm was specifically chosen for power cable defect classification. The results of our project demonstrate the effectiveness of CNNs in accurately classifying power cable defects. We recommend the utilization of similar or additional datasets to further enhance and refine our models. Additionally, we believe that our models could be used to develop methodologies for detecting power cable defects from live video feeds. We firmly believe that our work makes a significant contribution to the field of power cable inspection and maintenance. Our models offer a more efficient and cost-effective approach to detecting power cable defects, thereby improving the reliability and safety of power grids.Keywords: artificial intelligence, computer vision, defect detection, convolutional neural net
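A minimal sketch of the kind of TensorFlow/Keras CNN classifier described above is shown below. The directory layout (one sub-folder of images per class), image size and network depth are assumptions for illustration; the authors' custom dataset and exact architecture are not published in the abstract.

```python
# Minimal binary defect/no-defect image classifier sketch (illustrative, not the authors' exact model).
import tensorflow as tf

IMG_SIZE = (128, 128)

# Assumed layout: xlpe_dataset/defective/*.jpg and xlpe_dataset/non_defective/*.jpg
train_ds = tf.keras.utils.image_dataset_from_directory(
    "xlpe_dataset", validation_split=0.2, subset="training", seed=42,
    image_size=IMG_SIZE, batch_size=32)
val_ds = tf.keras.utils.image_dataset_from_directory(
    "xlpe_dataset", validation_split=0.2, subset="validation", seed=42,
    image_size=IMG_SIZE, batch_size=32)

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # defective vs. non-defective
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(train_ds, validation_data=val_ds, epochs=10)
```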
Procedia PDF Downloads 114
91 Practice on Design Knowledge Management and Transfer across the Life Cycle of a New-Built Nuclear Power Plant in China
Authors: Danying Gu, Xiaoyan Li, Yuanlei He
Abstract:
As a knowledge-intensive industry, the nuclear industry highly values safety and quality. The life cycle of a nuclear power plant (NPP) can last 100 years, from the initial research and design to its decommissioning. How to implement high-quality knowledge management, and how to contribute to a safer, more advanced and more economical NPP, are the most important issues and responsibilities for knowledge management. As the leader of the nuclear industry, the nuclear research and design institute has the competitive advantages of its advanced technology, knowledge and information, so the design knowledge management (DKM) of the nuclear research and design institute is the core of knowledge management in the whole nuclear industry. In this paper, the study and practice of DKM and knowledge transfer across the life cycle of a new-built NPP in China are introduced. For this digital, intelligent NPP, the whole design process is based on a digital design platform which includes an NPP engineering and design dynamic analyzer, a visualization engineering verification platform, a digital operation maintenance support platform and a digital equipment design and manufacture integrated collaborative platform. In order for all the design data and information to transfer across design, construction, commissioning and operation, the overall architecture of the new-built digital NPP should become a modern knowledge management system, so a digital information transfer model across the NPP life cycle is proposed in this paper. The challenges related to design knowledge transfer are also discussed, such as digital information handover, the data center and data sorting, and a unified data coding system. On the other hand, effective delivery of design information during the construction and operation phases will contribute to a comprehensive understanding of design ideas, components and systems by the construction contractor and the operation unit, greatly increasing safety, quality and economic benefits over the life cycle. The operation and maintenance records generated during NPP operation are of great significance for maintaining the operating state of the NPP, especially with regard to the comprehensiveness, validity and traceability of the records. The requirements of an online monitoring and smart diagnosis system for the NPP are therefore also proposed, to help utility owners improve safety and efficiency. Keywords: design knowledge management, digital nuclear power plant, knowledge transfer, life cycle
Procedia PDF Downloads 273
90 Critical Evaluation of the Transformative Potential of Artificial Intelligence in Law: A Focus on the Judicial System
Authors: Abisha Isaac Mohanlal
Abstract:
Amidst all suspicions and cynicism raised by the legal fraternity, Artificial Intelligence has found its way into the legal system and has revolutionized the conventional forms of legal services delivery. Be it legal argumentation and research or resolution of complex legal disputes; artificial intelligence has crept into all legs of modern day legal services. Its impact has been largely felt by way of big data, legal expert systems, prediction tools, e-lawyering, automated mediation, etc., and lawyers around the world are forced to upgrade themselves and their firms to stay in line with the growth of technology in law. Researchers predict that the future of legal services would belong to artificial intelligence and that the age of human lawyers will soon rust. But as far as the Judiciary is concerned, even in the developed countries, the system has not fully drifted away from the orthodoxy of preferring Natural Intelligence over Artificial Intelligence. Since Judicial decision-making involves a lot of unstructured and rather unprecedented situations which have no single correct answer, and looming questions of legal interpretation arise in most of the cases, discretion and Emotional Intelligence play an unavoidable role. Added to that, there are several ethical, moral and policy issues to be confronted before permitting the intrusion of Artificial Intelligence into the judicial system. As of today, the human judge is the unrivalled master of most of the judicial systems around the globe. Yet, scientists of Artificial Intelligence claim that robot judges can replace human judges irrespective of how daunting the complexity of issues is and how sophisticated the cognitive competence required is. They go on to contend that even if the system is too rigid to allow robot judges to substitute human judges in the recent future, Artificial Intelligence may still aid in other judicial tasks such as drafting judicial documents, intelligent document assembly, case retrieval, etc., and also promote overall flexibility, efficiency, and accuracy in the disposal of cases. By deconstructing the major challenges that Artificial Intelligence has to overcome in order to successfully invade the human- dominated judicial sphere, and critically evaluating the potential differences it would make in the system of justice delivery, the author tries to argue that penetration of Artificial Intelligence into the Judiciary could surely be enhancive and reparative, if not fully transformative.Keywords: artificial intelligence, judicial decision making, judicial systems, legal services delivery
Procedia PDF Downloads 225
89 Navigating through Organizational Change: TAM-Based Manual for Digital Skills and Safety Transitions
Authors: Margarida Porfírio Tomás, Paula Pereira, José Palma Oliveira
Abstract:
Robotic grasping is advancing rapidly, but transferring techniques from rigid to deformable objects remains a challenge. Deformable and flexible items, such as food containers, demand nuanced handling due to their changing shapes. Bridging this gap is crucial for applications in food processing, surgical robotics, and household assistance. AGILEHAND, a Horizon project, focuses on developing advanced technologies for sorting, handling, and packaging soft and deformable products autonomously. These technologies serve as strategic tools to enhance flexibility, agility, and reconfigurability within the production and logistics systems of European manufacturing companies. Key components include intelligent detection, self-adaptive handling, efficient sorting, and agile, rapid reconfiguration. The overarching goal is to optimize work environments and equipment, ensuring both efficiency and safety. As new technologies emerge in the food industry, there will be implications for the labour force, for safety, and for the acceptance of the new technologies. To address these implications, AGILEHAND emphasizes the integration of the social sciences and humanities, for example through application of the Technology Acceptance Model (TAM). The project aims to create a change management manual that will outline strategies for developing digital skills and managing health and safety transitions. It will also provide best practices and models for organizational change. Additionally, AGILEHAND will design effective training programs to enhance employee skills and knowledge. This information will be obtained through a combination of case studies, structured interviews, questionnaires, and a comprehensive literature review. The project will explore how organizations adapt during periods of change and identify factors influencing employee motivation and job satisfaction. This project received funding from the European Union's Horizon 2020/Horizon Europe research and innovation programme under grant agreement No. 101092043 (AGILEHAND). Keywords: change management, technology acceptance model, organizational change, health and safety
Procedia PDF Downloads 46
88 A Multilingual Model in the Multicultural World
Authors: Marina Petrova
Abstract:
Language policy issues related to the preservation and development of the native languages of the peoples of Russia and of the state languages of the national republics are increasingly becoming the focus of attention of educators and parents, public and national figures. Is it legal to teach the national language or the mother tongue as the state language? Because of such disputes, language-phobic moods easily evolve into xenophobia among the population. However, a civilized, intelligent multicultural personality can only be formed if the country develops bilingualism and multilingualism, and languages, as a political tool, help to find 'keys' to rather closed national communities, both within a poly-ethnic state and in the internal relations of multilingual countries. The purpose of this study is to design and theoretically substantiate an efficient model of language education in the innovatively developing Republic of Sakha. Eight hundred participants from different educational institutions of Yakutia worked on developing a multilingual model of education. This investigation is of considerable practical importance because it allows researchers to build a methodical system designed to create the conditions for the formation of a cultural language personality and for the development of the multilingual communicative competence of Yakut youth, necessary for communication in their native language, in Russian and in foreign languages. The selected methodology of humane-personal and competence approaches is reliable and valid. The researchers used a variety of sources of information, including related scientific fields (philosophy of education, sociology, humane and social pedagogy, psychology, effective psychotherapy, methods of teaching Russian, psycholinguistics, socio-cultural education, ethnoculturology and ethnopsychology). Of special note are the application of theoretical and empirical research methods, the combination of academic analysis of the problem with experiential training, the positive results of the experimental work, the representative samples, and the correct processing and statistical reliability of the obtained data. This ensures the validity of the investigation's findings as well as their broad introduction into the practice of life-long language education. Keywords: intercultural communication, language policy, multilingual and multicultural education, the Sakha Republic of Yakutia
Procedia PDF Downloads 224
87 Web Development in Information Technology with Javascript, Machine Learning and Artificial Intelligence
Authors: Abdul Basit Kiani, Maryam Kiani
Abstract:
Online developers now have the tools necessary to create online apps that are not only reliable but also highly interactive, thanks to the introduction of JavaScript frameworks and APIs. The objective is to give a broad overview of the recent advances in the area. The fusion of machine learning (ML) and artificial intelligence (AI) has expanded the possibilities for web development. Modern websites now include chatbots, clever recommendation systems, and customization algorithms built in. In the rapidly evolving landscape of modern websites, it has become increasingly apparent that user engagement and personalization are key factors for success. To meet these demands, websites now incorporate a range of innovative technologies. One such technology is chatbots, which provide users with instant assistance and support, enhancing their overall browsing experience. These intelligent bots are capable of understanding natural language and can answer frequently asked questions, offer product recommendations, and even help with troubleshooting. Moreover, clever recommendation systems have emerged as a powerful tool on modern websites. By analyzing user behavior, preferences, and historical data, these systems can intelligently suggest relevant products, articles, or services tailored to each user's unique interests. This not only saves users valuable time but also increases the chances of conversions and customer satisfaction. Additionally, customization algorithms have revolutionized the way websites interact with users. By leveraging user preferences, browsing history, and demographic information, these algorithms can dynamically adjust the website's layout, content, and functionalities to suit individual user needs. This level of personalization enhances user engagement, boosts conversion rates, and ultimately leads to a more satisfying online experience. In summary, the integration of chatbots, clever recommendation systems, and customization algorithms into modern websites is transforming the way users interact with online platforms. These advanced technologies not only streamline user experiences but also contribute to increased customer satisfaction, improved conversions, and overall website success.Keywords: Javascript, machine learning, artificial intelligence, web development
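The "clever recommendation system" idea described above, suggesting relevant items from a user's browsing history, can be illustrated with a tiny content-based sketch. It is shown in Python for consistency with the other examples in this listing (a production web implementation would more likely live in a JavaScript/TypeScript service), and the catalogue and user history are invented purely for illustration.

```python
# Toy content-based recommender: suggest catalogue items similar to what the user viewed.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

catalogue = {                       # hypothetical item descriptions
    "running shoes": "lightweight running shoes for road and trail",
    "trail backpack": "waterproof backpack for hiking and trail running",
    "office chair": "ergonomic office chair with lumbar support",
    "yoga mat": "non-slip yoga mat for home workouts",
}
user_history = ["lightweight shoes I use for trail running"]   # hypothetical browsing history

vectorizer = TfidfVectorizer()
item_vectors = vectorizer.fit_transform(catalogue.values())
user_vector = vectorizer.transform(user_history)

scores = cosine_similarity(user_vector, item_vectors).ravel()
ranked = sorted(zip(catalogue.keys(), scores), key=lambda kv: kv[1], reverse=True)
print(ranked[:2])   # top-2 personalised suggestions
```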
Procedia PDF Downloads 81
86 Exploring Behavioural Biases among Indian Investors: A Qualitative Inquiry
Authors: Satish Kumar, Nisha Goyal
Abstract:
In the stock market, individual investors exhibit different kinds of behaviour. Traditional finance is built on the notion of 'homo economicus', which states that humans always make perfectly rational choices to maximize their wealth and minimize risk. That is, traditional finance is concerned with how investors should behave rather than with how actual investors behave. Behavioural finance provides the explanation for this phenomenon. Although finance has been studied for thousands of years, behavioural finance is an emerging field that combines behavioural or psychological aspects with conventional economic and financial theories to explain how emotions and cognitive factors influence investors' behaviour. These emotions and cognitive factors are known as behavioural biases. Because of these biases, investors make irrational investment decisions. Besides the emotional and cognitive factors, the social influence of the media, as well as of friends, relatives and colleagues, also affects investment decisions. Psychological factors influence individual investors' investment decision making, but few studies have used qualitative methods to understand these factors. The aim of this study is to explore the behavioural factors or biases that affect individuals' investment decision making. For the purpose of this exploratory study, an in-depth interview method was used because it provides much more exhaustive information and a relaxed atmosphere in which people feel more comfortable providing information. Twenty investment advisors with a minimum of five years' experience in securities firms were interviewed. Thematic content analysis was used to analyse the interview transcripts; this process involves analysis of the transcripts, coding, and identification of themes from the data. Based on the analysis, we categorized the statements of the advisors into various themes: past market returns and volatility; preference for safe returns; a tendency to believe they are better than others; a tendency to divide their money into different accounts/assets; a tendency to hold on to loss-making assets; a preference for investing in familiar securities; a tendency to believe that past events were predictable; a tendency to rely on a reference point; a tendency to rely on other sources of information; a tendency to regret past decisions; greater sensitivity towards losses than gains; a tendency to rely on their own skills; and a tendency to buy rising stocks with the expectation that the rise will continue, among other major concerns raised by the experts about investors. The findings of the study revealed 13 biases present in Indian investors: overconfidence bias, the disposition effect, familiarity bias, the framing effect, anchoring bias, availability bias, self-attribution bias, representativeness, mental accounting, hindsight bias, regret aversion, loss aversion and herding/media bias. These biases have a negative connotation because they produce a distortion in the calculation of an outcome. They are classified under three categories: cognitive errors, emotional biases and social interaction. The findings of this study may assist both financial service providers and researchers in understanding the various psychological biases of individual investors in investment decision making.
Additionally, individual investors will become aware of these behavioural biases, which will aid them in making sensible and efficient investment decisions. Keywords: financial advisors, individual investors, investment decisions, psychological biases, qualitative thematic content analysis
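As a concrete illustration of the coding-and-theme-counting step of thematic content analysis, the toy sketch below tallies how often each bias code appears across coded interview transcripts. The codes and the coded interviews are invented; in practice the coding itself is an interpretive, manual step performed by the researchers.

```python
# Toy tally of bias codes assigned to interview excerpts during thematic content analysis.
from collections import Counter

# Hypothetical output of manual coding: one list of codes per advisor interview.
coded_interviews = [
    ["overconfidence", "loss aversion", "herding"],
    ["disposition effect", "loss aversion"],
    ["anchoring", "overconfidence", "mental accounting"],
]

theme_counts = Counter(code for interview in coded_interviews for code in interview)
for theme, count in theme_counts.most_common():
    print(f"{theme}: coded in {count} interview excerpt(s)")
```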
Procedia PDF Downloads 169
85 Placement of Inflow Control Valve for Horizontal Oil Well
Authors: S. Thanabanjerdsin, F. Srisuriyachai, J. Chewaroungroj
Abstract:
Drilling horizontal wells is one of the most cost-effective methods of exploiting a reservoir, as it increases the exposure area between the well and the formation. Together with horizontal well technology, intelligent completion is often co-utilized to increase petroleum production by monitoring and controlling downhole production. The combination of both technologies creates an opportunity to reduce the water cresting phenomenon, a detrimental problem that not only lowers oil recovery but also causes environmental problems due to water disposal. Flow of reservoir fluid results from the difference between reservoir and wellbore pressure. In a horizontal well, reservoir fluid around the heel enters the wellbore at a higher rate than at the toe. As a consequence, the oil-water contact (OWC) on the heel side moves upward relatively faster than on the toe side. This causes the well to encounter an early water encroachment problem. Installation of inflow control valves (ICVs) in particular sections of a horizontal well involves several parameters, such as the number of ICVs, the water cut constraint of each valve, and the length of each section. This study focuses mainly on optimization of the ICV configuration to minimize water production and, at the same time, enhance oil production. A reservoir model consisting of a high-aspect-ratio oil-bearing zone above an underlying aquifer is drilled with a horizontal well and completed with varying ICV segments. Optimization of the horizontal well configuration is first performed by varying the number of ICVs, the segment length, and the individual preset water cut for each segment. Simulation results show that installing ICVs can increase the oil recovery factor by up to 5% of the original oil in place (OOIP) and can reduce produced water, depending on the ICV segment length as well as the ICV parameters. For equally partitioned ICV segments, a larger number of segments results in better oil recovery; however, exceeding 10 segments may not give significant additional recovery. In the first production period, deformation of the OWC depends strongly on the number of segments along the well: a higher number of segments results in smoother deformation of the OWC. After water breakthrough at the heel segment, the second production period begins, in which deformation of the OWC is principally dominated by the ICV parameters. In certain situations in which the OWC is unstable, such as a high production rate, high-viscosity fluid above the aquifer, or a strong aquifer, the second production period may give a wide enough window for the ICV parameters to play their role. Keywords: horizontal well, water cresting, inflow control valve, reservoir simulation
Procedia PDF Downloads 420
84 Customer Acquisition through Time-Aware Marketing Campaign Analysis in Banking Industry
Authors: Harneet Walia, Morteza Zihayat
Abstract:
Customer acquisition has become one of the critical issues for any business in the 21st century; a healthy customer base is the essential asset of the banking business. Term deposits act as a major source of cheap funds for banks to invest and benefit from interest rate arbitrage. To attract customers, the marketing campaigns at most financial institutions consist of multiple outbound telephone calls, with more than one contact per customer, which is a very time-consuming process. Therefore, customized direct marketing has become more critical than ever for attracting new clients. As customer acquisition becomes more difficult to achieve, an intelligent and refined contact list is necessary to sell a product smartly. The aim of this research is to increase the effectiveness of campaigns by predicting the customers who are most likely to subscribe to a fixed deposit and suggesting the most suitable month in which to reach out to them. We design a Time-Aware Upsell Prediction Framework (TAUPF) using two different approaches, with the aim of finding the best approach and technique for building the prediction model. TAUPF is implemented using the Upsell Prediction Approach (UPA) and the Clustered Upsell Prediction Approach (CUPA). We also address the data imbalance problem by examining and comparing different methods of sampling (up-sampling and down-sampling). Our results have shown that building such a model is quite feasible and profitable for financial institutions. The TAUPF can easily be used in any industry, such as telecom, automobile or tourism, where the TAUPF (CUPA or UPA) holds valid; in our case, CUPA proved more reliable. As shown in our research, one of the most important challenges is to define measures that have enough predictive power, as subscription to a fixed deposit depends on highly ambiguous situations and cannot easily be isolated. While we have shown the practicality of a time-aware upsell prediction model in which financial institutions benefit from contacting customers in the suggested month, further research is needed to understand the specific time of day. In addition, a further empirical/pilot study on real live customers needs to be conducted to prove the effectiveness of the model in the real world. Keywords: customer acquisition, predictive analysis, targeted marketing, time-aware analysis
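To make the up-sampling and "best contact month" ideas concrete, the sketch below up-samples the minority (subscriber) class and then scores a single customer across the twelve candidate contact months, picking the month with the highest predicted subscription probability. It is a simplified illustration of the general idea, not the authors' TAUPF/UPA/CUPA implementation; the feature names and data are assumptions.

```python
# Simplified sketch: class up-sampling + picking the best contact month for one customer.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.utils import resample

def upsample_minority(df: pd.DataFrame, target: str = "subscribed") -> pd.DataFrame:
    """Up-sample the minority class so both classes are equally represented."""
    majority = df[df[target] == 0]
    minority = df[df[target] == 1]
    minority_up = resample(minority, replace=True, n_samples=len(majority), random_state=42)
    return pd.concat([majority, minority_up])

# Hypothetical campaign history: age, balance, contact month (1-12), outcome.
history = pd.DataFrame({
    "age": [34, 51, 29, 45, 38, 60],
    "balance": [1200, 300, 5400, 800, 2500, 150],
    "month": [5, 11, 5, 8, 2, 11],
    "subscribed": [1, 0, 1, 0, 1, 0],
})

balanced = upsample_minority(history)
model = RandomForestClassifier(random_state=42)
model.fit(balanced[["age", "balance", "month"]], balanced["subscribed"])

customer = {"age": 42, "balance": 2000}
candidates = pd.DataFrame([{**customer, "month": m} for m in range(1, 13)])
probs = model.predict_proba(candidates)[:, 1]
best_month = int(candidates.loc[probs.argmax(), "month"])
print(f"Suggested contact month: {best_month} (p = {probs.max():.2f})")
```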
Procedia PDF Downloads 125
83 Development of an Instrument for Measurement of Thermal Conductivity and Thermal Diffusivity of Tropical Fruit Juice
Authors: T. Ewetumo, K. D. Adedayo, Festus Ben
Abstract:
Knowledge of the thermal properties of foods is of fundamental importance in the food industry for the design of processing equipment. However, for tropical fruit juice there is very little information in the literature, seriously hampering processing procedures. This research work describes the development of an instrument for automated thermal conductivity and thermal diffusivity measurement of tropical fruit juice using a transient thermal probe technique based on the line heat source principle. The system consists of two thermocouple sensors, a constant current source, a heater, a thermocouple amplifier, a microcontroller, a microSD card shield and an intelligent liquid crystal display. A fixed distance of 6.50 mm was maintained between the two probes. When heat is applied, the temperature rise at the heater probe is measured at intervals of 4 s for 240 s. The measuring element conforms as closely as possible to an infinite line source of heat in an infinite fluid. Under these conditions, thermal conductivity and thermal diffusivity are measured simultaneously: thermal conductivity is determined from the slope of a plot of the temperature rise of the heating element against the logarithm of time, while thermal diffusivity is determined from the time taken for the sample to attain its peak temperature over a fixed diffusion distance. A constant current source was designed to apply a power input of 16.33 W/m to the probe throughout the experiment. The thermal probe was interfaced with a digital display and data logger using an application program written in C++. Calibration of the instrument was done by determining the thermal properties of distilled water; error due to convection was avoided by adding 1.5% agar to the water. The instrument has been used for measurement of the thermal properties of banana, orange and watermelon. Thermal conductivity values of 0.593, 0.598 and 0.586 W/m·°C and thermal diffusivity values of 1.053 × 10⁻⁷, 1.086 × 10⁻⁷ and 0.959 × 10⁻⁷ m²/s were obtained for banana, orange and watermelon respectively. Measured values were stored on a microSD card. The instrument performed very well, measuring the thermal conductivity and thermal diffusivity of the tropical fruit juice samples with statistical analysis (ANOVA) showing no significant difference (p > 0.05) between the literature standards and the estimated averages of each sample investigated with the developed instrument. Keywords: thermal conductivity, thermal diffusivity, tropical fruit juice, diffusion equation
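The conductivity calculation described above (slope of temperature rise versus the logarithm of time, with q = 16.33 W/m) follows from the line heat source relation ΔT ≈ (q/4πk)·ln(t) + C, so that k = q/(4π·slope). A minimal sketch of that fit is given below; the temperature readings are synthetic values invented for illustration, and the diffusivity step (time to peak over the fixed 6.50 mm spacing) is omitted because the abstract does not give its exact working formula.

```python
# Fit k from the late-time slope of temperature rise vs ln(t) (line heat source method).
import numpy as np

Q_PER_LENGTH = 16.33  # W/m, heater power per unit length (from the instrument design)

def thermal_conductivity(times_s: np.ndarray, temp_rise_c: np.ndarray) -> float:
    """k = q / (4*pi*slope), where slope comes from a linear fit of dT against ln(t)."""
    slope, _intercept = np.polyfit(np.log(times_s), temp_rise_c, 1)
    return Q_PER_LENGTH / (4.0 * np.pi * slope)

# Illustrative (invented) readings at 4 s intervals over the late-time linear region:
t = np.arange(40, 244, 4, dtype=float)   # s
dT = 2.19 * np.log(t) - 3.0              # degC, synthetic data with slope ~2.19
print(f"k = {thermal_conductivity(t, dT):.3f} W/m.degC")   # ~0.593, i.e. a banana-juice-like value
```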
Procedia PDF Downloads 357
82 Environmental Analysis of Urban Communities: A Case Study of Air Pollutant Distribution in Smouha Arteries, Alexandria Egypt
Authors: Sammar Zain Allam
Abstract:
Smart growth, intelligent cities, and the healthy cities cited by the WHO (World Health Organization) all call for clean air and for minimizing air pollutants in consideration of human health. Air quality is a pressing matter in achieving ecological cities and the sustainable environmental development of urban fabric design. The selection criteria depend on the strategic location of our study area, which lies at the entry to the city of Alexandria from its agricultural road; in addition, it represents the city centre for retail, business and educational amenities. Our study analyzes readings of specific factors affecting air quality in a central area of Alexandria, and compares those readings to standard measures of carbon dioxide, carbon monoxide, suspended particles and air velocity (air flow). Carbon emissions are considered in our study, in addition to suspended particles and air velocity, since carbon dioxide and carbon monoxide have become central elements of environmental and sustainability studies with the emergence of global warming and the greenhouse effect. Moreover, increasing particulate matter is causing breathing problems, especially for children and the elderly, and threatens the ability of future generations to meet their own needs, as in the definition of sustainable development. Analysis of carbon dioxide, carbon monoxide and suspended particles, together with air velocity, was carried out in the study area to reveal the relationship between these elements and urban fabric design and land use distribution. In conclusion, dense urban fabric affects air flow and thus results in the concentration of air pollutants in certain zones, whereas the presence of open space with green areas allows air pollutants to fade and helps in their absorption. Along with dense urban fabric, high-rise buildings trap air currents, which contributes to high readings of these elements. Street design may also facilitate the circulation of air, which helps carry these pollutants away and disperse them over a wider space, decreasing their harmful effects. Keywords: carbon emissions, air quality measurements, arteries air quality, airflow or air velocity, particulate matter, clean air, urban density
Procedia PDF Downloads 427
81 Facilitating Primary Care Practitioners to Improve Outcomes for People With Oropharyngeal Dysphagia Living in the Community: An Ongoing Realist Review
Authors: Caroline Smith, Professor Debi Bhattacharya, Sion Scott
Abstract:
Introduction: Oropharyngeal dysphagia (OD) affects around 15% of older people; however, it is often unrecognised and under-diagnosed until they are hospitalised. There is a need for primary care healthcare practitioners (HCPs) to assume a proactive role in identifying and managing OD to prevent adverse outcomes such as aspiration pneumonia. Understanding the determinants of primary care HCPs undertaking this new behaviour provides the intervention targets to be addressed. This realist review, underpinned by the Theoretical Domains Framework (TDF), aims to synthesise the relevant literature and develop programme theories to understand which interventions work, how they work and under what circumstances, in order to facilitate HCPs to prevent harm from OD. Combining realist methodology with behavioural science will permit the conceptualisation of intervention components as theoretical behavioural constructs, thus informing the design of a future behaviour change intervention. Furthermore, through the TDF's linkage to a taxonomy of behaviour change techniques, we will identify corresponding behaviour change techniques to include in this intervention. Methods and analysis: We are following the five steps for undertaking a realist review: 1) clarify the scope, 2) search the literature, 3) appraise and extract data, 4) synthesise evidence, 5) evaluate. We have searched the Medline, Google Scholar, PubMed, EMBASE, CINAHL, AMED, Scopus and PsycINFO databases. We are obtaining additional evidence through grey literature, snowball sampling, lateral searching and consultation of the stakeholder group. Literature is being screened, evaluated and synthesised in Excel and NVivo. We will appraise evidence in relation to its relevance and rigour. Data will be extracted and synthesised according to its relation to the initial programme theories (IPTs). The IPTs were constructed after the preliminary literature search, informed by the TDF and with input from a stakeholder group of patient and public involvement advisors, general practitioners, speech and language therapists, geriatricians and pharmacists. We will follow the Realist and Meta-narrative Evidence Syntheses: Evolving Standards (RAMESES) quality and publication standards to report the study results. Results: In this ongoing review, our search has identified 1417 manuscripts, with approximately 20% progressing to full-text screening. We inductively generated 10 IPTs that hypothesise that practitioners require: the knowledge to spot the signs and symptoms of OD; the skills to provide initial advice and support; and access to resources in their working environment to support them in conducting these new behaviours. We mapped the 10 IPTs to 8 TDF domains and then generated a further 12 IPTs deductively, using the domain definitions to cover the remaining 6 TDF domains. The deductively generated IPTs broadened our thinking to consider domains such as 'Emotion', 'Optimism' and 'Social Influence', e.g. if practitioners perceive that patients, carers and relatives expect initial advice and support, then they will be more likely to provide this, because they will feel obligated to do so. After prioritisation with stakeholders using a modified nominal group technique approach, a maximum of 10 IPTs will progress to testing against the literature. Keywords: behaviour change, deglutition disorders, primary healthcare, realist review
Procedia PDF Downloads 86
80 A Study of Seismic Design Approaches for Steel Sheet Piles: Hydrodynamic Pressures and Reduction Factors Using CFD and Dynamic Calculations
Authors: Helena Pera, Arcadi Sanmartin, Albert Falques, Rafael Rebolo, Xavier Ametller, Heiko Zillgen, Cecile Prum, Boris Even, Eric Kapornyai
Abstract:
Sheet pile systems can be an interesting solution for harbour and quay designs. However, current design methods lead to conservative approaches due to the lack of a specific basis of design; for instance, some design features still rely on pseudo-static approaches even though the problem is dynamic. With this concern in mind, the study focuses particularly on the definition of hydrodynamic water pressure and on the stability analysis of sheet pile systems under seismic loads. During a seismic event, seawater produces hydrodynamic pressures on structures. Current design methods introduce hydrodynamic forces by means of the Westergaard formulation and Eurocode recommendations, which apply a constant hydrodynamic pressure on the front sheet pile during the entire earthquake. As a result, the hydrodynamic load may represent 20% of the total forces produced on the sheet pile. Nonetheless, some studies question that approach. Hence, this study assesses the soil-structure-fluid interaction of sheet piles under seismic action in order to evaluate whether current design strategies overestimate hydrodynamic pressures. For that purpose, the study performs various simulations with Plaxis 2D, a well-known geotechnical software package, and with CFD models, which treat fluid dynamic behaviours. Since neither Plaxis nor CFD can resolve a coupled soil-fluid problem, the investigation imposes sheet pile displacements from Plaxis as input data for the CFD model. The CFD model then provides hydrodynamic pressures under seismic action, which fit the theoretical Westergaard pressures when these are calculated using the acceleration at each moment of the earthquake. Thus, hydrodynamic pressures fluctuate during the seismic action instead of remaining constant, as the design recommendations propose. Additionally, these findings show that the hydrodynamic pressure contributes only about 5% of the total load applied on the sheet pile, owing to its instantaneous nature. These results are in line with other studies that use added-mass methods for hydrodynamic pressures. Another important feature in sheet pile design is the assessment of overall geotechnical stability. This relies on pseudo-static analysis, since dynamic analysis cannot provide a safety calculation, and consequently the seismic action must be estimated. One of its relevant factors is the selection of the seismic reduction factor. A large number of studies discuss its importance, but also its many uncertainties. Moreover, current European standards do not make a clear statement on this and recommend using a reduction factor equal to 1, which leads to conservative requirements compared with more advanced methods. In this situation, the study calibrates the seismic reduction factor by fitting results from pseudo-static to dynamic analyses. The investigation concludes that pseudo-static analyses could reduce the seismic action by 40-50%. These results are in line with studies by Japanese and European working groups. In addition, it seems suitable to account for the flexibility of the sheet pile-soil system. Nevertheless, the calibrated reduction factor is subject to the particular conditions of each design case, and further research would help to specify recommendations for selecting reduction factor values in the early stages of design. In conclusion, sheet pile design still has scope for improving its methodologies and approaches, and design could consequently offer better seismic solutions thanks to advanced methods such as the findings of this study. Keywords: computational fluid dynamics, hydrodynamic pressures, pseudo-static analysis, quays, seismic design, steel sheet pile
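The contrast drawn above, a constant design pressure versus a pressure that follows the instantaneous ground acceleration, can be illustrated with Westergaard's approximate relation p(z) ≈ (7/8)·ρw·a·√(H·z). The sketch below evaluates it for a constant design acceleration and for a time-varying acceleration history; the water depth, acceleration amplitude and acceleration history are placeholders, not data from the study.

```python
# Westergaard-type hydrodynamic pressure: constant design value vs. instantaneous acceleration.
import numpy as np

RHO_W = 1025.0  # kg/m3, seawater density (assumed)
G = 9.81        # m/s2

def westergaard_pressure(z, accel, depth):
    """Approximate hydrodynamic pressure p(z) = 7/8 * rho_w * a * sqrt(H*z), z measured from the water surface."""
    return 7.0 / 8.0 * RHO_W * accel * np.sqrt(depth * np.asarray(z))

H = 12.0                                  # m, water depth in front of the wall (placeholder)
a_design = 0.25 * G                       # constant peak design acceleration (placeholder)
t = np.linspace(0.0, 10.0, 500)           # s
a_t = a_design * np.sin(2.0 * np.pi * 1.5 * t) * np.exp(-0.2 * t)  # placeholder acceleration history

p_design_toe = westergaard_pressure(H, a_design, H)  # constant value used in current practice
p_toe_t = westergaard_pressure(H, a_t, H)            # fluctuates with the instantaneous acceleration

print(f"Constant design pressure at the toe: {p_design_toe / 1e3:.1f} kPa")
print(f"Instantaneous toe pressure range: {p_toe_t.min() / 1e3:.1f} to {p_toe_t.max() / 1e3:.1f} kPa")
```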
Procedia PDF Downloads 143
79 A Digital Twin Approach to Support Real-time Situational Awareness and Intelligent Cyber-physical Control in Energy Smart Buildings
Authors: Haowen Xu, Xiaobing Liu, Jin Dong, Jianming Lian
Abstract:
Emerging smart buildings often employ cyberinfrastructure, cyber-physical systems, and Internet of Things (IoT) technologies to increase the automation and responsiveness of building operations for better energy efficiency and lower carbon emission. These operations include the control of Heating, Ventilation, and Air Conditioning (HVAC) and lighting systems, which are often considered a major source of energy consumption in both commercial and residential buildings. Developing energy-saving control models for optimizing HVAC operations usually requires the collection of high-quality instrumental data from iterations of in-situ building experiments, which can be time-consuming and labor-intensive. This abstract describes a digital twin approach to automate building energy experiments for optimizing HVAC operations through the design and development of an adaptive web-based platform. The platform is created to enable (a) automated data acquisition from a variety of IoT-connected HVAC instruments, (b) real-time situational awareness through domain-based visualizations, (c) adaption of HVAC optimization algorithms based on experimental data, (d) sharing of experimental data and model predictive controls through web services, and (e) cyber-physical control of individual instruments in the HVAC system using outputs from different optimization algorithms. Through the digital twin approach, we aim to replicate a real-world building and its HVAC systems in an online computing environment to automate the development of building-specific model predictive controls and collaborative experiments in buildings located in different climate zones in the United States. We present two case studies to demonstrate our platform’s capability for real-time situational awareness and cyber-physical control of the HVAC in the flexible research platforms within the Oak Ridge National Laboratory (ORNL) main campus. Our platform is developed using adaptive and flexible architecture design, rendering the platform generalizable and extendable to support HVAC optimization experiments in different types of buildings across the nation.Keywords: energy-saving buildings, digital twins, HVAC, cyber-physical system, BIM
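The web-service pattern described above, acquire IoT readings, run an optimization/control routine, and push setpoints back to the physical HVAC, can be sketched as a simple polling loop. The endpoints, payload fields and the trivial rule-based controller below are hypothetical placeholders; a real deployment would substitute the platform's actual APIs and a model predictive controller.

```python
# Minimal sketch of a digital-twin style acquire -> decide -> actuate loop (hypothetical endpoints).
import time
import requests

SENSOR_URL = "https://building-twin.example.org/api/zones/1/sensors"      # hypothetical endpoint
SETPOINT_URL = "https://building-twin.example.org/api/zones/1/setpoint"   # hypothetical endpoint

def decide_setpoint(zone_temp_c: float, occupied: bool) -> float:
    """Placeholder control rule; a real system would call a model predictive controller here."""
    return 22.0 if occupied else 27.0  # setback when the zone is unoccupied

def control_loop(poll_interval_s: float = 300.0) -> None:
    while True:
        reading = requests.get(SENSOR_URL, timeout=10).json()  # e.g. {"temp_c": 24.3, "occupied": true}
        setpoint = decide_setpoint(reading["temp_c"], reading["occupied"])
        requests.post(SETPOINT_URL, json={"cooling_setpoint_c": setpoint}, timeout=10)
        print(f"zone temp {reading['temp_c']} C, occupied={reading['occupied']} -> setpoint {setpoint} C")
        time.sleep(poll_interval_s)

# control_loop()  # run against the platform's real endpoints
```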
Procedia PDF Downloads 111
78 Dietetics Practice in the Scope of Disease Prevention in Community Settings: A School-Based Obesity Prevention Program
Authors: Elham Abbas Aljaaly, Nahlaa Abdulwahab Khalifa
Abstract:
The active method of disease prevention is seen as the most affordable and sustainable way to deal with the risks of non-communicable diseases such as obesity. This eight-week project aimed to pilot the feasibility and acceptability of a school-based programme proposed to prevent and modify overweight status and possible related risk factors among intermediate-level schoolgirls in Jeddah city. The programme was conducted through a comprehensive approach targeting the physical environment and school policies (a nutritional/exercise/behavioural approach). It was designed to cultivate personal and environmental awareness in girls' schools by promoting healthy eating and physical activity through policies, physical education, healthier options for school canteens, and the creation of school health teams. The prevention programme was applied to 68 students (who agreed to participate) from grades 7, 8 and 9. A pre- and post-assessment questionnaire was administered to 66 students. The questionnaires were designed to obtain information on students' knowledge about health, nutrition and physical activity. Survey questions covered nutrients, food consumption patterns, food intake and lifestyle. Physical education included training sessions offering new opportunities for physical activities to be performed during or after school hours. A running competition, intended to enhance students' performance in physical activities, was also conducted during the school visit. A visit to the school canteen was conducted to check, observe, record and assess all available food/beverage items and meals; each food/beverage was classified subjectively as either high in saturated fat, salt and sugar (HFSS) or non-HFSS. The school canteen administrators were encouraged to provide healthy food/beverage items, and a sample healthy canteen was provided for implementation. Two healthy options were introduced to the school canteen, and students' preferences for the introduced options and their purchasing were followed up. Thirty-eight percent of the girls (n=26) did not participate in any form of physical activity inside or outside school. Skipping breakfast was reported by 42% (n=28) of students, and 19% (n=13) reported no daily consumption of fruit/vegetables. Significant changes were observed in the students' (n=66) overall responses between the pre and post questionnaires (P value = .001). All students participated in the running competition sessions and reported satisfaction and enjoyment. The research team recorded no absences from the physical education and activity sessions throughout the delivered programme. Purchasing of the introduced healthy options (salad and oatmeal) increased to 18% within the 8 weeks at the school canteen and slightly affected the purchasing of other, less healthy options. The piloted programme promoted better health and nutrition knowledge and healthier eating and lifestyle attitudes, which could help young girls achieve sustainable changes.
It is expected that the outcomes of the programme will be a cornerstone for a future national study that will assist policy makers and participants in building a knowledgeable health-promotion scenario and in making sure that school students have access to healthy foods, physical exercise and a healthy lifestyle.
Keywords: adolescents, diet, exercise, behaviours, overweight/obesity, prevention-intervention programme, Saudi Arabia, schoolgirls
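As an illustration of the pre/post comparison reported above (a significant change in overall responses, P value = .001), the sketch below runs a paired, non-parametric test on simulated questionnaire scores. The scoring scheme and the choice of the Wilcoxon signed-rank test are assumptions; the abstract does not name the test actually used.

```python
# Hedged sketch of a paired pre/post comparison on simulated scores for
# n = 66 students. The data and the test choice are illustrative only.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
n = 66
pre = rng.normal(loc=12.0, scale=3.0, size=n)          # simulated pre scores
post = pre + rng.normal(loc=1.5, scale=2.0, size=n)    # simulated improvement

stat, p_value = wilcoxon(post, pre)                    # paired, non-parametric
print(f"Wilcoxon W = {stat:.1f}, p = {p_value:.4f}")
```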
Procedia PDF Downloads 130
77 Flipping the Script: Opportunities, Challenges, and Threats of a Digital Revolution in Higher Education
Authors: James P. Takona
Abstract:
In a world experiencing sharp digital transformations guided by digital technologies, the potential of technology to drive transformation and evolution in higher education is apparent. Higher education is facing a paradigm shift that exposes susceptibilities and threats to fully online programs in the face of post-Covid-19 trends of commodification. This historical moment is likely to be remembered as a critical turning point from analog to digital degree-focused learning modalities, where the default delivery mode became the pivot point of competition between higher education institutions. Fall 2020 marks a significant inflection point in higher education, as students, educators, and government leaders scrutinize higher education's price and value propositions through the new lens of traditional lecture halls versus multiple digitized delivery modes. Online education has since paved the way for a pedagogical shift in how teachers teach and students learn. The incremental growth of online education in the West can now be attributed to increasing patronage among students, faculty, and institution administrators. More often than not, college instructors assume facilitator roles in this learning mode, while students become active collaborators rather than passive learners. This paper offers valuable insights into the threats, challenges, and opportunities of a massive digital revolution in servicing degree programs. Digital instruction and learning demand instructional practices that revolve around collaborative work, engaging students in learning activities, and active efforts to build strong connections between course activities and the expected learning pace for all students. Appropriate use of digital technologies requires solid prior skills from both instructors and students. Where digital technology is needed to support instruction and learning, intelligent tutoring offers great promise, yet failures in implementing digital learning may not improve outcomes for specific student populations. Digital learning benefits students differently depending on their circumstances and background and on those of the institution and/or program. Students gain alternative options, access to the convenience of learning anytime and anywhere, and the possibility of acquiring and developing new skills leading to lifelong learning.
Keywords: digitized learning, digital education, collaborative work, higher education, online education, digitized delivery
Procedia PDF Downloads 93
76 Measurements for Risk Analysis and Detecting Hazards by Active Wearables
Authors: Werner Grommes
Abstract:
Intelligent wearables (illuminated vests, hand and foot bands, smart watches with a laser diode, Bluetooth smart glasses) are flooding the market today. They integrate complex electronics and are worn very close to the body, so optical measurements and limits on the maximum luminance are needed. Smart watches are equipped with laser diodes or measure various body currents. Special glasses generate readable text information received via radio transmission. Small high-performance batteries (lithium-ion/polymer) supply the electronics. All these products have been tested and evaluated for risk. They must, for example, meet the requirements for electromagnetic compatibility as well as the requirements for electromagnetic fields affecting humans or implant wearers. Extensive analyses and measurements were carried out for this purpose. Many users are not aware of these risks. The results of this study should serve as suggestions for doing better in the future, or simply point out these risks. Commercial LED warning vests, LED hand and foot bands, illuminated surfaces with high-voltage inverters, flashlights, smart watches, and Bluetooth smart glasses were checked for risks. The luminance, the electromagnetic emissions in the low-frequency and high-frequency ranges, audible noises, and flicker frequencies that can irritate the nervous system were measured and analyzed. Rechargeable lithium-ion or lithium-polymer batteries can burn or explode under special conditions such as overheating, overcharging, deep discharge, or use outside the temperature specification, so a risk analysis becomes necessary. The overall conclusion of this study is that many smart wearables are worn very close to the body and therefore require an extensive risk analysis. Wearers of active implants such as pacemakers or implantable cardiac defibrillators must be considered: if the wearable electronics include switching regulators or inverter circuits, active medical implants in the near field can be disturbed.
Keywords: safety and hazards, electrical safety, EMC, EMF, active medical implants, optical radiation, illuminated warning vest, electric luminescent, hand and head lamps, LED, e-light, safety batteries, light density, optical glare effects
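One of the measurements listed above, checking for flicker frequencies, can be sketched as follows: estimate the dominant flashing frequency of an LED wearable from a sampled light-sensor signal and flag it if it falls in a frequency band often treated as critical for photosensitive users. The waveform, sample rate, and the 3-60 Hz band are illustrative assumptions, not values from the study.

```python
# Hedged sketch: estimate the dominant flash frequency of a sampled
# light-sensor signal via an FFT and flag it against an assumed band.
import numpy as np

FS = 2000.0                      # light-sensor sample rate, Hz (assumed)
t = np.arange(0, 2.0, 1.0 / FS)
signal = (np.sin(2 * np.pi * 8.0 * t) > 0).astype(float)  # simulated 8 Hz on/off flash

spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
freqs = np.fft.rfftfreq(signal.size, d=1.0 / FS)
dominant = freqs[np.argmax(spectrum)]

if 3.0 <= dominant <= 60.0:       # assumed critical band for photosensitivity
    print(f"dominant flash frequency {dominant:.1f} Hz lies in a critical range")
else:
    print(f"dominant flash frequency {dominant:.1f} Hz")
```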
Procedia PDF Downloads 110
75 The Use of Artificial Intelligence in Digital Forensics and Incident Response in a Constrained Environment
Authors: Dipo Dunsin, Mohamed C. Ghanem, Karim Ouazzane
Abstract:
Digital investigators often have a hard time spotting evidence in digital information. It has become hard to determine which source of proof relates to a specific investigation. A growing concern is that the various processes, technologies, and specific procedures used in digital investigations are not keeping up with criminal developments, and criminals are taking advantage of these weaknesses to commit further crimes. In digital forensics investigations, artificial intelligence (AI) is invaluable in identifying crime: algorithms based on AI have proved highly effective in detecting risks, preventing criminal activity, and forecasting illegal activity. The goal of digital forensics and digital investigation is to provide objective data and conduct an assessment that helps develop a plausible theory which can be presented as evidence in court; researchers and other authorities have used such data as evidence to convict a person. This research paper aims to develop a multi-agent framework for digital investigations using specific intelligent software agents (ISAs). The agents communicate to address particular tasks jointly and keep the same objectives in mind during each task. The rules and knowledge contained within each agent depend on the investigation type. A criminal investigation is classified quickly and efficiently using the case-based reasoning (CBR) technique. The MADIK framework is implemented with the Java Agent Development Framework, using Eclipse, a Postgres repository, and a rule engine for agent reasoning. The proposed framework was tested using the Lone Wolf image files and datasets, and experiments were conducted using various sets of ISAs and VMs. There was a significant reduction in the time taken for the Hash Set Agent to execute. Loading the agents cost about 5 percent of the time, as the File Path Agent prescribed deleting 1,510 files while the Timeline Agent found multiple executable files. In comparison, the integrity check carried out on the Lone Wolf image file using a digital forensic toolkit took approximately 48 minutes (2,880 seconds), whereas the MADIK framework accomplished this in 16 minutes (960 seconds). The framework is integrated with Python, allowing further integration of other digital forensic tools such as AccessData Forensic Toolkit (FTK), Wireshark, Volatility, and Scapy.
Keywords: artificial intelligence, computer science, criminal investigation, digital forensics
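A drastically simplified sketch of the multi-agent idea described above is shown below: independent agents inspect the same evidence set and report findings to a coordinator. MADIK itself is built on the Java Agent Development Framework with a rule engine; the Python classes, the placeholder known-bad hash set, and the evidence directory below are illustrative only.

```python
# Hedged, simplified sketch of cooperating forensic agents. Not MADIK's
# actual implementation; the hash set and directory layout are placeholders.
import hashlib
from pathlib import Path

KNOWN_BAD_SHA256 = {
    "0" * 64,   # placeholder digest; a real deployment would load a curated hash set
}

class HashSetAgent:
    """Flags files whose SHA-256 digest appears in a known-bad hash set."""
    def run(self, files):
        findings = []
        for path in files:
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            if digest in KNOWN_BAD_SHA256:
                findings.append(("hash-match", str(path)))
        return findings

class TimelineAgent:
    """Orders files by modification time to sketch an event timeline."""
    def run(self, files):
        ordered = sorted(files, key=lambda p: p.stat().st_mtime)
        return [("timeline", str(p)) for p in ordered]

def coordinator(evidence_dir: str):
    """Runs every agent over the evidence set and merges their findings."""
    files = [p for p in Path(evidence_dir).rglob("*") if p.is_file()]
    report = []
    for agent in (HashSetAgent(), TimelineAgent()):
        report.extend(agent.run(files))
    return report
```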
Procedia PDF Downloads 213
74 Human Capital Divergence and Team Performance: A Study of Major League Baseball Teams
Authors: Yu-Chen Wei
Abstract:
The relationship between organizational human capital and organizational effectiveness has been a common topic of interest among organization researchers, and much of this research has concluded that higher human capital predicts greater organizational outcomes. Whereas human capital research has traditionally focused on organizations, the current study turns to team-level human capital. In addition, there are no known empirical studies assessing the effect of human capital divergence on team performance. Team human capital refers to the sum of knowledge, ability, and experience embedded in team members, and team human capital divergence is defined as the variation of human capital within a team. This study is among the first to assess the role of human capital divergence as a moderator of the effect of team human capital on team performance. From the traditional perspective, team human capital represents the collective ability of all team members to solve problems and reduce operational risk; hence, the higher the team human capital, the higher the team performance. This study further employs social learning theory to explain the relationship between team human capital and team performance. According to this theory, individuals look for progress by learning from the teammates in their teams: they expect to build higher human capital and, in turn, achieve high productivity, obtain greater rewards, and eventually attain career success. Therefore, an individual has more chances to improve his or her capability by learning from peers if the team members have higher average human capital. As a consequence, all team members can develop a quick and effective learning path in their work environment, which enhances their knowledge, skill, and experience and leads to higher team performance. This is the first argument of this study. Furthermore, the current study argues that human capital divergence is detrimental to team development. Individuals with lower human capital constantly feel pressure from their outstanding colleagues; under that pressure, they cannot perform their jobs to the full and progressively lose confidence. Highly capable members, for their part, may be reluctant to work alongside teammates who are less capable than they are, and they may have lower motivation to move forward because they already stand out compared with their teammates. Therefore, human capital divergence will moderate the relationship between team human capital and team performance. These two arguments were tested on 510 team-seasons drawn from Major League Baseball (1998–2014). Results demonstrate a positive relationship between team human capital and team performance, which is consistent with previous research. In addition, the variation of human capital within a team weakens this relationship. That is to say, individuals working with teammates comparable to themselves can produce better performance than those working with people whose human capital is far above or far below their own.
Keywords: human capital divergence, team human capital, team performance, team level research
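The moderation argument above translates into a regression with an interaction term between team human capital and its within-team divergence. The sketch below runs such a model on simulated data; the study's actual measures, controls, and estimation details are not described in the abstract.

```python
# Hedged sketch of a moderation test: performance regressed on team human
# capital, its within-team divergence, and their interaction. Data simulated.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 510                                    # team-seasons, as in the study
hc_mean = rng.normal(0, 1, n)              # team human capital (standardized)
hc_sd = np.abs(rng.normal(1, 0.3, n))      # within-team divergence
perf = 0.5 * hc_mean - 0.3 * hc_mean * hc_sd + rng.normal(0, 1, n)

df = pd.DataFrame({"perf": perf, "hc_mean": hc_mean, "hc_sd": hc_sd})
model = smf.ols("perf ~ hc_mean * hc_sd", data=df).fit()
print(model.params)      # look for a negative hc_mean:hc_sd interaction
print(model.pvalues)
```

A negative and significant coefficient on the interaction term would correspond to the reported weakening of the human capital-performance relationship.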
Procedia PDF Downloads 241
73 Selection of Qualitative Research Strategy for Bullying and Harassment in Sport
Authors: J. Vveinhardt, V. B. Fominiene, L. Jeseviciute-Ufartiene
Abstract:
Relevance of Research: Qualitative research is still regarded as highly subjective and not sufficiently scientific for achieving objective research results. However, it is agreed that a qualitative study allows revealing the hidden motives of research participants, creating new theories, and highlighting the problem field. Enough research has been done to reveal these aspects of qualitative inquiry. However, each research area has its own specificity, and sport is unique due to the image of its participants, who are seen as strong and invincible. Therefore, a sport participant might find it difficult to recognize himself or herself as a victim in the context of bullying and harassment, and the researcher accordingly faces a dilemma in getting a victim in sport to speak at all. Thus, the ethical aspects of qualitative research become relevant. The many different fields of sport also make it difficult to determine the research sample size. The corresponding problem of this research is therefore which qualitative research strategies are most suitable for revealing the phenomenon of bullying and harassment in sport, and why. Object of research: qualitative research strategy for bullying and harassment in sport. Purpose of the research: to analyze strategies of qualitative research and select a suitable one for bullying and harassment in sport. Methods of research: analysis of the scientific literature on the application of qualitative research to bullying and harassment. Research Results: Four main strategies are applied in qualitative research: inductive, deductive, retroductive, and abductive. Inductive and deductive strategies are commonly used when researching bullying and harassment in sport. The inductive strategy is applied as quantitative research in order to reveal and describe the prevalence of bullying and harassment in sport. The deductive strategy is used through qualitative methods in order to explain the causes of bullying and harassment and to predict the actions of the participants of bullying and harassment in sport and the possible consequences of these actions. The most commonly used qualitative method for researching bullying and harassment in sport is the semi-structured interview, conducted orally or in writing. However, these methods may restrict participants' openness when the interview is recorded on a dictaphone, or yield incomplete answers when participants respond in writing, because the answers cannot be refined. Qualitative research is increasingly conducted on technology-mediated research data. For example, focus group research in a closed forum allows participants to interact freely with each other because the confidentiality of the selected participants is preserved, and the moderator can purposefully formulate and submit problem-solving questions to them. Hence, the application of intelligent technology in in-depth qualitative research can help discover new and specific information on bullying and harassment in sport. Acknowledgement: This research is funded by the European Social Fund according to the activity 'Improvement of researchers' qualification by implementing world-class R&D projects' of Measure No. 09.3.3-LMT-K-712.
Keywords: bullying, focus group, harassment, narrative, sport, qualitative research
Procedia PDF Downloads 182
72 Synthesis of Temperature Sensitive Nano/Microgels by Soap-Free Emulsion Polymerization and Their Application in Hydrate Sediments Drilling Operations
Authors: Xuan Li, Weian Huang, Jinsheng Sun, Fuhao Zhao, Zhiyuan Wang, Jintang Wang
Abstract:
Natural gas hydrates (NGHs), as promising alternative energy sources, have gained increasing attention. Hydrate-bearing formations in marine areas are highly unconsolidated and fragile, being composed of weakly cemented sand-clay and silty sediments. During the drilling process, the invasion of drilling fluid can easily lead to excessive water content in the formation. This changes the soil liquid-plastic limit index, which significantly affects the formation quality and leads to wellbore instability due to the metastable character of hydrate-bearing sediments. Therefore, controlling filtrate loss into the formation during drilling must be given high priority to protect wellbore stability. In this study, a temperature-sensitive nanogel of P(NIPAM-co-AMPS-co-tBA) was prepared by soap-free emulsion polymerization, and its temperature-sensitive behavior was employed to achieve self-adaptive plugging in hydrate sediments. First, the effects of the added amounts of AMPS, tBA, and the cross-linker MBA on the microgel synthesis process and temperature-sensitive behavior were investigated. Results showed that, as a reactive emulsifier, AMPS can not only participate in the polymerization reaction but also act as an emulsifier to stabilize micelles and enhance the stability of the nanoparticles. The volume phase transition temperature (VPTT) of the nanogels gradually decreased as the content of the hydrophobic monomer tBA increased, while an increase in the content of the cross-linking agent MBA led to a rise in coagulum content and instability of the emulsion. The plugging performance of the nanogel was evaluated in a core sample with a pore size distribution of 100-1000 nm. The temperature-sensitive nanogel can effectively improve the microfiltration performance of the drilling fluid. Since a combination of a series of nanogels can cover a wide particle size distribution, around 200 nm to 800 nm, at any temperature, the self-adaptive plugging capacity of the nanogels for hydrate sediments was demonstrated. The thermosensitive nanogel is thus a potential intelligent plugging material for drilling operations in natural gas hydrate-bearing sediments.
Keywords: temperature-sensitive nanogel, NIPAM, self-adaptive plugging performance, drilling operations, hydrate-bearing sediments
Procedia PDF Downloads 175
71 The Effects of Culture and Language on Social Impression Formation from Voice Pleasantness: A Study with French and Iranian People
Authors: L. Bruckert, A. Mansourzadeh
Abstract:
The voice has a major influence on interpersonal communication in everyday life via the perception of pleasantness. The evolutionary perspective postulates that the mechanisms underlying pleasantness judgments are universal adaptations that have evolved in the service of choosing a mate (through the process of sexual selection). From this point of view, the favored voices would be those with more marked sexually dimorphic characteristics, for example lower-pitched voices in men, with pitch as the main criterion. On the other hand, one can postulate that the mechanisms involved are gradually established from childhood onward through exposure to the environment, and thus that prosodic elements could take precedence in everyday communication, as prosody conveys information about the speaker's attitude (willingness to communicate, interest in the interlocutors). Our study focuses on voice pleasantness and its relationship with social impression formation, exploring both spectral aspects (pitch, timbre) and prosodic ones. We recorded the voices of 25 French males speaking French and 25 Iranian males speaking Farsi through two vocal corpora (five vowels and a reading text). French listeners (40 male/40 female) listened to the French voices and made a judgment either on the voice's pleasantness or on the speaker (judgments about his intelligence, honesty, and sociability). Regression analyses on our acoustic measures showed that the prosodic elements (for example, intonation and speech rate) are the most important criteria for pleasantness, whatever the corpus or the listener's gender. Moreover, correlation analyses showed that the speakers whose voices were judged most pleasant are also considered the most intelligent, sociable, and honest. The voices in Farsi were judged by 80 other French listeners (40 male/40 female), and we found the same effect of intonation on pleasantness judgments with the «vowel» corpus, whereas with the «text» corpus pitch was more important than prosody. This may suggest that voice perception contains some elements that are invariant across culture/language, whereas others are influenced by the cultural/linguistic background of the listener. In the near future, Iranian listeners will be asked to listen either to the French voices (half of them) or to the Farsi voices (the other half) and to produce the same judgments as the French listeners. This experimental design could make it possible to distinguish what is linked to culture and what is linked to language in the case of differences in voice perception.
Keywords: cross-cultural psychology, impression formation, pleasantness, voice perception
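The two analyses mentioned above, regression of pleasantness on acoustic measures and correlation of pleasantness with trait judgments, can be sketched as follows on simulated data. The column names (mean_f0, speech_rate, intonation_range, intelligence) are hypothetical stand-ins for the study's richer acoustic and rating measures.

```python
# Hedged sketch of the regression and correlation analyses on simulated data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from scipy.stats import pearsonr

rng = np.random.default_rng(42)
n = 25                                            # one row per recorded speaker
df = pd.DataFrame({
    "mean_f0": rng.normal(120, 20, n),            # Hz
    "speech_rate": rng.normal(5.0, 0.8, n),       # syllables per second
    "intonation_range": rng.normal(4.0, 1.5, n),  # semitones
})
df["pleasantness"] = 0.6 * df["intonation_range"] + rng.normal(0, 1, n)
df["intelligence"] = 0.5 * df["pleasantness"] + rng.normal(0, 1, n)

model = smf.ols("pleasantness ~ mean_f0 + speech_rate + intonation_range",
                data=df).fit()
print(model.params)                               # which acoustic cues dominate
r, p = pearsonr(df["pleasantness"], df["intelligence"])
print(f"pleasantness vs. intelligence: r = {r:.2f}, p = {p:.3f}")
```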
Procedia PDF Downloads 70
70 Profiling Risky Code Using Machine Learning
Authors: Zunaira Zaman, David Bohannon
Abstract:
This study explores the application of machine learning (ML) for detecting security vulnerabilities in source code. The research aims to assist organizations with large application portfolios and limited security testing capabilities in prioritizing security activities. ML-based approaches offer benefits such as increased confidence scores, tuning of false positives and negatives, and automated feedback. The initial approach, which used natural language processing techniques to extract features, achieved 86% accuracy during the training phase but suffered from overfitting and performed poorly on unseen datasets during testing. To address these issues, the study proposes using the abstract syntax tree (AST) for Java and C++ codebases to capture code semantics and structure and to generate path-context representations for each function. The Code2Vec model architecture is used to learn distributed representations of source code snippets for training a machine-learning classifier for vulnerability prediction. The study evaluates the performance of the proposed methodology using two datasets and compares the results with existing approaches. The Devign dataset yielded 60% accuracy in predicting vulnerable code snippets and helped resist overfitting, while the Juliet Test Suite was used to predict specific vulnerabilities such as OS command injection, cryptographic, and cross-site scripting vulnerabilities. The Code2Vec model achieved 75% accuracy and a 98% recall rate in predicting OS command injection vulnerabilities. The study concludes that even partial AST representations of source code can be useful for vulnerability prediction. The approach has the potential for automated intelligent analysis of source code, including vulnerability prediction on unseen source code. State-of-the-art models using natural language processing techniques and CNN models with ensemble modelling techniques did not generalize well on unseen data and faced overfitting issues. However, predicting vulnerabilities in source code using machine learning poses challenges, such as the high dimensionality and complexity of source code, imbalanced datasets, and the identification of specific types of vulnerabilities. Future work will address these challenges and expand the scope of the research.
Keywords: code embeddings, neural networks, natural language processing, OS command injection, software security, code properties
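A much simplified stand-in for the pipeline described above is sketched below: parse functions into ASTs, turn them into crude structural features, and train a classifier to flag risky snippets. Real Code2Vec learns embeddings over AST path contexts; the bag of parent-child node-type pairs, the toy snippets, and the labels used here are invented purely for illustration.

```python
# Hedged sketch: crude AST-based features plus a linear classifier, as a
# stand-in for path-context embeddings. Toy snippets and labels are invented.
import ast
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

def ast_pairs(source: str) -> str:
    """Flatten a snippet into space-separated parent>child node-type pairs."""
    tree = ast.parse(source)
    pairs = []
    for parent in ast.walk(tree):
        for child in ast.iter_child_nodes(parent):
            pairs.append(f"{type(parent).__name__}>{type(child).__name__}")
    return " ".join(pairs)

snippets = [
    "import os\ndef run(cmd):\n    os.system('ping ' + cmd)\n",                   # risky pattern
    "def run(cmd):\n    import subprocess\n    subprocess.run(['ping', cmd])\n",  # safer pattern
    "def add(a, b):\n    return a + b\n",
    "import os\ndef wipe(path):\n    os.system('rm -rf ' + path)\n",               # risky pattern
]
labels = [1, 0, 0, 1]   # 1 = vulnerable-looking, 0 = benign (toy labels)

model = make_pipeline(CountVectorizer(token_pattern=r"\S+"), LogisticRegression())
model.fit([ast_pairs(s) for s in snippets], labels)
print(model.predict([ast_pairs("import os\ndef f(x):\n    os.system(x)\n")]))
```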
Procedia PDF Downloads 109
69 The Exploitation of the MOSES Project Outcomes on Supply Chain Optimisation
Authors: Reza Karimpour
Abstract:
Ports play a decisive role in the EU's external and internal trade, as about 74% of imports and exports and 37% of intra-EU exchanges go through ports. Although ports, especially Deep Sea Shipping (DSS) ports, are integral nodes within multimodal logistics flows, Short Sea Shipping (SSS) and inland waterways are not so well integrated. The Automated Vessels and Supply Chain Optimisation for Sustainable Short Sea Shipping (MOSES) project aims to enhance the short sea shipping component of the European supply chain by addressing the vulnerabilities and strains related to the operation of large containerships. The MOSES concept can be described briefly as follows: a large containership (mother vessel) approaches a DSS port (or a large container terminal). Upon her arrival, a combined intelligent mega-system, consisting of the MOSES autonomous tugboat swarm for manoeuvring and the MOSES adapted AutoMoor system, takes over the berthing operation. Container handling processes are then ready to start moving containers to their destination via hinterland connections (trucks and/or rail) or to ship them to destinations near small ports (on the mainland or on islands). In the first case, containers are stored in a dedicated port area (storage area), waiting to be moved by truck and/or rail. In the second case, containers are stacked by existing port equipment near dedicated berths of the DSS port and then loaded onto the MOSES Innovative Feeder Vessel, which is equipped with the MOSES Robotic Container-Handling System providing (semi-)autonomous loading and unloading of the feeder. The Robotic Container-Handling System is remotely monitored through a Shore Control Centre. When the MOSES Innovative Feeder Vessel approaches the small port, where her docking is achieved without tugboats, she automatically unloads the containers using the Robotic Container-Handling System onto the quay or directly onto trucks. As a result, ports with minimal or no available infrastructure may be effectively integrated into the container supply chain. The MOSES Innovative Feeder Vessel then continues her voyage to the next small port or returns to the DSS port. The MOSES exploitation activity mainly aims to exploit research outcomes beyond the project, facilitate utilisation of the pilot results by others, and continue the pilot service after the project ends. At the mid-lifetime of the project, the exploitation plan introduces the reader to the MOSES project and its key exploitable results and provides a plan for delivering the MOSES innovations to the market as part of the overall exploitation strategy.
Keywords: automated vessels, exploitation, shortsea shipping, supply chain
Procedia PDF Downloads 112