Search results for: intelligent tutoring
96 Affective Robots: Evaluation of Automatic Emotion Recognition Approaches on a Humanoid Robot towards Emotionally Intelligent Machines
Authors: Silvia Santano Guillén, Luigi Lo Iacono, Christian Meder
Abstract:
One of the main aims of current social robotic research is to improve robots' abilities to interact with humans. In order to achieve an interaction similar to that among humans, robots should be able to communicate in an intuitive and natural way and appropriately interpret human affects during social interactions. Just as humans are able to recognize emotions in other humans, machines are capable of extracting information from the various ways humans convey emotions—including facial expression, speech, gesture or text—and using this information for improved human-computer interaction. This is the domain of Affective Computing, an interdisciplinary field spanning computer science, psychology and cognitive science that involves the research and development of systems able to recognize and interpret human affects. Embedding these emotional capabilities in humanoid robots is the foundation of the concept of Affective Robots, whose objective is to make robots capable of sensing the user's current mood and personality traits and of adapting their behavior in the most appropriate manner accordingly. In this paper, the emotion recognition capabilities of the humanoid robot Pepper are experimentally explored, based on facial expressions of the so-called basic emotions, and compared with other state-of-the-art approaches, using both expression databases compiled in academic environments and real subjects showing posed expressions as well as spontaneous emotional reactions. The experiments' results show that detection accuracy differs substantially amongst the evaluated approaches. The introduced experiments offer a general structure and approach for conducting such experimental evaluations.
The paper further suggests that the most meaningful results are obtained by conducting experiments with real subjects expressing the emotions as spontaneous reactions.
Keywords: affective computing, emotion recognition, humanoid robot, human-robot interaction (HRI), social robots
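The comparative evaluation the abstract describes (several recognition approaches scored against the same labeled expression set) can be sketched as below; the approach names, predictions, and labels are hypothetical stand-ins, not the paper's data.

```python
import numpy as np

# Ground-truth basic-emotion labels for a small test set (hypothetical data).
EMOTIONS = ["anger", "disgust", "fear", "happiness", "sadness", "surprise"]
y_true = np.array([3, 3, 0, 4, 5, 1, 2, 4, 0, 5])

# Predictions from two hypothetical recognition approaches on the same set.
predictions = {
    "approach_A": np.array([3, 3, 0, 4, 5, 1, 2, 4, 2, 5]),
    "approach_B": np.array([3, 0, 0, 4, 4, 1, 2, 3, 0, 5]),
}

def accuracy(y_true, y_pred):
    """Fraction of test samples whose predicted emotion matches the label."""
    return float(np.mean(y_true == y_pred))

def confusion_matrix(y_true, y_pred, n_classes):
    """Rows = true emotion, columns = predicted emotion."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

for name, y_pred in predictions.items():
    print(name, accuracy(y_true, y_pred))
```

Running the same harness over a database of posed expressions and over spontaneous reactions makes the accuracy gap between the two conditions directly comparable.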
Procedia PDF Downloads 235
95 Systems Intelligence in Management (High Performing Organizations and People Score High in Systems Intelligence)
Authors: Raimo P. Hämäläinen, Juha Törmänen, Esa Saarinen
Abstract:
Systems thinking has been acknowledged as an important approach in the strategy and management literature ever since the seminal works of Ackoff in the 1970s and Senge in the 1990s. The early literature was very much focused on structures and organizational dynamics. Understanding systems is important, but making improvements also needs ways to understand human behavior in systems. Peter Senge's book The Fifth Discipline inspired the development of the concept of Systems Intelligence (SI), which integrates the concepts of personal mastery and systems thinking. SI refers to intelligent behavior in the context of complex systems involving interaction and feedback. It is a competence related to the skills needed in strategy and in the environment of modern industrial engineering and management, where people skills and systems play an increasingly important role. The eight factors of Systems Intelligence have been identified from extensive surveys, and they relate to perceiving, attitude, thinking and acting. The personal self-evaluation test developed consists of 32 items and can also be applied in a peer-evaluation mode. The concept and test extend to organizations too: one can talk about organizational systems intelligence. This paper reports the results of an extensive survey based on peer evaluation. The results show that systems intelligence correlates positively with professional performance. People in a managerial role score higher in SI than others. Age improves the SI score, but there is no gender difference. Top organizations score higher in all SI factors than lower-ranked ones. The SI tests can also be used as leadership and management development tools that help self-reflection and learning. Finding ways of enhancing learning and organizational development is important, and today gamification is a promising new approach. The items in the SI test have been used to develop an interactive card game following the Topaasia game approach.
It is an easy way of engaging people in a process that helps participants see and approach problems in their organization. It also helps individuals identify challenges in their own behavior and improve their SI.
Keywords: gamification, management competence, organizational learning, systems thinking
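The 32-item test with eight factors can be illustrated with a minimal scoring sketch; the factor names follow the published SI literature, but the item-to-factor mapping (four consecutive items per factor) and the responses here are assumed for illustration only.

```python
import statistics

# The eight SI factors; the item grouping below is illustrative, not the
# instrument's actual item-to-factor assignment.
FACTORS = ["systemic perception", "attunement", "positive attitude",
           "spirited discovery", "reflection", "wise action",
           "positive engagement", "effective responsiveness"]

def score_si_test(answers):
    """Score a 32-item SI questionnaire (e.g. a 1-7 Likert scale):
    four consecutive items per factor, averaged into a factor score."""
    if len(answers) != 32:
        raise ValueError("expected 32 item responses")
    return {FACTORS[i]: statistics.mean(answers[4 * i:4 * i + 4])
            for i in range(8)}

answers = [5, 6, 4, 5] * 8          # a flat hypothetical response profile
scores = score_si_test(answers)
print(scores["reflection"])
```

In peer-evaluation mode the same scoring runs over several raters' answer sheets, with factor scores averaged across raters.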
Procedia PDF Downloads 98
94 Rapid and Long-term Alien Language Analysis - Forming Frameworks for the Interpretation of Alien Communication for More Intelligent Life
Authors: Samiksha Raviraja, Junaid Arif
Abstract:
One of the most important abilities in a species is the ability to communicate. This paper proposes steps to take if aliens come into contact with humans, and how humans would communicate with them. Such an event would be a time-sensitive scenario, meaning that communication would be of the utmost importance. First, humans would need to establish mutual peace by conveying that there is no threat to the alien race. Second, the aliens would need to acknowledge this understanding and reciprocate. This would be extremely difficult regardless of their intelligence level unless they are very human-like and communicate in ways similar to ours. The first step towards understanding their mind is to analyze their level of intelligence: Level 1, low intelligence; Level 2, human-like intelligence; or Level 3, advanced or high intelligence. These three levels go hand in hand with the Kardashev scale. Further, the Barrow scale will also be used to categorize alien species in hopes of developing a common universal language to communicate in. This paper delves into how the level of intelligence can be used toward achieving communication with aliens, predicting various possible scenarios and outcomes and proposing an intensive categorization system. This can be achieved by studying their emotional and intelligence quotients (along with their technological and scientific knowledge), as well as the limitations and capabilities of their intelligence. Observing how they respond and react (expressions and senses) to different kinds of scenarios, items and people will help enable good categorisation. It can be hypothesised that the more human-like aliens are, or the more they can relate to humans, the more likely it is that communication is possible.
Depending on the situation, either humans can teach the aliens a human language, humans can learn an alien language, or both races can work together to develop a mutual understanding or mode of communication. There are three possible ways of contact: aliens visit Earth, humans discover aliens during space exploration, or contact happens through technology in the form of signals. A much rarer case would be humans and aliens running into each other during space expeditions of their own. The first two possibilities allow a more in-depth analysis of the alien life, and enhanced results, compared with contact through signals alone. Finding a method of talking with aliens is important not only to protect Earth and humans but also for the advancement of science through the knowledge shared between the two species.
Keywords: intelligence, Kardashev scale, Barrow scale, alien civilizations, emotional and intelligence quotient
Procedia PDF Downloads 73
93 An Intelligent Text Independent Speaker Identification Using VQ-GMM Model Based Multiple Classifier System
Authors: Ben Soltane Cheima, Ittansa Yonas Kelbesa
Abstract:
Speaker Identification (SI) is the task of establishing the identity of an individual based on his/her voice characteristics. The SI task is typically achieved by two-stage signal processing: training and testing. The training process calculates speaker-specific feature parameters from the speech and generates speaker models accordingly. In the testing phase, speech samples from unknown speakers are compared with the models and classified. Even though the performance of speaker identification systems has improved due to recent advances in speech processing techniques, there is still room for improvement. In this paper, a Closed-Set Text-Independent Speaker Identification System (CISI) based on a Multiple Classifier System (MCS) is proposed, using Mel Frequency Cepstrum Coefficients (MFCC) for feature extraction and a suitable combination of Vector Quantization (VQ) and a Gaussian Mixture Model (GMM), together with the Expectation Maximization (EM) algorithm, for speaker modeling. The use of a Voice Activity Detector (VAD) with a hybrid approach based on Short Time Energy (STE) and statistical modeling of background noise in the pre-processing step of feature extraction yields a better and more robust automatic speaker identification system. Initializing the GMM with the Linde-Buzo-Gray (LBG) clustering algorithm before the EM parameter estimation step also improved the convergence rate and system performance. The system further uses a relative index as a confidence measure in case of contradiction between the GMM and VQ identification results.
Simulation results carried out on the voxforge.org speech database using MATLAB highlight the efficacy of the proposed method compared to earlier work.
Keywords: feature extraction, speaker modeling, feature matching, Mel frequency cepstrum coefficient (MFCC), Gaussian mixture model (GMM), vector quantization (VQ), Linde-Buzo-Gray (LBG), expectation maximization (EM), pre-processing, voice activity detection (VAD), short time energy (STE), background noise statistical modeling, closed-set text-independent speaker identification system (CISI)
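The VQ branch of the proposed multiple classifier system can be sketched in a few lines of numpy: a small LBG/k-means codebook is trained per speaker, and a closed-set decision picks the speaker whose codebook quantizes the test features with minimum average distortion. Synthetic 2-D clouds stand in for real MFCC vectors here, and the GMM, VAD and confidence-measure stages are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_codebook(features, k=4, iters=20):
    """Simplified LBG/k-means: learn a k-entry VQ codebook for one speaker."""
    codebook = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(features[:, None] - codebook[None], axis=2)
        nearest = d.argmin(axis=1)
        for j in range(k):
            if np.any(nearest == j):
                codebook[j] = features[nearest == j].mean(axis=0)
    return codebook

def distortion(features, codebook):
    """Average distance of each feature vector to its nearest codeword."""
    d = np.linalg.norm(features[:, None] - codebook[None], axis=2)
    return d.min(axis=1).mean()

def identify(features, codebooks):
    """Closed-set ID: the speaker whose codebook quantizes the test best."""
    return min(codebooks, key=lambda s: distortion(features, codebooks[s]))

# Synthetic "MFCC" clouds for two speakers, offset in feature space.
train = {"alice": rng.normal(0.0, 1.0, (200, 2)),
         "bob":   rng.normal(4.0, 1.0, (200, 2))}
codebooks = {s: train_codebook(f) for s, f in train.items()}

test_utterance = rng.normal(4.0, 1.0, (50, 2))   # drawn from bob's region
print(identify(test_utterance, codebooks))
```

In the paper's full system the same decision is fused with a GMM log-likelihood score, with a relative index arbitrating when the two disagree.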
Procedia PDF Downloads 309
92 Smart Defect Detection in XLPE Cables Using Convolutional Neural Networks
Authors: Tesfaye Mengistu
Abstract:
Power cables play a crucial role in the transmission and distribution of electrical energy. As the electricity generation, transmission, distribution, and storage systems become smarter, there is a growing emphasis on incorporating intelligent approaches to ensure the reliability of power cables. Various types of electrical cables are employed for transmitting and distributing electrical energy, with cross-linked polyethylene (XLPE) cables being widely utilized due to their exceptional electrical and mechanical properties. However, insulation defects can occur in XLPE cables due to subpar manufacturing techniques during production and cable joint installation. To address this issue, experts have proposed different methods for monitoring XLPE cables. Some suggest the use of interdigital capacitive (IDC) technology for online monitoring, while others propose employing continuous wave (CW) terahertz (THz) imaging systems to detect internal defects in XLPE plates used for power cable insulation. In this study, we have developed models that employ a custom dataset collected locally to classify the physical safety status of individual power cables. Our models aim to replace physical inspections with computer vision and image processing techniques to classify defective power cables from non-defective ones. The implementation of our project utilized the Python programming language along with the TensorFlow package and a convolutional neural network (CNN). The CNN-based algorithm was specifically chosen for power cable defect classification. The results of our project demonstrate the effectiveness of CNNs in accurately classifying power cable defects. We recommend the utilization of similar or additional datasets to further enhance and refine our models. Additionally, we believe that our models could be used to develop methodologies for detecting power cable defects from live video feeds. 
We firmly believe that our work makes a significant contribution to the field of power cable inspection and maintenance. Our models offer a more efficient and cost-effective approach to detecting power cable defects, thereby improving the reliability and safety of power grids.
Keywords: artificial intelligence, computer vision, defect detection, convolutional neural network
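The study implemented its classifier in TensorFlow; as a minimal illustration of what a CNN layer computes on a cable image, the sketch below runs one valid 2-D convolution, ReLU, and 2x2 max-pooling in plain numpy on a toy image containing a mock defect streak.

```python
import numpy as np

def conv2d(image, kernel):
    """'Valid' 2-D convolution (no padding, stride 1) of a grayscale image."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Non-overlapping max-pooling (input dims assumed divisible by size)."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

# A toy 6x6 "cable surface" image with a bright vertical streak (mock defect).
image = np.zeros((6, 6))
image[:, 3] = 1.0

# A vertical-edge kernel responds strongly next to the streak.
kernel = np.array([[-1.0, 0.0, 1.0]] * 3)
feature_map = max_pool(relu(conv2d(image, kernel)))
print(feature_map.shape)
```

A real network stacks many such learned filters and ends in a dense softmax over the defective/non-defective classes; frameworks like TensorFlow simply perform these operations at scale with learned kernels.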
Procedia PDF Downloads 114
91 Practice on Design Knowledge Management and Transfer across the Life Cycle of a New-Built Nuclear Power Plant in China
Authors: Danying Gu, Xiaoyan Li, Yuanlei He
Abstract:
As a knowledge-intensive industry, the nuclear industry highly values safety and quality. The life cycle of a NPP (Nuclear Power Plant) can last 100 years, from the initial research and design to decommissioning. How to implement high-quality knowledge management that contributes to a safer, more advanced and more economic NPP is the most important issue and responsibility of knowledge management. As the lead of the nuclear industry, a nuclear research and design institute has the competitive advantages of advanced technology, knowledge and information, so the DKM (Design Knowledge Management) of nuclear research and design institutes is the core of knowledge management in the whole nuclear industry. In this paper, the study and practice of DKM and knowledge transfer across the life cycle of a new-built NPP in China is introduced. For this digital, intelligent NPP, the whole design process is based on a digital design platform which includes an NPP engineering and design dynamic analyzer, a visualization engineering verification platform, a digital operation maintenance support platform and a digital equipment design and manufacture integrated collaborative platform. In order to make all the design data and information transfer across design, construction, commissioning and operation, the overall architecture of the new-built digital NPP should become a modern knowledge management system, so a digital information transfer model across the NPP life cycle is proposed in this paper. The challenges related to design knowledge transfer are also discussed, such as digital information handover, data center and data sorting, and a unified data coding system.
On the other hand, effective delivery of design information during the construction and operation phases will contribute to a comprehensive understanding of design ideas, components and systems for the construction contractor and the operating unit, greatly increasing safety, quality and economic benefits during the life cycle. The operation and maintenance records generated during NPP operation are of great significance for maintaining the operating state of the NPP, especially regarding the comprehensiveness, validity and traceability of the records. The requirements of an online monitoring and smart diagnosis system for the NPP are therefore also proposed, to help utility owners improve safety and efficiency.
Keywords: design knowledge management, digital nuclear power plant, knowledge transfer, life cycle
Procedia PDF Downloads 273
90 Critical Evaluation of the Transformative Potential of Artificial Intelligence in Law: A Focus on the Judicial System
Authors: Abisha Isaac Mohanlal
Abstract:
Amidst all the suspicion and cynicism raised by the legal fraternity, Artificial Intelligence has found its way into the legal system and has revolutionized conventional forms of legal services delivery. Be it legal argumentation and research or the resolution of complex legal disputes, artificial intelligence has crept into all areas of modern-day legal services. Its impact has been largely felt by way of big data, legal expert systems, prediction tools, e-lawyering, automated mediation, etc., and lawyers around the world are forced to upgrade themselves and their firms to stay in line with the growth of technology in law. Researchers predict that the future of legal services will belong to artificial intelligence and that the age of human lawyers will soon pass. But as far as the Judiciary is concerned, even in developed countries, the system has not fully drifted away from the orthodoxy of preferring Natural Intelligence over Artificial Intelligence. Since judicial decision-making involves many unstructured and rather unprecedented situations which have no single correct answer, and looming questions of legal interpretation arise in most cases, discretion and Emotional Intelligence play an unavoidable role. Added to that, there are several ethical, moral and policy issues to be confronted before permitting the intrusion of Artificial Intelligence into the judicial system. As of today, the human judge is the unrivalled master of most judicial systems around the globe. Yet, scientists of Artificial Intelligence claim that robot judges can replace human judges no matter how daunting the complexity of the issues or how sophisticated the required cognitive competence.
They go on to contend that even if the system is too rigid to allow robot judges to substitute for human judges in the near future, Artificial Intelligence may still aid in other judicial tasks such as drafting judicial documents, intelligent document assembly, case retrieval, etc., and also promote overall flexibility, efficiency, and accuracy in the disposal of cases. By deconstructing the major challenges that Artificial Intelligence has to overcome in order to successfully enter the human-dominated judicial sphere, and by critically evaluating the potential differences it would make in the system of justice delivery, the author argues that the penetration of Artificial Intelligence into the Judiciary could surely be enhancive and reparative, if not fully transformative.
Keywords: artificial intelligence, judicial decision making, judicial systems, legal services delivery
Procedia PDF Downloads 224
89 Navigating through Organizational Change: TAM-Based Manual for Digital Skills and Safety Transitions
Authors: Margarida Porfírio Tomás, Paula Pereira, José Palma Oliveira
Abstract:
Robotic grasping is advancing rapidly, but transferring techniques from rigid to deformable objects remains a challenge. Deformable and flexible items, such as food containers, demand nuanced handling due to their changing shapes. Bridging this gap is crucial for applications in food processing, surgical robotics, and household assistance. AGILEHAND, a Horizon project, focuses on developing advanced technologies for sorting, handling, and packaging soft and deformable products autonomously. These technologies serve as strategic tools to enhance flexibility, agility, and reconfigurability within the production and logistics systems of European manufacturing companies. Key components include intelligent detection, self-adaptive handling, efficient sorting, and agile, rapid reconfiguration. The overarching goal is to optimize work environments and equipment, ensuring both efficiency and safety. As new technologies emerge in the food industry, there will be implications for the labour force, for safety, and for acceptance of the new technologies. To address these implications, AGILEHAND emphasizes the integration of social sciences and humanities, for example through the application of the Technology Acceptance Model (TAM). The project aims to create a change management manual that will outline strategies for developing digital skills and managing health and safety transitions. It will also provide best practices and models for organizational change. Additionally, AGILEHAND will design effective training programs to enhance employee skills and knowledge. This information will be obtained through a combination of case studies, structured interviews, questionnaires, and a comprehensive literature review. The project will explore how organizations adapt during periods of change and identify factors influencing employee motivation and job satisfaction.
This project received funding from the European Union's Horizon 2020/Horizon Europe research and innovation programme under grant agreement No. 101092043 (AGILEHAND).
Keywords: change management, technology acceptance model, organizational change, health and safety
Procedia PDF Downloads 46
88 A Multilingual Model in the Multicultural World
Authors: Marina Petrova
Abstract:
Language policy issues related to the preservation and development of the native languages of the Russian peoples and the state languages of the national republics are increasingly becoming the focus of attention of educators and parents, public and national figures. Is it legal to teach the national language or the mother tongue as the state language? Amid this dispute, language-phobic moods easily evolve into xenophobia among the population. However, a civilized, intelligent multicultural personality can only be formed if the country develops bilingualism and multilingualism, and languages as a political tool help to find 'keys' to sufficiently closed national communities, both within a poly-ethnic state and in the internal relations of multilingual countries. The purpose of this study is to design and theoretically substantiate an efficient model of language education in the innovatively developing Republic of Sakha. 800 participants from different educational institutions of Yakutia worked on developing a multilingual model of education. This investigation is of considerable practical importance because researchers could build a methodical system designed to create conditions for the formation of a cultural language personality and the development of the multilingual communicative competence of Yakut youth, necessary for communication in native, Russian and foreign languages. The selected methodology of humane-personal and competence approaches is reliable and valid. Researchers used a variety of sources of information, including access to related scientific fields (philosophy of education, sociology, humane and social pedagogy, psychology, effective psychotherapy, methods of teaching Russian, psycholinguistics, socio-cultural education, ethnoculturology, ethnopsychology).
Of special note is the application of theoretical and empirical research methods, the combination of academic analysis of the problem with practical training, the positive results of the experimental work, the representative series, and the correct processing and statistical reliability of the obtained data. This ensures the validity of the investigation's findings as well as their broad introduction into the practice of life-long language education.
Keywords: intercultural communication, language policy, multilingual and multicultural education, the Sakha Republic of Yakutia
Procedia PDF Downloads 224
87 Web Development in Information Technology with JavaScript, Machine Learning and Artificial Intelligence
Authors: Abdul Basit Kiani, Maryam Kiani
Abstract:
Web developers now have the tools necessary to create online apps that are not only reliable but also highly interactive, thanks to the introduction of JavaScript frameworks and APIs. The objective is to give a broad overview of recent advances in the area. The fusion of machine learning (ML) and artificial intelligence (AI) has expanded the possibilities for web development. In the rapidly evolving landscape of modern websites, it has become increasingly apparent that user engagement and personalization are key factors for success, and to meet these demands websites now incorporate a range of innovative technologies. One such technology is chatbots, which provide users with instant assistance and support, enhancing their overall browsing experience. These intelligent bots are capable of understanding natural language and can answer frequently asked questions, offer product recommendations, and even help with troubleshooting. Moreover, clever recommendation systems have emerged as a powerful tool on modern websites. By analyzing user behavior, preferences, and historical data, these systems can intelligently suggest relevant products, articles, or services tailored to each user's unique interests. This not only saves users valuable time but also increases the chances of conversion and customer satisfaction. Additionally, customization algorithms have revolutionized the way websites interact with users. By leveraging user preferences, browsing history, and demographic information, these algorithms can dynamically adjust the website's layout, content, and functionalities to suit individual user needs. This level of personalization enhances user engagement, boosts conversion rates, and ultimately leads to a more satisfying online experience.
In summary, the integration of chatbots, clever recommendation systems, and customization algorithms into modern websites is transforming the way users interact with online platforms. These advanced technologies not only streamline user experiences but also contribute to increased customer satisfaction, improved conversions, and overall website success.
Keywords: JavaScript, machine learning, artificial intelligence, web development
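A recommendation system of the kind described can be reduced to its simplest core: item-based cosine similarity over a user-item interaction matrix. The matrix below is a hypothetical toy example (production systems run in JavaScript/ML stacks, but the math is the same).

```python
import numpy as np

# Rows = users, columns = items; 1 = the user interacted with the item.
interactions = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
])

def item_similarity(m):
    """Cosine similarity between item columns."""
    norms = np.linalg.norm(m, axis=0)
    norms[norms == 0] = 1.0          # avoid division by zero for unused items
    unit = m / norms
    return unit.T @ unit

def recommend(user_row, sim, top_n=1):
    """Score items by similarity to the user's past items."""
    scores = sim @ user_row
    scores[user_row > 0] = -np.inf   # never re-recommend items already seen
    return np.argsort(scores)[::-1][:top_n]

sim = item_similarity(interactions.astype(float))
print(recommend(interactions[0].astype(float), sim))   # best item for user 0
```

User 0 shares items 0 and 1 with user 1, who also liked item 2, so item 2 surfaces as the top recommendation.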
Procedia PDF Downloads 81
86 Placement of Inflow Control Valve for Horizontal Oil Well
Authors: S. Thanabanjerdsin, F. Srisuriyachai, J. Chewaroungroj
Abstract:
Drilling a horizontal well is one of the most cost-effective methods to exploit a reservoir, as it increases the exposure area between the well and the formation. Together with horizontal well technology, intelligent completion is often co-utilized to increase petroleum production by monitoring and controlling downhole production. Combining both technologies offers an opportunity to mitigate the water cresting phenomenon, a detrimental problem that not only lowers oil recovery but also causes environmental problems due to water disposal. Flow of reservoir fluid results from the difference between reservoir and wellbore pressure. In a horizontal well, reservoir fluid around the heel enters the wellbore at a higher rate than at the toe. As a consequence, the Oil-Water Contact (OWC) at the heel side moves upward relatively faster than at the toe side, causing the well to encounter an early water encroachment problem. Installation of Inflow Control Valves (ICV) in particular sections of a horizontal well involves several parameters, such as the number of ICVs, the water cut constraint of each valve, and the length of each section. This study is mainly focused on optimization of the ICV configuration to minimize water production and, at the same time, enhance oil production. A reservoir model consisting of a high aspect ratio of oil-bearing zone to underlying aquifer is drilled with a horizontal well and completed with variations of ICV segments. Optimization of the horizontal well configuration is first performed by varying the number of ICVs, the segment length, and the individual preset water cut for each segment. Simulation results show that installing ICVs can increase the oil recovery factor by up to 5% of the Original Oil In Place (OOIP) and can reduce produced water, depending on the ICV segment length as well as the ICV parameters. For equally partitioned ICV segments, more segments result in better oil recovery; however, exceeding 10 segments may not give significant additional recovery.
In the first production period, deformation of the OWC strongly depends on the number of segments along the well: a higher number of segments results in smoother deformation of the OWC. After water breakthrough at the heel segment, the second production period begins, in which deformation of the OWC is principally dominated by the ICV parameters. In certain situations where the OWC is unstable, such as a high production rate, high-viscosity fluid above the aquifer, or a strong aquifer, the second production period may give a wide enough window for the ICV parameters to take effect.
Keywords: horizontal well, water cresting, inflow control valve, reservoir simulation
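The per-segment preset water cut constraint described above can be expressed as a simple control rule: a segment's ICV chokes once its measured water cut exceeds its preset limit. The segment names, water cuts, and presets below are purely illustrative, not the study's simulation values.

```python
# Each ICV segment: (name, measured water cut, preset water-cut limit).
# Values are illustrative only.
segments = [
    ("heel",  0.62, 0.50),
    ("mid-1", 0.41, 0.50),
    ("mid-2", 0.28, 0.55),
    ("toe",   0.12, 0.60),
]

def valve_positions(segments):
    """Mark a segment 'choked' when its water cut exceeds its preset,
    'open' otherwise. Heel segments typically choke first, since the
    oil-water contact cones up fastest at the heel."""
    return {name: ("choked" if wc > limit else "open")
            for name, wc, limit in segments}

print(valve_positions(segments))
```

In a full reservoir simulation this rule runs every timestep, redistributing drawdown toward the toe and flattening the OWC deformation along the well.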
Procedia PDF Downloads 420
85 Customer Acquisition through Time-Aware Marketing Campaign Analysis in Banking Industry
Authors: Harneet Walia, Morteza Zihayat
Abstract:
Customer acquisition has become one of the critical issues of any business in the 21st century, and a healthy customer base is the essential asset of the banking business. Term deposits act as a major source of cheap funds for banks to invest and benefit from interest rate arbitrage. To attract customers, the marketing campaigns at most financial institutions consist of multiple outbound telephone calls, with more than one contact per customer, which is a very time-consuming process. Therefore, customized direct marketing has become more critical than ever for attracting new clients. As customer acquisition is becoming more difficult to achieve, an intelligent and refined contact list is necessary to sell a product smartly. The aim of this research is to increase the effectiveness of campaigns by predicting the customers who are most likely to subscribe to a fixed deposit and by suggesting the most suitable month to reach out to them. We design a Time-Aware Upsell Prediction Framework (TAUPF) using two different approaches, with the aim of finding the best approach and technique to build the prediction model. TAUPF is implemented using the Upsell Prediction Approach (UPA) and the Clustered Upsell Prediction Approach (CUPA). We also address the data imbalance problem by examining and comparing different methods of sampling (up-sampling and down-sampling). Our results have shown that building such a model is quite feasible and profitable for financial institutions. The TAUPF can be easily used in any industry, such as telecom, automobile, or tourism, where either CUPA or UPA holds valid; in our case, CUPA proved more reliable. As proven in our research, one of the most important challenges is to define measures that have enough predictive power, as the subscription to a fixed deposit depends on highly ambiguous situations and cannot be easily isolated.
While we have shown the practicality of a time-aware upsell prediction model, where financial institutions can benefit from contacting customers in the specified month, further research needs to be done to understand the specific time of day. In addition, a further empirical/pilot study on real live customers needs to be conducted to prove the effectiveness of the model in the real world.
Keywords: customer acquisition, predictive analysis, targeted marketing, time-aware analysis
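The up-sampling side of the imbalance experiment (rebalancing the rare subscriber class before training) can be sketched as random over-sampling with replacement; the campaign data below is synthetic, not the study's dataset.

```python
import numpy as np

rng = np.random.default_rng(42)

def upsample(X, y, minority_label=1):
    """Random over-sampling: duplicate minority-class rows (with
    replacement) until both classes are the same size."""
    minority = np.flatnonzero(y == minority_label)
    majority = np.flatnonzero(y != minority_label)
    extra = rng.choice(minority, size=len(majority) - len(minority),
                       replace=True)
    keep = np.concatenate([majority, minority, extra])
    return X[keep], y[keep]

# Hypothetical campaign data: 95 non-subscribers, 5 subscribers.
X = rng.normal(size=(100, 3))
y = np.array([0] * 95 + [1] * 5)

X_bal, y_bal = upsample(X, y)
print(int((y_bal == 0).sum()), int((y_bal == 1).sum()))
```

Down-sampling is the mirror image (discard majority rows until the sizes match); the paper compares both before fitting the UPA/CUPA models.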
Procedia PDF Downloads 125
84 Development of an Instrument for Measurement of Thermal Conductivity and Thermal Diffusivity of Tropical Fruit Juice
Authors: T. Ewetumo, K. D. Adedayo, Festus Ben
Abstract:
Knowledge of the thermal properties of foods is of fundamental importance in the food industry for the design of processing equipment. However, for tropical fruit juice there is very little information in the literature, seriously hampering processing procedures. This research work describes the development of an instrument for automated thermal conductivity and thermal diffusivity measurement of tropical fruit juice using a transient thermal probe technique based on the line heat source principle. The system consists of two thermocouple sensors, a constant current source, a heater, a thermocouple amplifier, a microcontroller, a microSD card shield and an intelligent liquid crystal display. A fixed distance of 6.50 mm was maintained between the two probes. When heat is applied, the temperature rise at the heater probe is measured at intervals of 4 s for 240 s. The measuring element conforms as closely as possible to an infinite line source of heat in an infinite fluid. Under these conditions, thermal conductivity and thermal diffusivity are measured simultaneously: thermal conductivity is determined from the slope of a plot of the temperature rise of the heating element against the logarithm of time, while thermal diffusivity is determined from the time it takes the sample to attain a peak temperature and the time duration over a fixed diffusivity distance. A constant current source was designed to apply a power input of 16.33 W/m to the probe throughout the experiment. The thermal probe was interfaced with a digital display and data logger by using an application program written in C++. Calibration of the instrument was done by determining the thermal properties of distilled water. Error due to convection was avoided by adding 1.5% agar to the water. The instrument has been used for measurement of the thermal properties of banana, orange and watermelon.
Thermal conductivity values of 0.593, 0.598 and 0.586 W/(m·°C) and thermal diffusivity values of 1.053 × 10⁻⁷, 1.086 × 10⁻⁷ and 0.959 × 10⁻⁷ m²/s were obtained for banana, orange and watermelon respectively. Measured values were stored on a microSD card. The instrument performed very well: statistical analysis (ANOVA) showed no significant difference (p > 0.05) between literature values and the averages estimated for each sample investigated with the developed instrument.
Keywords: thermal conductivity, thermal diffusivity, tropical fruit juice, diffusion equation
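The line-heat-source analysis described in this abstract can be sketched numerically. For a transient line probe the temperature rise follows ΔT ≈ (q / 4πk)·ln t + C, so conductivity is recovered as k = q / (4π × slope). A minimal sketch of that fit follows; the 16.33 W/m power input and the 4 s/240 s schedule are taken from the abstract, while the temperature readings are synthetic, generated here purely for illustration:

```python
import math

def thermal_conductivity(times_s, temps_c, q_w_per_m):
    """Estimate thermal conductivity from a transient line-heat-source probe.

    Fits a straight line to temperature rise vs ln(time); the slope m gives
    k = q / (4*pi*m), the standard line-source result.
    """
    x = [math.log(t) for t in times_s]
    y = temps_c
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return q_w_per_m / (4 * math.pi * slope)

q = 16.33          # W/m, power input reported in the abstract
k_true = 0.593     # W/(m.degC), banana value, used to fabricate readings
times = [4 * i for i in range(1, 61)]     # 4 s intervals for 240 s
temps = [q / (4 * math.pi * k_true) * math.log(t) + 25.0 for t in times]

k_est = thermal_conductivity(times, temps, q)
print(round(k_est, 3))  # -> 0.593
```

On noisy real data the same least-squares fit applies; only the early readings, where the line-source approximation has not yet settled, are normally excluded.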
83 Environmental Analysis of Urban Communities: A Case Study of Air Pollutant Distribution in Smouha Arteries, Alexandria Egypt
Authors: Sammar Zain Allam
Abstract:
Smart growth, intelligent cities, and the healthy cities promoted by the WHO (World Health Organization) all call for clean air and for minimizing air pollutants in the interest of human health. Air quality is a pressing concern in achieving ecological cities and the sustainable environmental development of urban fabric design. The selection criteria depend on the strategic location of our study area, which lies at the entry to the city of Alexandria from its agricultural road and represents the city center for retail, business, and educational amenities. Our study analyzes readings of definite factors affecting air quality in a central area of Alexandria; the readings are compared to standard measures of carbon dioxide, carbon monoxide, suspended particles, and air velocity or airflow. Carbon dioxide and carbon monoxide are the main elements that necessitate environmental and sustainability studies in the light of global warming and the greenhouse effect. Meanwhile, increasing particulate matter causes breathing problems, especially for children and the elderly, and threatens the ability of future generations to meet their own needs, which is the definition of sustainable development. The analysis of carbon dioxide, carbon monoxide, and suspended particles, together with air velocity or airflow, was carried out in our study area to manifest the relationship between these elements and urban fabric design and land-use distribution. In conclusion, dense urban fabric impedes airflow and thus results in the concentration of air pollutants in certain zones, while open spaces with green areas allow air pollutants to fade and help in their absorption. Along with dense urban fabric, high-rise buildings trap air currents, which contributes to high readings of our elements.
Also, street design may facilitate the circulation of air, which helps carry these pollutants away and distribute them over a wider space, decreasing their harmful effects.
Keywords: carbon emissions, air quality measurements, arteries air quality, airflow or air velocity, particulate matter, clean air, urban density
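The comparison of readings against standard measures that this abstract describes reduces, computationally, to a threshold check per pollutant. A minimal sketch follows; the limit values and the readings below are illustrative placeholders, not the standards or measurements used in the study:

```python
# Limit values are illustrative placeholders only, not official standards.
LIMITS = {"CO_ppm": 9.0, "CO2_ppm": 1000.0, "PM10_ug_m3": 50.0}

def flag_exceedances(readings):
    """Return the pollutants whose reading exceeds its limit value."""
    return {p: v for p, v in readings.items()
            if p in LIMITS and v > LIMITS[p]}

# Hypothetical readings: a dense urban artery vs. an open green zone.
artery = {"CO_ppm": 12.4, "CO2_ppm": 850.0, "PM10_ug_m3": 78.0}
green_zone = {"CO_ppm": 3.1, "CO2_ppm": 480.0, "PM10_ug_m3": 22.0}

print(sorted(flag_exceedances(artery)))   # -> ['CO_ppm', 'PM10_ug_m3']
print(flag_exceedances(green_zone))       # -> {}
```

Mapping such per-zone exceedance flags back onto the urban fabric is what reveals the concentration zones the study reports.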
82 A Digital Twin Approach to Support Real-time Situational Awareness and Intelligent Cyber-physical Control in Energy Smart Buildings
Authors: Haowen Xu, Xiaobing Liu, Jin Dong, Jianming Lian
Abstract:
Emerging smart buildings often employ cyberinfrastructure, cyber-physical systems, and Internet of Things (IoT) technologies to increase the automation and responsiveness of building operations for better energy efficiency and lower carbon emissions. These operations include the control of Heating, Ventilation, and Air Conditioning (HVAC) and lighting systems, which are often considered a major source of energy consumption in both commercial and residential buildings. Developing energy-saving control models for optimizing HVAC operations usually requires the collection of high-quality instrumental data from iterations of in-situ building experiments, which can be time-consuming and labor-intensive. This abstract describes a digital twin approach to automate building energy experiments for optimizing HVAC operations through the design and development of an adaptive web-based platform. The platform is created to enable (a) automated data acquisition from a variety of IoT-connected HVAC instruments, (b) real-time situational awareness through domain-based visualizations, (c) adaptation of HVAC optimization algorithms based on experimental data, (d) sharing of experimental data and model predictive controls through web services, and (e) cyber-physical control of individual instruments in the HVAC system using outputs from different optimization algorithms. Through the digital twin approach, we aim to replicate a real-world building and its HVAC systems in an online computing environment to automate the development of building-specific model predictive controls and collaborative experiments in buildings located in different climate zones in the United States. We present two case studies to demonstrate our platform’s capability for real-time situational awareness and cyber-physical control of the HVAC in the flexible research platforms on the Oak Ridge National Laboratory (ORNL) main campus.
Our platform is developed using an adaptive and flexible architecture design, rendering it generalizable and extendable to support HVAC optimization experiments in different types of buildings across the nation.
Keywords: energy-saving buildings, digital twins, HVAC, cyber-physical system, BIM
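The sense, optimise, actuate loop that such a digital-twin platform automates can be sketched in a few lines. Everything below is a hypothetical illustration, not the actual MOSES/ORNL platform API: the controller is a simple proportional stand-in for the model predictive controllers the abstract mentions, and the building model is a one-line toy:

```python
from dataclasses import dataclass

@dataclass
class ZoneState:
    temp_c: float       # measured zone temperature
    setpoint_c: float   # desired temperature

def proportional_control(state: ZoneState, gain: float = 0.5) -> float:
    """Stand-in for a model predictive controller: heating (+) or
    cooling (-) command proportional to the setpoint error."""
    return gain * (state.setpoint_c - state.temp_c)

def step(state: ZoneState, command: float, response: float = 1.0) -> ZoneState:
    """Toy building model: zone temperature moves with the command."""
    return ZoneState(state.temp_c + response * command, state.setpoint_c)

# One automated experiment: iterate the sense->optimise->actuate loop.
state = ZoneState(temp_c=18.0, setpoint_c=21.0)
for _ in range(10):
    state = step(state, proportional_control(state))
print(round(state.temp_c, 2))  # -> 21.0
```

In the real platform, `step` is replaced by the physical building (or its twin) and the readings flow back through the IoT data-acquisition layer; the loop structure is the same.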
81 Measurements for Risk Analysis and Detecting Hazards by Active Wearables
Authors: Werner Grommes
Abstract:
Intelligent wearables (illuminated vests, hand and foot bands, smart watches with a laser diode, Bluetooth smart glasses) flood the market today. They integrate complex electronics and are worn very close to the body, so optical measurements and limitation of the maximum luminance are needed. Smart watches are equipped with a laser diode or monitor different body currents, and special glasses generate readable text information received via radio transmission; small high-performance lithium-ion or lithium-polymer batteries supply the electronics. All these products have been tested and evaluated for risk. They must, for example, meet the requirements for electromagnetic compatibility as well as the requirements for electromagnetic fields affecting humans and implant wearers, and extensive analyses and measurements were carried out for this purpose. Many users are not aware of these risks; the results of this study should serve as a suggestion to do better in the future, or simply to point out the risks. Commercial LED warning vests, LED hand and foot bands, illuminated surfaces with inverters (high voltage), flashlights, smart watches, and Bluetooth smart glasses were checked for risks. The luminance, the electromagnetic emissions in the low-frequency as well as the high-frequency range, audible noises, and irritating flashing frequencies were measured and analyzed. Rechargeable lithium-ion or lithium-polymer batteries can burn or explode under special conditions such as overheating, overcharging, deep discharge, or operation outside the temperature specification. The result of this study is that many smart wearables are worn very close to the body, making an extensive risk analysis necessary; wearers of active implants such as a pacemaker or an implantable cardiac defibrillator must be considered in particular.
If the wearable electronics include switching regulators or inverter circuits, active medical implants in the near field can be disturbed, so a risk analysis is necessary.
Keywords: safety and hazards, electrical safety, EMC, EMF, active medical implants, optical radiation, illuminated warning vest, electric luminescent, hand and head lamps, LED, e-light, safety batteries, light density, optical glare effects
80 The Use of Artificial Intelligence in Digital Forensics and Incident Response in a Constrained Environment
Authors: Dipo Dunsin, Mohamed C. Ghanem, Karim Ouazzane
Abstract:
Digital investigators often have a hard time spotting evidence in digital information; it has become difficult to determine which source of proof relates to a specific investigation. A growing concern is that the processes, technology, and specific procedures used in digital investigation are not keeping up with criminal developments, and criminals are taking advantage of these weaknesses to commit further crimes. In digital forensics investigations, artificial intelligence (AI) is invaluable in identifying crime: algorithms based on AI have proved highly effective in detecting risks, preventing criminal activity, and forecasting illegal activity. The goal of digital forensics and digital investigation is to provide objective data and conduct an assessment that assists in developing a plausible theory that can be presented as evidence in court; researchers and other authorities have used such data as evidence to convict suspects. This research paper aims to develop a multiagent framework for digital investigations using specific intelligent software agents (ISAs). The agents communicate to address particular tasks jointly and keep the same objectives in mind during each task; the rules and knowledge contained within each agent depend on the investigation type. A criminal investigation is classified quickly and efficiently using the case-based reasoning (CBR) technique. The MADIK framework is implemented with the Java Agent Development Framework in Eclipse, with a Postgres repository and a rule engine for agent reasoning. The proposed framework was tested using the Lone Wolf image files and datasets; experiments were conducted using various sets of ISAs and VMs, and there was a significant reduction in the time taken for the Hash Set Agent to execute.
As a result of loading the agents, 5 percent of the time was lost, as the File Path Agent prescribed deleting 1,510 while the Timeline Agent found multiple executable files. In comparison, the integrity check carried out on the Lone Wolf image file using a digital forensic toolkit took approximately 48 minutes (2,880 s), whereas the MADIK framework accomplished this in 16 minutes (960 s). The framework is integrated with Python, allowing further integration of other digital forensic tools such as AccessData Forensic Toolkit (FTK), Wireshark, Volatility, and Scapy.
Keywords: artificial intelligence, computer science, criminal investigation, digital forensics
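The Hash Set Agent named in this abstract performs a classic forensic triage step: hash every evidence file and separate known (previously catalogued) files from unknown ones. The agent name follows the abstract; the implementation below is an illustrative stand-in, not the MADIK code:

```python
import hashlib

def sha256_bytes(data: bytes) -> str:
    """SHA-256 digest of raw file content."""
    return hashlib.sha256(data).hexdigest()

def triage(files: dict, known_hashes: set):
    """Split {name: content} into (known, unknown) by hash-set lookup."""
    known, unknown = [], []
    for name, content in files.items():
        (known if sha256_bytes(content) in known_hashes else unknown).append(name)
    return sorted(known), sorted(unknown)

# Tiny synthetic "evidence image"; a real hash set would be loaded from
# a reference database rather than built inline.
evidence = {"readme.txt": b"hello", "payload.bin": b"\x90\x90\xcc"}
known_set = {sha256_bytes(b"hello")}

known, unknown = triage(evidence, known_set)
print(known, unknown)  # -> ['readme.txt'] ['payload.bin']
```

Filtering out known files first is what lets the remaining agents (File Path, Timeline) spend their time only on the unknown remainder.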
79 Human Capital Divergence and Team Performance: A Study of Major League Baseball Teams
Authors: Yu-Chen Wei
Abstract:
The relationship between organizational human capital and organizational effectiveness has been a common topic of interest to organization researchers, and much of this research has concluded that higher human capital predicts greater organizational outcomes. Whereas human capital research has traditionally focused on organizations, the current study turns to team-level human capital. In addition, there are no known empirical studies assessing the effect of human capital divergence on team performance. Team human capital refers to the sum of the knowledge, ability, and experience embedded in team members; team human capital divergence is defined as the variation of human capital within a team. This study is among the first to assess the role of human capital divergence as a moderator of the effect of team human capital on team performance. From the traditional perspective, team human capital represents the collective ability of all team members to solve problems and reduce operational risk; hence, the higher the team human capital, the higher the team performance. This study further employs social learning theory to explain the relationship between team human capital and team performance. According to this theory, individuals seek progress by learning from teammates, expecting to raise their own human capital and, in turn, achieve high productivity, great rewards, and eventual career success. An individual therefore has more chances to improve his or her capability when the team members have higher average human capital; as a consequence, all team members can develop a quick and effective learning path in their work environment, enhancing their knowledge, skill, and experience and leading to higher team performance. This is the first argument of this study. Furthermore, the current study argues that human capital divergence is detrimental to team development.
Individuals with lower human capital in the team constantly feel pressure from their outstanding colleagues; under that pressure, they cannot give full play to their own jobs and lose more and more confidence. The most capable members, for their part, are reluctant to work alongside teammates who are much less capable than themselves, and they may have lower motivation to move forward because they are already prominent compared with their teammates. Therefore, human capital divergence will moderate the relationship between team human capital and team performance. These two arguments were tested in 510 team-seasons drawn from Major League Baseball (1998–2014). Results demonstrate a positive relationship between team human capital and team performance, consistent with previous research. In addition, the variation of human capital within a team weakens this relationship: an individual working with teammates who are comparable to them produces better performance than one working with people far above or below their own level.
Keywords: human capital divergence, team human capital, team performance, team level research
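The two constructs defined above can be operationalised directly: team human capital as the mean of a per-player human-capital score, and divergence as the within-team standard deviation. A minimal sketch with synthetic rosters (the scores are invented for illustration; the study derives them from player records):

```python
from statistics import mean, pstdev

def team_metrics(scores):
    """Return (team human capital, human capital divergence):
    the roster mean and the within-team (population) standard deviation."""
    return mean(scores), pstdev(scores)

balanced  = [70, 72, 68, 71, 69]   # similar teammates, low divergence
divergent = [95, 40, 90, 35, 90]   # stars mixed with weak players

hc_b, div_b = team_metrics(balanced)
hc_d, div_d = team_metrics(divergent)
print(hc_b, round(div_b, 2))   # same mean, small spread
print(hc_d, round(div_d, 2))   # same mean, large spread
```

Both rosters have identical average human capital (70), but very different divergence; per the study's moderation argument, the second team's higher divergence should weaken the payoff from that equal average. In the regression itself, this moderation is tested with an interaction term between the two measures.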
78 Selection of Qualitative Research Strategy for Bullying and Harassment in Sport
Authors: J. Vveinhardt, V. B. Fominiene, L. Jeseviciute-Ufartiene
Abstract:
Relevance of Research: Qualitative research is still regarded as highly subjective and not sufficiently scientific to achieve objective research results. However, it is agreed that a qualitative study allows revealing the hidden motives of research participants, creating new theories, and highlighting the problem field. Enough research has been done to reveal these aspects of qualitative inquiry; however, each research area has its own specificity, and sport is unique due to the image of its participants, who are understood as strong and invincible. A sport participant might therefore find it hard to recognize himself as a victim in the context of bullying and harassment, and accordingly the researcher faces a dilemma in getting a victim in sport to speak at all; thus, the ethical aspects of qualitative research become relevant. The many different fields of sport also make it difficult to determine the sample size of the research. The corresponding problem of this research is therefore which qualitative research strategies are most suitable for revealing the phenomenon of bullying and harassment in sport, and why. Object of research: qualitative research strategy for bullying and harassment in sport. Purpose of the research: to analyze strategies of qualitative research and select a suitable one for bullying and harassment in sport. Methods of research: analysis of scientific research applying qualitative methods to bullying and harassment. Research Results: Four main strategies are applied in qualitative research: inductive, deductive, retroductive, and abductive. Inductive and deductive strategies are commonly used in researching bullying and harassment in sport; the inductive strategy is applied as quantitative research in order to reveal and describe the prevalence of bullying and harassment in sport.
The deductive strategy is used through qualitative methods in order to explain the causes of bullying and harassment and to predict the actions of the participants of bullying and harassment in sport and the possible consequences of these actions. The most commonly used qualitative method for researching bullying and harassment in sport is the semi-structured interview, in spoken or written form. However, these methods may restrict the openness of the participants: recording on a dictaphone can inhibit them, and written responses may remain incomplete because the answers cannot be refined through follow-up. Qualitative research increasingly draws on technology-mediated research data. For example, focus group research in a closed forum allows participants to interact freely with each other because of the confidentiality of the selected participants, while the moderator can purposefully formulate and submit problem-solving questions to them. Hence, the application of intelligent technology through in-depth qualitative research can help discover new and specific information on bullying and harassment in sport. Acknowledgement: This research is funded by the European Social Fund under the activity ‘Improvement of researchers’ qualification by implementing world-class R&D projects’ of Measure No. 09.3.3-LMT-K-712.
Keywords: bullying, focus group, harassment, narrative, sport, qualitative research
77 Synthesis of Temperature Sensitive Nano/Microgels by Soap-Free Emulsion Polymerization and Their Application in Hydrate Sediments Drilling Operations
Authors: Xuan Li, Weian Huang, Jinsheng Sun, Fuhao Zhao, Zhiyuan Wang, Jintang Wang
Abstract:
Natural gas hydrates (NGHs), as promising alternative energy sources, have gained increasing attention. Hydrate-bearing formations in marine areas are highly unconsolidated and fragile, composed of weakly cemented sand-clay and silty sediments. During the drilling process, the invasion of drilling fluid can easily lead to excessive water content in the formation, changing the soil's liquid and plastic limit indices, significantly affecting formation quality, and causing wellbore instability due to the metastable character of hydrate-bearing sediments. Controlling filtrate loss into the formation during drilling is therefore essential for protecting the stability of the wellbore. In this study, a temperature-sensitive nanogel of P(NIPAM-co-AMPS-co-tBA) was prepared by soap-free emulsion polymerization, and its temperature-sensitive behavior was employed to achieve self-adaptive plugging in hydrate sediments. First, the effects of the amounts of AMPS, tBA, and cross-linker MBA on the microgel synthesis process and temperature-sensitive behavior were investigated. Results showed that, as a reactive emulsifier, AMPS not only participates in the polymerization reaction but also acts as an emulsifier to stabilize micelles and enhance the stability of the nanoparticles. The volume phase transition temperature (VPTT) of the nanogels gradually decreased as the content of the hydrophobic monomer tBA increased, while an increase in the content of the cross-linking agent MBA led to a rise in coagulum content and instability of the emulsion. The plugging performance of the nanogel was evaluated in a core sample with a pore size distribution of 100-1000 nm; the temperature-sensitive nanogel can effectively improve the microfiltration performance of drilling fluid.
Since a combination of a series of nanogels can maintain a wide particle size distribution, around 200 nm to 800 nm, at any temperature, the self-adaptive plugging capacity of nanogels for hydrate sediments was demonstrated. The thermosensitive nanogel is a potential intelligent plugging material for drilling operations in natural gas hydrate-bearing sediments.
Keywords: temperature-sensitive nanogel, NIPAM, self-adaptive plugging performance, drilling operations, hydrate-bearing sediments
76 The Effects of Culture and Language on Social Impression Formation from Voice Pleasantness: A Study with French and Iranian People
Authors: L. Bruckert, A. Mansourzadeh
Abstract:
The voice has a major influence on interpersonal communication in everyday life via the perception of pleasantness. The evolutionary perspective postulates that the mechanisms underlying pleasantness judgments are universal adaptations that evolved in the service of choosing a mate (through the process of sexual selection). From this point of view, the favorite voices would be those with more marked sexually dimorphic characteristics: in men, for example, lower-pitched voices, pitch being the main criterion. On the other hand, one can postulate that the mechanisms involved are gradually established from childhood onward through exposure to the environment, so that prosodic elements could take precedence in everyday communication, since they convey information about the speaker's attitude (willingness to communicate, interest toward the interlocutors). Our study focuses on voice pleasantness and its relationship with social impression formation, exploring both spectral aspects (pitch, timbre) and prosodic ones. We recorded the voices of 25 French males speaking French and 25 Iranian males speaking Farsi through two vocal corpora (five vowels and a reading text). French listeners (40 male/40 female) listened to the French voices and made a judgment either on the voice's pleasantness or on the speaker (judgments of his intelligence, honesty, and sociability). Regression analyses on our acoustic measures showed that prosodic elements (for example, intonation and speech rate) are the most important criteria for pleasantness, whatever the corpus or the listener's gender. Moreover, correlation analyses showed that the speakers with the voices judged most pleasant are considered the most intelligent, sociable, and honest.
The voices in Farsi were judged by 80 other French listeners (40 male/40 female), and we found the same effect of intonation on pleasantness judgments with the «vowel» corpus, whereas with the «text» corpus pitch was more important than prosody. This may suggest that voice perception contains some elements that are invariant across culture/language, whereas others are influenced by the cultural/linguistic background of the listener. In the near future, Iranian listeners will be asked to listen either to the French voices (half of them) or to the Farsi voices (the other half) and produce the same judgments as the French listeners. This experimental design could make it possible to distinguish what is linked to culture from what is linked to language when differences in voice perception arise.
Keywords: cross-cultural psychology, impression formation, pleasantness, voice perception
75 Profiling Risky Code Using Machine Learning
Authors: Zunaira Zaman, David Bohannon
Abstract:
This study explores the application of machine learning (ML) to detecting security vulnerabilities in source code. The research aims to assist organizations with large application portfolios and limited security testing capabilities in prioritizing security activities. ML-based approaches offer benefits such as increased confidence scores, tuning of false positives and negatives, and automated feedback. An initial approach using natural language processing techniques to extract features achieved 86% accuracy during the training phase but suffered from overfitting and performed poorly on unseen datasets during testing. To address these issues, the study proposes using the abstract syntax tree (AST) for Java and C++ codebases to capture code semantics and structure and to generate path-context representations for each function. The Code2Vec model architecture is used to learn distributed representations of source code snippets for training a machine-learning classifier for vulnerability prediction. The study evaluates the performance of the proposed methodology using two datasets and compares the results with existing approaches. The Devign dataset yielded 60% accuracy in predicting vulnerable code snippets and helped resist overfitting, while the Juliet Test Suite supported prediction of specific vulnerabilities such as OS command injection, cryptographic, and cross-site scripting vulnerabilities. The Code2Vec model achieved 75% accuracy and a 98% recall rate in predicting OS command injection vulnerabilities. The study concludes that even partial AST representations of source code can be useful for vulnerability prediction, and the approach has the potential for automated intelligent analysis of source code, including vulnerability prediction on unseen source code. State-of-the-art models using natural language processing techniques and CNN models with ensemble modelling techniques did not generalize well on unseen data and faced overfitting issues.
However, predicting vulnerabilities in source code using machine learning poses challenges such as the high dimensionality and complexity of source code, imbalanced datasets, and identifying specific types of vulnerabilities. Future work will address these challenges and expand the scope of the research.
Keywords: code embeddings, neural networks, natural language processing, OS command injection, software security, code properties
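The path-context representations this abstract builds on are, in Code2Vec style, triples of the form (leaf token, AST path between two leaves, leaf token). The study extracts them from Java and C++ ASTs; purely to illustrate the idea, the sketch below does the same on a Python snippet using Python's own `ast` module:

```python
import ast
from itertools import combinations

def leaves(tree):
    """Collect (token, root-to-leaf path of AST node-type names)."""
    out = []
    def walk(node, path):
        path = path + [type(node).__name__]
        if isinstance(node, ast.Name):
            out.append((node.id, path))
        elif isinstance(node, ast.Constant):
            out.append((repr(node.value), path))
        for child in ast.iter_child_nodes(node):
            walk(child, path)
    walk(tree, [])
    return out

def path_contexts(source):
    """All (leaf, up^...^LCA^...^down path, leaf) triples for a snippet."""
    contexts = []
    for (ta, pa), (tb, pb) in combinations(leaves(ast.parse(source)), 2):
        i = 0                 # length of the shared root prefix
        while i < min(len(pa), len(pb)) and pa[i] == pb[i]:
            i += 1
        lca = pa[i - 1]       # lowest common ancestor of the two leaves
        up, down = pa[i:][::-1], pb[i:]
        contexts.append((ta, "^".join(up + [lca] + down), tb))
    return contexts

for ctx in path_contexts("x = y + 1"):
    print(ctx)
# e.g. ('y', 'Name^BinOp^Constant', '1') is one of the three contexts
```

Code2Vec then embeds each (token, path, token) triple and attends over the bag of triples to produce the function-level vector used by the downstream vulnerability classifier.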
74 The Exploitation of the MOSES Project Outcomes on Supply Chain Optimisation
Authors: Reza Karimpour
Abstract:
Ports play a decisive role in the EU's external and internal trade, as about 74% of imports and exports and 37% of exchanges go through them. Although ports, especially Deep Sea Shipping (DSS) ports, are integral nodes within multimodal logistic flows, Short Sea Shipping (SSS) and inland waterways are not so well integrated. The automated vessels and supply chain optimisations for sustainable shortsea shipping (MOSES) project aims to enhance the short sea shipping component of the European supply chain by addressing the vulnerabilities and strains related to the operation of large containerships. The MOSES concept can be shortly described as follows: a large containership (mother vessel) approaches a DSS port (or a large container terminal), where, upon her arrival, a combined intelligent mega-system, consisting of the MOSES autonomous tugboat swarm for manoeuvring and the MOSES adapted AutoMoor system, brings her to berth. Container handling processes are then ready to start moving containers to their destination via hinterland connections (trucks and/or rail) or to be shipped to destinations near small ports (on the mainland or on islands). In the first case, containers are stored in a dedicated port area (storage area), waiting to be moved via trucks and/or rail. In the second case, containers are stacked by existing port equipment near dedicated berths of the DSS port and then loaded onto the MOSES Innovative Feeder Vessel, equipped with the MOSES Robotic Container-Handling System, which provides (semi-)autonomous (un)feeding of the feeder and is remotely monitored through a Shore Control Centre. When the MOSES innovative feeder vessel approaches the small port, where her docking is achieved without tugboats, she automatically unloads the containers using the Robotic Container-Handling System onto the quay or directly onto trucks. As a result, ports with minimal or no available infrastructure may be effectively integrated with the container supply chain.
Then, the MOSES innovative feeder vessel continues her voyage to the next small port or returns to the DSS port. The MOSES exploitation activity mainly aims to exploit research outcomes beyond the project, facilitate utilisation of the pilot results by others, and continue the pilot service after the project ends. At the project's mid-lifetime, the exploitation plan introduces the reader to the MOSES project and its key exploitable results, and provides a plan for delivering the MOSES innovations to the market as part of the overall exploitation strategy.
Keywords: automated vessels, exploitation, shortsea shipping, supply chain
73 MARISTEM: A COST Action Focused on Stem Cells of Aquatic Invertebrates
Authors: Arzu Karahan, Loriano Ballarin, Baruch Rinkevich
Abstract:
Marine invertebrates, a highly diverse group of multicellular organisms, display phenomena that are either not found or highly restricted in vertebrates. These include budding, fission, fusion of ramets, and high regenerative power, such as the ability to create a whole new organism from a tiny parental fragment, many of which are controlled by totipotent, pluripotent, and multipotent stem cells. There is thus very much that can be learned from these organisms on the practical and evolutionary levels, recalling the words attributed to Darwin: “It is not the strongest of the species that survives, nor the most intelligent, but the one most responsive to change.” The ‘stem cell’ notion highlights a cell that has the ability to continuously divide and differentiate into various progenitors and daughter cells. In vertebrates, adult stem cells are rare cells defined as lineage-restricted (multipotent at best) with tissue- or organ-specific activities, located in defined niches, that further regulate the machinery of homeostasis, repair, and regeneration; they are usually categorized by their morphology, tissue of origin, plasticity, and potency. This description does not always hold when comparing vertebrate stem cells with those of marine invertebrates, which display wider ranges of plasticity and diversity at both the taxonomic and the cellular levels. While marine/aquatic invertebrate stem cells (MISC) have recently raised more scientific interest, the know-how still lags behind the attention they deserve. MISC are not only highly potent but, in many cases, abundant (e.g., one-third of the entire animal's cells), are not confined to permanent niches, and participate in delayed-aging and whole-body regeneration phenomena, knowledge of which can be clinically relevant. Moreover, they hold massive untapped potential for the discovery of new bioactive molecules that can be used for human health (antitumor, antimicrobial) and biotechnology.
The MARISTEM COST action (Stem Cells of Marine/Aquatic Invertebrates: From Basic Research to Innovative Applications) aims to connect the fragmented European MISC community. Under this scientific umbrella, the action develops the concept of adult stem cells that do not share many properties with vertebrate stem cells, organizes meetings, summer schools, and workshops, stimulates young researchers, supplies technical and advisory support via short-term scientific missions, and builds new bridges between the MISC community and biomedical disciplines.
Keywords: aquatic/marine invertebrates, adult stem cell, regeneration, cell cultures, bioactive molecules
72 Analysing Competitive Advantage of IoT and Data Analytics in Smart City Context
Authors: Petra Hofmann, Dana Koniel, Jussi Luukkanen, Walter Nieminen, Lea Hannola, Ilkka Donoghue
Abstract:
The Covid-19 pandemic forced people to isolate and become physically less connected. The pandemic has not only reshaped people’s behaviours and needs but also accelerated digital transformation (DT). DT of cities has become an imperative, with the outlook of converting them into smart cities in the future. Embedding digital infrastructure and smart city initiatives as part of the normal design, construction, and operation of cities provides a unique opportunity to improve the connection between people. The Internet of Things (IoT) is an emerging technology and one of the drivers in DT. It has disrupted many industries by introducing different services and business models, and IoT solutions are being applied in multiple fields, including smart cities. As IoT and data are fundamentally linked together, IoT solutions can only create value if the data generated by the IoT devices is analysed properly. By extracting relevant conclusions and actionable insights using established techniques, data analytics contributes significantly to the growth and success of IoT applications and investments. Companies must grasp DT and be prepared to redesign their offerings and business models to remain competitive in today’s marketplace. As there are many IoT solutions available today, the amount of data is tremendous; the challenge for companies is to understand which solutions to focus on, how to prioritise them, and which data to use to differentiate themselves from the competition. This paper explains how IoT and data analytics can impact competitive advantage and how companies should approach IoT and data analytics to translate them into concrete offerings and solutions in the smart city context. The study was carried out as qualitative, literature-based research. A case study is provided to validate the preservation of a company’s competitive advantage through smart city solutions.
The results of the research provide insights into the different factors and considerations related to creating competitive advantage through IoT and data analytics deployment in the smart city context. Furthermore, this paper proposes a framework that merges these factors and considerations with examples of offerings and solutions in smart cities. The data collected through IoT devices, and the intelligent use of it, can create competitive advantage for companies operating in the smart city business. Companies should take into consideration the five forces of competition that shape industries and pay attention to the technological, organisational, and external contexts which define the factors for consideration of competitive advantage in the field of IoT and data analytics. Companies that can utilise these key assets in their businesses will most likely conquer the markets and have a strong foothold in the smart city business.
Keywords: data analytics, smart cities, competitive advantage, internet of things
Procedia PDF Downloads 136
71 Application of Improved Semantic Communication Technology in Remote Sensing Data Transmission
Authors: Tingwei Shu, Dong Zhou, Chengjun Guo
Abstract:
Semantic communication is an emerging form of communication that realizes intelligent communication by extracting and transmitting the semantic information of data at the source and recovering the data at the receiving end. It can effectively solve the problem of data transmission under conditions of large data volumes, low SNR, and restricted bandwidth. With the development of deep learning, semantic communication has further matured and is gradually being applied in fields such as the Internet of Things, Unmanned Aerial Vehicle cluster communication, and remote sensing scenarios. We propose an improved semantic communication system for situations where the data volume is huge and spectrum resources are limited during the transmission of remote sensing images. At the transmitting end, the semantic information of remote sensing images must be extracted, but this raises some problems. The traditional semantic communication system based on Convolutional Neural Networks (CNNs) cannot take into account both the global and local semantic information of the image, which results in less-than-ideal image recovery at the receiving end. Therefore, we adopt an improved Vision-Transformer-based structure as the semantic encoder, instead of the mainstream CNN-based one, to extract the image semantic features. In this paper, we first perform pre-processing operations on remote sensing images to improve their resolution in order to obtain images with more semantic information. We use the wavelet transform to decompose an image into high-frequency and low-frequency components, perform bilinear interpolation on the high-frequency components and bicubic interpolation on the low-frequency components, and finally apply the inverse wavelet transform to obtain the preprocessed image. We adopt the improved Vision-Transformer structure as the semantic encoder to extract and transmit the semantic information of remote sensing images. 
The Vision-Transformer structure can better handle training on huge data volumes and extract better image semantic features, and it adopts a multi-layer self-attention mechanism to better capture the correlation between semantic features and reduce redundant features. Secondly, to improve the coding efficiency, we reduce the quadratic complexity of the self-attention mechanism itself to linear so as to improve the image data processing speed of the model. We conducted experimental simulations on the RSOD dataset and compared the designed system with a CNN-based semantic communication system and image coding methods such as BPG and JPEG to verify that the method can effectively alleviate the problem of excessive data volume and improve the performance of image data communication.
Keywords: semantic communication, transformer, wavelet transform, data processing
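The wavelet-based pre-processing step described in this abstract can be sketched in a few lines. The following is a simplified, 1-D, single-level Haar illustration in plain Python; the paper works on 2-D images, does not name the wavelet used, and applies bicubic interpolation to the low-frequency band (omitted here), so every function name is purely illustrative:

```python
def haar_decompose(signal):
    # Single-level 1-D Haar-style split into low- (average) and
    # high-frequency (difference) components. Assumes even length.
    low = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    high = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return low, high

def linear_upsample(band, factor=2):
    # In 1-D, bilinear interpolation reduces to linear interpolation
    # between neighbouring coefficients.
    out = []
    for i in range(len(band) - 1):
        for k in range(factor):
            t = k / factor
            out.append((1 - t) * band[i] + t * band[i + 1])
    out.append(band[-1])
    return out

def haar_reconstruct(low, high):
    # Inverse of haar_decompose: (l + h, l - h) recovers each pair.
    signal = []
    for l, h in zip(low, high):
        signal.extend([l + h, l - h])
    return signal
```

In the full pipeline each band would be interpolated (bilinear for high-frequency, bicubic for low-frequency) before the inverse transform, yielding a higher-resolution image for the semantic encoder.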
Procedia PDF Downloads 79
70 Deep Reinforcement Learning Approach for Trading Automation in the Stock Market
Authors: Taylan Kabbani, Ekrem Duman
Abstract:
The design of adaptive systems that take advantage of financial markets while reducing risk can channel otherwise stagnant wealth into the global market. However, most efforts made to generate successful deals in trading financial assets rely on Supervised Learning (SL), which suffers from various limitations. Deep Reinforcement Learning (DRL) offers a way around these drawbacks of SL approaches by combining the financial asset price "prediction" step and the portfolio "allocation" step in one unified process, producing fully autonomous systems capable of interacting with their environment to make optimal decisions through trial and error. In this paper, a continuous action space approach is adopted to give the trading agent the ability to gradually adjust the portfolio's positions at each time step (dynamically re-allocating investments), resulting in better agent-environment interaction and faster convergence of the learning process. In addition, the approach supports managing a portfolio with several assets instead of a single one. This work presents a novel DRL model to generate profitable trades in the stock market, effectively overcoming the limitations of supervised learning approaches. We formulate the trading problem, or what is referred to as the agent environment, as a Partially Observable Markov Decision Process (POMDP), considering the constraints imposed by the stock market, such as liquidity and transaction costs. More specifically, we design an environment that simulates the real-world trading process by augmenting the state representation with ten different technical indicators and sentiment analysis of news articles for each stock. We then solve the formulated POMDP problem using the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, which can learn policies in high-dimensional and continuous action spaces like those typically found in the stock market environment. 
From the point of view of stock market forecasting and intelligent decision-making mechanisms, this paper demonstrates the superiority of deep reinforcement learning in financial markets over other types of machine learning, such as supervised learning, and shows its credibility and advantages for strategic decision-making.
Keywords: stock market, deep reinforcement learning, MDP, twin delayed deep deterministic policy gradient, sentiment analysis, technical indicators, autonomous agent
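The continuous-action trading environment this abstract describes can be sketched roughly as follows. This is an illustrative stand-in, not the paper's implementation: the state features (the ten technical indicators and news sentiment) are omitted, the reward is simply the change in portfolio value, and all class, method, and parameter names are hypothetical:

```python
class TradingEnv:
    """Minimal sketch of a multi-asset trading environment with a
    continuous action space and proportional transaction costs."""

    def __init__(self, prices, transaction_cost=0.001, cash=10_000.0):
        self.prices = prices          # one price series (list) per asset
        self.cost = transaction_cost  # proportional transaction cost
        self.cash = cash
        self.shares = [0.0] * len(prices)
        self.t = 0

    def portfolio_value(self):
        # Cash plus mark-to-market value of all holdings at time t.
        return self.cash + sum(s * p[self.t]
                               for s, p in zip(self.shares, self.prices))

    def step(self, action):
        # action[i] in [-1, 1]: fraction of available cash to spend (+)
        # or fraction of the current holding to sell (-) for asset i.
        before = self.portfolio_value()
        for i, a in enumerate(action):
            price = self.prices[i][self.t]
            if a > 0:                                   # buy
                spend = a * self.cash
                self.shares[i] += spend / price * (1 - self.cost)
                self.cash -= spend
            elif a < 0:                                 # sell
                qty = -a * self.shares[i]
                self.shares[i] -= qty
                self.cash += qty * price * (1 - self.cost)
        self.t += 1                                     # advance time
        reward = self.portfolio_value() - before        # change in value
        done = self.t >= len(self.prices[0]) - 1
        return reward, done
```

A TD3 agent would observe the (indicator-augmented) state, emit the continuous action vector, and be trained on the resulting rewards; only observability of part of the market state makes the problem a POMDP rather than a plain MDP.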
Procedia PDF Downloads 178
69 Analyzing Competitive Advantage of Internet of Things and Data Analytics in Smart City Context
Authors: Petra Hofmann, Dana Koniel, Jussi Luukkanen, Walter Nieminen, Lea Hannola, Ilkka Donoghue
Abstract:
The Covid-19 pandemic forced people to isolate and become physically less connected. The pandemic has not only reshaped people’s behaviours and needs but also accelerated digital transformation (DT). DT of cities has become an imperative with the outlook of converting them into smart cities in the future. Embedding digital infrastructure and smart city initiatives as part of the normal design, construction, and operation of cities provides a unique opportunity to improve the connection between people. The Internet of Things (IoT) is an emerging technology and one of the drivers of DT. It has disrupted many industries by introducing new services and business models, and IoT solutions are being applied in multiple fields, including smart cities. As IoT and data are fundamentally linked together, IoT solutions can only create value if the data generated by the IoT devices is analysed properly. By extracting relevant conclusions and actionable insights using established techniques, data analytics contributes significantly to the growth and success of IoT applications and investments. Companies must grasp DT and be prepared to redesign their offerings and business models to remain competitive in today’s marketplace. As there are many IoT solutions available today, the amount of data is tremendous. The challenge for companies is to understand which solutions to focus on, how to prioritise them, and which data can differentiate them from the competition. This paper explains how IoT and data analytics can impact competitive advantage and how companies should approach IoT and data analytics to translate them into concrete offerings and solutions in the smart city context. The study was carried out as qualitative, literature-based research. A case study is provided to validate the preservation of a company’s competitive advantage through smart city solutions. 
The results of the research provide insights into the different factors and considerations related to creating competitive advantage through IoT and data analytics deployment in the smart city context. Furthermore, this paper proposes a framework that merges these factors and considerations with examples of offerings and solutions in smart cities. The data collected through IoT devices, and the intelligent use of it, can create a competitive advantage for companies operating in the smart city business. Companies should take into consideration the five forces of competition that shape industries and pay attention to the technological, organisational, and external contexts which define the factors for consideration of competitive advantage in the field of IoT and data analytics. Companies that can utilise these key assets in their businesses will most likely conquer the markets and have a strong foothold in the smart city business.
Keywords: internet of things, data analytics, smart cities, competitive advantage
Procedia PDF Downloads 95
68 Diselenide-Linked Redox Stimuli-Responsive Methoxy Poly(Ethylene Glycol)-b-Poly(Lactide-Co-Glycolide) Micelles for the Delivery of Doxorubicin in Cancer Cells
Authors: Yihenew Simegniew Birhan, Hsieh Chih Tsai
Abstract:
The recent advancements in synthetic chemistry and nanotechnology have fostered the development of different nanocarriers for enhanced intracellular delivery of pharmaceutical agents to tumor cells. Polymeric micelles (PMs), characterized by small size, appreciable drug loading content (DLC), better accumulation in tumor tissue via the enhanced permeability and retention (EPR) effect, and the ability to avoid detection and subsequent clearance by the mononuclear phagocyte system (MPS), help overcome the poor solubility, slow absorption, and non-selective biodistribution of payloads embedded in their hydrophobic cores and hence enhance the therapeutic efficacy of chemotherapeutic agents. Recently, redox-responsive polymeric micelles have gained significant attention for the delivery and controlled release of anticancer drugs in tumor cells. In this study, we synthesized a redox-responsive, diselenide-bond-containing amphiphilic polymer, Bi(mPEG-PLGA)-Se₂, from mPEG-PLGA and 3,3'-diselanediyldipropanoic acid (DSeDPA) using DCC/DMAP as coupling agents. The successful synthesis of the copolymer was verified by different spectroscopic techniques. Above the critical micelle concentration, the amphiphilic copolymer Bi(mPEG-PLGA)-Se₂ self-assembled into stable micelles. The DLS data indicated that the hydrodynamic diameter of the micelles (123.9 ± 0.85 nm) was suitable for extravasation into tumor cells through the EPR effect. The drug loading content (DLC) and encapsulation efficiency (EE) of the DOX-loaded micelles were found to be 6.61 wt% and 54.9%, respectively. The DOX-loaded micelles showed an initial burst release followed by a sustained release trend, in which 73.94% and 69.54% of the encapsulated DOX was released upon treatment with 6 mM GSH and 0.1% H₂O₂, respectively. The biocompatible nature of the Bi(mPEG-PLGA)-Se₂ copolymer was confirmed by a cell viability study. 
In addition, the DOX-loaded micelles exhibited significant inhibition against HeLa cells (44.46%) at a maximum dose of 7.5 µg/mL. Fluorescence microscope images of HeLa cells treated with 3 µg/mL (equivalent DOX concentration) revealed efficient internalization and accumulation of DOX-loaded Bi(mPEG-PLGA)-Se₂ micelles in the cytosol of cancer cells. In conclusion, the intelligent, biocompatible, and redox stimuli-responsive behavior of the Bi(mPEG-PLGA)-Se₂ copolymer marks the potential of diselenide-linked mPEG-PLGA micelles for the delivery and on-demand release of chemotherapeutic agents in cancer cells.
Keywords: anticancer drug delivery, diselenide bond, polymeric micelles, redox-responsive
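The reported DLC (6.61 wt%) and EE (54.9%) figures follow from the standard definitions of these two metrics; the small helpers below (illustrative, not from the paper) show the arithmetic:

```python
def drug_loading_content(drug_mass, carrier_mass):
    # DLC (wt%): mass of loaded drug relative to the total mass of
    # the drug-loaded micelle (drug + polymer carrier).
    return drug_mass / (drug_mass + carrier_mass) * 100

def encapsulation_efficiency(loaded_drug_mass, fed_drug_mass):
    # EE (%): fraction of the drug initially added to the preparation
    # that actually ends up encapsulated in the micelles.
    return loaded_drug_mass / fed_drug_mass * 100
```

Both quantities are usually determined from UV-vis or fluorescence calibration of the free versus loaded drug; the masses here are simply whatever that assay yields.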
Procedia PDF Downloads 110
67 Security Issues in Long Term Evolution-Based Vehicle-To-Everything Communication Networks
Authors: Mujahid Muhammad, Paul Kearney, Adel Aneiba
Abstract:
The ability of vehicles to communicate with other vehicles (V2V), with the physical (V2I) and network (V2N) infrastructures, with pedestrians (V2P), etc. – collectively known as V2X (Vehicle to Everything) – will enable a broad and growing set of applications and services within the intelligent transport domain for improving road safety, alleviating traffic congestion, and supporting autonomous driving. The telecommunication research and industry communities and standardization bodies (notably 3GPP) have approved, in Release 14, cellular connectivity to support V2X communication (known as LTE-V2X). The LTE-V2X system combines simultaneous connectivity across existing LTE network infrastructure via the LTE-Uu interface with direct device-to-device (D2D) communication. For V2X services to function effectively, a robust security mechanism is needed to ensure legal and safe interaction among authenticated V2X entities in the LTE-based V2X architecture. The characteristics of vehicular networks and the nature of most V2X applications, which involve human safety, make it essential to protect V2X messages from attacks that can result in catastrophically wrong decisions or actions, including ones affecting road safety. Attack vectors include impersonation, modification, masquerading, replay, man-in-the-middle (MitM), and Sybil attacks. In this paper, we focus our attention on LTE-based V2X security and access control mechanisms. The current LTE-A security framework provides its own access authentication scheme, the AKA protocol, for mutual authentication and other essential cryptographic operations between UEs and the network. V2N systems can leverage this protocol to achieve mutual authentication between vehicles and the mobile core network. 
However, this protocol faces technical challenges, such as high signaling overhead, lack of synchronization, handover delay, and potential control plane signaling overloads, as well as privacy preservation issues, and therefore cannot satisfy the security requirements of the majority of LTE-based V2X services. This paper examines these challenges and points to possible ways in which they can be addressed. One possible solution is the implementation of a distributed peer-to-peer LTE security mechanism based on the Bitcoin/Namecoin framework, allowing security operations with minimal overhead cost, which is desirable for V2X services. The proposed architecture can ensure fast, secure, and robust V2X services over the LTE network while meeting V2X security requirements.
Keywords: authentication, long term evolution, security, vehicle-to-everything
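The AKA-style mutual authentication the abstract leverages can be illustrated with a much-simplified challenge-response sketch. This is illustrative only: real LTE AKA uses the MILENAGE algorithm set, sequence numbers, and a full key-derivation hierarchy, all omitted here, and every function name below is hypothetical:

```python
import hashlib
import hmac
import os

def network_challenge(shared_key):
    # Network side: issue a fresh random challenge (RAND) together with
    # an authentication token so the UE can verify the network too.
    rand = os.urandom(16)
    mac = hmac.new(shared_key, b"AUTN" + rand, hashlib.sha256).digest()
    return rand, mac

def ue_respond(shared_key, rand, mac):
    # UE side: authenticate the network first, then answer the challenge.
    expected = hmac.new(shared_key, b"AUTN" + rand, hashlib.sha256).digest()
    if not hmac.compare_digest(mac, expected):
        raise ValueError("network authentication failed")
    return hmac.new(shared_key, b"RES" + rand, hashlib.sha256).digest()

def network_verify(shared_key, rand, res):
    # Network side: check the UE's response against the expected one.
    expected = hmac.new(shared_key, b"RES" + rand, hashlib.sha256).digest()
    return hmac.compare_digest(res, expected)
```

The signaling overhead criticised above stems from running such exchanges through the core network on every attach and handover; the distributed peer-to-peer alternative aims to cut that round-trip cost.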
Procedia PDF Downloads 168