Search results for: crash data
22361 Testing the Life Cycle Theory on the Capital Structure Dynamics of Trade-Off and Pecking Order Theories: A Case of Retail, Industrial and Mining Sectors
Authors: Freddy Munzhelele
Abstract:
Setting: empirical research has shown that the life cycle theory has an impact on firms’ financing decisions, particularly the dividend pay-outs. Accordingly, the life cycle theory posits that as a firm matures, it gets to a level and capacity where it distributes more cash as dividends. On the other hand, young firms prioritise investment opportunity sets and their financing; thus, they pay little or no dividends. The research on firms’ financing decisions has also demonstrated, among others, the adoption of the trade-off and pecking order theories on the dynamics of firms’ capital structure. The trade-off theory speaks to firms holding a favourable position regarding debt structures, particularly as to the costs and benefits thereof; the pecking order theory is concerned with firms preferring a hierarchical order when choosing financing sources. The case of the life cycle hypothesis explaining financial managers’ decisions as regards the firms’ capital structure dynamics appears to be an interesting link, yet this link has been neglected in corporate finance research. If this link is explored as empirical research, the financial decision-making alternatives will be enhanced immensely, since no conclusive evidence has yet been found as to the dynamics of capital structure. Aim: the aim of this study is to examine the impact of the life cycle theory on the capital structure dynamics (trade-off and pecking order theories) of firms listed in the retail, industrial and mining sectors of the JSE. These sectors are among the key contributors to GDP in the South African economy. Design and methodology: following the postpositivist research paradigm, the study is quantitative in nature and utilises secondary data obtained from the financial statements of sampled firms for the period 2010 – 2022. The firms’ financial statements will be extracted from the IRESS database. Since the data will be in panel form, a combination of static and dynamic panel data estimators will be used to analyse the data. The overall data analyses will be done using the STATA program. Value add: this study directly investigates the link between the life cycle theory and the dynamics of capital structure decisions, particularly the trade-off and pecking order theories.
Keywords: life cycle theory, trade-off theory, pecking order theory, capital structure, JSE listed firms
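As a rough illustration of the estimation strategy described above (not code from the study, which uses Stata), the sketch below fits a static fixed-effects model and a naive dynamic variant on a hypothetical firm-year panel; the file name, column names, and regressors are all assumptions, and a rigorous dynamic specification would use a proper dynamic panel estimator such as Arellano-Bond GMM.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical panel: one row per firm-year, 2010-2022 (all names illustrative).
df = pd.read_csv("jse_panel.csv")  # columns: firm, year, leverage, profitability, size, growth

# Static estimator: fixed effects via firm dummies (within-style specification).
static = smf.ols("leverage ~ profitability + size + growth + C(firm)", data=df).fit()

# Naive dynamic specification: adds lagged leverage to capture adjustment toward
# a target capital structure (a trade-off-theory prediction).
df = df.sort_values(["firm", "year"])
df["leverage_lag"] = df.groupby("firm")["leverage"].shift(1)
dynamic = smf.ols("leverage ~ leverage_lag + profitability + size + growth + C(firm)",
                  data=df.dropna(subset=["leverage_lag"])).fit()
print(dynamic.params.get("leverage_lag"))  # speed-of-adjustment coefficient
```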
Procedia PDF Downloads 61
22360 Neural Synchronization - The Brain’s Transfer of Sensory Data
Authors: David Edgar
Abstract:
To understand how the brain’s subconscious and conscious processes function, we must conquer the physics of Unity, which leads to duality’s algorithm. Where the subconscious (bottom-up) and conscious (top-down) processes function together to produce and consume intelligence, we use terms like ‘time is relative,’ but do we really understand the meaning? In the brain, there are different processes and, therefore, different observers. These different processes experience time at different rates. A sensory system such as the eyes cycles its measurements around every 33 milliseconds, the conscious process of the frontal lobe cycles at 300 milliseconds, and the subconscious process of the thalamus cycles at 5 milliseconds. Three different observers experience time differently. To bridge observers, the thalamus, which is the fastest of the processes, maintains a synchronous state and entangles the different components of the brain’s physical process. The entanglements form a synchronous cohesion between the brain components, allowing them to share the same state and execute in the same measurement cycle. The thalamus uses the shared state to control the firing sequence of the brain’s linear subconscious process. Sharing state also allows the brain to cheat on the amount of sensory data that must be exchanged between components. Only unpredictable motion is transferred through the synchronous state because predictable motion already exists in the shared framework. The brain’s synchronous subconscious process is entirely based on energy conservation, where prediction regulates energy usage. So, every 33 milliseconds, the eyes dump their sensory data into the thalamus. The thalamus then performs a motion measurement to identify the unpredictable motion in the sensory data. Here is the trick. The thalamus conducts its measurement based on the original observation time of the sensory system (33 ms), not its own process time (5 ms). This creates a data payload of synchronous motion that preserves the original sensory observation. Basically, a frozen moment in time (Flat 4D). The single moment in time can then be processed through the single state maintained by the synchronous process. Other processes, such as consciousness (300 ms), can interface with the synchronous state to generate awareness of that moment. Now, synchronous data traveling through a separate, faster synchronous process creates a theoretical time tunnel where observation time is tunneled through the synchronous process and is reproduced on the other side in the original time-relativity. The synchronous process eliminates time dilation by simply removing itself from the equation so that its own process time does not alter the experience. To the original observer, the measurement appears to be instantaneous, but in the thalamus, a linear subconscious process generating sensory perception and thought production is being executed. It all occurs in the time available because other observation times are slower than thalamic measurement time. Life in the physical universe requires a linear measurement process; it simply hides by operating at a faster time relativity. What’s interesting is that time dilation is not the problem; it’s the solution. Einstein said there was no universal time.
Keywords: neural synchronization, natural intelligence, 99.95% IoT data transmission savings, artificial subconscious intelligence (ASI)
Procedia PDF Downloads 127
22359 Dogmatic Analysis of Legal Risks of Using Artificial Intelligence: The European Union and Polish Perspective
Authors: Marianna Iaroslavska
Abstract:
ChatGPT is becoming commonplace. However, only a few people think about the legal risks of using Large Language Models in their daily work. The main dilemmas concern the following areas: who owns the copyright to what somebody creates through ChatGPT; what can OpenAI do with the prompt you enter; can you accidentally infringe on another creator's rights through ChatGPT; and what about the protection of the data somebody enters into the chat. This paper will present these and other legal risks of using large language models at work, using dogmatic methods and case studies. The paper will present a legal analysis of AI risks against the background of European Union law and Polish law. This analysis will answer questions about how to protect data, how to make sure you do not violate copyright, and what is at stake with the AI Act, which recently came into force in the EU. If your work is related to the EU area and you use AI in your work, this paper will be a real goldmine for you. The copyright law in force in Poland does not protect your rights to a work that is created with the help of AI. So if you start selling such a work, you may face two main problems. First, someone may steal your work, and you will not be entitled to any protection because work created with AI does not have any legal protection. Second, the AI may have created the work by infringing on another person's copyright, so they will be able to claim damages from you. In addition, the EU's current AI Act imposes a number of additional obligations related to the use of large language models. The AI Act divides artificial intelligence into four risk levels and imposes different requirements depending on the level of risk. The EU regulation is aimed primarily at those developing and marketing artificial intelligence systems in the EU market. In addition to the above obstacles, personal data protection comes into play, which is very strictly regulated in the EU. If you violate personal data protections by entering information into ChatGPT, you will be liable for those violations. When using AI within the EU or in cooperation with entities located in the EU, you have to take into account a lot of risks. This paper will highlight such risks and explain how they can be avoided.
Keywords: EU, AI Act, copyright, Polish law, LLM
Procedia PDF Downloads 21
22358 The Impact of Electronic Commerce on Organisational Effectiveness: A Study of Zenith Bank Plc
Authors: Olusola Abiodun Arinde
Abstract:
This research work was prompted by the very important role e-commerce plays in every organization, be it private or public. The underlying objective of this study is to make a critical appraisal of the extent to which e-commerce impacts organizational effectiveness. This research was carried out using Zenith Bank Plc as a case study. Relevant data were collected through a structured questionnaire, oral interviews, journals, newspapers, and textbooks. The data collected were analyzed and hypotheses were tested. Based on the results of the hypothesis tests, it was observed that e-commerce is significant to every organization. Through e-commerce, fast service delivery can be guaranteed to customers, which leads to higher productivity and profit for organizations. E-commerce should be managed in such a way that it does not alienate customers; it should also guard against the enormous risks that are associated with e-commerce.
Keywords: e-commerce, fast service, productivity, profit
Procedia PDF Downloads 246
22357 Actionable Personalised Learning Strategies to Improve a Growth-Mindset in an Educational Setting Using Artificial Intelligence
Authors: Garry Gorman, Nigel McKelvey, James Connolly
Abstract:
This study will evaluate a growth mindset intervention with Junior Cycle Coding and Senior Cycle Computer Science students in Ireland, where gamification will be used to incentivise growth mindset behaviour. An artificial intelligence (AI) driven personalised learning system will be developed to present computer programming learning tasks in a manner that is best suited to the individuals’ own learning preferences while incentivising and rewarding growth mindset behaviour of persistence, mastery response to challenge, and challenge seeking. This research endeavours to measure mindset with before and after surveys (conducted nationally) and by recording growth mindset behaviour whilst playing a digital game. This study will harness the capabilities of AI and aims to determine how a personalised learning (PL) experience can impact the mindset of a broad range of students. The focus of this study will be to determine how personalising the learning experience influences female and disadvantaged students' sense of belonging in the computer science classroom when tasks are presented in a manner that is best suited to the individual. Whole Brain Learning will underpin this research and will be used as a framework to guide the research in identifying key areas such as thinking and learning styles, cognitive potential, motivators and fears, and emotional intelligence. This research will be conducted in multiple school types over one academic year. Digital games will be played multiple times over this period, and the data gathered will be used to inform the AI algorithm. The three data sets are described as follows: (i) before and after survey data to determine the grit scores and mindsets of the participants, (ii) the growth mindset data from the game, which will measure multiple growth mindset behaviours, such as persistence, response to challenge and use of strategy, and (iii) the AI data to guide PL. This study will highlight the effectiveness of an AI-driven personalised learning experience. The data will position AI within the Irish educational landscape, with a specific focus on the teaching of CS. These findings will benefit coding and computer science teachers by providing a clear pedagogy for the effective delivery of personalised learning strategies for computer science education. This pedagogy will help prevent students from developing a fixed mindset while helping pupils to exhibit persistence of effort, use of strategy, and a mastery response to challenges.
Keywords: computer science education, artificial intelligence, growth mindset, pedagogy
Procedia PDF Downloads 88
22356 Theoretical Study of the Structural and Elastic Properties of Semiconducting Rare Earth Chalcogenide Sm1-xEuxS under Pressure
Authors: R. Dubey, M. Sarwan, S. Singh
Abstract:
We have investigated the phase transition pressure and associated volume collapse in the Sm1-xEuxS alloy (0 ≤ x ≤ 1), which shows a transition from discontinuous to continuous as x is reduced. The calculated results from the present approach are in good agreement with the experimental data available for the end-point members (x = 0 and x = 1). The results for the alloy counterparts are also in fair agreement with experimental data generated from Vegard’s law. An improved interaction potential model has been developed which includes the Coulomb interaction, three-body interaction, the polarizability effect, and the overlap repulsive interaction operative up to second-neighbour ions. It is found that the inclusion of the polarizability effect has improved our results.
Keywords: elastic constants, high pressure, phase transition, rare earth compound
Procedia PDF Downloads 420
22355 Revolutionizing Healthcare Facility Maintenance: A Groundbreaking AI, BIM, and IoT Integration Framework
Authors: Mina Sadat Orooje, Mohammad Mehdi Latifi, Behnam Fereydooni Eftekhari
Abstract:
The integration of cutting-edge Internet of Things (IoT) technologies with advanced Artificial Intelligence (AI) systems is revolutionizing healthcare facility management. However, the current landscape of hospital building maintenance suffers from slow, repetitive, and disjointed processes, leading to significant financial, resource, and time losses. Additionally, the potential of Building Information Modeling (BIM) in facility maintenance is hindered by a lack of data within digital models of built environments, necessitating a more streamlined data collection process. This paper presents a robust framework that harmonizes AI with BIM-IoT technology to elevate healthcare Facility Maintenance Management (FMM) and address these pressing challenges. The methodology begins with a thorough literature review and requirements analysis, providing insights into existing technological landscapes and associated obstacles. Extensive data collection and analysis efforts follow to deepen understanding of hospital infrastructure and maintenance records. Critical AI algorithms are identified to address predictive maintenance, anomaly detection, and optimization needs, alongside integration strategies for BIM and IoT technologies that enable real-time data collection and analysis. The framework outlines protocols for data processing, analysis, and decision-making. A prototype implementation is executed to showcase the framework's functionality, followed by a rigorous validation process to evaluate its efficacy and gather user feedback. Refinement and optimization steps are then undertaken based on the evaluation outcomes. Emphasis is placed on the scalability of the framework in real-world scenarios and its potential applications across diverse healthcare facility contexts. Finally, the findings are meticulously documented and shared within the healthcare and facility management communities. This framework aims to significantly boost maintenance efficiency, cut costs, provide decision support, enable real-time monitoring, offer data-driven insights, and ultimately enhance patient safety and satisfaction. By tackling current challenges in healthcare facility maintenance management, it paves the way for the adoption of smarter and more efficient maintenance practices in healthcare facilities.
Keywords: artificial intelligence, building information modeling, healthcare facility maintenance, internet of things integration, maintenance efficiency
Procedia PDF Downloads 59
22354 Predicting Personality and Psychological Distress Using Natural Language Processing
Authors: Jihee Jang, Seowon Yoon, Gaeun Son, Minjung Kang, Joon Yeon Choeh, Kee-Hong Choi
Abstract:
Background: Self-report multiple-choice questionnaires have been widely utilized to quantitatively measure one’s personality and psychological constructs. Despite several strengths (e.g., brevity and utility), self-report multiple-choice questionnaires have considerable limitations in nature. With the rise of machine learning (ML) and natural language processing (NLP), researchers in the field of psychology are widely adopting NLP to assess psychological constructs and predict human behaviors. However, there is a lack of connection between the work being performed in computer science and that in psychology, due to small data sets and unvalidated modeling practices. Aims: The current article introduces the study method and procedure of phase II, which includes the interview questions for the five-factor model (FFM) of personality developed in phase I. This study aims to develop the interview (semi-structured) and open-ended questions for the FFM-based personality assessments, specifically designed with experts in the field of clinical and personality psychology (phase I), and to collect the personality-related text data using the interview questions and self-report measures on personality and psychological distress (phase II). The purpose of the study includes examining the relationship between natural language data obtained from the interview questions, measuring the FFM personality constructs, and psychological distress to demonstrate the validity of natural language-based personality prediction. Methods: The phase I (pilot) study was conducted on fifty-nine native Korean adults to acquire personality-related text data from the interview (semi-structured) and open-ended questions based on the FFM of personality. The interview questions were revised and finalized with feedback from the external expert committee, consisting of personality and clinical psychologists. Based on the established interview questions, a total of 425 Korean adults were recruited using a convenience sampling method via an online survey. The text data collected from the interviews were analyzed using natural language processing. The results of the online survey, including demographic data, depression, anxiety, and personality inventories, were analyzed together in the model to predict individuals’ FFM of personality and the level of psychological distress (phase II).
Keywords: personality prediction, psychological distress prediction, natural language processing, machine learning, the five-factor model of personality
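A minimal sketch of the language-based prediction step, assuming TF-IDF features and a ridge regression onto one self-reported trait score (the transcripts, scores, and model choice are placeholders, not the study's actual pipeline):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge

# Placeholder data: one interview transcript per participant and the
# corresponding self-reported FFM trait score (e.g., extraversion).
texts = ["I enjoy meeting new people...",
         "I prefer quiet evenings at home...",
         "Leading a team energises me..."]
extraversion = [4.1, 1.9, 3.8]

X = TfidfVectorizer().fit_transform(texts)       # bag-of-words language features
model = Ridge(alpha=1.0).fit(X, extraversion)

# Agreement between predicted and self-reported scores (on held-out data in
# practice) is one way to quantify the validity claim the study examines.
print(model.predict(X))
```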
Procedia PDF Downloads 79
22353 Suitable Site Selection of Small Dams Using Geo-Spatial Technique: A Case Study of Dadu Tehsil, Sindh
Authors: Zahid Khalil, Saad Ul Haque, Asif Khan
Abstract:
Decision-making about identifying suitable sites for any project by considering different parameters is difficult; GIS and Multi-Criteria Analysis (MCA) can make it easier. This technology has proved to be efficient and adequate in acquiring the desired information. In this study, GIS and MCA were employed to identify suitable sites for small dams in Dadu Tehsil, Sindh. The GIS software was used to create all the spatial parameters for the analysis. The parameters derived are slope, drainage density, rainfall, land use / land cover, soil groups, Curve Number (CN) and runoff index, with a spatial resolution of 30 m. The data used for deriving the above layers include a 30-meter resolution SRTM DEM, Landsat 8 imagery, rainfall from the National Centers for Environmental Prediction (NCEP), and soil data from the World Harmonized Soil Data (WHSD). The land use/land cover map is derived from Landsat 8 using supervised classification. Slope, the drainage network and watersheds are delineated by terrain processing of the DEM. The Soil Conservation Service (SCS) method is implemented to estimate the surface runoff from the rainfall. Prior to this, an SCS-CN grid is developed by integrating the soil and land use/land cover rasters. These layers, with some technical and ecological constraints, are assigned weights on the basis of suitability criteria. The pairwise comparison method, also known as the Analytical Hierarchy Process (AHP), is used as the MCA method for assigning weights to each decision element. All the parameters and groups of parameters are integrated using weighted overlay in the GIS environment to produce suitable sites for the dams. The resultant layer is then classified into four classes, namely best suitable, suitable, moderate and less suitable. This study contributes to decision-making about suitable-site analysis for small dams using geospatial data with a minimal amount of ground data. These suitability maps can be helpful for water resource management organizations in the determination of feasible rainwater harvesting (RWH) structures.
Keywords: remote sensing, GIS, AHP, RWH
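To make the SCS runoff step concrete, here is a small sketch (not from the paper) of the standard metric curve-number formula with initial abstraction Ia = 0.2S; the 60 mm storm depth and CN values are invented examples:

```python
import numpy as np

def scs_runoff_mm(p_mm: np.ndarray, cn: np.ndarray) -> np.ndarray:
    """SCS-CN direct runoff (mm), metric form, Ia = 0.2 * S."""
    s = 25400.0 / cn - 254.0                 # potential maximum retention (mm)
    ia = 0.2 * s                             # initial abstraction
    return np.where(p_mm > ia, (p_mm - ia) ** 2 / (p_mm + 0.8 * s), 0.0)

# Example: a 60 mm storm over two raster cells with CN 75 and CN 90.
print(scs_runoff_mm(np.array([60.0, 60.0]), np.array([75.0, 90.0])))
```

Applying this cell by cell to the integrated soil/land-cover CN grid yields the runoff index layer that enters the weighted overlay.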
Procedia PDF Downloads 389
22352 Travel Behavior Simulation of Bike-Sharing System Users in Kaohsiung City
Authors: Hong-Yi Lin, Feng-Tyan Lin
Abstract:
In a bike-sharing system (BSS), users can easily rent bikes from any station in the city for mid-range or short-range trips. A BSS can also be integrated with other types of transport systems, especially green transportation systems such as rail transit, buses, etc. Since a BSS records the time and place of each pickup and return, the operational data can reflect a more authentic and dynamic state of user behaviors. Furthermore, land uses around docking stations are highly associated with the origins and destinations of BSS users. As urban researchers, what concerns us more is to take the BSS into consideration during the urban planning process and enhance the quality of urban life. This research focuses on the simulation of the travel behavior of BSS users in Kaohsiung. First, rules of users’ behavior were derived by analyzing operational data and land use patterns near docking stations. Then, integrated with the Monte Carlo method, these rules were embedded into a travel behavior simulation model, implemented in NetLogo, an agent-based modeling tool. The simulation model allows us to foresee the rent-return behavior of the BSS in order to choose potential locations for the docking stations. Also, it can provide insights and recommendations about planning and policies for the future BSS.
Keywords: agent-based model, bike-sharing system, BSS operational data, simulation
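A toy version of the Monte Carlo rent-return step might look like the following; the station names, rental probabilities, and destination choices are invented stand-ins for the rules the study derives from operational data and runs in NetLogo:

```python
import random

stations = {"A": 10, "B": 10, "C": 10}                 # bikes currently docked
rent_prob = {"A": 0.6, "B": 0.3, "C": 0.4}             # hourly rental probability
destinations = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A"]}

def hourly_step():
    """One Monte Carlo step: each station may release a bike to a destination."""
    for s in list(stations):
        if stations[s] > 0 and random.random() < rent_prob[s]:
            stations[s] -= 1
            stations[random.choice(destinations[s])] += 1

for _ in range(24):   # simulate one day at hourly resolution
    hourly_step()
print(stations)       # end-of-day dock occupancy per station
```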
Procedia PDF Downloads 333
22351 Identification of Rainfall Trends in Qatar
Authors: Abdullah Al Mamoon, Ataur Rahman
Abstract:
Due to climate change, future rainfall will change at many locations on Earth; however, the spatial and temporal patterns of this change are not easy to predict. One approach to predicting such future changes is to examine the trends in the historical rainfall data of a given region and use the identified trends to make future predictions. For this, a statistical trend test is commonly applied to the historical data. This paper examines the trends of daily extreme rainfall events from 30 rain gauges located in the State of Qatar. Rainfall data covering 1962 to 2011 were used in the analysis. A combination of four non-parametric and parametric tests was applied to identify trends at the 10%, 5%, and 1% significance levels. These tests are the Mann-Kendall (MK), Spearman’s Rho (SR), Linear Regression (LR) and CUSUM tests. The tests showed both positive and negative trends throughout the country. Only eight stations showed a positive (upward) trend, which was, however, not statistically significant. In contrast, significant negative (downward) trends were found at the 5% and 10% levels of significance at six stations. The MK, SR and LR tests exhibited very similar results. This finding has important implications for the derivation/upgrade of design rainfall for Qatar, which will affect the design and operation of future urban drainage infrastructure in Qatar.
Keywords: trends, extreme rainfall, daily rainfall, Mann-Kendall test, climate change, Qatar
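For reference, the Mann-Kendall statistic used in such trend testing can be computed as in the sketch below (the rainfall values are illustrative and no tie correction is applied):

```python
import numpy as np
from scipy.stats import norm

def mann_kendall(x):
    """Mann-Kendall S statistic and two-sided p-value (no tie correction)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    s = sum(np.sign(x[j] - x[i]) for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    z = (s - np.sign(s)) / np.sqrt(var_s) if s != 0 else 0.0
    return s, 2 * (1 - norm.cdf(abs(z)))

annual_max = [52.0, 60.5, 41.2, 38.0, 55.1, 30.2, 28.4, 33.0]  # mm/day, made up
print(mann_kendall(annual_max))  # negative S suggests a downward trend
```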
Procedia PDF Downloads 562
22350 The Operating Results of the English General Music Course on the Education Platform
Authors: Shan-Ken Chine
Abstract:
This research reports on a one-year run of String Music Appreciation, an international online course launched on a British open education platform. It explains how to present music teaching videos with three main features: music lesson explanations, instrumental playing demonstrations, and live music performances. The course is planned around four major themes and a total of 97 steps. In addition, the paper uses the testing data provided by the education platform to analyze the performance of learners and to understand the operation of the course. The statistics dashboard provides three sets of test data: course-run measures, total statistics, and statistics by week. The paper ends with a review of the course's star rating in this one-year run. The findings from this run will inform adjustments when the course starts again in the future.
Keywords: music online courses, MOOCs, ubiquitous learning, string music, general music education
Procedia PDF Downloads 38
22349 Impact of Graduates’ Quality of Education and Research on ICT Adoption at Workplace
Authors: Mohammed Kafaji
Abstract:
This paper aims to investigate the influence of the quality of education and the quality of research, provided by local educational institutions, on the adoption of Information and Communication Technology (ICT) in managing business operations for companies in the Saudi market. A model was developed and tested using data collected from 138 CEOs of foreign companies in diverse business sectors. The data were analysed and managed using multivariate approaches through standard statistical packages. The results showed that educational quality contributes little to ICT adoption, while research quality seems to play a more prominent role. These results are analysed in terms of the business environment and market constraints and further extended to the perceived effectiveness of applied pedagogical approaches in schools and universities.
Keywords: quality of education, quality of research, mediation, domestic competition, ICT adoption
Procedia PDF Downloads 456
22348 Evaluation of the Role of Advocacy and the Quality of Care in Reducing Health Inequalities for People with Autism, Intellectual and Developmental Disabilities at Sheffield Teaching Hospitals
Authors: Jonathan Sahu, Jill Aylott
Abstract:
Individuals with Autism, Intellectual and Developmental Disabilities (AIDD) are one of the most vulnerable groups in society, hampered not only by their own limitations in understanding and interacting with the wider society, but also by societal limitations in perception and understanding. Communication to express their needs and wishes is fundamental to enabling such individuals to live and prosper in society. This research project was designed as an organisational case study, in a large secondary health care hospital within the National Health Service (NHS), to assess the quality of care provided to people with AIDD and to review the role of advocacy in reducing health inequalities for these individuals. Methods: The research methodology adopted was that of an “insider researcher”. Data collection included both quantitative and qualitative data, i.e., a mixed-methods approach. A semi-structured interview schedule was designed and used to obtain qualitative and quantitative primary data from a wide range of interdisciplinary frontline health care workers, to assess their understanding and awareness of the systems, processes and evidence-based practice needed to offer a quality service to people with AIDD. Secondary data were obtained from sources within the organisation, in keeping with “Case Study” as a primary method, and organisational performance data were then compared against national benchmarking standards. Further data sources were accessed to help evaluate the effectiveness of the different types of advocacy present in the organisation. This was gauged by measures of user and carer experience in the form of retrospective survey analysis, incidents and complaints. Results: Secondary data demonstrate near-compliance of the organisation with the current national benchmarking standard (Monitor Compliance Framework). However, primary data demonstrate poor knowledge of the Mental Capacity Act 2005 and poor knowledge of the organisational systems, processes and evidence-based practice applied for people with AIDD. In addition, there was poor knowledge and awareness among frontline health care workers of advocacy and advocacy schemes for this group. Conclusions: A significant amount of work needs to be undertaken to improve the quality of care delivered to individuals with AIDD. An operational strategy promoting the widespread dissemination of information may not be the best approach to delivering quality care, optimal patient experience and patient advocacy. In addition, a more robust set of standards, with appropriate metrics, needs to be developed to assess organisational performance, one that will stand the test of professional and public scrutiny.
Keywords: advocacy, autism, health inequalities, intellectual developmental disabilities, quality of care
Procedia PDF Downloads 217
22347 Exploring the Development of Communicative Skills in English Teaching Students: A Phenomenological Study During Online Instruction
Authors: Estephanie S. López Contreras, Vicente Aranda Palacios, Daniela Flores Silva, Felipe Oliveros Olivares, Romina Riquelme Escobedo, Iñaki Westerhout Usabiaga
Abstract:
This research explored whether the context of online instruction has influenced the development of first-year English-teaching students' communication skills, namely speaking and listening. The theoretical basis finds its niche in the need to bridge the gap in knowledge about the Chilean online educational context and the development of English communicative skills. An interpretative paradigm and a phenomenological design were implemented in this study. Twenty-two first-year students and two teachers from an English teaching training program participated in the study. The students' ages ranged from 18 to 26 years, and the teachers' years of experience in the program ranged from 5 to 13. For data collection purposes, semi-structured interviews were applied to both students and teachers. Interview questions were based on the initial conceptualization of the central phenomenon. Observations, field notes, and focus groups with the students are also part of the data collection process. Data analysis considered two-cycle methods. The first included descriptive coding for field notes, initial coding for interviews, and the creation of a codebook. The second cycle included axial coding for both field notes and interviews. After data analysis, the findings show that students perceived online classes as instances in which active communication cannot always occur. In addition, changes made to the curricula as a consequence of the COVID-19 pandemic have affected students' speaking and listening skills.
Keywords: attitudes, communicative skills, EFL teaching training program, online instruction, and perceptions
Procedia PDF Downloads 121
22346 Collaborative Online Learning for Lecturers
Authors: Lee Bih Ni, Emily Doreen Lee, Wee Hui Yean
Abstract:
This paper was prepared to examine lecturers' perceptions of online collaborative learning, in terms of how lecturers view online collaborative learning in higher learning institutions. Adult learning education enhances a collaborative learning culture, with the target of involving learners in the learning process to make teaching and learning more effective and open at the university. Ultimately, this leads students to assist each other in their learning. It also cuts down the pressure of loneliness and isolation that might be felt among adult learners. Lecturers' approaches to online collaboration were also determined. In this paper, the researchers collected data using questionnaire instruments. The collected data were analyzed and interpreted, and the researchers report the results according to the evidence obtained from the respondents. The results of the study show that shaping good collaborative learning practice depends not only on the lecturer but also on the students. The concepts and patterns for achieving these targets should be clear right from the beginning, as may be seen in the number of proposals submitted and in how the higher learning institution provides ongoing training for online lecturing. The advantages of online collaborative learning indicate that lecturers should be trained effectively. The study found that lecturers are aware of online collaborative learning. This positive attitude will encourage higher learning institutions to continue to provide the knowledge and skills required.
Keywords: collaborative online learning, lecturers’ training, learning, online
Procedia PDF Downloads 457
22345 Multi-Level Clustering Based Congestion Control Protocol for Cyber Physical Systems
Authors: Manpreet Kaur, Amita Rani, Sanjay Kumar
Abstract:
The Internet of Things (IoT), a cyber-physical paradigm, allows a large number of devices to connect and send sensory data over the network simultaneously. The tremendous amount of data generated leads to a very high network load, consequently resulting in network congestion. This further leads to frequent loss of useful information and depletion of a significant amount of the nodes’ energy. Therefore, there is a need to control congestion in the IoT so as to prolong network lifetime and improve the quality of service (QoS). Hence, we propose a two-level clustering-based routing algorithm considering congestion score and packet priority metrics that focuses on minimizing network congestion. In the proposed Priority-based Congestion Control (PBCC) protocol, the sensor nodes in the IoT network form clusters, which reduces the amount of traffic, and the nodes are prioritized to emphasize important data. Simultaneously, a congestion score determines the occurrence of congestion at a particular node. The proposed protocol outperforms the existing Packet Discard Network Clustering (PDNC) protocol in terms of buffer size, packet transmission range, network region and number of nodes, under various simulation scenarios.
Keywords: internet of things, cyber-physical systems, congestion control, priority, transmission rate
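The abstract does not give the exact form of the congestion score, so the snippet below is only a plausible sketch: a score combining buffer occupancy and traffic load, with priority-ordered dequeueing once a threshold is crossed:

```python
def congestion_score(queue_len, buffer_size, arrival_rate, service_rate):
    """Assumed score in [0, 1]: higher means more congested (illustrative form)."""
    occupancy = queue_len / buffer_size
    load = min(arrival_rate / max(service_rate, 1e-9), 1.0)
    return 0.5 * occupancy + 0.5 * load

# (priority, payload) pairs queued at a cluster head; 0 = most urgent.
packets = [(2, "temperature"), (0, "fire-alarm"), (1, "humidity")]
if congestion_score(queue_len=80, buffer_size=100,
                    arrival_rate=120.0, service_rate=100.0) > 0.7:
    packets.sort(key=lambda p: p[0])   # under congestion, serve urgent data first
print(packets)
```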
Procedia PDF Downloads 308
22344 Correlation Matrix for Automatic Identification of Meal-Taking Activity
Authors: Ghazi Bouaziz, Abderrahim Derouiche, Damien Brulin, Hélène Pigot, Eric Campo
Abstract:
Automatic classification of Activities of Daily Living (ADL) is a crucial part of ambient assisted living technologies. It makes it possible to monitor the daily life of the elderly and to detect any changes in their behavior that could be related to a health problem. But the detection of ADLs is a challenge, especially because each person has his/her own rhythm for performing them. Therefore, we used a correlation matrix to extract custom rules that enable the detection of ADLs, including the eating activity. Data collected from 3 different individuals over periods of between 35 and 105 days allow the extraction of personalized eating patterns. The comparison of the eating activity extracted from the correlation matrices with the declarative data collected during the survey shows an accuracy of 90%.
Keywords: elderly monitoring, ADL identification, matrix correlation, meal-taking activity
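A minimal sketch of the correlation-matrix idea, assuming a binary sensor log (the sensor names, values, and 0.5 threshold are invented for illustration):

```python
import pandas as pd

# Hypothetical log: one row per time slot, one 0/1 activation column per sensor.
log = pd.DataFrame({
    "kitchen_motion": [1, 1, 0, 1, 0, 0],
    "fridge_door":    [1, 0, 0, 1, 0, 0],
    "stove_power":    [1, 1, 0, 0, 0, 0],
    "bedroom_motion": [0, 0, 1, 0, 1, 1],
})

# Sensors that co-activate during meal preparation correlate strongly;
# thresholding the matrix yields person-specific rules for the eating activity.
corr = log.corr()
meal_related = corr["kitchen_motion"][corr["kitchen_motion"] > 0.5].index.tolist()
print(meal_related)  # includes kitchen_motion itself (correlation = 1.0)
```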
Procedia PDF Downloads 93
22343 Digital Twin for Retail Store Security
Authors: Rishi Agarwal
Abstract:
Digital twins are emerging as a strong technology used to imitate and monitor physical objects digitally in real time across sectors. A digital twin not only deals with the digital space but also actuates responses in the physical space, driven by digital-space processing such as storage, modeling, learning, simulation, and prediction. This paper explores the application of digital twins for enhancing physical security in retail stores. The retail sector still relies on outdated physical security practices like manual monitoring and metal detectors, which are insufficient for modern needs. There is a lack of real-time data and system integration, leading to ineffective emergency response and preventative measures. As retail automation increases, new digital frameworks must control safety without human intervention. To address this, the paper proposes implementing an intelligent digital twin framework. This framework collects diverse data streams from in-store sensors, surveillance, external sources, and customer devices; advanced analytics and simulations then enable real-time monitoring, incident prediction, automated emergency procedures, and stakeholder coordination. Overall, the digital twin improves physical security through automation, adaptability, and comprehensive data sharing. The paper also analyzes the pros and cons of implementing this technology through an Emerging Technology Analysis Canvas, which examines different aspects of the technology through both narrow and wide lenses to help decision makers decide whether to implement it. On a broader scale, this showcases the value of digital twins in transforming legacy systems across sectors and how data sharing can create a safer world for both retail store customers and owners.
Keywords: digital twin, retail store safety, digital twin in retail, digital twin for physical safety
Procedia PDF Downloads 72
22342 Determinant Factor Analysis of Foreign Direct Investment in ASEAN-6 Countries, Period 2004-2012
Authors: Eleonora Sofilda, Ria Amalia, Muhammad Zilal Hamzah
Abstract:
Foreign direct investment is one of the sources of financing or capital that is important for a country, especially for developing countries. This investment also provides a great contribution to development through the transfer of assets, improved management, and the transfer of technology, enhancing a country's economy. On the other side, an interesting phenomenon is currently emerging in ASEAN countries, where some big producers are relocating their basic production among those countries. This research aims to analyze the factors that affect capital inflows of foreign direct investment into six ASEAN countries (Indonesia, Malaysia, Singapore, Thailand, the Philippines, and Vietnam) in the period 2004-2012. The study uses panel data analysis to determine the factors that affect foreign direct investment (FDI) in the six ASEAN countries. The candidate factors are gross domestic product (GDP), global competitiveness (GCI), the interest rate, the exchange rate, and trade openness (TO). The results of the panel data analysis show that three of the independent variables (GCI, GDP, and TO) have a significant effect on FDI in the six ASEAN countries.
Keywords: foreign direct investment, gross domestic product, global competitiveness, interest rate, exchange rate, trade openness, panel data analysis
Procedia PDF Downloads 470
22341 Comparison of Propofol versus Ketamine-Propofol Combination as an Anesthetic Agent in Supratentorial Tumors: A Randomized Controlled Study
Authors: Jakkireddy Sravani
Abstract:
Introduction: The maintenance of hemodynamic stability is of pivotal importance in supratentorial surgeries. Anesthesia for supratentorial tumors requires an understanding of localized or generalized rises in intracranial pressure (ICP), the regulation and maintenance of intracerebral perfusion, and the avoidance of secondary systemic ischemic insults. We aimed to compare the effects of the combination of ketamine and propofol with propofol alone when used as an induction and maintenance anesthetic agent during surgery for supratentorial tumors. Methodology: This prospective, randomized, double-blinded controlled study was conducted at AIIMS Raipur after obtaining Institute Ethics Committee approval (1212/IEC-AIIMSRPR/2022, dated 15/10/2022), CTRI registration (CTRI/2023/01/049298), and written informed consent. Fifty-two supratentorial tumor patients posted for craniotomy and excision were included in the study. The patients were randomized into two groups: one group received a combination of ketamine and propofol, and the other group received propofol alone for the induction and maintenance of anesthesia. Intraoperative hemodynamic stability and the quality of brain relaxation were studied in both groups. Statistical analysis and technique: An MS Excel spreadsheet program was used to code and record the data. Data analysis was done using IBM Corp SPSS v23. The independent-sample "t" test was applied for continuously distributed data when two groups were compared, the chi-square test for categorical data, and the Wilcoxon test for non-normally distributed data. Results: The patients were comparable in terms of demographic profile, duration of surgery, and intraoperative input-output status. The trends in BIS over time were similar between the two groups (p-value = 1.00). Intraoperative hemodynamics (SBP, DBP, MAP) were better maintained in the ketamine and propofol combination group during induction and maintenance (p-value < 0.01). The quality of brain relaxation was comparable between the two groups (p-value = 0.364). Conclusion: The ketamine and propofol combination for the induction and maintenance of anesthesia was associated with superior hemodynamic stability, required fewer vasopressors during the excision of supratentorial tumors, provided adequate brain relaxation, and offered some degree of neuroprotection compared to propofol alone.
Keywords: supratentorial tumors, hemodynamic stability, brain relaxation, ketamine, propofol
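For readers unfamiliar with the listed tests, the sketch below shows how each would be run in Python on made-up group data (the study itself used SPSS v23, and all numbers here are illustrative):

```python
import numpy as np
from scipy import stats

# Illustrative intraoperative MAP values (mmHg), one array per study arm.
ket_prop = np.array([92, 88, 95, 90, 89, 93])
propofol = np.array([76, 72, 81, 75, 70, 74])

print(stats.ttest_ind(ket_prop, propofol))          # independent-sample t test
print(stats.chi2_contingency([[20, 6], [12, 14]]))  # categorical data, e.g. vasopressor use
print(stats.ranksums(ket_prop, propofol))           # Wilcoxon rank-sum, non-normal data
```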
Procedia PDF Downloads 25
22340 Exploring the Influence of Wind on Wildfire Behavior in China: A Data-Driven Study Using Machine Learning and Remote Sensing
Authors: Rida Kanwal, Wang Yuhui, Song Weiguo
Abstract:
Wildfires are one of the most prominent threats to ecosystems, human health, and economic activities, with wind acting as a critical driving factor. This study combines machine learning (ML) and remote sensing (RS) to assess the effects of wind on the wildfires in Chongqing from August 16-23, 2022. Landsat 8 satellite images were used to estimate the differenced normalized burn ratio (dNBR), representing prefire and postfire vegetation conditions. Wind data was analyzed through geographic information system (GIS) mapping. Correlation analysis between wind speed and fire radiative power (FRP) revealed a significant relationship. An autoregressive integrated moving average (ARIMA) model was developed for wind forecasting, and linear regression was applied to determine the effect of wind speed on FRP. The results identified high wind speed as a key factor contributing to the surge in FRP. Wind-rose plots showed winds blowing to the northwest (NW), aligning with the wildfire spread. This model was further validated with data from other provinces across China. This study integrated ML, RS, and GIS to analyze wildfire behavior, providing effective strategies for prediction and management.
Keywords: wildfires, machine learning, remote sensing, wind speed, GIS, wildfire behavior
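A compact sketch of the two modelling steps (ARIMA wind forecast, then FRP-on-wind regression); the file name, column names, and ARIMA order are assumptions rather than the paper's actual settings:

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.tsa.arima.model import ARIMA

# Hypothetical hourly series for the August 2022 fires (names illustrative).
df = pd.read_csv("chongqing_fire.csv", parse_dates=["time"], index_col="time")
# expected columns: wind_speed (m/s), frp (MW)

# Wind forecasting with ARIMA; order (2, 1, 2) is an assumed choice.
wind_fit = ARIMA(df["wind_speed"], order=(2, 1, 2)).fit()
print(wind_fit.forecast(steps=24))          # next-day hourly wind forecast

# Linear regression of fire radiative power on wind speed.
ols = smf.ols("frp ~ wind_speed", data=df).fit()
print(ols.params, ols.rsquared)             # slope quantifies the wind effect
```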
Procedia PDF Downloads 20
22339 Assessment of Dimensions and Gully Recovery with GPS Receiver and RPA (Drone)
Authors: Mariana Roberta Ribeiro, Isabela de Cássia Caramello, Roberto Saverio Souza Costa
Abstract:
Currently, one of the most important environmental problems is soil degradation. This wear is the result of inadequate agricultural practices, with water erosion as the main agent. As runoff water concentrates at certain points, erosion can reach a more advanced stage: gullies. In view of this, the objective of this work was to evaluate which methodology is most suitable for the purpose of elaborating a project for the recovery of a gully, relating working time, data reliability, and final cost. The work was carried out on a rural road in Monte Alto - SP, where 0.30 hectares are under the influence of a gully. For the evaluation, an aerophotogrammetric survey with an RPA was used, both with georeferenced points and with a GNSS L1/L2 receiver. To assess the importance of the georeferenced points, altimetric data obtained using the support points were compared with altimetric data obtained using only the aircraft's internal GPS. Another method used was a survey by conventional topography, where coordinates were collected with a total station and an L1/L2 geodetic GPS receiver. Statistical analysis was performed using analysis of variance (ANOVA) with the F test (p < 0.05), and the means of the treatments were compared using the Tukey test (p < 0.05). The results showed that the surveys carried out by aerial photogrammetry and by conventional topography presented no significant difference for the analyzed parameters. Considering the data presented, it is possible to conclude that, when comparing the parameters of accuracy, the final volume of the gully, and cost, for the purpose of elaborating a project for the recovery of a gully, the aerial photogrammetric survey and conventional topography methodologies do not differ significantly. However, when working time, use of labor, and project detail are compared, the aerial photogrammetric survey proves to be more viable.
Keywords: drones, erosion, soil conservation, technology in agriculture
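As an illustration of the reported statistics, the sketch below runs a one-way ANOVA F test and Tukey pairwise comparisons on made-up gully-volume estimates (three per method; all values invented):

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

vol = np.array([410.2, 408.9, 411.5,    # RPA with georeferenced control points
                415.8, 418.1, 414.9,    # RPA, onboard GPS only
                409.7, 410.8, 408.5])   # total station + GNSS L1/L2
grp = np.repeat(["rpa_gcp", "rpa_gps", "topo"], 3)

print(stats.f_oneway(vol[:3], vol[3:6], vol[6:]))  # ANOVA F test (p < 0.05)
print(pairwise_tukeyhsd(vol, grp, alpha=0.05))     # Tukey comparison of means
```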
Procedia PDF Downloads 115
22338 Novel GPU Approach in Predicting the Directional Trend of the S&P500
Authors: A. J. Regan, F. J. Lidgey, M. Betteridge, P. Georgiou, C. Toumazou, K. Hayatleh, J. R. Dibble
Abstract:
Our goal is the development of an algorithm capable of predicting the directional trend of the Standard and Poor’s 500 index (S&P 500). Extensive research has been published attempting to predict different financial markets using historical data, testing on an in-sample and trend basis, with many authors employing excessively complex mathematical techniques. In reviewing and evaluating these in-sample methodologies, it became evident that this approach was unable to achieve sufficiently reliable prediction performance for commercial exploitation. For these reasons, we moved to an out-of-sample strategy based on linear regression analysis of an extensive set of financial data correlated with historical closing prices of the S&P 500. We are pleased to report a directional trend accuracy of greater than 55% for tomorrow (t+1) in predicting the S&P 500.
Keywords: financial algorithm, GPU, S&P 500, stock market prediction
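A bare-bones sketch of the out-of-sample idea, mapping today's correlated series to tomorrow's close (the random placeholder data would be replaced by the real financial series, so the roughly 50% accuracy it prints is expected):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))   # placeholder: correlated financial series at day t
y = rng.normal(size=500)        # placeholder: S&P 500 close at day t

model = LinearRegression().fit(X[:-1], y[1:])   # features at t -> close at t+1
pred_up = model.predict(X[:-1]) > y[:-1]        # predicted direction for t+1
actual_up = y[1:] > y[:-1]
print("directional accuracy:", (pred_up == actual_up).mean())
```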
Procedia PDF Downloads 350
22337 Impact of Grade Sensitivity on Learning Motivation and Academic Performance
Authors: Salwa Aftab, Sehrish Riaz
Abstract:
The objective of this study was to check the impact of grade sensitivity on the learning motivation and academic performance of students, to examine the degree of difference that exists among students regarding the causes of their learning motivation, and to gain knowledge about this matter, since it has not been adequately researched. Data collection was primarily done through the academic sector of Pakistan and depended solely upon the responses given by students. A sample of 208 university students was selected. Both paper and online surveys were used to collect data from respondents. The results of the study revealed that grade sensitivity has a positive relationship with the learning motivation of students and their academic performance. These findings were obtained through systematic correlation and regression analysis.
Keywords: academic performance, correlation, grade sensitivity, learning motivation, regression
Procedia PDF Downloads 400
22336 Optimization of Electric Vehicle (EV) Charging Station Allocation Based on Multiple Data - Taking Nanjing (China) as an Example
Authors: Yue Huang, Yiheng Feng
Abstract:
Due to global pressure on climate and energy, many countries are vigorously promoting electric vehicles and building public charging facilities. Faced with the supply-demand gap of existing electric vehicle charging stations and unreasonable space usage in China, this paper takes the central city of Nanjing as an example, establishes a site selection model through multivariate data integration, conducts multiple linear regression analysis in SPSS, gives quantitative site selection results, and provides optimization models and suggestions for charging station layout planning.
Keywords: electric vehicle, charging station, allocation optimization, urban mobility, urban infrastructure, Nanjing
Procedia PDF Downloads 92
22335 A Nonlinear Feature Selection Method for Hyperspectral Image Classification
Authors: Pei-Jyun Hsieh, Cheng-Hsuan Li, Bor-Chen Kuo
Abstract:
For hyperspectral image classification, feature reduction is an important pre-processing step for avoiding the Hughes phenomenon, given the difficulty of collecting training samples. Hence, many studies have developed feature selection methods, such as the F-score, HSIC (Hilbert-Schmidt Independence Criterion), etc., to improve hyperspectral image classification. However, most of them only consider the class separability in the original space, i.e., a linear class separability. In this study, we proposed a nonlinear class separability measure based on the kernel trick for selecting an appropriate feature subset. The proposed nonlinear class separability is formed by a generalized RBF kernel with different bandwidths with respect to different features. Moreover, it considers both the within-class separability and the between-class separability. A genetic algorithm was applied to tune these bandwidths such that the within-class separability is smallest and the between-class separability is largest simultaneously. This indicates that the corresponding feature space is more suitable for classification; in addition, the corresponding nonlinear classification boundary can separate classes very well. These optimal bandwidths also show the importance of bands for hyperspectral image classification. The reciprocals of these bandwidths can be viewed as weights of bands: the smaller the bandwidth, the larger the weight of the band and the more important it is for classification. Hence, the descending order of the reciprocals of the bandwidths gives an order for selecting appropriate feature subsets. In the experiments, three hyperspectral image data sets, the Indian Pine Site data set, the PAVIA data set, and the Salinas A data set, were used to demonstrate that the feature subsets selected by the proposed nonlinear feature selection method are more appropriate for hyperspectral image classification. Only ten percent of the samples were randomly selected to form the training dataset. All non-background samples were used to form the testing dataset. A support vector machine was applied to classify these testing samples based on the selected feature subsets. In the experiments on the Indian Pine Site data set with 220 bands, the highest accuracies obtained by applying the proposed method, F-score, and HSIC are 0.8795, 0.8795, and 0.87404, respectively. However, the proposed method selects 158 features, while F-score and HSIC select 168 features and 217 features, respectively. Moreover, the classification accuracies increase dramatically using only the first few features. The classification accuracies with respect to feature subsets of 10 features, 20 features, 50 features, and 110 features are 0.69587, 0.7348, 0.79217, and 0.84164, respectively. Furthermore, using only half of the selected features (110 features) of the proposed method, the corresponding classification accuracy (0.84164) is close to the highest classification accuracy, 0.8795. For the other two hyperspectral image data sets, the PAVIA data set and the Salinas A data set, we obtain similar results. These results illustrate that our proposed method can efficiently find feature subsets that improve hyperspectral image classification. One can apply the proposed method to determine a suitable feature subset first, according to specific purposes. Then researchers need only use the corresponding sensors to obtain the hyperspectral image and classify the samples. This can not only improve the classification performance but also reduce the cost of obtaining hyperspectral images.
Keywords: hyperspectral image classification, nonlinear feature selection, kernel trick, support vector machine
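A small sketch of the separability measure the method optimizes, with one bandwidth per band (the synthetic data, bandwidth values, and the simple mean-difference criterion are illustrative assumptions; the paper tunes the bandwidths with a genetic algorithm):

```python
import numpy as np

def grbf_kernel(x, y, bandwidths):
    """Generalized RBF: one bandwidth per feature (anisotropic)."""
    return np.exp(-(((x - y) ** 2) / bandwidths ** 2).sum())

def separability(X, labels, bandwidths):
    """Mean within-class similarity minus mean between-class similarity."""
    within, between = [], []
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            k = grbf_kernel(X[i], X[j], bandwidths)
            (within if labels[i] == labels[j] else between).append(k)
    return np.mean(within) - np.mean(between)

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))                     # 20 pixels, 3 bands (toy data)
labels = np.array([0] * 10 + [1] * 10)
bw = np.array([0.5, 5.0, 1.0])                   # candidate per-band bandwidths
print(separability(X, labels, bw))
print(np.argsort(1.0 / bw)[::-1])                # band ranking: smallest bandwidth first
```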
Procedia PDF Downloads 265
22334 Exploring the Spatial Characteristics of Mortality Map: A Statistical Area Perspective
Authors: Jung-Hong Hong, Jing-Cen Yang, Cai-Yu Ou
Abstract:
The analysis of geographic inequality heavily relies on the use of location-enabled statistical data and quantitative measures to present the spatial patterns of selected phenomena and analyze their differences. To protect the privacy of individual instances and link them to administrative units, point-based datasets are spatially aggregated into area-based statistical datasets, where only the overall status for the selected levels of spatial units is used for decision making. The partition of the spatial units thus has a dominant influence on the outcomes of the analysis, well known as the Modifiable Areal Unit Problem (MAUP). A new spatial reference framework, the Taiwan Geographical Statistical Classification (TGSC), was recently introduced in Taiwan, based on spatial partition principles of homogeneity in the number of persons and households. Compared to traditional township units, the TGSC provides additional levels of spatial units with finer granularity for presenting spatial phenomena and enables domain experts to select the appropriate dissemination level for publishing statistical data. This paper compares the results of respectively using the TGSC and township units on mortality data and examines the spatial characteristics of the outcomes. For the Taitung County mortality data between January 1st, 2008 and December 31st, 2010, the all-cause age-standardized death rate (ASDR) at the township level ranges from 571 to 1,757 per 100,000 persons, whereas the 2nd dissemination area (TGSC) shows greater variation, ranging from 0 to 2,222 per 100,000. The finer granularity of the TGSC spatial units clearly provides better outcomes for identifying and evaluating geographic inequality and can be further analyzed with statistical measures from other perspectives (e.g., population, area, environment). The management and analysis of the statistical data referring to the TGSC in this research is strongly supported by the use of Geographic Information System (GIS) technology. An integrated workflow was developed that consists of the processing of death certificates, the geocoding of street addresses, quality assurance of the geocoded results, automatic calculation of statistical measures, standardized encoding of the measures, and geo-visualization of the statistical outcomes. This paper also introduces a set of auxiliary measures from a geographic distribution perspective to further examine the hidden spatial characteristics of the mortality data and justify the analyzed results. With a common statistical area framework like the TGSC, the preliminary results demonstrate promising potential for developing a web-based statistical service that can effectively access domain statistical data and present the analyzed outcomes in meaningful ways to avoid wrong decision making.
Keywords: mortality map, spatial patterns, statistical area, variation
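For reference, a directly age-standardized death rate of the kind mapped here is computed as below; the three age bands, counts, and reference weights are all invented:

```python
# Direct age standardization (per 100,000), illustrative three age bands.
deaths = [4, 25, 180]          # deaths per age band in the statistical area
pop    = [12000, 9000, 4000]   # resident population per age band
std_w  = [0.40, 0.35, 0.25]    # standard (reference) population weights

asdr = sum(d / p * w for d, p, w in zip(deaths, pop, std_w)) * 100000
print(round(asdr, 1))          # deaths per 100,000 standard population
```

Computing this per 2nd dissemination area instead of per township is what exposes the wider 0-2,222 range reported above.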
Procedia PDF Downloads 258
22333 Multi Object Tracking for Predictive Collision Avoidance
Authors: Bruk Gebregziabher
Abstract:
The safe and efficient operation of Autonomous Mobile Robots (AMRs) in complex environments, such as manufacturing, logistics, and agriculture, necessitates accurate multi-object tracking and predictive collision avoidance. This paper presents algorithms and techniques for addressing these challenges using lidar sensor data, with emphasis on the ensemble Kalman filter. The developed predictive collision avoidance algorithm employs the data provided by lidar sensors to track multiple objects and predict their velocities and future positions, enabling the AMR to navigate safely and effectively. A modification to the dynamic windowing approach is introduced to enhance the performance of the collision avoidance system. The overall system architecture encompasses object detection, multi-object tracking, and predictive collision avoidance control. The experimental results, obtained from both simulation and real-world data, demonstrate the effectiveness of the proposed methods in various scenarios, which lays the foundation for future research on global planners, other controllers, and the integration of additional sensors. This work contributes to the ongoing development of safe and efficient autonomous systems in complex and dynamic environments.
Keywords: autonomous mobile robots, multi-object tracking, predictive collision avoidance, ensemble Kalman filter, lidar sensors
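As a sketch of the tracking core, the snippet below performs one ensemble Kalman filter analysis step for a single tracked object with state [x, y, vx, vy] and position-only lidar observations; the ensemble size, noise levels, and measurement model are placeholder choices, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(1)

def enkf_update(ensemble, H, z, R):
    """One EnKF analysis step; ensemble has shape (N, dim_x), z is the observation."""
    N = ensemble.shape[0]
    A = ensemble - ensemble.mean(axis=0)                 # anomalies
    P = A.T @ A / (N - 1)                                # sample covariance
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)         # Kalman gain
    zs = z + rng.multivariate_normal(np.zeros(len(z)), R, size=N)  # perturbed obs
    return ensemble + (zs - ensemble @ H.T) @ K.T

ens = rng.normal([0.0, 0.0, 1.0, 0.0], 0.5, size=(50, 4))  # state [x, y, vx, vy]
H = np.array([[1.0, 0, 0, 0], [0, 1.0, 0, 0]])             # lidar sees position only
R = np.eye(2) * 0.1                                        # observation noise
ens = enkf_update(ens, H, np.array([0.2, -0.1]), R)
print(ens.mean(axis=0))   # analysis mean; velocity terms feed position prediction
```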
Procedia PDF Downloads 84
22332 Revolutionizing Accounting: Unleashing the Power of Artificial Intelligence
Authors: Sogand Barghi
Abstract:
The integration of artificial intelligence (AI) in accounting practices is reshaping the landscape of financial management. This paper explores the innovative applications of AI in the realm of accounting, emphasizing its transformative impact on efficiency, accuracy, decision-making, and financial insights. By harnessing AI's capabilities in data analysis, pattern recognition, and automation, accounting professionals can redefine their roles, elevate strategic decision-making, and unlock unparalleled value for businesses. This paper delves into AI-driven solutions such as automated data entry, fraud detection, predictive analytics, and intelligent financial reporting, highlighting their potential to revolutionize the accounting profession. Artificial intelligence has swiftly emerged as a game-changer across industries, and accounting is no exception. This paper seeks to illuminate the profound ways in which AI is reshaping accounting practices, transcending conventional boundaries, and propelling the profession toward a new era of efficiency and insight-driven decision-making. One of the most impactful applications of AI in accounting is automation. Tasks that were once labor-intensive and time-consuming, such as data entry and reconciliation, can now be streamlined through AI-driven algorithms. This not only reduces the risk of errors but also allows accountants to allocate their valuable time to more strategic and analytical tasks. AI's ability to analyze vast amounts of data in real time enables it to detect irregularities and anomalies that might go unnoticed by traditional methods. Fraud detection algorithms can continuously monitor financial transactions, flagging any suspicious patterns and thereby bolstering financial security. AI-driven predictive analytics can forecast future financial trends based on historical data and market variables. This empowers organizations to make informed decisions, optimize resource allocation, and develop proactive strategies that enhance profitability and sustainability. Traditional financial reporting often involves extensive manual effort and data manipulation. With AI, reporting becomes more intelligent and intuitive. Automated report generation not only saves time but also ensures accuracy and consistency in financial statements. While the potential benefits of AI in accounting are undeniable, there are challenges to address. Data privacy and security concerns, the need for continuous learning to keep up with evolving AI technologies, and potential biases within algorithms demand careful attention. The convergence of AI and accounting marks a pivotal juncture in the evolution of financial management. By harnessing the capabilities of AI, accounting professionals can transcend routine tasks, becoming strategic advisors and data-driven decision-makers. The applications discussed in this paper underline the transformative power of AI, setting the stage for an accounting landscape that is smarter, more efficient, and more insightful than ever before. The future of accounting is here, and it's driven by artificial intelligence.
Keywords: artificial intelligence, accounting, automation, predictive analytics, financial reporting
Procedia PDF Downloads 71