Search results for: moral intelligence
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2009

689 Islamic Finance and Trade Promotion in the African Continental Free Trade Area: An Exploratory Study

Authors: Shehu Usman Rano Aliyu

Abstract:

Despite the significance of finance as a major trade lubricant, evidence in the literature alludes to its scarcity and increasing cost, especially in developing countries, where small and medium-scale enterprises are worst affected. The creation of the African Continental Free Trade Area (AfCFTA) in 2018, an organ of the African Union (AU), was meant to serve as a beacon for deepening economic integration through the removal of trade barriers inhibiting intra-African trade and the movement of persons, among others. Hence, this research explores the role Islamic trade finance (ITF) could play in spurring intra- and inter-African trade. The study involves six countries: Egypt, Kenya, Malaysia, Morocco, Nigeria, and Saudi Arabia, and employs survey research, a total of 430 sample observations, and SmartPLS Structural Equation Modelling (SEM) techniques in its analyses. We find strong evidence that the Shari'ah, legal, and regulatory compliance practices of ITF institutions align with internal, national, and international compliance requirements, as do the unique instruments applied in ITF. In addition, ITF was found to be largely driven by global economic and political stability, socially responsible finance, ethical and moral considerations, risk-sharing, and the resilience of the global Islamic finance industry. Further, SMEs, governments, and importers are the major beneficiary sectors. By and large, AfCFTA's protocols align with the principles of ITF and are therefore suited to the proliferation of Islamic finance on the continent. While AML/KYC and Basel requirements, compliance with AAOIFI and IFSB standards, the paucity of Shari'ah experts, threats to global security, and increasing global economic uncertainty pose major impediments, the future of ITF will be shaped by a greater need for institutional and policy support, global economic and political stability, a robust regulatory framework, and digital technology/fintech. The study calls for the licensing of more ITF institutions on the continent, the participation of multilateral institutions in ITF, and the harmonization of Shari'ah standards.

Keywords: AfCFTA, islamic trade finance, murabaha, letter of credit, forwarding

Procedia PDF Downloads 56
688 Digital Twin for a Floating Solar Energy System with Experimental Data Mining and AI Modelling

Authors: Danlei Yang, Luofeng Huang

Abstract:

The integration of digital twin technology with renewable energy systems offers an innovative approach to predicting and optimising performance throughout the entire lifecycle. A digital twin is a continuously updated virtual replica of a real-world entity, synchronised with data from its physical counterpart and environment. Many digital twin companies today claim to have mature digital twin products, but their focus is primarily on equipment visualisation; the core of a digital twin, however, should be its model, which can mirror, shadow, and thread with the real-world entity, and this remains underdeveloped. For a floating solar energy system, a digital twin can be defined in three aspects: (a) the physical floating solar energy system along with environmental factors such as solar irradiance and wave dynamics, (b) a digital model powered by artificial intelligence (AI) algorithms, and (c) the integration of real system data with the AI-driven model and a user interface. The experimental setup for the floating solar energy system is designed to replicate real-ocean conditions of floating solar installations within a controlled laboratory environment. The system consists of a water tank that simulates an aquatic surface, where a floating catamaran structure supports a solar panel. The solar simulator is set up in three positions: one directly above and two inclined at a 45° angle in front of and behind the solar panel. This arrangement allows the simulation of different sun angles, such as sunrise, midday, and sunset. The solar simulator is positioned 400 mm away from the solar panel to maintain consistent solar irradiance on its surface. Stability for the floating structure is achieved through ropes attached to anchors at the bottom of the tank, which simulate the mooring systems used in real-world floating solar applications. The floating solar energy system's sensor setup includes various devices to monitor environmental and operational parameters. An irradiance sensor measures solar irradiance on the photovoltaic (PV) panel. Temperature sensors monitor ambient air and water temperatures, as well as the PV panel temperature. Wave gauges measure wave height, while load cells capture mooring force. Inclinometers and ultrasonic sensors record the heave and pitch amplitudes of the floating system's motions. An electric load measures the voltage and current output from the solar panel. All sensors collect data simultaneously. Artificial neural network (ANN) algorithms are central to the digital model, which processes historical and real-time data, identifies patterns, and predicts the system's performance in real time. The data collected from the various sensors are partly used to train the digital model, with the remaining data reserved for validation and testing. The digital twin combines the experimental setup with the ANN model, enabling monitoring, analysis, and prediction of the floating solar energy system's operation. The digital model mirrors the functionality of the physical setup, running in sync with the experiment to provide real-time insights and predictions. It offers useful industrial benefits, such as informing maintenance plans as well as design and control strategies for optimal energy efficiency. In the long term, this digital twin will help improve overall solar energy yield whilst minimising operational costs and risks.
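
The ANN-based digital model described above can be illustrated with a minimal sketch. Everything here is a toy stand-in: the real digital twin trains a multi-layer ANN on many synchronised sensor channels, whereas this example fits a single linear neuron to an assumed irradiance-to-power relation.

```python
# Toy stand-in for the AI-driven digital model: a single linear neuron
# trained by gradient descent to map solar irradiance (W/m^2) to panel
# power (W). The 0.15 W per W/m^2 relation is purely illustrative.

def train(samples, lr=1e-7, epochs=2000):
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in samples:
            err = (w * x + b) - y      # prediction error on one sample
            w -= lr * err * x          # gradient step for the weight
            b -= lr * err              # gradient step for the bias
    return w, b

# Synthetic "sensor log": irradiance -> measured power pairs.
data = [(g, 0.15 * g) for g in (200, 400, 600, 800, 1000)]
w, b = train(data)
predicted_power = w * 900 + b          # predict power at an unseen irradiance
```

In the real system, the same train-then-predict loop runs continuously against live sensor data, which is what keeps the digital model in sync with the physical setup.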

Keywords: digital twin, floating solar energy system, experiment setup, artificial intelligence

Procedia PDF Downloads 9
687 Image Captioning with Vision-Language Models

Authors: Promise Ekpo Osaine, Daniel Melesse

Abstract:

Image captioning is an active area of research in the multi-modal artificial intelligence (AI) community, as it connects vision and language understanding: a model must understand the content shown in an image and generate semantically and grammatically correct descriptions. In this project, we followed a standard deep learning approach to image captioning, using an inject-style encoder-decoder architecture in which the encoder extracts image features and the decoder generates a sequence of words that describes the image content. We investigated five image encoders: ResNet101, InceptionResNetV2, EfficientNetB7, EfficientNetV2M, and CLIP. For caption generation, we explored long short-term memory (LSTM) networks. The CLIP-LSTM model demonstrated superior performance compared to the CNN-based encoder-decoder models, achieving a BLEU-1 score of 0.904 and a BLEU-4 score of 0.640. Additionally, among the CNN-LSTM models, EfficientNetV2M-LSTM exhibited the highest performance, with a BLEU-1 score of 0.896 and a BLEU-4 score of 0.586 while using a single-layer LSTM.
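
The BLEU-1 and BLEU-4 figures reported above are based on modified n-gram precision. A minimal single-reference sketch, without the brevity penalty and corpus-level averaging used in full BLEU, can be written as:

```python
from collections import Counter

# Modified n-gram precision, the core of the BLEU metric: candidate n-gram
# counts are clipped by their counts in the reference before averaging.

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def modified_precision(candidate, reference, n):
    cand = Counter(ngrams(candidate, n))
    ref = Counter(ngrams(reference, n))
    clipped = sum(min(c, ref[g]) for g, c in cand.items())
    total = sum(cand.values())
    return clipped / total if total else 0.0

cand = "a dog runs on the beach".split()
ref = "a dog is running on the beach".split()
bleu1 = modified_precision(cand, ref, 1)   # 5 of 6 unigrams match
bleu2 = modified_precision(cand, ref, 2)   # 3 of 5 bigrams match
```

Full BLEU-4 takes the geometric mean of the precisions for n = 1..4 and multiplies by a brevity penalty; libraries such as NLTK implement the complete corpus-level form.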

Keywords: multi-modal AI systems, image captioning, encoder, decoder, BLEU score

Procedia PDF Downloads 77
686 Prediction of Fluid Properties of an Iranian Oil Field Using a Radial Basis Function Neural Network

Authors: Abdolreza Memari

Abstract:

In this article, a numerical method is used to estimate the viscosity of crude oil. The crude oil's viscosity is measured for three states: saturated oil viscosity, viscosity above the bubble point, and viscosity below the saturation pressure. The viscosity is then estimated using the KHAN model and the roller ball method. These data, which include the conditions under which the viscosity was measured, are used to train a radial basis function (RBF) neural network: a two-layer artificial neural network whose hidden-layer activation is a Gaussian function, trained with standard learning algorithms. After training the network, the results of the experimental method and of the artificial intelligence model are compared. Once trained, the network can estimate crude oil viscosity with acceptable accuracy without using the KHAN model or the experimental conditions, and under any other condition. Results show that the RBF network is highly capable of estimating crude oil viscosity; saving time and cost is another advantage of this investigation.
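
The two-layer network with a Gaussian hidden layer described in the abstract can be sketched as follows. The centers, widths, and synthetic target are illustrative assumptions; the paper trains on measured viscosity data, not on this toy curve.

```python
import math

# Sketch of a radial basis function (RBF) network: a hidden layer of
# Gaussian units and a linear output layer trained by stochastic gradient
# descent. The training targets are generated from known weights so that
# recovery of those weights can be checked.

def rbf_features(x, centers, width=0.6):
    return [math.exp(-((x - c) ** 2) / (2 * width ** 2)) for c in centers]

def predict(w, x, centers):
    return sum(wi * p for wi, p in zip(w, rbf_features(x, centers)))

def train(xs, ys, centers, lr=0.2, epochs=5000):
    w = [0.0] * len(centers)
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            phi = rbf_features(x, centers)
            err = sum(wi * p for wi, p in zip(w, phi)) - y
            w = [wi - lr * err * p for wi, p in zip(w, phi)]
    return w

centers = [0.0, 1.0, 2.0]
true_w = [1.0, -0.5, 0.3]                       # weights to be recovered
xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [predict(true_w, x, centers) for x in xs]  # synthetic training targets
w = train(xs, ys, centers)
```

In the paper's setting, `xs` would be the measured reservoir conditions and `ys` the measured viscosities; only the Gaussian-hidden-layer structure is the same.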

Keywords: viscosity, Iranian crude oil, radial basis function neural network, roller ball method, KHAN model

Procedia PDF Downloads 501
685 Reinforcement Learning for Self-Driving Racing Car Games

Authors: Adam Beaunoyer, Cory Beaunoyer, Mohammed Elmorsy, Hanan Saleh

Abstract:

This research aims to create a reinforcement learning agent capable of racing in challenging simulated environments with a low collision count. We present a reinforcement learning agent that can navigate challenging tracks using both a Deep Q-Network (DQN) and a Soft Actor-Critic (SAC) method. A challenging track includes curves, jumps, and varying road widths throughout. Based on open-source code from GitHub, the environment used in this research recreates the 1995 racing game WipeOut. The proposed reinforcement learning agent can navigate challenging tracks rapidly while maintaining low race completion times and collision counts. The results show that the SAC model outperforms the DQN model by a large margin. We also propose an alternative multiple-car model that can navigate the track without colliding with other vehicles on the track. The SAC model is the basis for the multiple-car model, which completes laps quicker than the single-car model but has a higher collision rate with the track wall.
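
As a hedged illustration of the value-learning idea behind the DQN agent, here is tabular Q-learning on a toy one-dimensional track; the actual agents use neural function approximation and a 3-D racing environment rather than this grid.

```python
import random

# Tabular Q-learning on a toy 1-D "track": states 0..5, actions +1/-1,
# +1 reward for reaching the finish (state 5), -1 for hitting the wall
# (state 0). The learned greedy policy should drive toward the finish.

ACTIONS = (1, -1)

def q_learning(episodes=500, alpha=0.5, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = {(s, a): 0.0 for s in range(6) for a in ACTIONS}
    for _ in range(episodes):
        s = 2                                    # fixed start position
        while 0 < s < 5:
            if rng.random() < eps:               # epsilon-greedy exploration
                a = rng.choice(ACTIONS)
            else:
                a = max(ACTIONS, key=lambda x: q[(s, x)])
            s2 = s + a
            r = 1.0 if s2 == 5 else (-1.0 if s2 == 0 else 0.0)
            best_next = 0.0 if s2 in (0, 5) else max(q[(s2, b)] for b in ACTIONS)
            q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
            s = s2
    return q

q = q_learning()
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(1, 5)}
```

DQN replaces the Q-table with a neural network over image or state inputs, and SAC additionally learns a stochastic continuous-action policy, but the bootstrapped update above is the shared core.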

Keywords: reinforcement learning, soft actor-critic, deep q-network, self-driving cars, artificial intelligence, gaming

Procedia PDF Downloads 46
684 The Revival of Asakusa Entertainment Streets and Social Conflicts Since Its Inceptive Point, the Post-war Time

Authors: Seung Oh, Satoshi Okada

Abstract:

Today, religious organizations that have long existed alongside local people are being challenged by social changes in the districts they control. The influence of religious organizations is declining everywhere as locals seeking diversity and economic benefits become more interested in developing projects that attract investors and increase market value instead of opting for conservation. Religions whose moral and philosophical stance rejects materialism have a limited capacity to act as agents of local development in modern society. However, in Tokyo, the city's oldest temple, Senso-Ji, played a vital role in the local development of Asakusa as an entertainment district while nevertheless retaining the area's traditional character, despite the almost complete destruction caused by the Tokyo air raids. The temple played a vigorous role as a mediator between the community and the Tokyo Metropolitan Government and as a spokesman for common interests. This research, therefore, examines the social conflicts that Senso-Ji has confronted with regard to the pressures of development of Asakusa on the one hand, and the legitimacy of perpetuating its traditional religious and cultural role in local society on the other. First, this article examines Senso-Ji's place in society based on its position in the history of Japanese Buddhism, which existed to offer spiritual and practical help to ordinary people, and investigates its social legitimacy as a local stakeholder and historical institution. Second, this paper considers the impact of the social changes that Asakusa underwent during the Meiji and Taisho eras, examining the social conflicts and changes in the Asakusa entertainment district and taking the Tokyo air raids as the Inceptive Point (IP). Third, it reconsiders how Senso-Ji responded to today's growth-oriented local developments, as proposed by Tokyo's metropolitan planning authorities along lines commonly seen in all cities. Studying the role of Senso-Ji in the development of Asakusa can serve as a case study to justify the involvement of religious institutions in local issues, and as a useful and practical example of progressive development that nevertheless permitted the conservation of traditional features as a result of pressure from social groups, in a way that may be useful for other places facing similar problems.

Keywords: architecture, urban design, urban planning, preservation, conservation, social science

Procedia PDF Downloads 24
683 ADHD: Assessment of Pragmatic Skills in Adults

Authors: Elena Even-Simkin

Abstract:

Attention Deficit Hyperactivity Disorder (ADHD) is one of the most frequently diagnosed disorders in children, but in many cases, the diagnosis is not made until adulthood. Diagnosing adults with ADHD faces different obstacles due to numerous factors, such as educational background, under-resourced familial environments, high intelligence compensating for stress-inducing difficulties, and additional comorbidities. Undiagnosed children and adolescents with ADHD may become undiagnosed adults with ADHD, who miss out on early treatment and may experience significant social and pragmatic difficulties, leading to functional problems that subsequently affect their lifestyle, education, and occupational functioning. The proposed study presents a cost-effective and unique consideration of the pragmatic aspect of language among adults with ADHD. It provides a systematic and standardized evaluation of the pragmatic abilities of adults with ADHD, based on the comprehensive approach introduced by Arcara & Bambini (2016) for the assessment of pragmatic abilities in neuro-typical individuals. This assessment tool can promote the inclusion of pragmatic skills in the cognitive profile used in the diagnostic practice of ADHD; thus, the proposed instrument can help not only identify pragmatic difficulties in the ADHD population but also advance effective intervention programs that specifically target pragmatic skills in this population.

Keywords: ADHD, adults, assessment, pragmatics

Procedia PDF Downloads 76
682 Multiloop Fractional Order PID Controller Tuned Using Cuckoo Algorithm for Two Interacting Conical Tank Process

Authors: U. Sabura Banu, S. K. Lakshmanaprabu

Abstract:

Improvements in meta-heuristic algorithms encourage control engineers to design optimal controllers for industrial processes. Most real-world industrial processes are non-linear multivariable processes with high interaction. Even in a sub-process unit, thousands of loops are present, mostly interacting in nature. Optimal controller design for such processes is still a challenging task. Closed-loop controller design by multiloop PID involves a tedious procedure: performing an interaction study, auto-tuning the PID for the loop with the higher interaction, and finally detuning the controller to accommodate the effects of the other process variables. Fractional order PID controllers have recently been replacing integer order PID controllers, and the design of a multiloop fractional order (MFO) PID controller is more complicated still. The cuckoo algorithm, a swarm intelligence technique, is used to optimally tune the MFO PID controller with ease, minimizing the integral time absolute error (ITAE). The closed-loop performance is tested under servo, regulatory, and servo-regulatory conditions.
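
A minimal sketch of the cuckoo search idea, assuming a toy quadratic objective in place of the closed-loop ITAE cost of the MFO-PID gains (the two decision variables stand in for controller parameters):

```python
import random

# Simplified cuckoo search: each nest holds a candidate solution; new eggs
# are laid via heavy-tailed (Levy-style) random steps and replace the old
# solution when they improve the objective; the worst nests are abandoned
# and re-seeded each generation.

def objective(x):
    return sum((xi - 3.0) ** 2 for xi in x)      # toy cost, minimum at (3, 3)

def levy_step(rng):
    # Mantegna-style heavy-tailed step: mostly small, occasionally long.
    u = rng.gauss(0.0, 0.3)
    v = abs(rng.gauss(0.0, 1.0)) + 1e-9
    return u / v ** 0.5

def cuckoo_search(dim=2, n_nests=15, iters=500, pa=0.25, seed=1):
    rng = random.Random(seed)
    nests = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(n_nests)]
    for _ in range(iters):
        for i in range(n_nests):
            cand = [xi + levy_step(rng) for xi in nests[i]]
            if objective(cand) < objective(nests[i]):
                nests[i] = cand                  # better egg replaces the old
        nests.sort(key=objective)
        for i in range(int(n_nests * (1 - pa)), n_nests):
            nests[i] = [rng.uniform(-5.0, 5.0) for _ in range(dim)]  # abandon worst
    return min(nests, key=objective)

best = cuckoo_search()
```

In the paper's setting, `objective` would simulate the closed loop for a given vector of MFO-PID gains and return the ITAE; everything else stays the same.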

Keywords: cuckoo algorithm, multiloop fractional order PID controller, two interacting conical tank process

Procedia PDF Downloads 500
681 The Importance of Artificial Intelligence in Arts and Design

Authors: Mariam Adel Hakim Fouad

Abstract:

This quantitative study investigates creative arts teachers' perceptions regarding the implementation of an inclusive creative arts curriculum. The study employs a descriptive method using a 5-point Likert scale questionnaire comprising 15 items to gather data from creative arts educators. A census with a disproportionate stratified sampling approach was used to select 226 teachers from five educational circuits (Circuits A, B, C, D, and E) within Offinso Municipality, Ghana. The findings suggest that most creative arts teachers hold a positive perception of implementing an inclusive creative arts curriculum. Positive perceptions and attitudes among teachers are correlated with improved student engagement and participation in class activities. The study recommends organizing workshops and in-service training sessions focused on inclusive creative arts education for creative arts teachers. Moreover, it suggests that the colleges of education and universities responsible for teacher education integrate foundational courses in creative arts and special education into their primary education teacher training programs.

Keywords: arts-in-health, evidence-based medicine, arts for health, expressive arts therapies, arts, cultural heritage, digitalization, ICT, design, font, identity

Procedia PDF Downloads 24
680 Capacity for Care: A Management Model for Increasing Animal Live Release Rates, Reducing Animal Intake and Euthanasia Rates in an Australian Open Admission Animal Shelter

Authors: Ann Enright

Abstract:

More than ever, animal shelters need to identify ways to reduce the number of animals entering shelter facilities and the incidence of euthanasia. Managing animal overpopulation using euthanasia can have detrimental health and emotional consequences for the shelter staff involved. There are also community expectations with moral and financial implications to consider. To achieve the goals of reducing animal intake and the incidence of euthanasia, shelter best practice involves combining programs, procedures, and partnerships to increase live release rates (LRR) and reduce the incidence of disease, length of stay (LOS), and shelter intake, whilst overall remaining financially viable. Analysing daily processes, tracking outcomes, and implementing simple strategies enabled shelter staff to focus their efforts more effectively and achieve striking results. The objective of this retrospective study was to assess the effect of implementing the capacity for care (C4C) management model. Data on the average daily number of animals on site over a two-year period (2016-2017) were exported from a shelter management system, Customer Logic (CL) Vet, to Excel for manipulation and comparison. Following the implementation of C4C practices, the average daily number of animals on site was reduced by more than 50% (from a 2016 average of 103 to a 2017 average of 49), average LOS was halved from 8 weeks to 4 weeks, and the incidence of disease fell from at least 70% to less than 2% of the cats on site at the completion of the study. The total number of stray cats entering the shelter under council contracts halved (from 486 to 248). Improved cat outcomes were attributed to strategies that increased adoptions and reduced euthanasia of poorly socialized cats, including foster programs. To continue to improve LRR and LOS, strategies to decrease intake further, such as targeted sterilisation programs, would be beneficial. In conclusion, the study highlights the benefits of using C4C as a management tool, delivering a significant reduction in animal intake and euthanasia with positive emotional, financial, and community outcomes.
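
The headline metrics tracked under C4C, live release rate and average length of stay, reduce to simple arithmetic over intake/outcome records; the records below are illustrative, not the shelter's data.

```python
from datetime import date

# Live release rate = live outcomes / total outcomes; length of stay =
# days between intake and outcome, averaged over all animals.

records = [
    {"intake": date(2017, 1, 2), "outcome": date(2017, 1, 30), "live": True},
    {"intake": date(2017, 2, 1), "outcome": date(2017, 2, 22), "live": True},
    {"intake": date(2017, 3, 5), "outcome": date(2017, 4, 2), "live": False},
    {"intake": date(2017, 4, 1), "outcome": date(2017, 4, 29), "live": True},
]

def live_release_rate(recs):
    return sum(r["live"] for r in recs) / len(recs)

def average_los_days(recs):
    return sum((r["outcome"] - r["intake"]).days for r in recs) / len(recs)

lrr = live_release_rate(records)       # 3 live outcomes of 4
avg_los = average_los_days(records)    # mean of 28, 21, 28, 28 days
```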

Keywords: animal welfare, capacity for care, cat, euthanasia, length of stay, managed intake, shelter

Procedia PDF Downloads 139
679 Use of Computers and Peripherals in the Archaeological Surveys of Sistan in Eastern Iran

Authors: Mahyar Mehrafarin, Reza Mehrafarin

Abstract:

The Sistan region in eastern Iran is a significant archaeological area in Iran and the Middle East, encompassing 10,000 square kilometers. Previous archaeological field surveys have identified 1662 ancient sites dating from prehistoric periods to the Islamic period. Research Aim: This article aims to explore the utilization of modern technologies and computers in archaeological field surveys in Sistan, Iran, and the benefits derived from their implementation. Methodology: The research employs a descriptive-analytical approach combined with field methods. New technologies and tools, such as GPS, drones, magnetometers, equipped cameras, and satellite images, together with software programs such as GIS, MapSource, and Excel, were utilized to collect information and analyze data. Findings: The use of modern technologies and computers in archaeological field surveys proved to be essential. Traditional archaeological activities, such as excavation and field surveys, are time-consuming and costly. Employing modern technologies helps in preserving ancient sites, accurately recording archaeological data, reducing errors and mistakes, and facilitating correct and accurate analysis. Creating a comprehensive and accessible database, generating statistics, and producing graphic designs and diagrams are additional advantages derived from the use of efficient technologies in archaeology. Theoretical Importance: The integration of computers and modern technologies in archaeology contributes to interdisciplinary collaborations and facilitates the involvement of specialists from various fields, such as geography, history, art history, anthropology, laboratory sciences, and computer engineering. The utilization of computers in archaeology spans diverse areas, including database creation, statistical analysis, graphics implementation, laboratory and engineering applications, and even artificial intelligence, which remains an unexplored area in Iranian archaeology.
Data Collection and Analysis Procedures: Information was collected using modern technologies and software, capturing geographic coordinates, aerial images, archaeogeophysical data, and satellite images. These data were then input into various software programs for analysis, including GIS, MapSource, and Excel. The research employed both descriptive and analytical methods to present its findings effectively. Question Addressed: The primary question addressed in this research is how the use of modern technologies and computers in archaeological field surveys in Sistan, Iran, can enhance archaeological data collection, preservation, analysis, and accessibility. Conclusion: The utilization of modern technologies and computers in archaeological field surveys in Sistan, Iran, has proven to be necessary and beneficial. These technologies aid in preserving ancient sites, accurately recording archaeological data, reducing errors, and facilitating comprehensive analysis. The creation of accessible databases, the generation of statistics, graphic design, and interdisciplinary collaborations are further advantages observed. It is recommended to explore the potential of artificial intelligence, an as yet unexplored area, in Iranian archaeology. The research has implications for cultural heritage organizations, archaeology students, and universities involved in archaeological field surveys in Sistan and Baluchistan province. Additionally, it contributes to enhancing the understanding and preservation of Iran's archaeological heritage.
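
As one small illustration of working with the recorded GPS coordinates, the great-circle distance between two site locations can be computed with the haversine formula (the coordinates below are made up, not actual Sistan sites):

```python
import math

# Great-circle distance between two GPS coordinates via the haversine
# formula, assuming a spherical Earth of mean radius 6371 km.

def haversine_km(lat1, lon1, lat2, lon2):
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Two hypothetical site locations roughly in the survey region.
d = haversine_km(30.9, 61.5, 31.0, 61.6)
```

GIS software performs this kind of computation internally; the point is only that coordinates captured in the field become directly analysable data.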

Keywords: archaeological surveys, computer use, iran, modern technologies, sistan

Procedia PDF Downloads 78
678 Training of Future Computer Science Teachers Based on Machine Learning Methods

Authors: Meruert Serik, Nassipzhan Duisegaliyeva, Danara Tleumagambetova

Abstract:

The article highlights and describes the characteristic features of real-time face detection in images and videos using machine learning algorithms. The research work was reviewed by students of the educational programs "6B01511-Computer Science", "7M01511-Computer Science", "7M01525-STEM Education", and "8D01511-Computer Science" at L.N. Gumilyov Eurasian National University. As a result, the advantages and disadvantages of the Haar Cascade (OpenCV), HoG SVM (histogram of oriented gradients with a support vector machine), and MMOD CNN Dlib (max-margin object detection with a convolutional neural network) detectors used for face detection were determined. Dlib is a general-purpose cross-platform software library written in the C++ programming language that includes detectors used for face detection. The Haar Cascade OpenCV algorithm is efficient for fast face detection. The considered work forms the basis for the development of machine learning methods by future computer science teachers.
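
Of the three detectors compared, the HoG SVM pipeline rests on a histogram-of-oriented-gradients descriptor. A toy single-cell version, without the overlapping cells and block normalisation of real HoG, looks like this:

```python
import math

# Toy histogram of oriented gradients for one grayscale patch: central
# differences give per-pixel gradients, whose magnitudes are accumulated
# into unsigned-orientation bins. The patch is illustrative.

patch = [
    [10, 10, 90, 90],
    [10, 10, 90, 90],
    [10, 10, 90, 90],
    [10, 10, 90, 90],
]

def hog_histogram(img, bins=4):
    hist = [0.0] * bins
    h, w = len(img), len(img[0])
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = img[y][x + 1] - img[y][x - 1]
            gy = img[y + 1][x] - img[y - 1][x]
            mag = math.hypot(gx, gy)
            ang = math.degrees(math.atan2(gy, gx)) % 180  # unsigned orientation
            hist[min(int(ang / (180 / bins)), bins - 1)] += mag
    return hist

hist = hog_histogram(patch)  # all gradient energy in the horizontal bin
```

In the full detector, many such per-cell histograms are concatenated and normalised into a feature vector that a linear SVM classifies as face or non-face.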

Keywords: algorithm, artificial intelligence, education, machine learning

Procedia PDF Downloads 73
677 Curating Pluralistic Futures: Leveling up for Whole-Systems Change

Authors: Daniel Schimmelpfennig

Abstract:

This paper attempts to delineate the idea of curating the leveling up for whole-systems change. Curation is the act of selecting, organizing, looking after, or presenting information from a professional point of view through expert knowledge. The trans-paradigmatic, trans-contextual, trans-disciplinary, trans-perspective stance of trans-media futures studies hopes to enable a move from a monochrome intellectual pursuit towards breathing a higher dimensionality. Progressing to the next level to equip actors for whole-systems change is, in consideration of the commonly known symptoms of our time as well as in anticipation of future challenges, both necessary and desirable. Systems of collective intelligence could potentially scale regenerative, adaptive, and anticipatory capacities. How could such a curation then be enacted and implemented to initiate the process of leveling up? The suggestion here is to focus on the metasystem transition, the bio-digital fusion, namely by merging neurosciences, the ontological design of money as our operating system, and our understanding of the billions of years of time-proven permutations in nature: biomimicry and biological metaphors like symbiogenesis. Evolutionary cybernetics accompanies the process of whole-systems change.

Keywords: bio-digital fusion, evolutionary cybernetics, metasystem transition, symbiogenesis, transmedia futures studies

Procedia PDF Downloads 155
676 Understanding Evolutionary Algorithms through Interactive Graphical Applications

Authors: Javier Barrachina, Piedad Garrido, Manuel Fogue, Julio A. Sanguesa, Francisco J. Martinez

Abstract:

It is very common to observe, especially in Computer Science studies, that students have difficulty correctly understanding how some mechanisms based on Artificial Intelligence work. In addition, the scope and limitations of most of these mechanisms are usually presented by professors only in a theoretical way, which does not help students to understand them adequately. In this work, we focus on the problems found when teaching Evolutionary Algorithms (EAs), which imitate the principles of natural evolution, as a method to solve parameter optimization problems. Although these algorithms can be very powerful for solving relatively complex problems, students often have difficulty understanding how they work and how to apply them to solve problems in real cases. In this paper, we present two interactive graphical applications which have been specially designed with the aim of making Evolutionary Algorithms easy for students to understand. Specifically, we present: (i) TSPS, an application able to solve the "Traveling Salesman Problem", and (ii) FotEvol, an application able to reconstruct a given image by using Evolution Strategies. The main objective is that students learn how these techniques can be implemented and the great possibilities they offer.
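
In the spirit of the TSPS application, a minimal evolutionary approach to a 5-city TSP can be sketched as follows; the city layout and the elitist, mutation-only scheme are illustrative choices, not the application's exact algorithm:

```python
import itertools, random

# Minimal elitist evolutionary strategy for a 5-city travelling salesman
# problem: keep the best half of the population each generation and breed
# children by swap mutation. With 5 cities the optimum is also found by
# brute force for comparison.

cities = [(0, 0), (1, 5), (5, 4), (6, 1), (3, 2)]

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def tour_length(tour):
    return sum(dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def evolve(generations=300, pop_size=20, seed=0):
    rng = random.Random(seed)
    # seed the population with the identity tour plus random tours
    pop = [list(range(len(cities)))]
    pop += [rng.sample(range(len(cities)), len(cities)) for _ in range(pop_size - 1)]
    for _ in range(generations):
        parents = sorted(pop, key=tour_length)[:pop_size // 2]  # elitist selection
        children = []
        for p in parents:
            c = p[:]
            i, j = rng.sample(range(len(c)), 2)
            c[i], c[j] = c[j], c[i]                             # swap mutation
            children.append(c)
        pop = parents + children
    return min(pop, key=tour_length)

best = evolve()
optimum = min(itertools.permutations(range(len(cities))), key=tour_length)
```

The teaching applications animate exactly this loop, showing the population of tours improving generation by generation.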

Keywords: education, evolutionary algorithms, evolution strategies, interactive learning applications

Procedia PDF Downloads 338
675 Discrimination in Insurance Pricing: A Textual-Analysis Perspective

Authors: Ruijuan Bi

Abstract:

Discrimination in insurance pricing is a topic of increasing concern, particularly in the context of the rapid development of big data and artificial intelligence. There is a need to explore the various forms of discrimination, such as direct and indirect discrimination, proxy discrimination, algorithmic discrimination, and unfair discrimination, and to understand their implications in insurance pricing models. This paper aims to analyze and interpret the definitions of discrimination in insurance pricing and to explore measures to reduce discrimination. It utilizes a textual-analysis methodology that involves gathering qualitative data from the relevant literature on definitions of discrimination. Through textual analysis, this paper identifies the specific characteristics and implications of each form of discrimination in the general insurance industry. This research contributes to the theoretical understanding of discrimination in insurance pricing. By analyzing and interpreting the relevant literature, it provides insights into the definitions of discrimination and the laws and regulations surrounding it. This theoretical foundation can inform future empirical research on discrimination in insurance pricing drawing on probability theory.
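
One elementary step of such a textual analysis, counting how often each form of discrimination is named across a corpus, can be sketched as follows (the excerpts are made up for illustration):

```python
from collections import Counter
import re

# Term-frequency pass over a small corpus: tokenize each excerpt and count
# occurrences of the discrimination-related terms of interest.

excerpts = [
    "Proxy discrimination arises when a neutral variable stands in for a protected one.",
    "Algorithmic discrimination can emerge from biased training data.",
    "Indirect discrimination and proxy discrimination often overlap in pricing models.",
]

terms = ["direct", "indirect", "proxy", "algorithmic", "unfair"]

def term_counts(texts, terms):
    words = Counter(re.findall(r"[a-z]+", " ".join(texts).lower()))
    return {t: words[t] for t in terms}

counts = term_counts(excerpts, terms)
```

Real textual analysis adds stemming, collocation handling, and qualitative coding on top, but frequency tables of this kind are the usual starting point.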

Keywords: algorithmic discrimination, direct and indirect discrimination, proxy discrimination, unfair discrimination, insurance pricing

Procedia PDF Downloads 73
674 Intrusion Detection and Prevention System (IDPS) in Cloud Computing Using Anomaly-Based and Signature-Based Detection Techniques

Authors: John Onyima, Ikechukwu Ezepue

Abstract:

Virtualization and cloud computing are among the fastest-growing computing innovations in recent times. Organisations all over the world are moving their computing services towards the cloud because of its rapid transformation of the organisation's infrastructure and its improvement of efficient resource utilization and cost reduction. However, this technology brings new security threats and challenges regarding safety, reliability, and data confidentiality. Evidently, no single security technique can guarantee security or protection against malicious attacks on a cloud computing network; hence, an integrated model of an intrusion detection and prevention system has been proposed. Anomaly-based and signature-based detection techniques are integrated to enable the network and its hosts to defend themselves with some level of intelligence. The anomaly-based detection was implemented using the local deviation factor graph-based (LDFGB) algorithm, while the signature-based detection was implemented using the snort algorithm. Results from these collaborative intrusion detection and prevention techniques show a robust and efficient security architecture for cloud computing networks.
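
The signature-based half of the model can be illustrated with a toy snort-style matcher: each signature is a pattern over a packet payload, and any match raises an alert. The signatures and payloads below are illustrative, not real snort rules.

```python
import re

# Minimal signature-based detector: named regex signatures scanned
# against a payload string; every matching signature yields an alert.

SIGNATURES = {
    "sql-injection": re.compile(r"union\s+select", re.IGNORECASE),
    "path-traversal": re.compile(r"\.\./"),
    "xss-script-tag": re.compile(r"<script", re.IGNORECASE),
}

def inspect(payload):
    return [name for name, sig in SIGNATURES.items() if sig.search(payload)]

alerts = inspect("GET /items?id=1 UNION SELECT password FROM users")
clean = inspect("GET /index.html HTTP/1.1")
```

Anomaly-based detection complements this: instead of matching known patterns, it flags traffic whose statistical profile deviates from a learned baseline, which is what the LDFGB algorithm contributes in the proposed model.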

Keywords: anomaly-based detection, cloud computing, intrusion detection, intrusion prevention, signature-based detection

Procedia PDF Downloads 307
673 The Concept of Neurostatistics as a Neuroscience

Authors: Igwenagu Chinelo Mercy

Abstract:

This study concerns the concept of Neurostatistics in relation to neuroscience. Neuroscience, also known as neurobiology, is the scientific study of the nervous system. In the study of neuroscience, it has been noted that brain function and its relation to the process of acquiring knowledge and behaviour can be better explained by the use of various interrelated methods. The scope of neuroscience has broadened over time to include different approaches used to study the nervous system at different scales. Neurostatistics, as framed in this study, is a statistical concept that uses techniques similar to neuron mechanisms to solve problems, especially in the field of life science. This study is imperative in this era of artificial intelligence and machine learning in the sense that a clear understanding of the technique and its proper application could assist in solving some medical disorders that are mainly associated with the nervous system. It will also help the layman understand the workings of the nervous system in order to overcome some of the health challenges associated with it. For this concept to be well understood, an illustrative example using a brain-associated disorder was used for demonstration. Structural equation modelling was adopted in the analysis. The results clearly show the link between the techniques of statistical modelling and the nervous system. Hence, based on this study, the appropriateness of applying Neurostatistics in relation to neuroscience rests on understanding the behavioural patterns of both concepts.
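
As a minimal illustration of the structural-equation idea used in the analysis: with a single standardized predictor, the path coefficient reduces to the Pearson correlation between the two observed variables. The data below are made up for demonstration; full structural equation models estimate many such paths among latent and observed variables simultaneously.

```python
# One-path sketch: the standardized path coefficient between a stimulus
# variable and a response variable equals their Pearson correlation.

def mean(xs):
    return sum(xs) / len(xs)

def path_coefficient(x, y):
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

stimulus = [1.0, 2.0, 3.0, 4.0, 5.0]
response = [2.1, 3.9, 6.1, 8.0, 9.9]   # nearly linear in the stimulus
coef = path_coefficient(stimulus, response)
```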

Keywords: brain, neurons, neuroscience, neurostatistics, structural equation modeling

Procedia PDF Downloads 71
672 Early Recognition and Grading of Cataract Using a Combined Log Gabor/Discrete Wavelet Transform with ANN and SVM

Authors: Hadeer R. M. Tawfik, Rania A. K. Birry, Amani A. Saad

Abstract:

Eyes are considered the most sensitive and important organs of the human body; thus, any eye disorder affects the patient in all aspects of life. Cataract is one of those eye disorders that lead to blindness if not treated correctly and quickly. This paper demonstrates a model for the automatic detection, classification, and grading of cataracts based on image processing techniques and artificial intelligence. The proposed system is developed to ease the cataract diagnosis process for both ophthalmologists and patients. The wavelet transform combined with the 2D Log Gabor wavelet transform was used for feature extraction on a dataset of 120 eye images, followed by a classification process that assigned each image to one of three classes: normal, early stage, and advanced stage. The two classifiers, a support vector machine (SVM) and an artificial neural network (ANN), were compared on the same dataset of 120 eye images. It was concluded that the SVM gave better results than the ANN: the SVM achieved 96.8% accuracy, whereas the ANN achieved 92.3%.
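The classification stage can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the synthetic feature vectors stand in for the Log Gabor/DWT coefficients (the 120-image dataset is not public here), and scikit-learn's `SVC` stands in for the paper's SVM; class labels 0/1/2 represent normal/early/advanced.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Simulate 40 images per class, each reduced to 8 "wavelet energy" features
# whose typical magnitude shifts with disease stage (an assumption made
# purely so the sketch has separable classes).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.5, size=(40, 8)) for c in range(3)])
y = np.repeat([0, 1, 2], 40)  # 0 = normal, 1 = early, 2 = advanced

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```

Swapping `SVC` for an `MLPClassifier` on the same split would reproduce the paper's SVM-versus-ANN comparison in miniature.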

Keywords: cataract, classification, detection, feature extraction, grading, log-gabor, neural networks, support vector machines, wavelet

Procedia PDF Downloads 332
671 Restoring Statecraft in the U.S. Economy: A Proposal for an American Entrepreneurial State

Authors: Miron Wolnicki

Abstract:

In the past 75 years, the world was either influenced by, competing with, or learning from U.S. corporations. This is no longer true. As economic power shifts from the West to the East, U.S. corporations are lagging behind their Asian competitors. Moreover, U.S. statecraft fails to address this decline. In a world dominated by interventionist and neo-mercantilist states, having an ineffective, non-activist government becomes a costly neoclassical delusion that weakens the world's largest economy. American conservative economists continue talking about the superiority of the free market system in generating new technologies. The reality is different. The U.S. is sliding further into an overregulated, over-taxed, anti-business state. This paper argues that in order to maintain its economic strength and technological leadership, the U.S. must reform federal institutions to increase support for artificial intelligence and other cutting-edge technologies. The author outlines a number of institutional reforms, under one umbrella, which he calls the American Entrepreneurial State (AES). The AES will improve productivity and bring about coherent business strategies for the next 10-15 years. The design and inspiration for the AES come from the experience of successful statecraft in Asia and in other parts of the global economy.

Keywords: post-neoliberal system, entrepreneurial state, government and economy, American entrepreneurial state

Procedia PDF Downloads 124
670 Construction of an Assessment Tool for Early Childhood Development in the World of DiscoveryTM Curriculum

Authors: Divya Palaniappan

Abstract:

Early childhood assessment tools must measure the quality and appropriateness of a curriculum with respect to the culture and age of the children. Preschool assessment tools often lack psychometric properties and were developed to measure only a few areas of development, such as specific skills in music, art and adaptive behavior. Existing preschool assessment tools in India are predominantly informal and are fraught with the judgmental bias of observers. The World of Discovery TM curriculum focuses on accelerating the physical, cognitive, language, social and emotional development of pre-schoolers in India through various activities. The curriculum caters to every child irrespective of their dominant intelligence, following Gardner's theory of multiple intelligences, which concluded that "even students as young as four years old present quite distinctive sets and configurations of intelligences". The curriculum introduces a new theme every week, with concepts explained through various activities so that children with different dominant intelligences can understand them. For example, the 'Insects' theme is explained through rhymes, craft and a counting corner, so children whose dominant intelligence is musical, bodily-kinesthetic or logical-mathematical can all grasp the concept. The child's progress is evaluated using an assessment tool that measures a cluster of interdependent developmental areas (physical, cognitive, language, social and emotional development), which for the first time provides a multi-domain approach. The assessment tool is a 5-point rating scale covering these five developmental aspects, and each activity strengthens one or more of them. In the cognitive corner, for instance, the child's perceptual reasoning, pre-math abilities, hand-eye co-ordination and fine motor skills can be observed and evaluated.
The tool differs from traditional assessment methodologies by providing a framework that allows teachers to assess a child's continuous development with respect to specific activities, in real time and objectively. A pilot study of the tool was conducted with a sample of 100 children aged 2.5 to 3.5 years. The data were collected over a period of 3 months across 10 centers in Chennai, India, scored by the class teacher once a week. The teachers were trained by psychologists on age-appropriate developmental milestones to minimize observer bias. Norms were calculated from the mean and standard deviation of the observed data. The results indicated high internal consistency among parameters and showed that cognitive development improved with physical development; a significant positive relationship between physical and cognitive development was also observed among children in a study conducted by Sibley and Etnier. Children's 'comprehension' ability was found to be greater than their reasoning and pre-math abilities, consistent with the preoperational stage of Piaget's theory of cognitive development. The average scores of the various parameters obtained through the tool corroborate psychological theories of child development, offering strong face validity. The study provides a comprehensive mechanism to assess a child's development and differentiate high performers from the rest. Based on the average scores, the difficulty level of activities can be increased or decreased to nurture the development of pre-schoolers, and appropriate teaching methodologies can be devised.
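The norming and internal-consistency steps described above can be sketched numerically. The ratings below are simulated, not the study's 100-child sample; the shared latent score and the Cronbach's alpha computation are illustrative assumptions about how such a 5-point, five-domain scale might be checked.

```python
import numpy as np

rng = np.random.default_rng(1)
# Simulate 100 children: one latent developmental level per child (2-4),
# plus small per-domain noise, clipped to the 1-5 rating scale.
base = rng.integers(2, 5, size=(100, 1))
ratings = np.clip(base + rng.integers(-1, 2, size=(100, 5)), 1, 5)

# Norms: per-domain mean and standard deviation.
means, sds = ratings.mean(axis=0), ratings.std(axis=0, ddof=1)

# Cronbach's alpha across the five developmental domains.
k = ratings.shape[1]
item_var = ratings.var(axis=0, ddof=1).sum()
total_var = ratings.sum(axis=1).var(ddof=1)
alpha = k / (k - 1) * (1 - item_var / total_var)
print(f"Cronbach's alpha: {alpha:.2f}")
```

A child's z-score against these norms, (score - mean) / SD per domain, is then what differentiates high performers from the rest.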

Keywords: child development, early childhood assessment, early childhood curriculum, quantitative assessment of preschool curriculum

Procedia PDF Downloads 362
669 Latency-Based Motion Detection in Spiking Neural Networks

Authors: Mohammad Saleh Vahdatpour, Yanqing Zhang

Abstract:

Understanding the neural mechanisms underlying motion detection in the human visual system has long been a fascinating challenge in neuroscience and artificial intelligence. This paper presents a spiking neural network model inspired by the processing of motion information in the primate visual system, particularly focusing on the Middle Temporal (MT) area. In our study, we propose a multi-layer spiking neural network model to perform motion detection tasks, leveraging the idea that synaptic delays in neuronal communication are pivotal in motion perception. Synaptic delay, determined by factors like axon length and myelin insulation, affects the temporal order of input spikes, thereby encoding motion direction and speed. Overall, our spiking neural network model demonstrates the feasibility of capturing motion detection principles observed in the primate visual system. The combination of synaptic delays, learning mechanisms, and shared weights and delays in SMD provides a promising framework for motion perception in artificial systems, with potential applications in computer vision and robotics.
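The delay-coding principle can be shown in miniature. This is an assumed toy setup, not the paper's full multi-layer SNN: two photoreceptors A and B feed a coincidence neuron, with A's spike delayed en route (the delay standing in for axon length and myelination); the constants are invented for illustration.

```python
DELAY = 3  # synaptic delay, in time steps, on the A -> neuron pathway

def coincidence_fires(spike_time_a, spike_time_b, window=1):
    # A's spike arrives at spike_time_a + DELAY. The neuron fires only
    # when both spikes arrive within the coincidence window, i.e. when
    # the stimulus crosses A then B at the speed matched by the delay.
    return abs((spike_time_a + DELAY) - spike_time_b) <= window

print(coincidence_fires(0, 3))  # True: A->B motion at the matched speed
print(coincidence_fires(3, 0))  # False: reverse-direction motion
```

A bank of such detectors with different delays, as in the proposed model, jointly encodes both the direction and the speed of motion.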

Keywords: neural network, motion detection, signature detection, convolutional neural network

Procedia PDF Downloads 88
668 Forecasting the Future Implications of ChatGPT Usage in Education Based on AI Algorithms

Authors: Yakubu Bala Mohammed, Nadire Chavus, Mohammed Bulama

Abstract:

ChatGPT (Chat Generative Pre-trained Transformer) is an artificial intelligence (AI) tool capable of swiftly generating comprehensive responses to prompts and follow-up inquiries. This emerging AI tool, built on large language models, was introduced in November 2022 by OpenAI, an American AI research laboratory. The present study aims to delve into the potential future consequences of ChatGPT usage in education using AI-based algorithms. The paper will bring forth the likely risks of ChatGPT utilization, such as academic integrity concerns, unfair learning assessments, excessive reliance on AI, and the dissemination of inaccurate information. Four machine learning algorithms, chosen for their robustness, will be used to analyze the collected data: eXtreme Gradient Boosting (XGBoost), support vector machine (SVM), emotional artificial neural network (EANN), and random forest (RF). Finally, the findings of the study will assist education stakeholders in understanding the future implications of ChatGPT usage in education and propose solutions and directions for upcoming studies.
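The analysis step could look roughly like the following. This is a hypothetical sketch on simulated survey data, not the study's dataset or code: random forest is shown as one of the four named algorithms (XGBoost, SVM and EANN would slot into the same cross-validation loop), and the Likert items and risk label are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
# 300 simulated respondents, 10 Likert-scale (1-5) survey items.
X = rng.integers(1, 6, size=(300, 10)).astype(float)
# Toy target: flag "high perceived risk" when the first three items
# (say, the academic-integrity questions) average above the midpoint.
y = (X[:, :3].mean(axis=1) > 3).astype(int)

scores = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=5)
print(f"mean 5-fold CV accuracy: {scores.mean():.2f}")
```

Comparing the cross-validated scores of the four models is what would justify the paper's choice of forecasting algorithm.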

Keywords: machine learning, ChatGPT, education, learning, implications

Procedia PDF Downloads 232
667 “Octopub”: Geographical Sentiment Analysis Using Named Entity Recognition from Social Networks for Geo-Targeted Billboard Advertising

Authors: Oussama Hafferssas, Hiba Benyahia, Amina Madani, Nassima Zeriri

Abstract:

Although data nowadays takes multiple forms, from text to images and from audio to video, text is still the most used form at the public level. At an academic and research level, and unlike the other forms, text can be considered the easiest form to process. Therefore, a branch of data mining research has always worked under its shadow, called "text mining". Its concept is just like data mining's: finding valuable patterns in large collections and tremendous volumes of data, in this case text. Named entity recognition (NER) is one of text mining's disciplines; it aims to extract and classify references such as proper names, locations, expressions of time and dates, organizations, and more in a given text. Our approach, "Octopub", does not aim to find new ways to improve the named entity recognition process; rather, it is about finding a new, and yet smart, way to use NER so that we can extract the sentiments of millions of people, using social networks as a limitless information source and marketing for product promotion as the main domain of application.
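The location-plus-sentiment pairing at the heart of the approach can be illustrated with a deliberately tiny gazetteer and lexicon. This is a toy, not the authors' NER model or sentiment lexicon: `LOCATIONS`, `POS` and `NEG` are invented, and real systems would use a trained NER tagger and a proper sentiment model.

```python
import re

LOCATIONS = {"Algiers", "Oran", "Paris"}            # toy gazetteer
POS, NEG = {"love", "great", "amazing"}, {"hate", "awful", "bad"}

def geo_sentiment(post: str) -> dict:
    # Tokenize, score the post with a naive lexicon count, and attach
    # that score to every recognized location entity in the post.
    tokens = re.findall(r"[A-Za-z]+", post)
    score = sum(t.lower() in POS for t in tokens) - sum(t.lower() in NEG for t in tokens)
    return {t: score for t in tokens if t in LOCATIONS}

print(geo_sentiment("I love the new coffee shop in Algiers"))  # {'Algiers': 1}
```

Aggregating such per-location scores over millions of posts is what would drive the geo-targeted billboard placement.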

Keywords: text mining, named entity recognition (NER), sentiment analysis, social media networks (SN, SMN), business intelligence (BI), marketing

Procedia PDF Downloads 589
666 Assessing Artificial Neural Network Models on Forecasting the Return of Stock Market Index

Authors: Hamid Rostami Jaz, Kamran Ameri Siahooei

Abstract:

Up to now, different methods have been used to forecast index returns and index rates, artificial intelligence and artificial neural networks among them. This study carries out a comparative study of the performance of radial basis function neural networks and feed-forward perceptron neural networks in forecasting investment returns on the index. To achieve this goal, the return on investment in the Tehran Stock Exchange index is evaluated and the performance of the radial basis network and the feed-forward perceptron network are compared. The networks' performance is tested on the least-squares error in two approaches, in-sample and out-of-sample. The research results show the superiority of the radial basis network in the in-sample approach and the superiority of the perceptron network in the out-of-sample approach.
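The in-sample versus out-of-sample protocol is worth making concrete. The sketch below uses simulated returns and a simple lag-1 linear model as a stand-in for the paper's neural networks; the point is the evaluation split, and an in-sample fit can look flattering while generalizing poorly.

```python
import numpy as np

rng = np.random.default_rng(3)
returns = rng.normal(0, 1, 250)                    # simulated daily index returns
X = np.column_stack([returns[:-1], np.ones(249)])  # lag-1 return + intercept
y = returns[1:]

split = 200  # first 200 observations are in-sample, the rest out-of-sample
coef, *_ = np.linalg.lstsq(X[:split], y[:split], rcond=None)

mse_in = np.mean((X[:split] @ coef - y[:split]) ** 2)
mse_out = np.mean((X[split:] @ coef - y[split:]) ** 2)
print(f"in-sample MSE {mse_in:.3f}, out-of-sample MSE {mse_out:.3f}")
```

Ranking models on `mse_out` rather than `mse_in` is precisely why the two approaches in the study can crown different winners.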

Keywords: exchange index, forecasting, perceptron neural network, Tehran stock exchange

Procedia PDF Downloads 464
665 Studies on the Teaching Pedagogy and Effectiveness for the Multi-Channel Storytelling for Social Media, Cinema, Game, and Streaming Platform: Case Studies of Squid Game

Authors: Chan Ka Lok Sobel

Abstract:

The rapid evolution of digital media platforms has given rise to new forms of narrative engagement, particularly through multi-channel storytelling. This research focuses on exploring the teaching pedagogy and effectiveness of multi-channel storytelling for social media, cinema, games, and streaming platforms. The study employs case studies of the popular series "Squid Game" to investigate the diverse pedagogical approaches and strategies used in teaching multi-channel storytelling. Through qualitative research methods, including interviews, surveys, and content analysis, the research assesses the effectiveness of these approaches in terms of student engagement, knowledge acquisition, critical thinking skills, and the development of digital literacy. The findings contribute to understanding best practices for incorporating multi-channel storytelling into educational contexts and enhancing learning outcomes in the digital media landscape.

Keywords: digital literacy, game-based learning, artificial intelligence, animation production, educational technology

Procedia PDF Downloads 114
664 The Role of Public Representatives and Legislatures in Strengthening HIV and AIDS Prevention Strategies: The Case of South Africa

Authors: Moses Mncwabe

Abstract:

Both Public Representatives and Legislatures have an imperative role in strengthening interventions to reduce and ultimately halt sexually transmitted infections (STIs), specifically the human immunodeficiency virus (HIV). Scaling up constituency work in support of interventions earmarked for mitigating the compromising socio-economic impacts of advanced HIV is extremely essential. Though antiretroviral treatment (ART) has saved millions of lives that would otherwise have been lost, the Joint United Nations Programme on HIV/AIDS (2012) states that more effort should be redirected to prevention strategies to close the tap of new infections. It is against this backdrop that Legislatures, as law-making institutions, have an undisputed role to play in HIV alleviation because of the position they occupy in society. Furthermore, Public Representatives are arguably idolised by young people for the role they play; hence it is incumbent upon them to use their moral and political standing to aid interventions for HIV prevention (Inter-Parliamentary Union, Joint United Nations Programme on HIV/AIDS & United Nations Development Programme, 2007). Moreover, continuing HIV infection and its devastating effects, specifically in the Southern African region, have brought the disease closer to public representatives and demanded calculated interventions, warranting both public representatives and legislatures to be more visible in various ways, such as undergoing HIV counselling and testing publicly, exercising oversight, reducing stigma and discrimination, partnering with civil society organisations (CSOs), and facilitating debates on HIV across parliamentary and social platforms. The effects of advanced HIV call for public representatives to be seen, accessed, felt, engaged, partnered with, and lobbied for pro-human-rights legislation and ideal oversight to compel the executive to deliver on core responsibilities such as providing basic services to the electorate (AIDS Law Project, 2003).
The National Democratic Institute for International Affairs and the Southern African Development Community Parliamentary Forum (2004) assert that the omission of Public Representatives and Legislatures from the HIV prevention agenda is a serious deficiency in the fight against HIV and AIDS. In light of this, this paper discusses the innovative and legislative roles that both Public Representatives and Legislatures should play in HIV prevention.

Keywords: legislature, public representative, oversight, HIV and AIDS, constituency, service delivery

Procedia PDF Downloads 388
663 A Guide to User-Friendly Bash Prompt: Adding Natural Language Processing Plus Bash Explanation to the Command Interface

Authors: Teh Kean Kheng, Low Soon Yee, Burra Venkata Durga Kumar

Abstract:

In 2022, as the world becomes increasingly computer-related, more individuals are attempting to study coding, on their own or in school, because they have discovered the value of learning to code and the benefits it will bring them. But learning to code is difficult for most people; even senior programmers with a decade of experience still consult online sources while coding. The reason is that coding is not like talking to other people: it has a specific syntax to make the computer understand what we want it to do, so it is hard for anyone who has had no prior contact with the field. Learning bash through the bash prompt is harder still, because the prompt is just an empty box waiting for the user to tell the computer what to do; without referring to the internet, a newcomer will not know what can be done with it. From this we conclude that the bash prompt is not user-friendly for new users learning bash. Our goal in this paper is to propose a user-friendly bash prompt for Ubuntu, using artificial intelligence (AI) to lower the threshold of learning bash code and to let users write and learn bash with their own words and concepts.
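The core mapping from a user's words to a bash command can be sketched minimally. This is a hypothetical illustration, not the authors' system: `difflib`'s lexical similarity stands in for the semantic-similarity model the paper proposes, and the `COMMANDS` table is an invented three-entry example.

```python
import difflib

# Tiny natural-language -> bash lookup table (illustrative only).
COMMANDS = {
    "list files in this folder": "ls -l",
    "show current directory": "pwd",
    "copy a file": "cp SOURCE DEST",
}

def suggest(query: str) -> str:
    # Pick the known phrasing closest to the user's request and return
    # its bash command; a real system would also explain the command.
    match = difflib.get_close_matches(query, COMMANDS, n=1, cutoff=0.0)
    return COMMANDS[match[0]]

print(suggest("list the files in the folder"))  # ls -l
```

A production prompt would layer a learned semantic model and a bash explainer on top, but the retrieve-and-translate loop is the same shape.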

Keywords: user-friendly, bash code, artificial intelligence, threshold, semantic similarity, lexical similarity

Procedia PDF Downloads 142
662 Glucose Monitoring System Using Machine Learning Algorithms

Authors: Sangeeta Palekar, Neeraj Rangwani, Akash Poddar, Jayu Kalambe

Abstract:

Bio-medical analysis is an indispensable procedure for identifying health-related diseases like diabetes. Monitoring the glucose level in the body regularly helps identify hyperglycemia and hypoglycemia, which can cause severe medical problems like nerve damage or kidney disease. This paper presents a method for predicting the glucose concentration in blood samples using image processing and machine learning algorithms. The glucose solution is prepared by the glucose oxidase (GOD) and peroxidase (POD) method, and an experimental database is generated based on the colorimetric technique. An image of the glucose solution is captured by a Raspberry Pi camera and analyzed using image processing by extracting the RGB, HSV, and LUX color space values. Regression algorithms, including multiple linear regression, decision tree, random forest, and XGBoost, were used to predict the unknown glucose concentration. The multiple linear regression algorithm predicts the results with 97% accuracy. This image processing and machine learning-based approach reduces the hardware complexity of existing platforms.
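The colorimetric calibration step can be sketched with a one-channel linear model. The numbers are synthetic, not the paper's image data: the assumed response (mean red intensity falling linearly with concentration) is an idealization of a GOD-POD color reaction, fitted by ordinary least squares and inverted for an unknown sample.

```python
import numpy as np

# Calibration standards (mg/dL) and an idealized mean-red-channel response.
conc = np.array([50.0, 100.0, 150.0, 200.0, 250.0])
mean_red = 220.0 - 0.4 * conc  # assumed linear darkening with concentration

# Fit intensity = slope * concentration + intercept, then invert it.
slope, intercept = np.polyfit(conc, mean_red, 1)
unknown_red = 160.0  # mean red intensity measured from a new sample image
predicted = (unknown_red - intercept) / slope
print(f"predicted concentration: {predicted:.1f} mg/dL")
```

Feeding all the RGB/HSV channels into a multiple linear regression, as the paper does, generalizes this single-channel inversion.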

Keywords: artificial intelligence, glucose detection, glucose oxidase, peroxidase, image processing, machine learning

Procedia PDF Downloads 204
661 Barriers of the Development and Implementation of Health Information Systems in Iran

Authors: Abbas Sheikhtaheri, Nasim Hashemi

Abstract:

Health information systems have great benefits for the clinical and managerial processes of health care organizations. However, identifying and removing the constraints and barriers to implementing and using health information systems before any implementation is essential. Physicians are among the main users of health information systems; therefore, identifying the causes of their resistance and their concerns about the barriers to implementing these systems is very important. The purpose of this study was thus to determine the barriers to the development and implementation of health information systems from the perspective of Iranian physicians. In this study, conducted in 2014 in 8 selected hospitals affiliated with Tehran and Iran Universities of Medical Sciences, Tehran, Iran, physicians (GPs, residents, interns and specialists) in these hospitals were surveyed. To collect data, a researcher-made questionnaire was used (Cronbach's α = 0.95). The instrument included 25 items on organizational (9), personal (4), moral and legal (3) and technical (9) barriers. Participants were asked to answer the questions using a 5-point Likert scale (completely disagree = 1 to completely agree = 5). Using a simple random sampling method, 200 physicians (out of 600) were invited to the study, and 163 questionnaires were eventually returned. We used mean scores, t-tests and ANOVA to analyze the data using SPSS software version 17. 52.1% of respondents were female, and the mean age was 30.18 ± 7.29. Most participants (80.4 percent) had between 1 and 5 years of work experience. The most important barriers were organizational ones (3.4 ± 0.89), followed by ethical (3.18 ± 0.98), technical (3.06 ± 0.8) and personal (3.04 ± 1.2). Lack of easy access to fast Internet (3.67 ± 1.91) and the lack of information exchange (3.61 ± 1.2) were the most important technical barriers.
Among organizational barriers, the lack of efficient planning for developing and implementing the systems (3.56 ± 1.32) was the most important. Health care providers' lack of awareness and knowledge of health information system features (3.33 ± 1.28) and the lack of physician participation in the planning phase (3.27 ± 1.2), as well as concerns regarding the security and confidentiality of health information (3.15 ± 1.31), were the most important personal and ethical barriers, respectively. Women (P = 0.02) and those with less experience (P = 0.002) were more concerned about personal barriers. GPs were also more concerned about technical barriers (P = 0.02). According to the study, organizational and ethical barriers were considered the most important ones; however, the lack of awareness in the target population is also considered one of the main barriers. Ignoring issues such as personal and ethical barriers, even if the necessary infrastructure and technical requirements are provided, may result in failure. Therefore, along with creating the infrastructure and resolving organizational barriers, special attention to the education and awareness of physicians and solutions for ethical concerns are necessary.
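The group comparisons reported above (t-test by sex, ANOVA by specialty) can be reproduced in outline. The scores below are simulated Likert-scale means, not the study's 163 responses; the group sizes, means and spreads are invented to mirror the analysis, here done with SciPy rather than SPSS.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Simulated mean personal-barrier scores: do women score higher than men?
women = rng.normal(3.3, 0.9, 85)
men = rng.normal(3.0, 0.9, 78)
t, p = stats.ttest_ind(women, men)

# Simulated mean technical-barrier scores across three physician groups.
gp = rng.normal(3.2, 0.8, 50)
resident = rng.normal(3.0, 0.8, 60)
specialist = rng.normal(2.9, 0.8, 53)
f, p_anova = stats.f_oneway(gp, resident, specialist)

print(f"t-test p = {p:.3f}, one-way ANOVA p = {p_anova:.3f}")
```

A p-value below 0.05 in either test is what the study reads as a significant group difference, e.g. its P = 0.02 finding for women.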

Keywords: barriers, development health information systems, implementation, physicians

Procedia PDF Downloads 345
660 ChatGPT 4.0 Demonstrates Strong Performance in Standardised Medical Licensing Examinations: Insights and Implications for Medical Educators

Authors: K. O'Malley

Abstract:

Background: The emergence and rapid evolution of large language models (LLMs), i.e., models of generative artificial intelligence (AI), has been unprecedented. ChatGPT is one of the most widely used LLM platforms. Using natural language processing technology, it generates customized responses to user prompts, enabling it to mimic human conversation. Responses are generated using predictive modeling over vast swathes of internet text and data, and are further refined and reinforced through user feedback. The popularity of LLMs is increasing, with a growing number of students utilizing these platforms for study and revision purposes. Notwithstanding its many novel applications, LLM technology is inherently susceptible to bias and error, which poses a significant challenge in the educational setting, where academic integrity may be undermined. This study aims to evaluate the performance of the latest iteration of ChatGPT (ChatGPT 4.0) in standardized state medical licensing examinations. Methods: A considered search strategy was used to interrogate the PubMed electronic database. The keywords 'ChatGPT' AND 'medical education' OR 'medical school' OR 'medical licensing exam' were used to identify relevant literature. The search included all peer-reviewed literature published in the past five years and was limited to publications in English. Eligibility was ascertained from the study title and abstract and confirmed by consulting the full-text document. Data were extracted into a Microsoft Excel document for analysis. Results: The search yielded 345 publications, which were screened. 225 original articles were identified, of which 11 met the pre-determined criteria for inclusion in a narrative synthesis. These studies included performance assessments in national medical licensing examinations from the United States, United Kingdom, Saudi Arabia, Poland, Taiwan, Japan and Germany. ChatGPT 4.0 achieved scores ranging from 67.1 to 88.6 percent.
The mean score across all studies was 82.49 percent (SD = 5.95). In all studies, ChatGPT exceeded the threshold for a passing grade in the corresponding exam. Conclusion: The capabilities of ChatGPT in standardized academic assessment in medicine are robust. While this technology can potentially revolutionize higher education, it also presents several challenges with which educators have not had to contend before. The overall strong performance of ChatGPT, as outlined above, may lend itself to unfair use (such as the plagiarism of deliverable coursework) and pose unforeseen ethical challenges (arising from algorithmic bias). Conversely, it highlights potential pitfalls if users assume LLM-generated content to be entirely accurate. In the aforementioned studies, ChatGPT exhibits a margin of error between 11.4 and 32.9 percent, which resonates strongly with concerns regarding the quality and veracity of LLM-generated content. It is imperative to highlight these limitations, particularly to students in the early stages of their education, who are less likely to possess the requisite insight or knowledge to recognize errors, inaccuracies or false information. Educators must inform themselves of these emerging challenges to address them effectively and mitigate potential disruption in academic fora.

Keywords: artificial intelligence, ChatGPT, generative AI, large language models, licensing exam, medical education, medicine, university

Procedia PDF Downloads 32