Search results for: cognitive radio network
3488 Pavement Management for a Metropolitan Area: A Case Study of Montreal
Authors: Luis Amador Jimenez, Md. Shohel Amin
Abstract:
Pavement performance models are based on projections of observed traffic loads, which makes it uncertain to study funding strategies in the long run if history does not repeat itself. Neural networks can be used to estimate deterioration rates, but the learning rate and momentum have not been properly investigated; in addition, economic development could change traffic flows. This study addresses both issues through a case study for roads of Montreal that simulates traffic for a period of 50 years and deals with the measurement error of the pavement deterioration model. Travel demand models are applied to simulate annual average daily traffic (AADT) every 5 years. Accumulated equivalent single axle loads (ESALs) are calculated from the predicted AADT and locally observed truck distributions combined with truck factors. A backpropagation neural network (BPN) method with a generalized delta rule (GDR) learning algorithm is applied to estimate pavement deterioration models capable of overcoming measurement errors. Linear programming of lifecycle optimization is applied to identify M&R strategies that ensure good pavement condition while minimizing the budget. It was found that CAD 150 million is the minimum annual budget needed to keep arterial and local roads in Montreal in good condition. Montreal drivers prefer the use of public transportation for work and education purposes. Vehicle traffic is expected to double within 50 years, and the number of ESALs is expected to double every 15 years. Roads on the island of Montreal need to undergo a stabilization period of about 25 years, after which a steady state seems to be reached.
Keywords: pavement management system, traffic simulation, backpropagation neural network, performance modeling, measurement errors, linear programming, lifecycle optimization
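As an illustration of the neural-network step described above, the following is a minimal sketch (not the study's code) of a one-hidden-layer backpropagation network trained with the generalized delta rule, i.e. gradient descent with the learning-rate and momentum terms the abstract highlights. The inputs (pavement age, accumulated ESALs), the synthetic deterioration curve, and all hyperparameters are invented placeholders.

```python
# Sketch only: backpropagation with the generalized delta rule (momentum term).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training set: [age in years, accumulated ESALs in millions] -> condition index (0..1)
X = rng.uniform([0, 0], [30, 20], size=(200, 2))
y = (1.0 - 0.02 * X[:, 0] - 0.03 * X[:, 1] + rng.normal(0, 0.02, 200)).clip(0, 1).reshape(-1, 1)
Xn = (X - X.mean(axis=0)) / X.std(axis=0)     # normalise so sigmoid units are not saturated

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_hidden, lr, momentum = 8, 0.05, 0.9         # learning rate and momentum are the GDR knobs
W1 = rng.normal(0, 0.5, (2, n_hidden)); b1 = np.zeros(n_hidden)
W2 = rng.normal(0, 0.5, (n_hidden, 1)); b2 = np.zeros(1)
vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)
vW2 = np.zeros_like(W2); vb2 = np.zeros_like(b2)

for epoch in range(2000):
    h = sigmoid(Xn @ W1 + b1)                 # forward pass
    out = sigmoid(h @ W2 + b2)
    err = out - y                             # measurement noise is averaged out by the MSE objective
    d_out = err * out * (1 - out)             # backward pass (delta rule)
    d_h = (d_out @ W2.T) * h * (1 - h)
    gW2 = h.T @ d_out / len(X); gb2 = d_out.mean(axis=0)
    gW1 = Xn.T @ d_h / len(X);  gb1 = d_h.mean(axis=0)
    # generalized delta rule update: new step = momentum * previous step - lr * gradient
    vW2 = momentum * vW2 - lr * gW2; W2 += vW2
    vb2 = momentum * vb2 - lr * gb2; b2 += vb2
    vW1 = momentum * vW1 - lr * gW1; W1 += vW1
    vb1 = momentum * vb1 - lr * gb1; b1 += vb1

print("final MSE:", float(np.mean(err ** 2)))
```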
Procedia PDF Downloads 460
3487 Information and Communication Technology (ICT) Education Improvement for Enhancing Learning Performance and Social Equality
Authors: Heichia Wang, Yalan Chao
Abstract:
Social inequality is a persistent problem, and one of the ways to solve it is through education. At present, educational resources are often less geographically accessible to vulnerable groups. However, compared with educational resources, communication equipment is easier for vulnerable groups to obtain. Now that information and communication technology (ICT) has entered the field of education, we can take advantage of the convenience that ICT provides, and the mobility it brings makes learning independent of time and place. With mobile learning, teachers and students can start discussions in an online chat room without the limitations of time or place. However, because mobile learning is so convenient, people tend to discuss problems in short online texts that lack detailed information, in an online environment that is not well suited to expressing ideas fully. The ICT education environment may therefore cause misunderstanding between teachers and students. In order to help teachers and students better understand each other's views, this study aims to analyze these short texts and classify students into several types of learning problems, clarifying the views of both sides. In addition, this study attempts to extend the description of possible omissions in short texts by using external resources prior to classification. To achieve these goals, this research uses a convolutional neural network (CNN) to analyze the short discussion content between teachers and students in an ICT education environment and to divide students into several main types of learning-problem groups, which facilitates answering student problems. The study further clusters sub-categories of each major learning type to indicate specific problems for each student. Unlike most neural network applications, this study extends the short texts with external resources before classifying them in order to improve classification performance. In the empirical process, the chat records between teachers and students and the course materials are pre-processed, and an action system is set up to compare the most similar parts of the teaching material with each student's chat history to further improve classification performance. The short-text classification module then uses the CNN to classify the enriched chat records into several major learning problems based on theory-driven titles. By applying these modules, this research hopes to point out each student's learning problems, inform instructors where the main focus of future courses should be, and thereby improve the ICT education environment.
Keywords: ICT education improvement, social equality, short text analysis, convolutional neural network
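For concreteness, here is a minimal sketch of the CNN-based short-text classification step described above, assuming a Keras/TensorFlow setup; the example chat messages, the three learning-problem labels, and all hyperparameters are hypothetical and not taken from the study.

```python
# Sketch only: 1D-convolutional classifier for short chat messages.
import numpy as np
import tensorflow as tf

messages = np.array([
    "i do not understand the assignment deadline",
    "the video lecture will not load on my phone",
    "can you explain question three again",
    "my group mate never answers in the chat room",
])
labels = np.array([0, 1, 0, 2])   # e.g. 0=content question, 1=technical issue, 2=collaboration issue

vectorizer = tf.keras.layers.TextVectorization(max_tokens=5000, output_sequence_length=30)
vectorizer.adapt(messages)
X = vectorizer(messages)          # integer token sequences, shape (n_messages, 30)

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=5000, output_dim=64),   # could be seeded from external resources
    tf.keras.layers.Conv1D(filters=128, kernel_size=3, activation="relu"),
    tf.keras.layers.GlobalMaxPooling1D(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),              # one unit per learning-problem type
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.fit(X, labels, epochs=5, verbose=0)

new_msg = vectorizer(np.array(["the page keeps crashing when i upload"]))
print(model.predict(new_msg, verbose=0).argmax(axis=1))
```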
Procedia PDF Downloads 128
3486 The Conflict of Grammaticality and Meaningfulness of the Corrupt Words: A Cross-lingual Sociolinguistic Study
Authors: Jayashree Aanand Gajjam
Abstract:
The grammatical tradition in Sanskrit literature emphasizes the importance of the correct use of Sanskrit words or linguistic units (sādhu śabda), which brings meritorious values, denying the attribution of the same religious merit to the incorrect use of Sanskrit words (asādhu śabda) or to vernacular or corrupt forms (apa-śabda or apabhraṁśa), even though they may help in communication. The current research, the culmination of doctoral research on sentence definition, studies the difference in the comprehension of correct and incorrect word forms in the Sanskrit and Marathi languages in India. Based on a total of 19 experiments (both web-based and classroom-controlled) with approximately 900 Indian readers, it is found that while the incorrect forms in Sanskrit are comprehended with lower accuracy than the correct word forms, no such difference can be seen for the Marathi language. It is interpreted that incorrect word forms in the native language, or in the language which is spoken daily (such as Marathi), pose a lower cognitive load than those in a language that is not spoken on a daily basis but only used for reading (such as Sanskrit). The theoretical basis for the research problem is as follows: among the three main schools of language science in ancient India, the Vaiyākaraṇas (Grammarians) hold that corrupt word forms do have their own expressive power since they convey meaning, whereas the Mimāṁsakas (the Exegetes) and the Naiyāyikas (the Logicians) believe that corrupt forms can only convey meaning indirectly, by recalling their association and similarity with the correct forms. The grammarians argue that the vernaculars born of a speaker's inability to speak proper Sanskrit are regarded as degenerate versions or fallen forms of the 'divine' Sanskrit language, and speakers who could use proper Sanskrit or the standard language were considered Śiṣṭa ('elite'). The different ideas of the different schools strictly adhere to their textual dispositions. For the last few years, sociolinguists have agreed that no variety of language is inherently better than any other; they are all the same as long as they serve the needs of the people that use them. Although the standard form of a language may offer speakers some advantages, the non-standard variety is considered the most natural style of speaking. This is visible in the results. If the incorrect word forms had incurred the recall of the correct word forms in the reader, as the theory suggests, this would have added one extra step in the process of sentential cognition, leading to more cognitive load and less accuracy. This has not been the case for the Marathi language. Although speaking and listening to the vernaculars is common practice and reading the vernacular is not, Marathi readers readily and accurately comprehended the incorrect word forms in the sentences, as against the Sanskrit readers. The primary reason is that Sanskrit is spoken and read only in the standard form, and vernacular forms of Sanskrit are not found in the conversational data.
Keywords: experimental sociolinguistics, grammaticality and meaningfulness, Marathi, Sanskrit
Procedia PDF Downloads 126
3485 Synthesis and Characterization of Fibrin/Polyethylene Glycol-Based Interpenetrating Polymer Networks for Dermal Tissue Engineering
Authors: O. Gsib, U. Peirera, C. Egles, S. A. Bencherif
Abstract:
In skin regenerative medicine, one of the critical issues is to produce a three-dimensional scaffold with optimized porosity for dermal fibroblast infiltration and neovascularization, which exhibits high mechanical properties and displays sufficient wound healing characteristics. In this study, we report on the synthesis and characterization of macroporous sequential interpenetrating polymer networks (IPNs) combining the skin wound healing properties of fibrin with the excellent physical properties of polyethylene glycol (PEG). Fibrin fibers serve as a provisional biologically active network to promote cell adhesion and proliferation, while PEG provides the mechanical stability to maintain the entire 3D construct. After having modified both PEG and serum albumin (used for promoting enzymatic degradability) by adding methacrylate residues (PEGDM and SAM, respectively), Fibrin/PEGDM-SAM sequential IPNs were synthesized as follows: macroporous sponges were first produced from PEGDM-SAM hydrogels by a freeze-drying technique and then rehydrated by adding the fibrin precursors. Environmental Scanning Electron Microscopy (ESEM) and Confocal Laser Scanning Microscopy (CLSM) were used to characterize their microstructure. Human dermal fibroblasts were cultivated for one week in the constructs, and different cell culture parameters (viability, morphology, proliferation) were evaluated. Subcutaneous implantations of the scaffolds were conducted on five-week-old male nude mice to investigate their biocompatibility in vivo. We successfully synthesized interconnected and macroporous Fibrin/PEGDM-SAM sequential IPNs. The viability of primary dermal fibroblasts was well maintained (above 90%) after 2 days of culture. Cells were able to adhere, spread, and proliferate in the scaffolds, suggesting the suitable porosity and intrinsic biological properties of the constructs. The fibrin network adopted a spider-web shape that partially covered the pores, allowing easier cell infiltration into the macroporous structure. To further characterize the in vitro cell behavior, cell proliferation (EdU incorporation, MTS assay) is being studied. Preliminary histological analysis of the animal studies indicated the persistence of the hydrogels even one month post-implantation and confirmed the absence of an inflammation response and the good biocompatibility and biointegration of our scaffolds within the surrounding tissues. These results suggest that our Fibrin/PEGDM-SAM IPNs could be considered as potential candidates for dermis regenerative medicine. Histological analysis will be completed to further assess scaffold remodeling, including de novo extracellular matrix protein synthesis and early-stage angiogenesis. Compression measurements will be conducted to investigate the mechanical properties.
Keywords: fibrin, hydrogels for dermal reconstruction, polyethylene glycol, semi-interpenetrating polymer network
Procedia PDF Downloads 236
3484 Resisting Adversarial Assaults: A Model-Agnostic Autoencoder Solution
Authors: Massimo Miccoli, Luca Marangoni, Alberto Aniello Scaringi, Alessandro Marceddu, Alessandro Amicone
Abstract:
The susceptibility of deep neural networks (DNNs) to adversarial manipulations is a recognized challenge within the computer vision domain. Adversarial examples, crafted by adding subtle yet malicious alterations to benign images, exploit this vulnerability. Various defense strategies have been proposed to safeguard DNNs against such attacks, stemming from diverse research hypotheses. Building upon prior work, our approach involves the utilization of autoencoder models. Autoencoders, a type of neural network, are trained to learn representations of the training data and to reconstruct inputs from these representations, typically minimizing reconstruction errors such as the mean squared error (MSE). Our autoencoder was trained on a dataset of benign examples, learning features specific to them. Consequently, when presented with significantly perturbed adversarial examples, the autoencoder exhibited high reconstruction errors. The architecture of the autoencoder was tailored to the dimensions of the images under evaluation. We considered various image sizes, constructing the models differently for 256x256 and 512x512 images. Moreover, the choice of the computer vision model is crucial, as most adversarial attacks are designed with specific AI structures in mind. To mitigate this, we proposed a method to replace image-specific dimensions with a structure independent of both the dimensions and the neural network models, thereby enhancing robustness. Our multi-modal autoencoder reconstructs the spectral representation of images across the red-green-blue (RGB) color channels. To validate our approach, we conducted experiments using diverse datasets and subjected them to adversarial attacks using models such as ResNet50 and ViT_L_16 from the torchvision library. The autoencoder extracted features used in a classification model, resulting in an MSE (RGB) of 0.014, a classification accuracy of 97.33%, and a precision of 99%.
Keywords: adversarial attacks, malicious images detector, binary classifier, multimodal transformer autoencoder
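A hedged sketch of the core detection idea described above: an autoencoder trained only on benign images flags inputs whose reconstruction error (MSE) is unusually high. The image size, the random stand-in data, and the percentile threshold are assumptions for illustration, not the paper's configuration.

```python
# Sketch only: reconstruction-error-based flagging of suspicious inputs.
import numpy as np
import tensorflow as tf

side = 32                                     # stand-in for the 256x256 / 512x512 cases in the paper
benign = np.random.rand(500, side * side).astype("float32")

inputs = tf.keras.Input(shape=(side * side,))
encoded = tf.keras.layers.Dense(128, activation="relu")(inputs)
decoded = tf.keras.layers.Dense(side * side, activation="sigmoid")(encoded)
autoencoder = tf.keras.Model(inputs, decoded)
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(benign, benign, epochs=5, batch_size=32, verbose=0)

def reconstruction_error(x):
    recon = autoencoder.predict(x, verbose=0)
    return np.mean((x - recon) ** 2, axis=1)

# Threshold taken from the benign distribution (here the 99th percentile, an assumption);
# inputs above it are treated as likely adversarial.
threshold = np.percentile(reconstruction_error(benign), 99)
suspect = np.random.rand(10, side * side).astype("float32") + 0.3   # crude stand-in for perturbed images
print(reconstruction_error(suspect) > threshold)
```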
Procedia PDF Downloads 112
3483 Transboundary Pollution after Natural Disasters: Scenario Analyses for Uranium at Kyrgyzstan-Uzbekistan Border
Authors: Fengqing Li, Petra Schneider
Abstract:
Failure of tailings management facilities (TMF) for radioactive residues is an enormous challenge worldwide and can result in major catastrophes. Particularly in transboundary regions, such failure is most likely to lead to international conflict. This risk occurs in Kyrgyzstan and Uzbekistan, where the current major challenge is the quantification of impacts due to pollution from uranium legacy sites, and especially the impact on river basins after natural hazards (i.e., landslides). By means of GoldSim, a probabilistic simulation model, the amount of tailings material that flows into the river network of Mailuu Suu in Kyrgyzstan after a pond failure was simulated for three scenarios, namely 10%, 20%, and 30% of material input. Based on the Muskingum-Cunge flood routing procedure, the peak value of the uranium flood wave along the river network was simulated. Among the 23 TMF, 19 ponds are close to the river network. The spatiotemporal distributions of uranium along the river network were then simulated for all 19 ponds under the three scenarios. Taking TP7, which is 30 km from the Kyrgyzstan-Uzbekistan border, as one example, the uranium concentration decreased continuously along the longitudinal gradient of the river network; uranium was observed at the border 45 min after the pond failure, and the highest value was detected after 69 min. The highest concentrations of uranium at the border were 16.5, 33, and 47.5 mg/L under the scenarios of 10%, 20%, and 30% of material input, respectively. In comparison to the guideline value for uranium in drinking water (i.e., 30 µg/L) provided by the World Health Organization, the observed concentrations at the border were 550‒1583 times higher. In order to mitigate the transboundary impact of a radioactive pollutant release, an integrated framework consisting of three major strategies is proposed. Among these, the short-term strategy can be used in case of an emergency event, the medium-term strategy allows both countries to handle the TMF efficiently based on a benefit-sharing concept, and the long-term strategy intends to rehabilitate the site through the relocation of all TMF.
Keywords: Central Asia, contaminant transport modelling, radioactive residue, transboundary conflict
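To make the routing step concrete, the following is an illustrative sketch of classical Muskingum routing of a pollutant pulse along a single river reach; the storage constant K, weighting factor X, time step, and triangular inflow pulse are invented values, not the study's calibrated parameters.

```python
# Sketch only: Muskingum routing O[t] = C0*I[t] + C1*I[t-1] + C2*O[t-1].
import numpy as np

K, X, dt = 0.8, 0.2, 0.5            # storage constant [h], weighting factor [-], time step [h]
denom = 2 * K * (1 - X) + dt
C0 = (dt - 2 * K * X) / denom
C1 = (dt + 2 * K * X) / denom
C2 = (2 * K * (1 - X) - dt) / denom  # C0 + C1 + C2 == 1

# Hypothetical inflow pulse at the failed pond (e.g. uranium load, arbitrary units)
t = np.arange(0, 12, dt)
inflow = np.interp(t, [0, 1, 3, 12], [0, 0, 50, 0])

outflow = np.zeros_like(inflow)
for i in range(1, len(t)):
    outflow[i] = C0 * inflow[i] + C1 * inflow[i - 1] + C2 * outflow[i - 1]

peak_idx = int(np.argmax(outflow))
print(f"peak attenuated from {inflow.max():.1f} to {outflow[peak_idx]:.1f}, "
      f"arriving {t[peak_idx] - t[int(np.argmax(inflow))]:.2f} h later")
```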
Procedia PDF Downloads 118
3482 An Approach to Autonomous Drones Using Deep Reinforcement Learning and Object Detection
Authors: K. R. Roopesh Bharatwaj, Avinash Maharana, Favour Tobi Aborisade, Roger Young
Abstract:
Presently, there are few cases of complete automation of drones and their allied intelligence capabilities. In essence, the potential of the drone has not yet been fully utilized. This paper presents feasible methods to build an intelligent drone with smart capabilities such as self-driving and obstacle avoidance. It does this through advanced reinforcement learning techniques and performs object detection using the latest algorithms, which are capable of processing lightweight models with fast training in real time. For the scope of this paper, after researching and comparing the various algorithms, we implemented the Deep Q-Network (DQN) algorithm in the AirSim simulator. In future work, we plan to implement more advanced self-driving and object detection algorithms; we also plan to implement voice-based speech recognition for the entire drone operation, which would provide an option of speech communication between users (people) and the drone in times of unavoidable circumstances, thus making drones interactive, intelligent, robotic voice-enabled service assistants. The proposed drone has a wide scope of usability and is applicable in scenarios such as disaster management, air transport of essentials, agriculture, manufacturing, monitoring people's movements in public areas, and defense. Also discussed is drone communication based on satellite broadband Internet technology for faster computation and seamless communication service, providing an uninterrupted network during disasters and remote-location operations. This paper explains the feasible algorithms required to achieve this goal and serves mainly as a reference for future researchers going down this path.
Keywords: convolution neural network, natural language processing, obstacle avoidance, satellite broadband technology, self-driving
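A compact sketch of the Deep Q-Network update loop referred to above, with a random toy environment standing in for the AirSim simulator; the state size, four-action set, reward rule, and all hyperparameters are assumptions made purely for illustration.

```python
# Sketch only: DQN with replay buffer, epsilon-greedy actions, and a target network.
import random
from collections import deque
import numpy as np
import tensorflow as tf

n_state, n_action, gamma = 6, 4, 0.95

def build_qnet():
    return tf.keras.Sequential([
        tf.keras.layers.Dense(32, activation="relu", input_shape=(n_state,)),
        tf.keras.layers.Dense(n_action),                 # one Q-value per action
    ])

q_net, target_net = build_qnet(), build_qnet()
target_net.set_weights(q_net.get_weights())
q_net.compile(optimizer="adam", loss="mse")
replay = deque(maxlen=1000)

for step in range(200):
    s = np.random.rand(n_state)                          # stand-in for a sensor/depth observation
    if random.random() < 0.2:                            # epsilon-greedy exploration
        a = random.randrange(n_action)
    else:
        a = int(q_net.predict(s[None], verbose=0).argmax())
    r = -1.0 if a == 0 else 0.1                          # toy reward: penalise a "collision-prone" action
    s_next, done = np.random.rand(n_state), False
    replay.append((s, a, r, s_next, done))

    if len(replay) >= 32:
        batch = random.sample(list(replay), 32)
        S = np.array([b[0] for b in batch]); S2 = np.array([b[3] for b in batch])
        targets = q_net.predict(S, verbose=0)
        q_next = target_net.predict(S2, verbose=0).max(axis=1)
        for i, (_, a_i, r_i, _, d_i) in enumerate(batch):
            targets[i, a_i] = r_i + (0.0 if d_i else gamma * q_next[i])   # Bellman target
        q_net.fit(S, targets, epochs=1, verbose=0)
    if step % 50 == 0:
        target_net.set_weights(q_net.get_weights())      # periodic target-network sync
```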
Procedia PDF Downloads 251
3481 Analyzing the Sound of Space - The Glissando of the Planets and the Spiral Movement on the Sound of Earth, Saturn and Jupiter
Authors: L. Tonia, I. Daglis, W. Kurth
Abstract:
The sound of the universe creates an affinity with the sounds of music. The analysis of the sound of space focuses on the existence of tone material, the microstructure and macrostructure, and the form of the sound, through the signals recorded during the flights of the Van Allen Probes and the Cassini mission. The sound derives from frequencies that belong to electromagnetic waves. The Plasma Wave Science instrument and the Electric and Magnetic Field Instrument Suite and Integrated Science (EMFISIS) recorded the signals from space. A transformation of those signals to audio gave the opportunity to study and analyze the sound. Because a musical tone pitch has a frequency, and every electromagnetic wave produces a frequency too, the creation of a musical score that appears as the sound of space can give information about the form, the symmetry, and the harmony of the sound. The conversion of space radio emissions to audio provides a number of tone pitches corresponding to the original frequencies. Through the processing of these sounds, we have the opportunity to present a music score "composed" by space. In this score, we can see some basic features associated with musical form, the structure, the tonal center of the musical material, and the construction and deconstruction of the sound. The structure, which was built through a harmonic world, includes tonal centers, major and minor scales, sequences of chords, and types of cadences. The form of the sound represents the symmetry of a spiral movement, not only in its micro-structural but also in its macro-structural shape. Multiple glissando sounds appear in linear and polyphonic processes of the sound found in the magnetic fields around Earth, Saturn, and Jupiter, and a spiral movement also appears in the spectrogram of the sound. Whistlers, auroral kilometric radiation, and chorus emissions reveal movements similar to musical excerpts from works by contemporary composers such as Sofia Gubaidulina, Iannis Xenakis, and Einojuhani Rautavaara.
Keywords: space sound analysis, spiral, space music, analysis
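As a toy illustration of the signal-to-audio conversion described above, the sketch below renders a synthetic whistler-like descending frequency sweep (a glissando) as a WAV file; the sweep shape, duration, and sampling rate are invented and are not taken from the Van Allen Probes or Cassini data.

```python
# Sketch only: render a frequency sweep as audio so its "musical" shape can be heard.
import numpy as np
from scipy.io import wavfile

fs, duration = 44100, 3.0
t = np.linspace(0, duration, int(fs * duration), endpoint=False)

# Assume the emission sweeps from 6 kHz down to 500 Hz (a whistler-like descending glissando)
freq = 6000 * np.exp(-t) + 500
phase = 2 * np.pi * np.cumsum(freq) / fs        # integrate frequency to obtain phase
audio = 0.5 * np.sin(phase)

wavfile.write("whistler_sketch.wav", fs, (audio * 32767).astype(np.int16))
print("wrote whistler_sketch.wav; the pitch falls smoothly like a glissando")
```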
Procedia PDF Downloads 177
3480 Using Artificial Neural Networks for Optical Imaging of Fluorescent Biomarkers
Authors: K. A. Laptinskiy, S. A. Burikov, A. M. Vervald, S. A. Dolenko, T. A. Dolenko
Abstract:
The article presents the results of applying artificial neural networks to separate the fluorescent contribution of nanodiamonds, used as biomarkers, adsorbents, and drug carriers in biomedicine, from the fluorescent background of intrinsic biological fluorophores. The principal possibility of solving this problem is shown. The neural network architecture used made it possible to detect the fluorescence of nanodiamonds against the background autofluorescence of egg white with high accuracy, better than 3 µg/ml.
Keywords: artificial neural networks, fluorescence, data aggregation, biomarkers
Procedia PDF Downloads 710
3479 Linearization and Process Standardization of Construction Design Engineering Workflows
Authors: T. R. Sreeram, S. Natarajan, C. Jena
Abstract:
Civil engineering construction is a network of tasks involving varying degrees of complexity, and streamlining and standardization are the only way to establish a systematic approach to design. While there are off-the-shelf tools such as AutoCAD that play a role in the realization of a design, the repeatable process in which these tools are deployed is often ignored. The present paper addresses this challenge through a sustainable design process and effective standardization at all stages of the design workflow. This is demonstrated through a case study in the context of construction, and further improvement points are highlighted.
Keywords: system, lean, value stream, process improvement
Procedia PDF Downloads 123
3478 Analysis of Biomarkers Intractable Epileptogenic Brain Networks with Independent Component Analysis and Deep Learning Algorithms: A Comprehensive Framework for Scalable Seizure Prediction with Unimodal Neuroimaging Data in Pediatric Patients
Authors: Bliss Singhal
Abstract:
Epilepsy is a prevalent neurological disorder affecting approximately 50 million individuals worldwide and 1.2 million Americans. There are millions of pediatric patients with intractable epilepsy, a condition in which seizures fail to come under control. The occurrence of seizures can result in physical injury, disorientation, unconsciousness, and additional symptoms that could impede children's ability to participate in everyday tasks. Predicting seizures can help parents and healthcare providers take precautions, prevent risky situations, and mentally prepare children to minimize the anxiety and nervousness associated with the uncertainty of a seizure. This research proposes a comprehensive framework to predict seizures in pediatric patients by evaluating machine learning algorithms on unimodal neuroimaging data consisting of electroencephalogram signals. Bandpass filtering and independent component analysis proved to be effective in reducing the noise and artifacts in the dataset. The performance of various machine learning algorithms is evaluated on important metrics such as accuracy, precision, specificity, sensitivity, F1 score, and MCC. The results show that the deep learning algorithms are more successful in predicting seizures than logistic regression and k-nearest neighbors. The recurrent neural network (RNN) gave the highest precision and F1 score, long short-term memory (LSTM) outperformed the RNN in accuracy, and the convolutional neural network (CNN) resulted in the highest specificity. This research has significant implications for healthcare providers in proactively managing seizure occurrence in pediatric patients, potentially transforming clinical practices and improving pediatric care.
Keywords: intractable epilepsy, seizure, deep learning, prediction, electroencephalogram channels
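A minimal sketch of the preprocessing chain named in the abstract, bandpass filtering followed by independent component analysis, applied to a synthetic multichannel EEG segment; the sampling rate, band edges, channel count, and the fake signal are placeholders, not the study's dataset.

```python
# Sketch only: bandpass filter + ICA, producing features a downstream RNN/LSTM/CNN could consume.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import FastICA

fs, n_channels, n_samples = 256, 8, 256 * 10             # 10 s of 8-channel "EEG"
rng = np.random.default_rng(1)
t = np.arange(n_samples) / fs
eeg = rng.normal(0, 1, (n_channels, n_samples)) + np.sin(2 * np.pi * 10 * t)   # noise + 10 Hz rhythm

# 1) Bandpass filter each channel to the band of interest (0.5-40 Hz is an assumed choice)
b, a = butter(4, [0.5, 40], btype="band", fs=fs)
filtered = filtfilt(b, a, eeg, axis=1)

# 2) ICA to separate sources / reject artifact components before classification
ica = FastICA(n_components=n_channels, random_state=0)
sources = ica.fit_transform(filtered.T).T                 # shape: (components, samples)

# 3) Simple per-component features (variance, mean absolute first difference)
features = np.column_stack([sources.var(axis=1),
                            np.abs(np.diff(sources, axis=1)).mean(axis=1)])
print(features.shape)    # (8 components, 2 features each)
```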
Procedia PDF Downloads 84
3477 The Wellness Wheel: A Tool to Reimagine Schooling
Authors: Jennifer F. Moore
Abstract:
The wellness wheel as a tool for school growth and change is currently being piloted by a startup school in Chicago, IL. In this case study, members of the school community engaged in the appreciative inquiry process to plan their organizational development around the wellness wheel. The wellness wheel (comprising physical, emotional, social, spiritual, environmental, cognitive, and financial wellness) is used as a planning tool by teachers, students, parents, and administrators. Through the appreciative inquiry method of change, the community is reflecting on their individual levels of wellness and developing organizational structures to ensure the well-being of children and adults. The goal of the case study is to test the appropriateness of appreciative inquiry (as a method) and the wellness wheel (as a tool) for school growth and development. Findings of the case study will be available by the time of the conference; the research is currently in progress.
Keywords: education, schools, well being, wellness
Procedia PDF Downloads 178
3476 Centralizing the Teaching Process in Intelligent Tutoring System Architectures
Authors: Nikolaj Troels Graf Von Malotky, Robin Nicolay, Alke Martens
Abstract:
A plethora of architectures for ITSs (Intelligent Tutoring Systems) exists. A thorough analysis and comparison of these architectures revealed that in most cases the architecture extensions have grown evolutionarily, reflecting the state-of-the-art trends of each decade. However, from the perspective of software engineering, the main aspect of an ITS has not yet been reflected in any of these architectures. From the perspective of cognitive research, the construction of the teaching process is what makes an ITS 'intelligent' regarding the spectrum of interaction with students. Thus, our approach focuses on a behavior-based architecture, which is built around the main teaching processes. To create a new general architecture for ITSs, we have to define the prerequisites. This paper analyzes the current state of the existing architectures and derives rules for the behavior of an ITS. It presents a teaching process for ITSs to be used together with the architecture.
Keywords: intelligent tutoring, ITS, tutoring process, system architecture, interaction process
Procedia PDF Downloads 385
3475 Risk Assessment on Construction Management with "Fuzzy Logic"
Authors: Mehrdad Abkenari, Orod Zarrinkafsh, Mohsen Ramezan Shirazi
Abstract:
Construction projects are initiated in complicated, dynamic environments and, due to the close relationships between project parameters and the unknown outer environment, they face several uncertainties and risks. Success in time, cost, and quality in large-scale construction projects is uncertain as a consequence of technological constraints, the large number of stakeholders, the long durations required, great capital requirements, and poor definition of the extent and scope of the project. Projects that face such environments and uncertainties can be well managed through the application of risk management across the project's life cycle. Although the concept of risk depends on the opinions and ideas of management, it also covers the risk of not achieving the project objectives. Furthermore, project risk analysis addresses the risk of developing inappropriate responses. Since the evaluation and prioritization of construction projects is a difficult task, the network structure is considered an appropriate approach to analyze complex systems; therefore, we have used this structure for analyzing and modeling the issue. On the other hand, we face inadequate data in deterministic circumstances, and in addition, experts' opinions are usually mathematically vague and are introduced in the form of linguistic variables instead of numerical expressions. Since fuzzy logic is used for expressing vagueness and uncertainty, the formulation of experts' opinions in the form of fuzzy numbers can be an appropriate approach. In other words, the evaluation and prioritization of construction projects on the basis of risk factors in the real world is a complicated issue with many ambiguous qualitative characteristics. In this study, the risk parameters and factors in construction management are evaluated and prioritized with a fuzzy logic approach combining three methods: DEMATEL (Decision-Making Trial and Evaluation Laboratory), ANP (Analytic Network Process), and TOPSIS (Technique for Order Preference by Similarity to Ideal Solution).
Keywords: fuzzy logic, risk, prioritization, assessment
Procedia PDF Downloads 594
3474 Liver Transplantation after Downstaging with Electrochemotherapy of Large Hepatocellular Carcinoma and Portal Vein Tumor Thrombosis: A Case Report
Authors: Luciano Tarantino, Emanuele Balzano, Aurelio Nasto
Abstract:
S.R., 53 years. January 2009: HCV-related cirrhosis, Child-Pugh class A5; EGDS showed no oesophageal varices. No important comorbidities. Treated with PEG-IFN + ribavirin (March-November 2009) with subsequent sustained virologic response; HCV-RNA absent over time. October 2016: CT detected a small HCC nodule in segment VIII (diameter = 12 mm), treated with US-guided RF ablation. November 2016: CT showed complete necrosis. Unfortunately, the patient dropped out of US and CT follow-up controls. September 2018: asthenia and weight loss. CT showed a large tumor infiltrating segments V-VII-VI and complete portal vein tumor thrombosis (PVTT) of the right portal vein and its branches. Surgical consultation excluded indication for liver resection and OLT. 23 October 2018: ECT of a large peri-hilar area of the tumor, including the PVTT. CT at 1 and 3 months post-treatment showed complete necrosis and retraction of the thrombus, with residual viable tumor in the peripheral portion of the right lobe. Therefore, the patient was re-evaluated for OLT and considered eligible for the waiting list. March 2019: CT showed no perihilar or portal vein recurrence and distant progression in the right lobe. March 2019: trans-arterial radiotherapy (TARE) of the right lobe. Post-treatment CT demonstrated no perihilar or portal vein recurrence and extensive necrosis of the residual tumor. December 2019: CT demonstrated several recurrences of HCC infiltrating segments VI and VII; however, no recurrence was observed at the hepatic hilum or in the portal vessels. Therefore, in February 2020 the patient received OLT. At 44 months of follow-up, no complication, recurrence, or liver dysfunction has been observed.
Keywords: hepatocellular carcinoma, portal vein tumor thrombosis, interventional ultrasound, liver tumor ablation, liver transplantation
Procedia PDF Downloads 67
3473 Tram Track Deterioration Modeling
Authors: Mohammad Yousefikia, Sara Moridpour, Ehsan Mazloumi
Abstract:
Perceiving track geometry deterioration decisively influences the optimization of track maintenance operations. The effective management of this deterioration, for an increasingly utilized system with limited financial resources, is a significant challenge. This paper provides a review of degradation models relevant to railroad tracks. Furthermore, due to the lack of long-term information on the condition development of tram infrastructure, it presents the methodology that will be used to derive degradation models from data on the Melbourne tram network.
Keywords: deterioration modeling, asset management, railway, tram
Procedia PDF Downloads 379
3472 Building a Parametric Link between Mapping and Planning: A Sunlight-Adaptive Urban Green System Plan Formation Process
Authors: Chenhao Zhu
Abstract:
Quantitative mapping is playing a growing role in guiding urban planning, for example using a heat map created by CFX, CFD2000, or Envi-met to adjust the master plan. However, there is no effective quantitative link between such mappings and plan formation. So, in many cases, decision-making is still based on the planner's subjective interpretation and understanding of these mappings, which limits the improvement in scientific rigor and accuracy that quantitative mapping could bring. Therefore, in this paper, an effort has been made to provide a methodology for building a parametric link between mapping and plan formation. A parametric planning process based on radiant mapping is proposed for creating an urban green system. In the first step, a script is written in Grasshopper to build a road network and form the blocks, while the Ladybug plug-in is used to conduct a radiant analysis in the form of mapping. Then, the research creatively transforms the radiant mapping from a polygon into a data point matrix, because a polygon is hard to engage in the design formation. Next, another script is created to select the main green spaces from the road network based on the criterion of radiant intensity and to connect the green spaces' central points to generate a green corridor. After that, a control parameter is introduced to adjust the corridor's form based on the radiant intensity. Finally, a green system containing green spaces and a green corridor is generated under the quantitative control of the data matrix. The designer only needs to modify the control parameter according to the relevant research results and actual conditions to realize the optimization of the green system. This method can also be applied to many other mapping-based analyses, such as wind environment analysis, thermal environment analysis, and even environmental sensitivity analysis. The parameterized link between mapping and planning will bring about more accurate, objective, and scientific planning.
Keywords: parametric link, mapping, urban green system, radiant intensity, planning strategy, grasshopper
Procedia PDF Downloads 142
3471 Investigation of Delivery of Triple Play Data in GE-PON Fiber to the Home Network
Authors: Ashima Anurag Sharma
Abstract:
Optical fiber based networks can deliver performance that supports the increasing demand for high-speed connections. One of the new technologies that has emerged in recent years is the Passive Optical Network. This research paper aims to show the simultaneous delivery of triple play services (data, voice, and video). A comparison between various data rates is presented. It is demonstrated that as the data rate increases, the number of users that can be supported decreases due to an increase in the bit error rate.
Keywords: BER, PON, TDMPON, GPON, CWDM, OLT, ONT
Procedia PDF Downloads 527
3470 Collaboration between Dietician and Occupational Therapist Promotes Independent Functional Eating in the Tube Weaning Process of Mechanically Ventilated Patients
Authors: Inbal Zuriely, Yonit Weiss, Hilla Zaharoni, Hadas Lewkowicz, Tatiana Vander, Tarif Bader
Abstract:
Early active movement, along with adjusting optimal nutrition, prevents the aggravation of muscle degeneration and functional decline. Eating is a basic activity of daily life, which reflects the patient's independence. When eating and feeding are experienced successfully, they lead to a sense of pleasure and satisfaction. However, when they are experienced as a difficulty, they might evoke feelings of helplessness and frustration. This underscores how essential the process of gradual weaning off the enteral feeding tube is. This work describes the collaboration of a dietitian, who determines the nutritional needs of patients undergoing enteral tube weaning as part of the rehabilitation process, with the tailored treatment of an occupational therapist. Occupational therapy intervention regarding eating capabilities focuses on improving the required motor and cognitive components, along with environmental adjustments and aids, and on imparting eating strategies and training to patients and their families. The project was conducted in the long-term ventilated patients' department at the Herzfeld Rehabilitation Geriatric Medical Center on patients undergoing enteral tube weaning with the staff's assistance, establishing continuous collaboration between the dietician and the occupational therapist from the beginning of the feeding-tube weaning process: 1. The dietician updates the occupational therapist about the start of the process and the approved diet. 2. The occupational therapist performs cognitive, motor, and functional assessments and treatments regarding the patient's eating capabilities and recommends the required adjustments for independent eating according to the FIM (Functional Independence Measure) scale. 3. The occupational therapist closely follows up on the patient's degree of independence in eating and provides repeated updates to the dietician. 4. The dietician accordingly guides the ward staff on whether and how to feed the patient or allow independent eating. The project aimed to promote patients toward independent feeding, which leads to a sense of empowerment, enjoyment of the eating experience, and progress in functional ability, along with performing active movements that motivate mobilization. From the beginning of 2022, 26 patients participated in the project. 79% of all patients who started the weaning process from tube feeding achieved different levels of independence in feeding (independence levels ranged from supervision (FIM-5) to complete independence (FIM-7)). The integration of occupational therapy and dietary treatment is based on a patient-centered approach that considers the patient's personal needs, preferences, and goals. This interdisciplinary partnership is essential for meeting the complex needs of prolonged mechanically ventilated patients and promotes independent functioning and quality of life.
Keywords: dietary, mechanical ventilation, occupational therapy, tube feeding weaning
Procedia PDF Downloads 78
3469 Methodology for the Integration of Object Identification Processes in Handling and Logistic Systems
Authors: L. Kiefer, C. Richter, G. Reinhart
Abstract:
The rising complexity in production systems, due to an increasing number of variants up to customer-innovated products, leads to requirements that hierarchical control systems are not able to fulfil. Therefore, factory planners can install autonomous manufacturing systems. The fundamental requirement for autonomous control is the identification of objects within production systems. This approach focuses on attribute-based identification in order to avoid quantity-dependent identification costs. Instead of using an identification mark (ID) such as a radio frequency identification (RFID) tag, an object type is directly identified by its attributes. To facilitate that, it is recommended to include the identification and the corresponding sensors within handling processes, which connect all manufacturing processes and therefore ensure a high identification rate and reduce blind spots. The presented methodology reduces the individual effort needed to integrate identification processes into handling systems. First, suitable object attributes and sensor systems for object identification in a production environment are defined. By categorising these sensor systems as well as the handling systems, it is possible to match them universally within a compatibility matrix. Based on that compatibility, further requirements such as identification time are analysed, which decide whether the combination of handling and sensor system is well suited for parallel handling and identification within an autonomous control. First investigations, based on an analysis of a list of more than a thousand possible attributes, have shown that five main characteristics (weight, form, colour, amount, and position of sub-attributes such as drillings) are sufficient for an integrable identification. This knowledge limits the variety of identification systems and leads to a manageable complexity within the selection process. Besides the procedure, several tools, for example a sensor pool, are presented. These tools include the generated specific expert knowledge and simplify the selection. The primary tool is a pool of preconfigured identification processes depending on the chosen combination of sensor and handling device. By following the defined procedure and using the created tools, even laypeople from other scientific fields can choose an appropriate combination of handling devices and sensors that enables parallel handling and identification.
Keywords: agent systems, autonomous control, handling systems, identification
Procedia PDF Downloads 177
3468 Spatial Conceptualization in French and Italian Speakers: A Contrastive Approach in the Context of the Linguistic Relativity Theory
Authors: Camilla Simoncelli
Abstract:
The connection between language and cognition has been one of the main interests of linguistics for several years. According to the Sapir-Whorf linguistic relativity theory, the way we perceive reality depends on the language we speak, which in turn plays a central role in human cognition. This paper is in line with that research tradition, with the aim of analyzing how language structures bear on our cognitive abilities even in the description of space, which is generally considered a natural and universal human domain. The main objective is to identify the differences in the encoding of spatial inclusion relationships between French and Italian speakers, providing evidence that significant variation exists at various levels even between two similar systems. Starting from the constitution of a corpus, the first step of the study was to establish the relevant complex prepositions marking an inclusion relation in French and Italian: au centre de, au cœur de, au milieu de, au sein de, à l'intérieur de and the opposition entre/parmi in French; al centro di, al cuore di, nel mezzo di, in seno a, all'interno di and the fra/tra contrast in Italian. These prepositions were classified on the basis of the type of noun following them (e.g., mass nouns, concrete nouns, abstract nouns, body-part nouns, etc.), following the collostructional analysis of lexemes, with the purpose of analyzing the preferred construction of each preposition by comparing the relations construed. Comparing the Italian and the French results, it has been possible to define the degree of representativeness of each target noun for each preposition studied. Lexicostatistics and statistical association measures give the values of attraction or repulsion between lexemes and a given preposition, highlighting which words are over-represented or under-represented in a specific context compared to the expected results. For instance, a noun such as Dibattiti has a negative value for the Italian al cuore di (-1.91), but it has a strong positive representativeness for the corresponding French au cœur de (+677.76). The value, positive or negative, is the result of a hypergeometric distribution law which reflects the current use of some relevant nouns in relations of spatial inclusion by French and Italian speakers. Differences in the kind of location conceptualization denote syntactic and semantic constraints based on spatial features as well as on linguistic peculiarities. The aim of this paper is to demonstrate that the domain of spatial relations is basic to human experience and is linked to universally shared perceptual mechanisms which create mental representations depending on the language used. Therefore, linguistic coding strongly correlates with the way spatial distinctions are conceptualized for non-verbal tasks, even in close language systems like Italian and French.
Keywords: cognitive semantics, cross-linguistic variations, locational terms, non-verbal spatial representations
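For readers unfamiliar with the measure, the following sketch shows how a hypergeometric (Fisher-type) attraction/repulsion score of the kind reported above can be computed from corpus counts; the counts in the example are invented and do not come from the study's corpora.

```python
# Sketch only: collostructional attraction/repulsion from a hypergeometric distribution.
from scipy.stats import hypergeom
import math

def collostruction_strength(k, n_construction, n_noun, corpus_size):
    """k = noun tokens inside the construction, n_construction = construction tokens,
    n_noun = noun tokens in the whole corpus, corpus_size = all tokens."""
    expected = n_construction * n_noun / corpus_size
    if k >= expected:    # over-represented: P(X >= k), reported as a positive strength
        p = hypergeom.sf(k - 1, corpus_size, n_noun, n_construction)
        return -math.log10(p)
    else:                # under-represented: P(X <= k), reported as a negative strength
        p = hypergeom.cdf(k, corpus_size, n_noun, n_construction)
        return math.log10(p)

# Hypothetical example in the spirit of the Dibattiti / "au coeur de" contrast discussed above
print(collostruction_strength(k=8, n_construction=2000, n_noun=900, corpus_size=1_000_000))   # attraction
print(collostruction_strength(k=0, n_construction=2000, n_noun=900, corpus_size=1_000_000))   # repulsion
```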
Procedia PDF Downloads 113
3467 Speech Community and Social Language Codes: A Sociolinguistic Study of Mampruli-English Codeswitching in Nalerigu, Ghana
Authors: Gertrude Yidanpoa Grumah
Abstract:
Ghana boasts a rich linguistic diversity, with around eighty-seven indigenous languages coexisting with English, the official language. Within this multilingual environment, speech communities adopt bilingual code choices as a common practice, as people seamlessly switch between Ghanaian languages and English. Extensive research has delved into this phenomenon from various perspectives, including the role of bilingual code choices in teaching, their implications for language policy, and their significance in multilingual communities. Yet, a noticeable gap in the literature persists, with most studies focusing on codeswitching between English and the major southern Ghanaian languages like Twi, Ga, and Ewe. The intricate dynamics of codeswitching with minority indigenous languages, such as Mampruli spoken in northern Ghana, remain largely unexplored. This thesis embarks on an investigation into Mampruli-English codeswitching, delving into the linguistic practices of educated Mampruli speakers. The data collection methods encompass interviews, recorded radio programs, and ethnographic observation. The analytical framework draws upon the ethnography of communication, with observation notes and transcribed interviews classified into discernible themes. The research findings suggest that a bilingual's tendency to switch from Mampruli to English is significantly influenced by factors such as level of education, age, gender, perceptions of language prestige, and religious beliefs. In essence, this study represents a pioneering endeavor, marking the first comprehensive study of codeswitching practices within the Mampruli-English context and making a significant contribution to our understanding of Mampruli linguistics, covering the social language codes that reflect the speech community. In a region where such research has been scarce for the past four decades, this study addresses a critical knowledge gap, shedding light on the intricate dynamics of language use in northern Ghana.
Keywords: codeswitching, English, ethnography of communication, Mampruli, sociolinguistics
Procedia PDF Downloads 63
3466 Emotions Aroused by Children’s Literature
Authors: Catarina Maria Neto da Cruz, Ana Maria Reis d'Azevedo Breda
Abstract:
Emotions are manifestations of everything that happens around us and consequently influence our actions. People experience emotions continuously: when socializing with friends, when facing complex situations, and when at school, among many other situations. Although the influence of emotions on the teaching and learning process is nothing new, its study in the academic field has become more popular in recent years, distinguishing between positive emotions (e.g., enjoyment and curiosity) and negative emotions (e.g., boredom and frustration). There is no doubt that emotions play an important role in the students' learning process, since the development of knowledge involves thoughts, actions, and emotions. Nowadays, one of the most significant changes in acquiring knowledge, accessing information, and communicating is the way we do it through technological and digital resources. Faced with an increasingly frequent use of technological or digital means for different purposes, whether in the acquisition of knowledge or in communicating with others, the emotions involved in these processes naturally change. The speed with which the Internet provides information reduces the excitement of searching for the answer, the gratification of discovering something through our own effort, patience, the capacity for effort, and resilience. Thus, technological and digital devices are bringing changes to the emotional domain. For this reason, among others, it is essential to educate children from an early age to understand that it is not possible to have everything with just one click and to deal with negative emotions. Currently, many curriculum guidelines highlight the importance of developing so-called soft skills, in which the emotional domain is present, in academic contexts. The technical report "OECD Survey on Social and Emotional Skills", developed by the OECD, is one of them. Within the scope of the Portuguese reality, the "Students' profile by the end of compulsory schooling" and the "Health education reference" also emphasize the importance of emotions in education. There are several resources to stimulate good emotions in articulation with cognitive development. One of the most predictable and least used resources in the most diverse areas of knowledge after pre-school education is literature. Due to its characteristics, whether in the narrative or in the illustrations, literature provides the reader with a journey full of emotions. On the other hand, literature makes it possible to establish bridges between narrative and different areas of knowledge, reconciling the cognitive and emotional domains. This study results from the presentation session of a children's book, entitled "From the Outside to Inside and from the Inside to Outside", to children attending the 2nd, 3rd, and 4th years of basic education in the Portuguese education system. In this book, reason and emotion are in constant dialogue, so in this session, based on excerpts from the book dramatized by the authors, some questions were asked of the children in a large group, with the aim of exploring their perception of certain emotions or of the events that trigger them. In accordance with the aim of this study, qualitative, descriptive, and interpretative research was carried out based on participant observation and audio recordings.
Keywords: emotions, basic education, children, soft skills
Procedia PDF Downloads 84
3465 Feature Engineering Based Detection of Buffer Overflow Vulnerability in Source Code Using Deep Neural Networks
Authors: Mst Shapna Akter, Hossain Shahriar
Abstract:
One of the most important challenges in the field of software code audit is the presence of vulnerabilities in software source code. Every year, more and more software flaws are found, either internally in proprietary code or revealed publicly. These flaws are highly likely to be exploited and can lead to system compromise, data leakage, or denial of service. C and C++ open-source code are now available in order to create a large-scale, machine-learning system for function-level vulnerability identification. We assembled a sizable dataset of millions of open-source functions that point to potential exploits. We developed an efficient and scalable vulnerability detection method based on deep neural network models that learn features extracted from the source code. The source code is first converted into a minimal intermediate representation to remove pointless components and shorten the dependencies. Moreover, we keep the semantic and syntactic information using state-of-the-art word embedding algorithms such as GloVe and fastText. The embedded vectors are subsequently fed into deep learning networks such as LSTM, BiLSTM, LSTM-autoencoder, word2vec, BERT, and GPT-2 to classify the possible vulnerabilities. Furthermore, we propose a neural network model which can overcome issues associated with traditional neural networks. Evaluation metrics such as F1 score, precision, recall, accuracy, and total execution time have been used to measure the performance. We made a comparative analysis between results derived from features containing a minimal text representation and features containing semantic and syntactic information. We found that all of the deep learning models provide comparatively higher accuracy when we use semantic and syntactic information as the features, but they require longer execution time, as the word embedding algorithm adds some complexity to the overall system.
Keywords: cyber security, vulnerability detection, neural networks, feature extraction
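A minimal sketch, not the authors' pipeline, of the embedding-plus-LSTM classification step described above: tokenized C function bodies are embedded and fed to an LSTM that predicts vulnerable versus safe. The two toy snippets, the vocabulary size, and every hyperparameter are placeholders, and a real pipeline would use a code-aware tokenizer plus the intermediate representation mentioned in the abstract.

```python
# Sketch only: token embedding + LSTM binary classifier for source-code snippets.
import numpy as np
import tensorflow as tf

functions = np.array([
    "void f ( char * s ) { char buf [ 8 ] ; strcpy ( buf , s ) ; }",       # classic overflow pattern
    "void g ( char * s ) { char buf [ 8 ] ; strncpy ( buf , s , 7 ) ; }",  # bounded copy
])
labels = np.array([1, 0])     # 1 = vulnerable, 0 = safe

vectorize = tf.keras.layers.TextVectorization(max_tokens=2000, output_sequence_length=40)
vectorize.adapt(functions)
X = vectorize(functions)      # integer token sequences

model = tf.keras.Sequential([
    tf.keras.layers.Embedding(2000, 64),   # in the paper this slot is word2vec/fastText/GloVe/BERT features
    tf.keras.layers.LSTM(64),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, labels, epochs=10, verbose=0)
print(model.predict(X, verbose=0).round(2))
```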
Procedia PDF Downloads 89
3464 Effect of Different Porous Media Models on Drug Delivery to Solid Tumors: Mathematical Approach
Authors: Mostafa Sefidgar, Sohrab Zendehboudi, Hossein Bazmara, Madjid Soltani
Abstract:
Based on findings from clinical applications, most drug treatments fail to eliminate malignant tumors completely, even though drug delivery through systemic administration may inhibit their growth. Therefore, a better understanding of tumor formation is crucial in developing more effective therapeutics. For this purpose, solid tumor modeling and simulation results are nowadays used to predict how therapeutic drugs are transported to tumor cells by blood flow through capillaries and tissues. A solid tumor is investigated as a porous medium for fluid flow simulation. Most studies use the Darcy model for porous media. In the Darcy model, fluid friction is neglected and a few simplifying assumptions are made. In this study, the effect of these assumptions is examined by considering the Brinkman model. A multi-scale mathematical method which calculates fluid flow to a solid tumor is used to investigate how neglecting fluid friction affects the solid tumor simulation. In this work, the mathematical model from our previous studies is developed by considering two momentum equations for porous media: Darcy and Brinkman. The mathematical method involves processes such as fluid flow through the solid tumor as a porous medium, extravasation of blood flow from vessels, blood flow through vessels, solute diffusion, and convective transport in the extracellular matrix. The sprouting angiogenesis model is used for generating the capillary network, and then the fluid flow governing equations are implemented to calculate blood flow through the tumor-induced capillary network. Finally, the two porous media models are used for modeling fluid flow in normal and tumor tissues for three different tumor shapes. Simulations of interstitial fluid transport in a solid tumor demonstrate that the simplifications used in the Darcy model affect the interstitial velocity, and that the Brinkman model predicts a lower interstitial velocity than the Darcy model does.
Keywords: solid tumor, porous media, Darcy model, Brinkman model, drug delivery
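For reference, the two momentum models compared above can be written in their standard textbook forms (the symbols below are the usual ones and are not taken from the paper): the Brinkman equation adds an effective viscous term to Darcy's law, which is why it captures fluid friction that Darcy neglects.

```latex
% v: interstitial (superficial) velocity, p: interstitial pressure,
% \mu: fluid viscosity, \mu_{eff}: effective viscosity, \kappa: tissue permeability.
\begin{aligned}
\text{Darcy:}    \quad & \nabla p = -\frac{\mu}{\kappa}\,\mathbf{v} \\
\text{Brinkman:} \quad & \nabla p = -\frac{\mu}{\kappa}\,\mathbf{v} + \mu_{\mathrm{eff}}\,\nabla^{2}\mathbf{v}
\end{aligned}
```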
Procedia PDF Downloads 306
3463 Consumer Innovativeness and Shopping Styles: An Empirical Study in Turkey
Authors: Hande Begum Bumin Doyduk, Elif Okan Yolbulan
Abstract:
Innovation is very important for the success and competitiveness of countries, as well as of business sectors and individual firms. In order to have successful and sustainable innovations, the other side of the game, consumers, should be aware of the innovations and appreciate them. This study focuses on consumer innovativeness and analyzes the relationship between motivated consumer innovativeness and consumer shopping styles. The motivated consumer innovativeness scale (Vandecasteele & Geuens, 2010) and the consumer shopping styles scale (Sproles & Kendall, 1986) are used. Data are analyzed with the SPSS 20 program through reliability, factor, and correlation analysis. According to the findings of the study, there are strong positive relationships between hedonic innovativeness and recreational shopping style; social innovativeness and brand consciousness; cognitive innovativeness and price consciousness; and functional innovativeness and perfectionistic, high-quality-conscious shopping styles.
Keywords: consumer innovativeness, consumer decision making, shopping styles, innovativeness
Procedia PDF Downloads 431
3462 Analysis of the Occurrence of Hydraulic Fracture Phenomena in Roudbar Lorestan Dam
Authors: Masoud Ghaemi, MohammadJafar Hedayati, Faezeh Yousefzadeh, Hoseinali Heydarzadeh
Abstract:
According to the statistics of the International Commission on Large Dams, internal erosion and piping (scour) are major causes of the destruction of earth-fill dams. If such dams are constructed in narrow valleys, the valley walls will increase the arching of the dam body due to the transfer of vertical and horizontal stresses, so the occurrence of hydraulic fracturing in these embankments is more likely. The Roudbar Dam in Lorestan is a clay-core pebble earth-fill dam constructed in a relatively narrow valley in western Iran. Three years after the onset of impoundment, there has been a decline in the dam's behavior. Evaluation of the dam behavior, based on the data recorded by the instruments installed inside the dam body and foundation, confirms the occurrence of internal erosion in the lower and adjacent parts of the core at the left support (abutment). The phenomenon of hydraulic fracturing is one of the main suspected causes of the onset of internal erosion in this dam. Accordingly, the main objective of this paper is to evaluate the validity of this hypothesis. To do so, the dam behavior during construction and impoundment was first simulated with a three-dimensional numerical model. Then, using validated empirical equations, the safety factor against the occurrence of hydraulic fracturing upstream of the dam core was calculated. Finally, using an artificial neural network, the failure time of the given section was predicted based on the trend of the maximum stress created. The study results show that the steep slopes of the valley walls, sudden changes in the coefficient, and differences in the compressibility properties of the dam body materials have caused considerable stress transfer from the core to the adjacent valley walls, especially at its lower levels. This has resulted in the safety factor for the occurrence of hydraulic fracturing in each of these areas being close to one for each of the empirical equations used.
Keywords: arching, artificial neural network, FLAC3D, hydraulic fracturing, internal erosion, pore water pressure
Procedia PDF Downloads 177
3461 Copula Autoregressive Methodology for Simulation of Solar Irradiance and Air Temperature Time Series for Solar Energy Forecasting
Authors: Andres F. Ramirez, Carlos F. Valencia
Abstract:
The increasing interest in applying renewable energy strategies and the drive to reduce the use of carbon-based energy sources have encouraged the development of novel strategies for integrating solar energy into the electricity network. Correctly including the fluctuating energy output of a photovoltaic (PV) system in an electric grid requires improvements in forecasting and simulation methodologies for solar energy potential, and an understanding not only of the mean value of the series but also of the associated underlying stochastic process. We present a methodology for the synthetic generation of bivariate solar irradiance (shortwave flux) and air temperature time series based on copula functions that represent the cross-dependence and temporal structure of the data. We explore the advantages of this nonlinear time series method over traditional approaches that transform the data to normal distributions as an intermediate step. The use of copulas gives flexibility to reproduce the serial variability of the real data in the simulation and allows more control over the desired properties of the generated data. We use distributions with a discrete mass at zero to capture the nature of solar irradiance, alongside vector generalized linear models for the time-dependent distributions of the bivariate series. We found that the copula autoregressive methodology, including the zero-mass characteristic of the solar irradiance series, yields a significant improvement over state-of-the-art strategies. These results will help to better understand the fluctuating nature of solar energy, characterize the underlying stochastic process, and quantify the potential of integrating a photovoltaic (PV) generating system into a country's electricity network. Experimental analysis and application to real data substantiate the usefulness of the proposed methodology for forecasting solar irradiance time series and solar energy across northern-hemisphere, southern-hemisphere, and equatorial zones. Keywords: copula autoregressive, solar irradiance forecasting, solar energy forecasting, time series generation
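A minimal sketch of the general idea follows: a Gaussian-copula AR(1) process drives the temporal and cross-dependence, and the uniforms are mapped through illustrative marginals with a point mass at zero for irradiance. This is not the vector generalized linear model formulation used by the authors, and all parameter values are assumptions.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 24 * 365          # one year of hourly steps (illustrative)
rho_cross = 0.6       # cross-dependence between irradiance and temperature (assumed)
phi = 0.8             # AR(1) temporal dependence on the latent Gaussian scale (assumed)

# latent bivariate AR(1) Gaussian process with unit marginal variance
cov = np.array([[1.0, rho_cross], [rho_cross, 1.0]])
chol = np.linalg.cholesky(cov)
z = np.zeros((n, 2))
for t in range(1, n):
    eps = chol @ rng.standard_normal(2)
    z[t] = phi * z[t - 1] + np.sqrt(1 - phi**2) * eps

# map to uniforms (the copula) and then to illustrative marginals
u = stats.norm.cdf(z)
p0 = 0.5  # probability of zero irradiance (night / heavy cloud), assumed
q = np.maximum((u[:, 0] - p0) / (1 - p0), 0.0)
irradiance = np.where(u[:, 0] < p0, 0.0,
                      stats.gamma.ppf(q, a=2.0, scale=150.0))        # W/m^2, assumed marginal
temperature = stats.norm.ppf(u[:, 1], loc=15.0, scale=8.0)           # deg C, assumed marginal

print("share of zero-irradiance hours:", np.mean(irradiance == 0.0))
print("sample cross-correlation:", np.corrcoef(irradiance, temperature)[0, 1])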
Procedia PDF Downloads 323
3460 The Effect of Newspaper Reporting on COVID-19 Vaccine Hesitancy: A Randomised Controlled Trial
Authors: Anna Rinaldi, Pierfrancesco Dellino
Abstract:
COVID-19 vaccine hesitancy can be observed at different rates in different countries. In June 2021, 1,068 people were surveyed in France and Italy to inquire about individual potential acceptance, focusing on time preferences in a risk-return framework: having the vaccination today, in a month, and in 3 months; perceived risks of vaccination and COVID-19; and expected benefit of the vaccine. A randomized controlled trial was conducted to understand how everyday stimuli like fact-based news about vaccines impact an audience's acceptance of vaccination. The main experiment involved two groups of participants and two different articles about vaccine-related thrombosis taken from two Italian newspapers. One article used a more abstract description and language, and the other used a more anecdotal description and concrete language; each group read only one of these articles. Two other groups were assigned categorization tasks; one was asked to complete a concrete categorization task, and the other an abstract categorization task. Individual preferences for vaccination were found to be variable and unstable over time, and individual choices of accepting, refusing, or delaying could be affected by the way news is written. In order to understand these dynamic preferences, the present work proposes a new model based on seven categories of human behaviors that were validated by a neural network. A treatment effect was observed: participants who read the articles shifted to vaccine hesitancy categories more than participants assigned to other treatments and control. Furthermore, there was a significant gender effect, showing that the type of language leading to a lower hesitancy rate for men is correlated with a higher hesitancy rate for women and vice versa. This outcome should be taken into consideration for an appropriate gender-based communication campaign aimed at achieving herd immunity. The trial was registered at ClinicalTrials.gov NCT05582564 (17/10/2022).Keywords: vaccine hesitancy, risk elicitation, neural network, covid19
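One standard way to check a treatment effect of this kind is a chi-square test of independence on the category counts of treated versus control participants; the sketch below shows this with hypothetical, collapsed category counts (the actual trial used seven behavior categories and its own data, neither of which is reproduced here).

import numpy as np
from scipy.stats import chi2_contingency

# rows: control, treated (read the article); columns: collapsed outcome categories
# all counts are hypothetical placeholders, not the trial's data
table = np.array([
    [210, 230, 94],   # control: acceptance, delay, refusal
    [180, 250, 104],  # treated: acceptance, delay, refusal
])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")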
Procedia PDF Downloads 83
3459 Bounded Rational Heterogeneous Agents in Artificial Stock Markets: Literature Review and Research Direction
Authors: Talal Alsulaiman, Khaldoun Khashanah
Abstract:
In this paper, we provide a literature survey of artificial stock markets (ASMs). The paper begins by exploring the complexity of the stock market and the need for ASMs. An ASM aims to investigate the link between individual behaviors (micro level) and financial market dynamics (macro level); the variety of patterns at the macro level is a function of the ASM's complexity. The financial market is a complex system in which the relationship between the micro and macro levels cannot be captured analytically, and computational approaches, such as simulation, are expected to capture this connection. Agent-based simulation is the technique most commonly used to build ASMs. The paper proceeds by discussing the components of an ASM. We consider the role of behavioral finance (BF) alongside the traditional risk-aversion assumption in constructing agents' attributes. The influence of social networks on the development of agents' interactions is also addressed: network topologies such as small-world, distance-based, and scale-free networks may be used to model economic collaborations. In addition, the primary methods for endowing agents with learning and adaptive abilities are summarized; these include genetic algorithms, genetic programming, artificial neural networks, and reinforcement learning. The most common statistical properties (stylized facts) of stock returns used for the calibration and validation of ASMs are also discussed. We then review the major related studies and categorize the approaches they employ. Finally, research directions and potential research questions are discussed; future ASM research may focus on the macro level, by analyzing market dynamics, or on the micro level, by investigating the wealth distributions of the agents. Keywords: artificial stock markets, market dynamics, bounded rationality, agent based simulation, learning, interaction, social networks
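To give a flavour of what a minimal ASM looks like, the sketch below simulates a toy market with heterogeneous fundamentalist and chartist agents and then checks one stylized fact (excess kurtosis, i.e. fat-tailed returns). It is an illustrative toy model under assumed parameters, not any specific model from the surveyed literature.

import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(1)
n_agents, n_steps = 100, 2000
fundamental = 100.0                                   # assumed constant fundamental value
is_chartist = rng.random(n_agents) < 0.5              # half the agents follow the trend
strength = rng.uniform(0.1, 1.0, n_agents)            # heterogeneous reaction strengths

prices = [100.0]
for t in range(n_steps):
    trend = np.log(prices[-1] / prices[-2]) if len(prices) > 1 else 0.0
    mispricing = np.log(fundamental / prices[-1])
    # chartists chase the last return, fundamentalists trade against mispricing
    demand = np.where(is_chartist, strength * trend, strength * mispricing)
    demand += 0.05 * rng.standard_normal(n_agents)    # idiosyncratic noise
    # price impact of average excess demand
    prices.append(prices[-1] * np.exp(0.01 * demand.mean()))

returns = np.diff(np.log(prices))
print("excess kurtosis of simulated returns:", kurtosis(returns))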
Procedia PDF Downloads 354