Search results for: Grid computing
509 Entrepreneurship Training of Young People as a Pillar to Generate Income and Create Jobs: Progress Report of the Moroccan National Human Development Initiative in the Region of Meknes
Authors: Bennani Zoubir Nada, El Hiri Abderrazak, El Hajri Aimad
Abstract:
In the context of the economic and health crisis, sustainable entrepreneurship has become one of the best solutions for economic recovery. This study concerns the third program of the Moroccan national human development initiative in its third phase, which began in 2019 and continues until 2023, and which deals with income improvement and the social inclusion of young people, under the high patronage of His Majesty the King of Morocco. What is the approach of this program, and how can entrepreneurship training of young people be a pillar to generate income and create jobs? Starting from the effectuation theory, we adopted an exploratory qualitative approach through semi-structured interviews with national human development initiative stakeholders in the area of Meknes, Morocco, which allowed us to assess the state of progress of this program. We carried out a survey based on a grid of questions to collect information, which we processed using NVivo software. The most relevant results are that eligible people are jobless young people between 18 and 35 years old who reside in Meknes and its surroundings and who have a project idea. They are trained by experts in entrepreneurship and management through targeted and diversified courses. To ensure the sustainability of the projects, the organisers have provided measures to ensure the longevity of the companies through continuous monitoring and evaluation, as well as support during all phases from the project idea to realisation and progress.
Keywords: sustainable entrepreneurship, training, social inclusion, national human development initiative in Morocco (INDH), youth entrepreneurship, the effectuation theory
Procedia PDF Downloads 110
508 Analyzing the Factors that Cause Parallel Performance Degradation in Parallel Graph-Based Computations Using Graph500
Authors: Mustafa Elfituri, Jonathan Cook
Abstract:
Recently, graph-based computations have become more important in large-scale scientific computing, as they provide a methodology for modelling many types of relations between independent objects. They are actively used in fields as varied as biology, social networks, cybersecurity, and computer networks. At the same time, graph problems have properties such as irregularity and poor locality that make their performance different from that of regular applications. Therefore, parallelizing graph algorithms is a hard and challenging task. Initial evidence is that standard computer architectures do not perform very well on graph algorithms, and little is known about exactly what causes this. The Graph500 benchmark is a representative application for parallel graph-based computations; it has highly irregular data access and is driven more by traversing connected data than by computation. In this paper, we present results from analyzing the performance of various example implementations of Graph500, including a shared memory (OpenMP) version, a distributed (MPI) version, and a hybrid version. We measured and analyzed all the factors that affect its performance in order to identify possible changes that would improve it. Results are discussed in relation to the factors that contribute to performance degradation.
Keywords: graph computation, Graph500 benchmark, parallel architectures, parallel programming, workload characterization
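As an illustration of the traversal-driven, irregular-access kernel the abstract describes, a minimal level-synchronous BFS (the search kernel that Graph500 times) can be sketched in Python. The graph, names, and scale here are illustrative only, not the benchmark's reference code, which generates large Kronecker graphs and validates the resulting parent tree:

```python
from collections import deque

def bfs_parent_tree(adj, source):
    """Level-synchronous BFS returning a parent map, in the spirit of the
    Graph500 search kernel. adj maps each vertex to a list of neighbours."""
    parent = {source: source}
    frontier = deque([source])
    while frontier:
        next_frontier = deque()
        for u in frontier:
            for v in adj[u]:          # irregular, data-dependent memory access
                if v not in parent:
                    parent[v] = u
                    next_frontier.append(v)
        frontier = next_frontier
    return parent

# Tiny example graph (undirected, stored as adjacency lists)
adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
tree = bfs_parent_tree(adj, 0)
```

The inner loop is dominated by membership tests and pointer chasing rather than arithmetic, which is exactly the poor-locality behaviour discussed in the abstract.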
Procedia PDF Downloads 149
507 An Analysis of Uncoupled Designs in Chicken Egg
Authors: Pratap Sriram Sundar, Chandan Chowdhury, Sagar Kamarthi
Abstract:
Nature has perfected her designs over 3.5 billion years of evolution. Research fields such as biomimicry, biomimetics, bionics, bio-inspired computing, and nature-inspired design have explored nature-made artifacts and systems to understand nature's mechanisms and intelligence. Learning from nature, researchers have generated sustainable designs and innovation in a variety of fields such as energy, architecture, agriculture, transportation, communication, and medicine. Axiomatic design offers a method to judge if a design is good. This paper analyzes design aspects of one of nature's amazing objects: the chicken egg. The functional requirements (FRs) of the components of the egg are tabulated and mapped onto nature-chosen design parameters (DPs). The 'independence axiom' of the axiomatic design methodology is applied to analyze couplings and to evaluate whether the egg's design is good (i.e., uncoupled) or bad (i.e., coupled). The analysis revealed that the egg's design is a good, i.e., uncoupled, design. This approach can be applied to any of nature's artifacts to judge whether their design is good or bad, and it is valuable for biomimicry studies. It can also be very useful in teaching design in biology and bio-inspired innovation.
Keywords: uncoupled design, axiomatic design, nature design, design evaluation
Procedia PDF Downloads 173
506 Location Choice: The Effects of Network Configuration upon the Distribution of Economic Activities in the Chinese City of Nanning
Authors: Chuan Yang, Jing Bie, Zhong Wang, Panagiotis Psimoulis
Abstract:
Contemporary studies investigating the association between the spatial configuration of the urban network and economic activities at the street level have mostly been conducted within the space syntax conceptual framework. Their findings support the theory of the 'movement economy' and demonstrate the impact of street configuration on the distribution of pedestrian movement and on land-use shaping, especially retail activities. However, the effects vary between urban contexts. In this paper, the relationship between the distribution of economic activity and the configurational characteristics of the urban network was examined at the segment level. The study area included three neighbourhood types (urban, suburban, and rural), and among all neighbourhoods, three kinds of urban network form ('tree-like', grid, and organic) were recognised. To investigate the nested effects of urban configuration, measured by the space syntax approach, and urban context, multilevel zero-inflated negative binomial (ZINB) regression models were constructed. Additionally, to account for spatial autocorrelation, a spatial lag term was included in the model as an independent variable. The random-effect ZINB model shows superiority over the plain ZINB model and the multilevel linear (ML) model in explaining how the pattern of economic activities is shaped over the urban environment. After adjusting for neighbourhood type and network form effects, connectivity and syntactic centrality significantly affect the clustering of economic activities. A comparison between accumulated and newly established economic activities illustrates their different preferences in location choice.
Keywords: space syntax, economic activities, multilevel model, Chinese city
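For readers unfamiliar with the ZINB specification used above, its probability mass function mixes a point mass at zero with a negative binomial count model. A minimal sketch follows; the parameter values are illustrative, not fitted to the Nanning data:

```python
import math

def nb_pmf(k, r, p):
    """Negative binomial pmf: probability of k failures before r successes,
    success probability p, computed in log space for numerical stability."""
    log_pmf = (math.lgamma(k + r) - math.lgamma(r) - math.lgamma(k + 1)
               + r * math.log(p) + k * math.log(1.0 - p))
    return math.exp(log_pmf)

def zinb_pmf(k, pi, r, p):
    """Zero-inflated negative binomial: with probability pi the observation is
    a 'structural zero' (e.g. a segment where no activity is possible);
    otherwise the count follows NB(r, p)."""
    if k == 0:
        return pi + (1.0 - pi) * nb_pmf(0, r, p)
    return (1.0 - pi) * nb_pmf(k, r, p)
```

The excess mass at zero relative to a plain negative binomial is what lets the model handle street segments with no economic activity at all.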
Procedia PDF Downloads 125
505 Estimation of Energy Losses of Photovoltaic Systems in France Using Real Monitoring Data
Authors: Mohamed Amhal, Jose Sayritupac
Abstract:
Photovoltaic (PV) systems have risen as one of the modern renewable energy sources used on a wide scale to produce electricity and deliver it to the electrical grid. In parallel, monitoring systems have been deployed as a key element to track energy production and to forecast total production for the coming days. The reliability of PV energy production has become a crucial point in the analysis of PV systems. A deeper understanding of each phenomenon that causes a gain or a loss of energy is needed to better design, operate, and maintain PV systems. This work analyzes the current distribution of losses in PV systems, starting from the available solar energy, going through the DC side and the AC side, to the delivery point. Most of the phenomena linked to energy losses and gains are considered and modeled, based on real-time monitoring data and the datasheets of the PV system components. The order of magnitude of each loss is compared to the current literature and to commercial software. To date, the analysis of PV system performance based on a breakdown structure of energy losses and gains is not covered enough in the literature, except in some software where the concept is very common. The cutting edge of the current analysis is the implementation of software tools for energy loss estimation in PV systems based on several energy loss definitions and estimation techniques. The developed tools have been validated and tested on several PV plants in France that have been operating for years. Among the major findings of the current study: first, PV plants in France show very low rates of soiling and aging; second, the distribution of other losses is comparable to the literature; third, all losses reported are correlated to operational and environmental conditions.
For future work, an extended analysis of further PV plants in France and abroad will be performed.
Keywords: energy gains, energy losses, losses distribution, monitoring, photovoltaic, photovoltaic systems
Procedia PDF Downloads 177
504 Spontaneous and Posed Smile Detection: Deep Learning, Traditional Machine Learning, and Human Performance
Authors: Liang Wang, Beste F. Yuksel, David Guy Brizan
Abstract:
A computational model of affect that can distinguish between spontaneous and posed smiles with no errors on a large, popular data set using deep learning techniques is presented in this paper. A Long Short-Term Memory (LSTM) classifier, a type of Recurrent Neural Network, is utilized and compared to human classification. Results showed that while human classification (mean of 0.7133) was above chance, the LSTM model was more accurate than both human classification and comparable state-of-the-art systems. Additionally, a high accuracy rate was maintained with small amounts of training videos (70 instances). Important features were derived and analyzed to further understand the success of the computational model, and it was inferred that thousands of pairs of points within the eyes and mouth are important throughout all time segments of a smile. This suggests that distinguishing between a posed and a spontaneous smile is a complex task, which may account for the difficulty and lower accuracy of human classification compared to machine learning models.
Keywords: affective computing, affect detection, computer vision, deep learning, human-computer interaction, machine learning, posed smile detection, spontaneous smile detection
Procedia PDF Downloads 126
503 Estimation of Implicit Colebrook White Equation by Preferable Explicit Approximations in the Practical Turbulent Pipe Flow
Authors: Itissam Abuiziah
Abstract:
In several hydraulic systems, it is necessary to calculate the head losses, which depend on the flow resistance friction factor in the Darcy equation. Computing the friction factor is based on the implicit Colebrook-White equation, which is considered the standard for friction calculation, but it incurs a high computational cost; therefore, several explicit approximation methods are used to solve the implicit equation and overcome this issue. The relative error is then used to determine the most accurate among the approximation methods. Steel, cast iron, and polyethylene pipe materials were investigated, with practical diameters ranging from 0.1 m to 2.5 m and velocities between 0.6 m/s and 3 m/s. In short, the results obtained show that a method suitable for some cases may not be accurate for others. For example, for steel pipes, the Zigrang and Sylvester method proved the most precise at low velocities (0.6 m/s to 1.3 m/s). Comparatively, the Haaland method showed a lower relative error as velocity gradually increased. Accordingly, the simulation results of this study may be employed by hydraulic engineers to decide which method is most applicable to their practical pipe systems.
Keywords: Colebrook-White, explicit equation, friction factor, hydraulic resistance, implicit equation, Reynolds numbers
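The trade-off the abstract describes can be reproduced in a few lines. Below is a hedged sketch: the implicit Colebrook-White equation is solved by fixed-point iteration on 1/sqrt(f) and compared against the explicit Haaland approximation. Both formulas are standard; the Reynolds number and relative roughness chosen are illustrative, not taken from the study:

```python
import math

def colebrook(re, rel_rough, tol=1e-12):
    """Implicit Colebrook-White friction factor, via fixed-point iteration
    on x = 1/sqrt(f):  x = -2 log10(rel_rough/3.7 + 2.51*x/Re)."""
    x = 7.0  # initial guess, roughly f = 0.02
    while True:
        x_new = -2.0 * math.log10(rel_rough / 3.7 + 2.51 * x / re)
        if abs(x_new - x) < tol:
            return 1.0 / (x_new * x_new)
        x = x_new

def haaland(re, rel_rough):
    """Explicit Haaland approximation to the Colebrook-White equation."""
    return (-1.8 * math.log10((rel_rough / 3.7) ** 1.11 + 6.9 / re)) ** -2

f_implicit = colebrook(1e5, 1e-4)
f_explicit = haaland(1e5, 1e-4)
rel_error = abs(f_explicit - f_implicit) / f_implicit
```

For turbulent flow in this range the two agree to within about 2%, which is why explicit forms are attractive when the friction factor must be evaluated many times in a network model.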
Procedia PDF Downloads 188
502 Hydroinformatics of Smart Cities: Real-Time Water Quality Prediction Model Using a Hybrid Approach
Authors: Elisa Coraggio, Dawei Han, Weiru Liu, Theo Tryfonas
Abstract:
Water is one of the most important resources for human society. The world is currently undergoing a wave of urban growth, and pollution problems have a great impact. Monitoring water quality is a key task for the future of the environment and the human species. In recent times, researchers using Smart City technologies have been trying to mitigate the problems generated by population growth in urban areas. The availability of huge amounts of data collected by a pervasive urban IoT can increase the transparency of decision making. Several services have already been implemented in Smart Cities, and more and more services will be involved in the future. Water quality monitoring can successfully be implemented in the urban IoT: the combination of water quality sensors, cloud computing, smart city infrastructure, and IoT technology can lead to a bright future for environmental monitoring. In past decades, much effort was put into monitoring and predicting water quality using traditional approaches based on manual collection and laboratory analysis, which are slow and laborious. The present study proposes a methodology for implementing a water quality prediction model using artificial intelligence techniques and compares the results obtained with different algorithms. Furthermore, a 3D numerical model will be created using the software D-Water Quality, and the simulation results will be used as a training dataset for the artificial intelligence algorithm. This study derives the methodology and demonstrates its implementation based on information and data collected at the floating harbour in the city of Bristol (UK). The city of Bristol is blessed with the Bristol-Is-Open infrastructure, which includes a Wi-Fi network and virtual machines; it was also named the UK's smartest city in 2017.
Keywords: artificial intelligence, hydroinformatics, numerical modelling, smart cities, water quality
Procedia PDF Downloads 189
501 Developing Soil Accumulation Effect Correction Factor for Solar Photovoltaic Module
Authors: Kelebaone Tsamaase, Rapelang Kemoabe, Japhet Sakala, Edward Rakgati, Ishmael Zibani
Abstract:
The increasing demand for energy, the depletion of non-renewable energy, the effects of climate change, and the abundance of renewable energy such as solar energy have increased the interest in investing in renewable energies, in particular solar photovoltaic (PV) energy. Solar photovoltaic energy systems, as part of clean technology, are considered environmentally friendly and freely available; they offer clean production and long-term cost benefits compared with conventional sources, and they are an attractive power source for a wide range of applications in remote areas with no easy access to the national grid. To get maximum electrical power, maximum solar power should penetrate the module and be converted accordingly. However, some environmental and other geographical factors reduce the electrical power. One of them is dust, which accumulates on the surface of the module, forming a dust layer that obstructs solar power from penetrating the PV module. This study intends to improve the performance of solar photovoltaic (PV) energy modules by establishing a soil accumulation effect correction factor from dust characteristics and properties, as well as from the dust accumulation and retention pattern on the PV module surface. The non-urban dry deposition flux model was adapted to determine the monthly and yearly dust accumulation pattern, taking prevailing environmental and other geographical conditions into consideration. Preliminary results showed that cumulative dust settlement increased during the months of July to October, leading to a higher drop in module electrical output power.
Keywords: dust, electrical power output, PV module, soil correction factor
Procedia PDF Downloads 134
500 A Simple Model for Solar Panel Efficiency
Authors: Stefano M. Spagocci
Abstract:
The efficiency of photovoltaic panels can be calculated with software packages such as RETScreen, which allow design engineers to take financial as well as technical considerations into account. RETScreen is interfaced with meteorological databases, so that efficiency calculations can be carried out realistically. The author has recently contributed to the development of solar modules with accumulation capability and an embedded water purifier, aimed at off-grid users such as those in developing countries. The software packages examined do not allow ancillary equipment to be taken into account, hence the decision to implement a technical and financial model of the system. The author realized that, rather than re-implementing the quite sophisticated model of RETScreen, a mathematical description of which is in any case not publicly available, it was possible to drastically simplify it while retaining the meteorological factors which, in RETScreen, are presented in numerical form. The day-by-day efficiency of a photovoltaic solar panel was parametrized by the product of factors expressing, respectively, daytime duration, solar right ascension motion, solar declination motion, cloudiness, and temperature. For the sun-motion-dependent factors, positional astronomy formulae, simplified by the author, were employed. Meteorology-dependent factors were fitted by simple trigonometric functions, employing numerical data supplied by RETScreen. The accuracy of the model was tested by comparing it to the predictions of RETScreen; the accuracy obtained was 11%. In conclusion, the study resulted in a model that can easily be implemented in a spreadsheet, thus being manageable by non-specialist personnel, or in more sophisticated software packages.
The model was used in a number of design exercises concerning photovoltaic solar panels and ancillary equipment like the above-mentioned water purifier.
Keywords: clean energy, energy engineering, mathematical modelling, photovoltaic panels, solar energy
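The sun-motion-dependent factors mentioned in the abstract can be illustrated with standard positional astronomy approximations. The sketch below is not the author's actual simplified formulae (which the abstract does not give); it computes solar declination and daytime duration from the day of the year using well-known textbook approximations:

```python
import math

def declination_deg(day_of_year):
    """Approximate solar declination in degrees (a common cosine approximation)."""
    return -23.44 * math.cos(math.radians(360.0 / 365.0 * (day_of_year + 10)))

def day_length_hours(lat_deg, day_of_year):
    """Daytime duration from the sunset hour angle:
    cos(w) = -tan(latitude) * tan(declination), day length = 2w / 15 hours."""
    lat = math.radians(lat_deg)
    decl = math.radians(declination_deg(day_of_year))
    cos_w = -math.tan(lat) * math.tan(decl)
    cos_w = max(-1.0, min(1.0, cos_w))  # clamp for polar day / polar night
    return 2.0 * math.degrees(math.acos(cos_w)) / 15.0
```

In a product-of-factors efficiency model of the kind described, a daytime-duration factor such as this would multiply the cloudiness and temperature factors fitted from meteorological data.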
Procedia PDF Downloads 71
499 Study of Efficiency of Flying Animal Using Computational Simulation
Authors: Ratih Julistina, M. Agoes Moelyadi
Abstract:
Innovation in aviation technology has evolved rapidly over time in pursuit of the most favourable utilization, usually denoted by an efficiency parameter. Nature has always been a source of inspiration, and in this sector many researchers have focused on studying the behaviour of flying animals, among them birds, to comprehend the fundamentals. Experimental testing has already been conducted by several researchers who calculate the efficiency by placing the object in a wind tunnel. Hence, computational simulation is needed to confirm those results and provide more visualization; it is based on a solution of the Reynolds-Averaged Navier-Stokes equations for the unsteady case in time-dependent viscous flow. Models were created by simplifying real birds as rigid bodies: a Hawk, which has a low aspect ratio, and a Swift, with a high aspect ratio; multi-grid structured meshes were subsequently generated to capture and calculate the aerodynamic behaviour and characteristics. To mimic the downstroke and upstroke motion of bird flight, which produces both lift and thrust, a sinusoidal function is used. Simulations are carried out for a range of flapping frequencies spanning the upper and lower range of each bird's actual frequency (1 Hz, 2.87 Hz, and 5 Hz for the Hawk; 5 Hz, 8.9 Hz, and 13 Hz for the Swift) to investigate how frequency affects the efficiency of aerodynamic characteristics production, and the results are compared across flight conditions in relation to the morphology of each bird. The simulations show that the higher the flapping frequency used, the greater the aerodynamic coefficient obtained; on the other hand, the efficiency of thrust production does not follow the same trend. The results are analyzed from velocity and pressure contours and from the mesh movement to observe the behaviour.
Keywords: aerodynamic characteristics, efficiency, flapping frequency, flapping wing, unsteady simulation
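The sinusoidal flapping kinematics mentioned above can be written down directly. The sketch below is a generic illustration; the amplitude and frequencies are placeholders, not the values used in the simulations:

```python
import math

def flap_angle_deg(t, freq_hz, amplitude_deg=30.0):
    """Wing elevation angle at time t for sinusoidal flapping kinematics."""
    return amplitude_deg * math.sin(2.0 * math.pi * freq_hz * t)

def flap_rate_deg_per_s(t, freq_hz, amplitude_deg=30.0):
    """Time derivative of the flap angle: the peak stroke rate scales linearly
    with flapping frequency, which drives the aerodynamic forcing."""
    w = 2.0 * math.pi * freq_hz
    return amplitude_deg * w * math.cos(w * t)
```

At equal amplitude, a Swift-like 13 Hz motion has a peak stroke rate 2.6 times that of a Hawk-like 5 Hz motion, consistent with the higher aerodynamic coefficients reported at higher frequencies.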
Procedia PDF Downloads 246
498 Designing a Patient Monitoring System Using Cloud and Semantic Web Technologies
Authors: Chryssa Thermolia, Ekaterini S. Bei, Stelios Sotiriadis, Kostas Stravoskoufos, Euripides G. M. Petrakis
Abstract:
Moving into a new era of healthcare, new tools and devices are being developed to extend and improve health services, such as remote patient monitoring and risk prevention. In this context, the Internet of Things (IoT) and Cloud Computing present great advantages by providing remote and efficient services, as well as cooperation between patients, clinicians, researchers, and other health professionals. This paper focuses on patients suffering from bipolar disorder, a brain disorder that belongs to a group of conditions called affective disorders, which is characterized by great mood swings. We exploit the advantages of Semantic Web and Cloud technologies to develop a patient monitoring system to support clinicians. Based on intelligent filtering of evidence-based knowledge and individual-specific information, we aim to provide treatment notifications and recommended function tests at appropriate times, as well as alerts for serious mood changes and patient non-response to treatment. We propose an architecture, as the back-end part of a cloud platform for IoT, intertwining intelligent devices with patients' daily routines and clinicians' support.
Keywords: bipolar disorder, intelligent systems, patient monitoring, semantic web technologies, healthcare
Procedia PDF Downloads 510
497 Consumer Perception of 3D Body Scanning While Online Shopping for Clothing
Authors: A. Grilec, S. Petrak, M. Mahnic Naglic
Abstract:
Technological development and the globalization of clothing production and sales in the last decade have significantly influenced changes in the consumer relationship with industrially fashioned apparel and in the way clothing is purchased. Internet sales of clothing are increasing constantly and significantly in the global market, but the possibilities offered by modern computing technologies in the customization segment are not yet fully exploited, especially with regard to individual customer requirements and body sizes. Considering the growing trend of online shopping, the main goal of this paper is to investigate the differences in customer perceptions of online apparel shopping, and particularly to discover the main differences in perception among customers of three different body sizes. To meet this goal, a quantitative study of a sample of 85 Croatian consumers was conducted in 2017 in Zagreb, Croatia. Respondents were asked to indicate their level of agreement on a five-point Likert scale ranging from strongly disagree (1) to strongly agree (5). To analyze the respondents' attitudes, simple descriptive statistics were used. The main findings highlight the differences in respondents' perception of 3D body scanning, of using 3D body scanning in Internet shopping, and in online apparel shopping habits with regard to their body sizes.
Keywords: consumer behavior, Internet, 3D body scanning, body types
Procedia PDF Downloads 165
496 Educational Data Mining: The Case of the Department of Mathematics and Computing in the Period 2009-2018
Authors: Mário Ernesto Sitoe, Orlando Zacarias
Abstract:
University education is influenced by several factors, ranging from the adoption of strategies to strengthen the whole process to the improvement of the students' own academic performance. This work uses data mining techniques to develop a predictive model that identifies students with a tendency towards evasion or retention. To this end, a database of real student data from the Department of University Admission (DAU) and the Department of Mathematics and Informatics (DMI) was used. The data comprised 388 undergraduate students admitted in the years 2009 to 2014. The Weka tool was used for model building, using three different techniques, namely k-nearest neighbors, random forest, and logistic regression. To allow for training on multiple train-test splits, a cross-validation approach was employed with a varying number of folds. To reduce bias and variance and improve the performance of the models, the ensemble methods Bagging and Stacking were used. After comparing the results obtained by the three classifiers, logistic regression using Bagging with seven folds obtained the best performance, showing results above 90% on all evaluated metrics: accuracy, true positive rate, and precision. Retention is the most common tendency.
Keywords: evasion and retention, cross-validation, bagging, stacking
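As a reminder of what the Bagging ensemble does, here is a minimal stdlib-only sketch: each base learner is fit on a bootstrap resample of the training set, and predictions are combined by majority vote. The 1-nearest-neighbour base learner and the toy data are illustrative stand-ins, not the Weka models used in the study:

```python
import random
from collections import Counter

def one_nn_predict(train, x):
    """Trivial base learner: 1-nearest neighbour on a single numeric feature.
    train is a list of (feature, label) pairs."""
    return min(train, key=lambda pair: abs(pair[0] - x))[1]

def bagging_predict(train, x, n_models=25, seed=0):
    """Bagging: each base learner sees a bootstrap resample (sampling with
    replacement) of the training set; the final label is a majority vote."""
    rng = random.Random(seed)
    votes = []
    for _ in range(n_models):
        resample = [rng.choice(train) for _ in train]
        votes.append(one_nn_predict(resample, x))
    return Counter(votes).most_common(1)[0][0]

# Toy data: a single feature loosely separating the two outcomes
train = [(0.1, "evasion"), (0.2, "evasion"), (0.3, "evasion"),
         (0.8, "retention"), (0.9, "retention"), (1.0, "retention")]
```

Averaging many high-variance learners trained on resamples is what reduces the variance component of the error, which is why Bagging improved the classifiers in the study.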
Procedia PDF Downloads 84
495 Computing Transition Intensity Using Time-Homogeneous Markov Jump Process: Case of South African HIV/AIDS Disposition
Authors: A. Bayaga
Abstract:
This research provides a technical account of estimating transition probabilities using a time-homogeneous Markov jump process, applied to South African HIV/AIDS data from Statistics South Africa. It employs a maximum likelihood estimation (MLE) model to explore the possible influence of transition probabilities on mortality cases, with the data based on actual Statistics South Africa records. This was conducted via an integrated demographic and epidemiological model of the South African HIV/AIDS epidemic. The model was fitted to age-specific HIV prevalence data and recorded death data using the MLE model. Though previous model results suggest that HIV in South Africa declined and AIDS mortality rates fell over 2002-2013, our results differ evidently from the generally accepted HIV models (Spectrum/EPP and ASSA2008) in South Africa. However, supplementary research is needed to enhance the demographic parameters in the model and to apply it to each of the nine provinces of South Africa.
Keywords: AIDS mortality rates, epidemiological model, time-homogeneous Markov jump process, transition probability, Statistics South Africa
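For a time-homogeneous Markov jump process, the transition probability matrix is P(t) = exp(Qt), where Q holds the transition intensities. As a generic illustration (not the study's multi-state HIV/AIDS model), a two-state process has a well-known closed form for this matrix exponential:

```python
import math

def two_state_transition(a, b, t):
    """Transition matrix P(t) = exp(Q t) for a time-homogeneous two-state
    Markov jump process with generator Q = [[-a, a], [b, -b]]:
    intensity a for moving state 0 -> 1, and b for 1 -> 0 (closed form)."""
    s = a + b
    decay = math.exp(-s * t)
    p01 = (a / s) * (1.0 - decay)
    p10 = (b / s) * (1.0 - decay)
    return [[1.0 - p01, p01], [p10, 1.0 - p10]]
```

The Chapman-Kolmogorov property P(t + s) = P(t) P(s), which any valid transition matrix family must satisfy, provides a quick correctness check; maximum likelihood estimation then fits the intensities to observed transition counts and exposure times.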
Procedia PDF Downloads 497
494 Automatic Verification Technology of Virtual Machine Software Patch on IaaS Cloud
Authors: Yoji Yamato
Abstract:
In this paper, we propose an automatic verification technology for software patches in user virtual environments on IaaS Cloud, to decrease the verification costs of patches. IaaS services have spread, and many users can now customize virtual machines on IaaS Cloud like their own private servers. Regarding software patches for the OS or middleware installed on virtual machines, users need to adopt and verify these patches themselves, a task that increases their operation costs. Our proposed method replicates user virtual environments, extracts verification test cases for those environments from a test case DB, distributes patches to virtual machines in the replicated environments, and runs the test cases automatically there. We have implemented the proposed method on OpenStack using Jenkins and confirmed its feasibility. Using the implementation, we confirmed the effectiveness of the test case creation effort enabled by our proposed idea of a two-tier abstraction of software functions and test cases. We also evaluated the automatic verification performance of environment replication, test case extraction, and test case execution.
Keywords: OpenStack, cloud computing, automatic verification, Jenkins
Procedia PDF Downloads 491
493 A Two-Phase Flow Interface Tracking Algorithm Using a Fully Coupled Pressure-Based Finite Volume Method
Authors: Shidvash Vakilipour, Scott Ormiston, Masoud Mohammadi, Rouzbeh Riazi, Kimia Amiri, Sahar Barati
Abstract:
Two-phase and multi-phase flows are common flow types in fluid mechanics engineering. Among the basic and applied problems of these flow types, two-phase parallel flow is one in which two immiscible fluids flow in the vicinity of each other. In this type of flow, fluid properties (e.g., density, viscosity, and temperature) differ on the two sides of the interface between the fluids. The most challenging part of the numerical simulation of two-phase flow is to determine the location of the interface accurately. In the present work, a coupled interface tracking algorithm is developed based on the Arbitrary Lagrangian-Eulerian (ALE) approach using a cell-centered, pressure-based, coupled solver. To validate this algorithm, an analytical solution for fully developed two-phase flow in the presence of gravity is derived, and the results of the numerical simulation of this flow are then compared with the analytical solution under various flow conditions. The results of the simulations show good accuracy of the algorithm despite the use of a fairly coarse, uniform grid. Temporal variations of the interface profile toward the steady-state solution show that a greater difference between the fluids' properties (especially dynamic viscosity) results in larger traveling waves. Gravity effect studies also show that favorable gravity reduces the thickness of the heavier fluid, while adverse gravity increases it, with respect to the zero-gravity condition; however, the magnitude of the variation under favorable gravity is much larger than under adverse gravity.
Keywords: coupled solver, gravitational force, interface tracking, Reynolds number to Froude number, two-phase flow
Procedia PDF Downloads 316
492 Neural Network based Risk Detection for Dyslexia and Dysgraphia in Sinhala Language Speaking Children
Authors: Budhvin T. Withana, Sulochana Rupasinghe
Abstract:
The educational system faces a significant concern with regard to Dyslexia and Dysgraphia, learning disabilities impacting reading and writing abilities. This is particularly challenging for children who speak the Sinhala language, due to its complexity and uniqueness. Commonly used methods to detect the risk of Dyslexia and Dysgraphia rely on subjective assessments, leading to limited coverage and time-consuming processes; consequently, diagnoses can be delayed and opportunities for early intervention missed. To address this issue, the project developed a hybrid model that incorporates various deep learning techniques to detect the risk of Dyslexia and Dysgraphia. Specifically, ResNet50, VGG16, and YOLOv8 models were integrated to identify handwriting issues. The outputs of these models were then combined with other input data and fed into an MLP model. Hyperparameters of the MLP model were fine-tuned using grid search cross-validation, enabling the identification of optimal values for the model. This approach proved highly effective in accurately predicting the risk of Dyslexia and Dysgraphia, providing a valuable tool for early detection and intervention. The ResNet50 model exhibited a training accuracy of 0.9804 and a validation accuracy of 0.9653. The VGG16 model achieved a training accuracy of 0.9991 and a validation accuracy of 0.9891. The MLP model demonstrated impressive results, with a training accuracy of 0.99918, a testing accuracy of 0.99223, and a loss of 0.01371. These outcomes showcase the high accuracy achieved by the proposed hybrid model in predicting the risk of Dyslexia and Dysgraphia.
Keywords: neural networks, risk detection system, dyslexia, dysgraphia, deep learning, learning disabilities, data science
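The grid search step mentioned above is conceptually simple: every combination of candidate hyperparameter values is scored, and the best setting is kept. A stdlib-only sketch follows; the parameter names and the toy scoring function are placeholders, not the study's actual search space or cross-validation results:

```python
from itertools import product

def grid_search(param_grid, score_fn):
    """Exhaustive grid search: evaluate score_fn on every combination of the
    candidate values in param_grid and return the best setting found."""
    names = sorted(param_grid)
    best_params, best_score = None, float("-inf")
    for values in product(*(param_grid[name] for name in names)):
        params = dict(zip(names, values))
        score = score_fn(params)  # in practice: cross-validated accuracy
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy scoring surface standing in for cross-validated MLP accuracy
grid = {"hidden_units": [16, 32, 64], "learning_rate": [0.1, 0.01, 0.001]}
best, best_score = grid_search(
    grid,
    lambda p: -abs(p["hidden_units"] - 32) - 100.0 * abs(p["learning_rate"] - 0.01),
)
```

Cross-validated variants score each combination on several train-test splits rather than once, trading extra compute for a less noisy estimate of each setting's quality.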
Procedia PDF Downloads 66491 HcDD: The Hybrid Combination of Disk Drives in Active Storage Systems
Authors: Shu Yin, Zhiyang Ding, Jianzhong Huang, Xiaojun Ruan, Xiaomin Zhu, Xiao Qin
Abstract:
Since large-scale and data-intensive applications have been widely deployed, there is a growing demand for high-performance storage systems to support data-intensive applications. Compared with traditional storage systems, next-generation systems will embrace dedicated processor to reduce computational load of host machines and will have hybrid combinations of different storage devices. The advent of flash- memory-based solid state disk has become a critical role in revolutionizing the storage world. However, instead of simply replacing the traditional magnetic hard disk with the solid state disk, it is believed that finding a complementary approach to corporate both of them is more challenging and attractive. This paper explores an idea of active storage, an emerging new storage configuration, in terms of the architecture and design, the parallel processing capability, the cooperation of other machines in cluster computing environment, and a disk configuration, the hybrid combination of different types of disk drives. Experimental results indicate that the proposed HcDD achieves better I/O performance and longer storage system lifespan.Keywords: arallel storage system, hybrid storage system, data inten- sive, solid state disks, reliability
Procedia PDF Downloads 450490 Neural Network-based Risk Detection for Dyslexia and Dysgraphia in Sinhala Language Speaking Children
Authors: Budhvin T. Withana, Sulochana Rupasinghe
Abstract:
The problem of Dyslexia and Dysgraphia, two learning disabilities that affect reading and writing abilities, respectively, is a major concern for the educational system. Due to the complexity and uniqueness of the Sinhala language, these conditions are especially difficult for children who speak it. The traditional risk detection methods for Dyslexia and Dysgraphia frequently rely on subjective assessments, making it difficult to cover a wide range of risk detection and time-consuming. As a result, diagnoses may be delayed and opportunities for early intervention may be lost. The project was approached by developing a hybrid model that utilized various deep learning techniques for detecting risk of Dyslexia and Dysgraphia. Specifically, Resnet50, VGG16 and YOLOv8 were integrated to detect the handwriting issues, and their outputs were fed into an MLP model along with several other input data. The hyperparameters of the MLP model were fine-tuned using Grid Search CV, which allowed for the optimal values to be identified for the model. This approach proved to be effective in accurately predicting the risk of Dyslexia and Dysgraphia, providing a valuable tool for early detection and intervention of these conditions. The Resnet50 model achieved an accuracy of 0.9804 on the training data and 0.9653 on the validation data. The VGG16 model achieved an accuracy of 0.9991 on the training data and 0.9891 on the validation data. The MLP model achieved an impressive training accuracy of 0.99918 and a testing accuracy of 0.99223, with a loss of 0.01371. These results demonstrate that the proposed hybrid model achieved a high level of accuracy in predicting the risk of Dyslexia and Dysgraphia.Keywords: neural networks, risk detection system, Dyslexia, Dysgraphia, deep learning, learning disabilities, data science
Procedia PDF Downloads 118489 A Parallel Cellular Automaton Model of Tumor Growth for Multicore and GPU Programming
Authors: Manuel I. Capel, Antonio Tomeu, Alberto Salguero
Abstract:
Tumor growth from a transformed cancer-cell up to a clinically apparent mass spans through a range of spatial and temporal magnitudes. Through computer simulations, Cellular Automata (CA) can accurately describe the complexity of the development of tumors. Tumor development prognosis can now be made -without making patients undergo through annoying medical examinations or painful invasive procedures- if we develop appropriate CA-based software tools. In silico testing mainly refers to Computational Biology research studies of application to clinical actions in Medicine. To establish sound computer-based models of cellular behavior, certainly reduces costs and saves precious time with respect to carrying out experiments in vitro at labs or in vivo with living cells and organisms. These aim to produce scientifically relevant results compared to traditional in vitro testing, which is slow, expensive, and does not generally have acceptable reproducibility under the same conditions. For speeding up computer simulations of cellular models, specific literature shows recent proposals based on the CA approach that include advanced techniques, such the clever use of supporting efficient data structures when modeling with deterministic stochastic cellular automata. Multiparadigm and multiscale simulation of tumor dynamics is just beginning to be developed by the concerned research community. The use of stochastic cellular automata (SCA), whose parallel programming implementations are open to yield a high computational performance, are of much interest to be explored up to their computational limits. There have been some approaches based on optimizations to advance in multiparadigm models of tumor growth, which mainly pursuit to improve performance of these models through efficient memory accesses guarantee, or considering the dynamic evolution of the memory space (grids, trees,…) that holds crucial data in simulations. 
In our opinion, the different optimizations mentioned above are not decisive enough to achieve the high performance computing power that cell-behavior simulation programs actually need. The possibility of using multicore and GPU parallelism as a promising multiplatform and framework to develop new programming techniques to speed-up the computation time of simulations is just starting to be explored in the few last years. This paper presents a model that incorporates parallel processing, identifying the synchronization necessary for speeding up tumor growth simulations implemented in Java and C++ programming environments. The speed up improvement that specific parallel syntactic constructs, such as executors (thread pools) in Java, are studied. The new tumor growth parallel model is proved using implementations with Java and C++ languages on two different platforms: chipset Intel core i-X and a HPC cluster of processors at our university. The parallelization of Polesczuk and Enderling model (normally used by researchers in mathematical oncology) proposed here is analyzed with respect to performance gain. We intend to apply the model and overall parallelization technique presented here to solid tumors of specific affiliation such as prostate, breast, or colon. Our final objective is to set up a multiparadigm model capable of modelling angiogenesis, or the growth inhibition induced by chemotaxis, as well as the effect of therapies based on the presence of cytotoxic/cytostatic drugs.Keywords: cellular automaton, tumor growth model, simulation, multicore and manycore programming, parallel programming, high performance computing, speed up
Procedia PDF Downloads 244488 The Beam Expansion Method, A Simplified and Efficient Approach of Field Propagation and Resonators Modes Study
Authors: Zaia Derrar Kaddour
Abstract:
The study of a beam throughout an optical path is generally achieved by means of diffraction integral. Unfortunately, in some problems, this tool turns out to be not very friendly and hard to implement. Instead, the beam expansion method for computing field profiles appears to be an interesting alternative. The beam expansion method consists of expanding the field pattern as a series expansion in a set of orthogonal functions. Propagating each individual component through a circuit and adding up the derived elements leads easily to the result. The problem is then reduced to finding how the expansion coefficients change in a circuit. The beam expansion method requires a systematic study of each type of optical element that can be met in the considered optical path. In this work, we analyze the following fundamental elements: first order optical systems, hard apertures and waveguides. We show that the former element type is completely defined thanks to the Gouy phase shift expression we provide and the latters require a suitable mode conversion. For endorsing the usefulness and relevance of the beam expansion approach, we show here some of its applications such as the treatment of the thermal lens effect and the study of unstable resonators.Keywords: gouy phase shift, modes, optical resonators, unstable resonators
Procedia PDF Downloads 63487 Orientational Pair Correlation Functions Modelling of the LiCl6H2O by the Hybrid Reverse Monte Carlo: Using an Environment Dependence Interaction Potential
Authors: Mohammed Habchi, Sidi Mohammed Mesli, Rafik Benallal, Mohammed Kotbi
Abstract:
On the basis of four partial correlation functions and some geometric constraints obtained from neutron scattering experiments, a Reverse Monte Carlo (RMC) simulation has been performed in the study of the aqueous electrolyte LiCl6H2O at the glassy state. The obtained 3-dimensional model allows computing pair radial and orientational distribution functions in order to explore the structural features of the system. Unrealistic features appeared in some coordination peaks. To remedy to this, we use the Hybrid Reverse Monte Carlo (HRMC), incorporating an additional energy constraint in addition to the usual constraints derived from experiments. The energy of the system is calculated using an Environment Dependence Interaction Potential (EDIP). Ions effects is studied by comparing correlations between water molecules in the solution and in pure water at room temperature Our results show a good agreement between experimental and computed partial distribution functions (PDFs) as well as a significant improvement in orientational distribution curves.Keywords: LiCl6H2O, glassy state, RMC, HRMC
Procedia PDF Downloads 472486 Characteristics of GaAs/InGaP and AlGaAs/GaAs/InAlGaP Npn Heterostructural Optoelectronic Switches
Authors: Der-Feng Guo
Abstract:
Optoelectronic switches have attracted a considerable attention in the semiconductor research field due to their potential applications in optical computing systems and optoelectronic integrated circuits (OEICs). With high gains and high-speed operations, npn heterostructures can be used to produce promising optoelectronic switches. It is known that the bulk barrier and heterostructure-induced potential spike act important roles in the characteristics of the npn heterostructures. To investigate the effects of bulk barrier and potential spike heights on the optoelectronic switching of the npn heterostructures, GaAs/InGaP and AlGaAs/GaAs/InAlGaP npn heterostructural optoelectronic switches (HSOSs) have been fabricated in this work. It is seen that the illumination decreases the switching voltage Vs and increases the switching current Is, and thus the OFF state is under dark and ON state under illumination in the optical switching of the GaAs/InGaP HSOS characteristics. But in the AlGaAs/GaAs/InAlGaP HSOS characteristics, the Vs and Is present contrary trends, and the OFF state is under illumination and ON state under dark. The studied HSOSs show quite different switching variations with incident light, which are mainly attributed to the bulk barrier and potential spike heights affected by photogenerated carriers.Keywords: bulk barrier, heterostructure, optoelectronic switch, potential spike
Procedia PDF Downloads 238485 A Framework for Early Differential Diagnosis of Tropical Confusable Diseases Using the Fuzzy Cognitive Map Engine
Authors: Faith-Michael E. Uzoka, Boluwaji A. Akinnuwesi, Taiwo Amoo, Flora Aladi, Stephen Fashoto, Moses Olaniyan, Joseph Osuji
Abstract:
The overarching aim of this study is to develop a soft-computing system for the differential diagnosis of tropical diseases. These conditions are of concern to health bodies, physicians, and the community at large because of their mortality rates, and difficulties in early diagnosis due to the fact that they present with symptoms that overlap, and thus become ‘confusable’. We report on the first phase of our study, which focuses on the development of a fuzzy cognitive map model for early differential diagnosis of tropical diseases. We used malaria as a case disease to show the effectiveness of the FCM technology as an aid to the medical practitioner in the diagnosis of tropical diseases. Our model takes cognizance of manifested symptoms and other non-clinical factors that could contribute to symptoms manifestations. Our model showed 85% accuracy in diagnosis, as against the physicians’ initial hypothesis, which stood at 55% accuracy. It is expected that the next stage of our study will provide a multi-disease, multi-symptom model that also improves efficiency by utilizing a decision support filter that works on an algorithm, which mimics the physician’s diagnosis process.Keywords: medical diagnosis, tropical diseases, fuzzy cognitive map, decision support filters, malaria differential diagnosis
Procedia PDF Downloads 322484 The Feasibility Evaluation Of The Compressed Air Energy Storage System In The Porous Media Reservoir
Authors: Ming-Hong Chen
Abstract:
In the study, the mechanical and financial feasibility for the compressed air energy storage (CAES) system in the porous media reservoir in Taiwan is evaluated. In 2035, Taiwan aims to install 16.7 GW of wind power and 40 GW of photovoltaic (PV) capacity. However, renewable energy sources often generate more electricity than needed, particularly during winter. Consequently, Taiwan requires long-term, large-scale energy storage systems to ensure the security and stability of its power grid. Currently, the primary large-scale energy storage options are Pumped Hydro Storage (PHS) and Compressed Air Energy Storage (CAES). Taiwan has not ventured into CAES-related technologies due to geological and cost constraints. However, with the imperative of achieving net-zero carbon emissions by 2050, there's a substantial need for the development of a considerable amount of renewable energy. PHS has matured, boasting an overall installed capacity of 4.68 GW. CAES, presenting a similar scale and power generation duration to PHS, is now under consideration. Taiwan's geological composition, being a porous medium unlike salt caves, introduces flow field resistance affecting gas injection and extraction. This study employs a program analysis model to establish the system performance analysis capabilities of CAES. The finite volume model is then used to assess the impact of porous media, and the findings are fed back into the system performance analysis for correction. Subsequently, the financial implications are calculated and compared with existing literature. For Taiwan, the strategic development of CAES technology is crucial, not only for meeting energy needs but also for decentralizing energy allocation, a feature of great significance in regions lacking alternative natural resources.Keywords: compressed-air energy storage, efficiency, porous media, financial feasibility
Procedia PDF Downloads 67483 Environmental Justice and Marginalized Communities: Addressing Barriers to Energy Justice in the Negev
Authors: Mohammad Naser Aldeen
Abstract:
This study explores environmental justice issues among Bedouin communities in Israel’s Negev region, focusing on energy access and their exclusion from state-supported energy services. As a historically marginalized and indigenous population, Bedouins face intersecting inequities, including limited access to national grid energy, waste management, access to water, systematic discrimination, and environmental harms such as industrial pollution and land degradation. Employing Pellow’s Critical Environmental Justice framework, this research examines how power relations and intersecting identities – ethnicity, class, and indigeneity – shape energy exclusion. Utilizing K. Arrow’s Barriers Analysis framework, it identifies the multifaceted barriers obstructing equitable energy access, including structural policy deficiencies, socio-economic constraints, and administrative neglect. The study also highlights Bedouins’ resilience, advocacy, and community-led strategies to address these challenges through the adoption of solar energy. A mixed-methods approach integrates quantitative data with qualitative narratives from community leaders, policymakers, and activists, revealing the multidimensional nature of energy inequities in the Negev. Findings emphasize the urgent need for inclusive energy policies that address intersectional barriers and prioritize environmental justice in planning and management. By advancing discourse on energy equity, this research underscores the importance of integrating marginalized communities into sustainable energy systems, contributing to the development of equitable energy policies and fostering pathways toward environmental justice and sustainable development.Keywords: environmental justice, energy justice, intersectionality, Bedouin communities, barriers analysis
Procedia PDF Downloads 13482 Infinite Impulse Response Digital Filters Design
Authors: Phuoc Si Nguyen
Abstract:
Infinite impulse response (IIR) filters can be designed from an analogue low pass prototype by using frequency transformation in the s-domain and bilinear z-transformation with pre-warping frequency; this method is known as frequency transformation from the s-domain to the z-domain. This paper will introduce a new method to transform an IIR digital filter to another type of IIR digital filter (low pass, high pass, band pass, band stop or narrow band) using a technique based on inverse bilinear z-transformation and inverse matrices. First, a matrix equation is derived from inverse bilinear z-transformation and Pascal’s triangle. This Low Pass Digital to Digital Filter Pascal Matrix Equation is used to transform a low pass digital filter to other digital filter types. From this equation and the inverse matrix, a Digital to Digital Filter Pascal Matrix Equation can be derived that is able to transform any IIR digital filter. This paper will also introduce some specific matrices to replace the inverse matrix, which is difficult to determine due to the larger size of the matrix in the current method. This will make computing and hand calculation easier when transforming from one IIR digital filter to another in the digital domain.Keywords: bilinear z-transformation, frequency transformation, inverse bilinear z-transformation, IIR digital filters
Procedia PDF Downloads 425481 Employing Innovative Pedagogy: Collaborative (Online) Learning and Teaching In An International Setting
Authors: Sonja Gögele, Petra Kletzenbauer
Abstract:
International strategies are ranked as one of the core activities in the development plans of Austrian universities. This has led to numerous promising activities in terms of internationalization (i.e. development of international degree programmes, increased staff, and student mobility, and blended international projects). The latest innovative approach are so called Blended Intensive Programmes (BIP), which combine jointly delivered teaching and learning elements of at least three participating ERASMUS universities in a virtual and short-term mobility setup. Students who participate in BIP can maintain their study plans at their home institution and include BIP as a parallel activity. This paper presents the experiences of this programme on the topic of sustainable computing hosted by the University of Applied Sciences FH JOANNEUM. By means of an online survey and face-to-face interviews with all stakeholders (20 students, 8 professors), the empirical study addresses the challenges of hosting an international blended learning programme (i.e. virtual phase and on-site intensive phase) and discusses the impact of such activities in terms of innovative pedagogy (i.e. virtual collaboration, research-based learning).Keywords: internationalization, collaborative learning, blended intensive programme, pedagogy
Procedia PDF Downloads 132480 The Relationship between Spanish Economic Variables: Evidence from the Wavelet Techniques
Authors: Concepcion Gonzalez-Concepcion, Maria Candelaria Gil-Fariña, Celina Pestano-Gabino
Abstract:
We analyze six relevant economic and financial variables for the period 2000M1-2015M3 in the context of the Spanish economy: a financial index (IBEX35), a commodity (Crude Oil Price in euros), a foreign exchange index (EUR/USD), a bond (Spanish 10-Year Bond), the Spanish National Debt and the Consumer Price Index. The goal of this paper is to analyze the main relations between them by computing the Wavelet Power Spectrum and the Cross Wavelet Coherency associated with Morlet wavelets. By using a special toolbox in MATLAB, we focus our interest on the period variable. We decompose the time-frequency effects and improve the interpretation of the results by non-expert users in the theory of wavelets. The empirical evidence shows certain instability periods and reveals various changes and breaks in the causality relationships for sample data. These variables were individually analyzed with Daubechies Wavelets to visualize high-frequency variance, seasonality, and trend. The results are included in Proceeding 20th International Academic Conference, 2015, International Institute of Social and Economic Sciences (IISES), Madrid.Keywords: economic and financial variables, Spain, time-frequency domain, wavelet coherency
Procedia PDF Downloads 241