Search results for: input dealers
679 The Role of Sustainable Financing Models for Smallholder Tree Growers in Ghana
Authors: Raymond Awinbilla
Abstract:
The call for tree planting has long been set in motion by the government of Ghana. The Forestry Commission encourages plantation development through numerous interventions, including formulating policies and enacting legislation. However, forest policies have failed, and this has generated major concern over the vast gap between the intentions of national policies and the realities on the ground. This study addresses three objectives: 1) assessing farmers' response and contribution to the tree planting initiative, 2) identifying socio-economic factors hindering the development of smallholder plantations as a livelihood strategy, and 3) determining the level of support available for smallholder tree growers and the factors influencing it. The fieldwork was done in 12 farming communities in Ghana. The article shows that farmers have responded to the call for tree planting and have planted both exotic and indigenous tree species. Farmers have converted 17.2% (369.48 ha) of their total land into plantations and have no problem with land tenure. Operational and marketing constraints include lack of funds for operations, delayed payment, low wood prices, price manipulation by buyers, documentation demands by buyers, and no ready market for harvested wood products. Environmental institutions encourage tree planting; the only exception is the Lands Commission. Support available to farmers includes capacity building in silvicultural practices, organisation of farmers, and linkage to markets and finance. Efforts by the Government of Ghana to enhance forest resources in the country could rely on the input of local populations.
Keywords: livelihood strategy, marketing constraints, environmental institutions, silvicultural practices
Procedia PDF Downloads 58
678 C-eXpress: A Web-Based Analysis Platform for Comparative Functional Genomics and Proteomics in Human Cancer Cell Line, NCI-60 as an Example
Authors: Chi-Ching Lee, Po-Jung Huang, Kuo-Yang Huang, Petrus Tang
Abstract:
Background: Recent advances in high-throughput research technologies, such as next-generation sequencing and multi-dimensional liquid chromatography, make it possible to dissect the complete transcriptome and proteome in a single run for the first time. However, it is almost impossible for many laboratories to handle and analyze these "big" data without the support of a bioinformatics team. We aimed to provide a web-based analysis platform that allows users with only limited bio-computing knowledge to study functional genomics and proteomics. Method: We use NCI-60 as an example dataset to demonstrate the power of the web-based analysis platform and data delivery system: C-eXpress takes a simple text file containing standard NCBI gene or protein IDs and expression levels (RPKM or fold change) as input and generates a distribution map of gene/protein expression levels in a heatmap diagram organized by color gradients. The diagram is hyperlinked to a dynamic HTML table that allows users to filter the datasets based on various gene features. A dynamic summary chart is generated automatically after each filtering step. Results: We implemented an integrated database that contains pre-defined annotations such as gene/protein properties (ID, name, length, MW, pI); pathways based on KEGG and GO biological process; subcellular localization based on GO cellular component; and functional classification based on GO molecular function, kinase, peptidase, and transporter. Multiple ways of sorting columns and rows are also provided for comparative analysis and visualization of multiple samples.
Keywords: cancer, visualization, database, functional annotation
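The color-gradient heatmap step described above can be sketched as a simple binning of expression levels. The function, bucket count, and gene names below are illustrative assumptions for the general technique, not part of C-eXpress itself.

```python
import math

def expression_to_bucket(rpkm, n_buckets=8, max_log=4.0):
    """Map an RPKM value to a heatmap color bucket (0 = lowest)."""
    if rpkm <= 0:
        return 0
    level = math.log10(rpkm + 1.0)       # compress the dynamic range
    frac = min(level / max_log, 1.0)     # clamp to [0, 1]
    return min(int(frac * n_buckets), n_buckets - 1)

# Hypothetical expression row for one sample; bucket indices would then
# be rendered as a color gradient in the heatmap.
genes = {"TP53": 120.0, "MYC": 15.2, "GAPDH": 2500.0, "BRCA1": 0.0}
heat_row = {g: expression_to_bucket(v) for g, v in genes.items()}
print(heat_row)
```

The log transform keeps highly expressed housekeeping genes from saturating the scale.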
Procedia PDF Downloads 618
677 Effects of the Air Supply Outlets Geometry on Human Comfort inside Living Rooms: CFD vs. ADPI
Authors: Taher M. Abou-deif, Esmail M. El-Bialy, Essam E. Khalil
Abstract:
The paper is devoted to numerically investigating the influence of air supply outlet geometry on human comfort inside living rooms. A computational fluid dynamics model is developed to examine the air flow characteristics of a room with different supply air diffusers. The work focuses on air flow patterns and thermal behavior in a room with a small number of occupants. As an input to the full-scale 3-D room model, a 2-D air supply diffuser model that supplies the direction and magnitude of air flow into the room is developed. The effect of air distribution on thermal comfort parameters was investigated by varying the air supply diffuser type, angles, and velocity. Air supply diffuser locations and numbers were also investigated. The pre-processor Gambit is used to create the geometric model with parametric features. The commercially available simulation software Fluent 6.3 is used to solve the differential equations governing the conservation of mass, the three momentum components, and energy in computing the air flow distribution. Turbulence effects of the flow are represented by a well-established two-equation turbulence model; in this work, the so-called standard k-ε model, one of the most widespread turbulence models for industrial applications, was utilized. The basic parameters in this work are air dry-bulb temperature, air velocity, relative humidity, and turbulence parameters, used for numerical predictions of indoor air distribution and thermal comfort. The thermal comfort predictions in this work were based on the ADPI (Air Diffusion Performance Index), the PMV (Predicted Mean Vote) model, and the PPD (Percentage of People Dissatisfied) model; the PMV and PPD were estimated using Fanger's model.
Keywords: thermal comfort, Fanger's model, ADPI, energy efficiency
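Fanger's PPD model mentioned above has a standard closed form (ISO 7730): PPD = 100 − 95·exp(−0.03353·PMV⁴ − 0.2179·PMV²). A minimal sketch of that relation:

```python
import math

def ppd_from_pmv(pmv):
    """Predicted Percentage of People Dissatisfied (%) as a function of the
    Predicted Mean Vote, using Fanger's relation standardized in ISO 7730."""
    return 100.0 - 95.0 * math.exp(-0.03353 * pmv**4 - 0.2179 * pmv**2)

# Even a thermally neutral room (PMV = 0) leaves 5% of occupants dissatisfied.
print(ppd_from_pmv(0.0))  # 5.0
```

The curve is symmetric in PMV, so being slightly too warm or slightly too cool dissatisfies the same share of occupants.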
Procedia PDF Downloads 409
676 Faster Pedestrian Recognition Using Deformable Part Models
Authors: Alessandro Preziosi, Antonio Prioletti, Luca Castangia
Abstract:
Deformable part models achieve high precision in pedestrian recognition, but all publicly available implementations are too slow for real-time applications. We implemented a deformable part model algorithm fast enough for real-time use by exploiting information about the camera position and orientation. This implementation is both faster and more precise than alternative DPM implementations. These results are obtained by computing convolutions in the frequency domain and using lookup tables to speed up feature computation. This approach is almost an order of magnitude faster than the reference DPM implementation, with no loss in precision. Knowing the position of the camera with respect to the horizon, it is also possible to prune many hypotheses based on their size and location. The range of acceptable sizes and positions is set by looking at the statistical distribution of bounding boxes in labelled images. With this approach it is not necessary to compute the entire feature pyramid: for example, higher-resolution features are only needed near the horizon. This results in an increase in mean average precision of 5% and a twofold increase in speed. Furthermore, to reduce misdetections involving small pedestrians near the horizon, input images are supersampled near the horizon. Supersampling the image at 1.5 times the original scale results in an increase in precision of about 4%. The implementation was tested against the public KITTI dataset, obtaining an 8% improvement in mean average precision over the best-performing DPM-based method. By allowing for a small loss in precision, computational time can easily be brought down to our target of 100 ms per image, reaching a solution that is faster and still more precise than all publicly available DPM implementations.
Keywords: autonomous vehicles, deformable part model, DPM, pedestrian detection, real time
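The frequency-domain convolution that underlies the speed-up can be illustrated in 1-D with NumPy. This is a hedged sketch of the general technique only; the authors' implementation works on 2-D HOG feature pyramids.

```python
import numpy as np

def fft_correlate(feature_map, filt):
    """Cross-correlate a 1-D feature strip with a filter via the FFT.
    Pointwise products in the frequency domain replace sliding-window sums,
    which is the source of the speed-up for large filters."""
    n = len(feature_map) + len(filt) - 1
    F = np.fft.rfft(feature_map, n)
    G = np.fft.rfft(filt[::-1], n)   # reversing turns convolution into correlation
    full = np.fft.irfft(F * G, n)
    return full[len(filt) - 1 : len(feature_map)]  # keep the 'valid' region

scores = fft_correlate(np.array([1., 2., 3., 4., 5., 6.]),
                       np.array([1., 0., -1.]))
```

For a filter of length m over a map of length n, this costs O(n log n) instead of O(n·m), which is where DPM's many part filters benefit.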
Procedia PDF Downloads 280
675 VISSIM Modeling of Driver Behavior at Connecticut Roundabouts
Authors: F. Clara Fang, Hernan Castaneda
Abstract:
The Connecticut Department of Transportation (ConnDOT) has constructed four roundabouts in the State of Connecticut within the past ten years. VISSIM traffic simulation software was utilized to analyze these roundabouts during their design phase. The queue length and level of service observed in the field appear to be better than predicted by the VISSIM model. The objectives of this project are to: identify the VISSIM input variables most critical to accurate modeling; recommend VISSIM calibration factors; and provide other recommendations for roundabout traffic operations modeling. Traffic data were collected at these roundabouts using Miovision Technologies. Cameras were set up to capture vehicle circulating activity and entry behavior for two weekdays. A large sample of field data was analyzed to achieve accurate and statistically significant results. The data extracted from the videos include: vehicle circulating speed; critical gap estimated by the maximum likelihood method; peak hour volume; follow-up headway; travel time; and vehicle queue length. A VISSIM simulation of the existing roundabouts was built to compare the queue length and travel time predicted by the simulation with those measured in the field. The research investigated a variety of simulation parameters as calibration factors for describing driver behavior at roundabouts. Among them, critical gap is the most effective calibration variable in roundabout simulation; it has a significant impact on queue length, particularly when the volume is higher. The results will improve the design of future roundabouts in Connecticut and provide decision makers with insights into the relationship between various choices and future performance.
Keywords: driver critical gap, roundabout analysis, simulation, VISSIM modeling
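The maximum-likelihood critical gap estimate can be sketched as below, assuming each driver's critical gap is normally distributed and lies between their largest rejected gap and their accepted gap. The grid search and fixed standard deviation are simplifying assumptions for illustration, not the study's exact procedure.

```python
import math

def norm_cdf(x, mu, sigma):
    """CDF of a normal distribution, via the error function."""
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def critical_gap_mle(accepted, max_rejected, sigma=0.8):
    """Grid-search the mean critical gap (seconds) maximizing the likelihood
    that each driver's critical gap lies between their largest rejected gap
    and their accepted gap."""
    best_mu, best_ll = None, -math.inf
    mu = 2.0
    while mu <= 8.0:
        ll = 0.0
        for a, r in zip(accepted, max_rejected):
            p = norm_cdf(a, mu, sigma) - norm_cdf(r, mu, sigma)
            ll += math.log(max(p, 1e-12))  # guard against log(0)
        if ll > best_ll:
            best_mu, best_ll = mu, ll
        mu += 0.05
    return best_mu

# Hypothetical per-driver observations (seconds), not the ConnDOT data.
mu_hat = critical_gap_mle([4.5, 5.0, 4.8, 5.2], [3.0, 3.5, 3.2, 3.8])
```

In practice both the mean and the standard deviation would be estimated jointly, and the distribution is often taken as log-normal.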
Procedia PDF Downloads 289
674 Theory of the Optimum Signal Approximation Clarifying the Importance in the Recognition of Parallel World and Application to Secure Signal Communication with Feedback
Authors: Takuro Kida, Yuichi Kida
Abstract:
In this paper, we present the mathematical basis of a new class of algorithms that treats a historical cause of continuing discrimination in the world, as well as its solution, by introducing the new concept of a parallel world that includes an invisible set of errors as its companion. With respect to a matrix operator-filter bank in which the matrix operator-analysis-filter bank H and the matrix operator-sampling-filter bank S are given, we first introduce a detailed algorithm to derive the optimum matrix operator-synthesis-filter bank Z that simultaneously minimizes all the worst-case measures of the matrix operator-error-signals E(ω) = F(ω) − Y(ω) between the matrix operator-input-signals F(ω) and the matrix operator-output-signals Y(ω) of the filter bank. Further, feedback is introduced into the above approximation theory, and it is shown that introducing conversations with feedback is not automatically superior to the accumulation of existing knowledge of signal prediction. Secondly, the mathematical concept of category is applied to the above optimum signal approximation, and it is indicated that the category-based approximation theory applies to a set-theoretic consideration of human recognition. Based on this discussion, it is shown why the narrow perception that tends to create isolation shows an apparent advantage in the short term and, often, why such narrow thinking becomes intimate with discriminatory action in a human group. Throughout these considerations, it is argued that, in order to abolish easy and intimate discriminatory behavior, it is important to create a parallel world of conception where we share the set of invisible error signals, including the words and the consciousness of both worlds.
Keywords: matrix filterbank, optimum signal approximation, category theory, simultaneous minimization
Procedia PDF Downloads 143
673 Lower Limb Oedema in Beckwith-Wiedemann Syndrome
Authors: Mihai-Ionut Firescu, Mark A. P. Carson
Abstract:
We present a case of inferior vena cava agenesis (IVCA) associated with bilateral deep venous thrombosis (DVT) in a patient with Beckwith-Wiedemann syndrome (BWS). In adult patients with BWS presenting with bilateral lower limb oedema, specific aetiological factors should be considered, including cardiomyopathy and intra-abdominal tumours. Congenital malformations of the IVC, by causing relative venous stasis, can lead to lower limb oedema either directly or indirectly by favouring lower limb venous thromboembolism; however, they are yet to be reported as an associated feature of BWS. Given its life-threatening potential, prompt initiation of treatment for bilateral DVT is paramount. In BWS patients, however, this can prove more complicated. Due to overgrowth, the above-average birth weight can continue throughout childhood; in this case, the patient's weight reached 170 kg, affecting anticoagulation choice, as direct oral anticoagulants have a limited evidence base in patients with a body mass above 120 kg. Furthermore, the presence of IVCA leads to a long-term increased venous thrombosis risk. Therefore, patients with IVCA and bilateral DVT warrant specialist consideration and may benefit from multidisciplinary team management, with haematology and vascular surgery input. Conclusion: Here, we showcase a rare cause of bilateral lower limb oedema: bilateral deep venous thrombosis complicating IVCA in a patient with Beckwith-Wiedemann syndrome. The importance of this case lies in its novelty, as the association between IVCA and BWS has not yet been described. Furthermore, the treatment of DVT in such situations requires special consideration, taking into account the patient's weight and the presence of a significant predisposing vascular abnormality.
Keywords: Beckwith-Wiedemann syndrome, bilateral deep venous thrombosis, inferior vena cava agenesis, venous thromboembolism
Procedia PDF Downloads 235
672 YOLO-IR: Infrared Small Object Detection in High Noise Images
Authors: Yufeng Li, Yinan Ma, Jing Wu, Chengnian Long
Abstract:
Infrared object detection aims to separate small, dim targets from cluttered backgrounds, and its capabilities extend beyond the limits of visible light, making it invaluable in a wide range of applications for improving safety, security, efficiency, and functionality. However, existing methods are usually sensitive to noise in the input infrared image, leading to decreased target detection accuracy and an increased false alarm rate in high-noise environments. To address this issue, an infrared small target detection algorithm called YOLO-IR is proposed in this paper to improve robustness to high infrared noise. Because high noise significantly reduces the clarity and reliability of target features in infrared images, we design a soft-threshold coordinate attention mechanism to improve the model's ability to extract target features and its robustness to noise. Since noise may overwhelm the local details of the target, resulting in the loss of small target features during depth down-sampling, we propose a deep and shallow feature fusion neck to improve detection accuracy. In addition, because generalized Intersection over Union (IoU)-based loss functions may be sensitive to noise and lead to unstable training in high-noise environments, we introduce a Wasserstein-distance-based loss function to improve the training of the model. The experimental results show that YOLO-IR achieves a 5.0% improvement in recall and a 6.6% improvement in F1-score over the existing state-of-the-art model.
Keywords: infrared small target detection, high noise, robustness, soft-threshold coordinate attention, feature fusion
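A Wasserstein-distance loss for small boxes can be sketched along the lines of the normalized Gaussian Wasserstein distance (NWD) proposed for tiny-object detection, where each box (cx, cy, w, h) is modeled as a 2-D Gaussian. Whether YOLO-IR uses exactly this formulation is an assumption here.

```python
import math

def nwd(box_a, box_b, c=12.8):
    """Normalized Wasserstein distance between two boxes (cx, cy, w, h),
    each modeled as a 2-D Gaussian. c is a dataset-dependent constant."""
    w2_sq = ((box_a[0] - box_b[0]) ** 2 + (box_a[1] - box_b[1]) ** 2
             + ((box_a[2] - box_b[2]) / 2.0) ** 2
             + ((box_a[3] - box_b[3]) / 2.0) ** 2)
    return math.exp(-math.sqrt(w2_sq) / c)

def nwd_loss(pred, target):
    """Loss decreases smoothly as boxes approach, even with zero overlap —
    the property that makes this better-behaved than IoU for tiny targets."""
    return 1.0 - nwd(pred, target)
```

Unlike IoU, this similarity stays informative when two small boxes do not overlap at all, which stabilizes gradients for dim, pixel-scale targets.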
Procedia PDF Downloads 73
671 Design and Creation of a BCI Videogame for Training and Measure of Sustained Attention in Children with ADHD
Authors: John E. Muñoz, Jose F. Lopez, David S. Lopez
Abstract:
Attention Deficit Hyperactivity Disorder (ADHD) is a disorder that affects 1 out of 5 Colombian children, making it a real public health problem in the country. Conventional treatments such as medication and neuropsychological therapy have proved insufficient to decrease the high incidence of ADHD in the principal Colombian cities. This work describes the design and development of a videogame that uses a brain-computer interface not only as an input device but also as a tool to monitor neurophysiological signals. The videogame, named "The Harvest Challenge", is set in the cultural context of a Colombian coffee grower, where the player uses his/her avatar in three mini-games created to reinforce four fundamental abilities: i) waiting, ii) planning, iii) following instructions, and iv) achieving objectives. The details of this collaborative process of designing the multimedia tool according to exact clinical necessities, and the description of the interaction proposals, are presented through the mental stages of attention and relaxation. The final videogame is presented as a tool for sustained attention training in children with ADHD, using as its action mechanism the neuromodulation of beta and theta waves through an electrode located over the central part of the frontal lobe. The electroencephalographic signal is processed automatically inside the videogame, allowing the generation of a report of the evolution of the theta/beta ratio, a biological marker that has been shown to be a sufficient measure to discriminate between children with and without the deficit.
Keywords: BCI, neuromodulation, ADHD, videogame, neurofeedback, theta/beta ratio
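The theta/beta ratio biomarker can be computed from an EEG epoch with a straightforward spectral estimate. The band edges and plain FFT periodogram below are illustrative choices (clinical systems typically use Welch averaging), not necessarily what the game implements.

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Mean power spectral density of `signal` in the [lo, hi] Hz band."""
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

def theta_beta_ratio(eeg, fs=256):
    """Theta (4-7 Hz) over beta (13-30 Hz) band power."""
    return band_power(eeg, fs, 4.0, 7.0) / band_power(eeg, fs, 13.0, 30.0)

# Synthetic 4-second epoch: strong 6 Hz (theta) plus weak 20 Hz (beta)
# activity — the pattern an elevated theta/beta ratio is meant to capture.
fs = 256
t = np.arange(0, 4, 1.0 / fs)
eeg = 3.0 * np.sin(2 * np.pi * 6.0 * t) + np.sin(2 * np.pi * 20.0 * t)
print(theta_beta_ratio(eeg, fs))  # well above 1
```

Neurofeedback then rewards the player whenever the live ratio drops below a personal threshold.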
Procedia PDF Downloads 371
670 Neural Network based Risk Detection for Dyslexia and Dysgraphia in Sinhala Language Speaking Children
Authors: Budhvin T. Withana, Sulochana Rupasinghe
Abstract:
The educational system faces a significant concern with regard to Dyslexia and Dysgraphia, learning disabilities that impact reading and writing abilities. This is particularly challenging for children who speak the Sinhala language, due to its complexity and uniqueness. Commonly used methods to detect the risk of Dyslexia and Dysgraphia rely on subjective assessments, leading to limited coverage and time-consuming processes. Consequently, delays in diagnosis and missed opportunities for early intervention can occur. To address this issue, the project developed a hybrid model that incorporates various deep learning techniques to detect the risk of Dyslexia and Dysgraphia. Specifically, ResNet50, VGG16, and YOLOv8 models were integrated to identify handwriting issues. The outputs of these models were then combined with other input data and fed into an MLP model. Hyperparameters of the MLP model were fine-tuned using grid search cross-validation, enabling the identification of optimal values for the model. This approach proved to be highly effective in accurately predicting the risk of Dyslexia and Dysgraphia, providing a valuable tool for early detection and intervention. The ResNet50 model exhibited a training accuracy of 0.9804 and a validation accuracy of 0.9653. The VGG16 model achieved a training accuracy of 0.9991 and a validation accuracy of 0.9891. The MLP model demonstrated impressive results, with a training accuracy of 0.99918, a testing accuracy of 0.99223, and a loss of 0.01371. These outcomes showcase the high accuracy achieved by the proposed hybrid model in predicting the risk of Dyslexia and Dysgraphia.
Keywords: neural networks, risk detection system, dyslexia, dysgraphia, deep learning, learning disabilities, data science
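The fusion step, combining the three CNN branch outputs with other input data before the MLP, can be sketched as below. The auxiliary feature choices and the single hidden layer are illustrative assumptions, not the authors' exact architecture.

```python
import numpy as np

def build_mlp_input(resnet_p, vgg_p, yolo_p, extra_features):
    """Concatenate the three branch risk probabilities with auxiliary
    per-child features (e.g. age, assessment score) into one MLP input."""
    return np.concatenate(([resnet_p, vgg_p, yolo_p], extra_features))

def mlp_forward(x, W1, b1, W2, b2):
    """One ReLU hidden layer with a sigmoid output: a risk probability."""
    h = np.maximum(W1 @ x + b1, 0.0)
    return 1.0 / (1.0 + np.exp(-(W2 @ h + b2)))

# Hypothetical child: branch probabilities plus age and an assessment score.
x = build_mlp_input(0.91, 0.88, 0.85, np.array([7.0, 0.62]))
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 5)) * 0.1, np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)) * 0.1, np.zeros(1)
risk = mlp_forward(x, W1, b1, W2, b2)
```

In the actual project the MLP weights would be trained, with grid search cross-validation selecting hyperparameters such as layer sizes and learning rate.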
Procedia PDF Downloads 64
669 Reducing Ambulance Offload Delay: A Quality Improvement Project at Princess Royal University Hospital
Authors: Fergus Wade, Jasmine Makker, Matthew Jankinson, Aminah Qamar, Gemma Morrelli, Shayan Shah
Abstract:
Background: Ambulance offload delays (AODs) affect patient outcomes. At baseline, the average AOD at Princess Royal University Hospital (PRUH) was 41 minutes, in breach of the 15-minute target. Aims: By February 2023, we aimed to reduce the average AOD to 30 minutes, the percentage of AODs over 30 minutes (PA30) to 25%, and the percentage over 60 minutes (PA60) to 10%. Methods: Following a root-cause analysis, we implemented two Plan, Do, Study, Act (PDSA) cycles. PDSA-1, 'drop and run': ambulances waiting more than 15 minutes for a handover left their patients in the Emergency Department (ED) and returned to the community. PDSA-2: booking in patients before the handover, allowing direct updates to online records and eliminating the need for handwritten notes. The outcome measures (AOD, PA30, and PA60) and process measures (total ambulances and patients in the ED) were recorded for 16 weeks. Results: In PDSA-1, all parameters increased slightly despite unvarying ED crowding. In PDSA-2, two shifts in the data were seen: initially, a sharp increase in the outcome measures consistent with increased ED crowding, followed by a downward shift when crowding returned to baseline (p<0.01). Within this interval, the AOD fell to 29.9 minutes, and PA30 and PA60 were 31.2% and 9.2%, respectively. Discussion/conclusion: PDSA-1 did not result in any significant changes; lack of compliance was a key cause. The initial upward shift in PDSA-2 is likely associated with NHS staff strikes. However, during the second interval, the AOD and PA60 met our targets of 30 minutes and 10%, respectively, improving patient flow in the ED. This was sustained without further input and, if maintained, saves two paramedic shifts every three days.
Keywords: ambulance offload, district general hospital, handover, quality improvement
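The outcome measures are simple to compute from a list of per-ambulance offload delays. A minimal sketch, with illustrative delays rather than the project's data:

```python
def offload_metrics(delays_min):
    """Average ambulance offload delay (AOD) and the PA30 / PA60
    percentages (share of offloads exceeding 30 and 60 minutes)."""
    n = len(delays_min)
    avg = sum(delays_min) / n
    pa30 = 100.0 * sum(d > 30 for d in delays_min) / n
    pa60 = 100.0 * sum(d > 60 for d in delays_min) / n
    return avg, pa30, pa60

avg, pa30, pa60 = offload_metrics([10, 20, 40, 70])
print(avg, pa30, pa60)  # 35.0 50.0 25.0
```

Tracking PA30 and PA60 alongside the mean matters because a few very long offloads can hide behind an acceptable-looking average.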
Procedia PDF Downloads 105
668 Tokyo Skyscrapers: Technologically Advanced Structures in Seismic Areas
Authors: J. Szolomicki, H. Golasz-Szolomicka
Abstract:
An architectural and structural analysis of selected high-rise buildings in Tokyo is presented in this paper. The capital of Japan is the most densely populated city in the world and, moreover, is located in one of the most active seismic zones. The combination of these factors has resulted in sophisticated designs and innovative engineering solutions, especially in the design and construction of high-rise buildings. Foreign architectural studios (such as Jean Nouvel; Kohn Pedersen Fox Associates; and Skidmore, Owings & Merrill), which specialize in the design of skyscrapers, played a major role in the development of technological ideas and architectural forms for such extraordinary engineering structures. Among the projects they completed, there are examples of high-rise buildings that set precedents for future development. An essential aspect influencing the design of high-rise buildings is the necessity to take into consideration their dynamic reaction to earthquakes and to counteract wind vortices. The need to control the motions of these buildings, induced by earthquake and wind forces, led to the development of various methods and devices for dissipating the energy that occurs during such phenomena. Currently, Japan is a global leader in seismic technologies that safeguard high-rise structures against seismic influence. Thanks to these achievements, the most modern skyscrapers in Tokyo are able to withstand earthquakes with a magnitude of over seven on the Richter scale. The damping devices applied are either passive, requiring no additional power supply, or active, suppressing the reaction with the input of extra energy. In recent years, hybrid dampers have also been used, with an additional active element to improve the efficiency of passive damping.
Keywords: core structures, damping system, high-rise building, seismic zone
Procedia PDF Downloads 175
667 Cement Bond Characteristics of Artificially Fabricated Sandstones
Authors: Ashirgul Kozhagulova, Ainash Shabdirova, Galym Tokazhanov, Minh Nguyen
Abstract:
Synthetic rocks are advantageous over natural rocks in terms of availability and consistency when studying the impact of a particular parameter. Artificial rocks can be fabricated using a variety of techniques, such as mixing sand with Portland cement or gypsum, firing a mixture of sand and fine borosilicate glass powder, or in-situ precipitation of a calcite solution. In this study, sodium silicate solution was used as the cementing agent for quartz sand. The molded soft cylindrical sandstone samples are placed in a gas-tight pressure vessel, where hardening of the material takes place as the chemical reaction between carbon dioxide and the silicate solution progresses. The vessel allows uniform dispersion of carbon dioxide and control over the ambient gas pressure. This paper shows how the bonding material is initially distributed in the intergranular space and on the surface of the sand particles, using electron microscopy and energy-dispersive spectroscopy. The strength of the cement bond is observed as a function of temperature, and the impact of cementing agent dosage on the micro- and macro-characteristics of the sandstone is investigated. Analysis of the cement bond at the micro level helps to trace changes in particle bonding damage after potential yielding. Shearing behavior and compressional response have been examined, yielding estimates of the shearing resistance and cohesion force of the sandstone. These are considered the main input values for mathematical models predicting sand production from weak clastic oil reservoir formations.
Keywords: artificial sandstone, cement bond, microstructure, SEM, triaxial shearing
Procedia PDF Downloads 167
666 An Approach to Building a Recommendation Engine for Travel Applications Using Genetic Algorithms and Neural Networks
Authors: Adrian Ionita, Ana-Maria Ghimes
Abstract:
A lack of features and design refinement, and a failure to promote integrated booking, are some of the reasons why most online travel platforms merely automate old booking processes, being limited to the integration of a small number of services without addressing the user experience. This paper is a practical study of how to improve travel applications by creating user profiles through data mining based on neural networks and genetic algorithms. Choices made by users and their 'friends' in the social network context can be treated as input data for a recommendation engine. The purpose of using these algorithms and this design is to improve the user experience and to deliver more features to users. The paper aims to highlight a broad range of improvements that could be applied to travel applications in terms of design and service integration, while the main scientific contribution remains the technical implementation of the neural network solution. The choice of technologies is also motivated by the initiative of some online booking providers that have made public the fact that they use neural-network-related designs. These companies use similar big-data technologies to provide recommendations for hotels, restaurants, and cinemas, with a neural-network-based recommendation engine building a user 'DNA profile'. This 'profile', a collection of neural networks trained on previous user choices, can improve the usability and design of any type of application.
Keywords: artificial intelligence, big data, cloud computing, DNA profile, genetic algorithms, machine learning, neural networks, optimization, recommendation system, user profiling
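The genetic-algorithm side of the profile optimization can be sketched minimally: each "chromosome" is a vector of preference weights (hotel, restaurant, cinema), scored by how well it reproduces a user's past rating. All names and the fitness choice are illustrative assumptions, not the paper's implementation.

```python
import random

def fitness(weights, features, rating):
    """Negative error between the weighted preference score and the rating."""
    pred = sum(w * f for w, f in zip(weights, features))
    return -abs(pred - rating)

def evolve(features, rating, pop_size=30, generations=60, seed=42):
    """Evolve a preference-weight vector: rank by fitness, keep the top
    half, and fill the rest with crossed-over, point-mutated children."""
    rng = random.Random(seed)
    pop = [[rng.uniform(0, 1) for _ in features] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda w: fitness(w, features, rating), reverse=True)
        parents = pop[: pop_size // 2]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(features))
            child = a[:cut] + b[cut:]          # one-point crossover
            i = rng.randrange(len(child))
            child[i] += rng.uniform(-0.1, 0.1)  # point mutation
            children.append(child)
        pop = parents + children
    return pop[0]

best = evolve(features=[0.8, 0.2, 0.5], rating=0.9)
```

In a real engine the fitness would aggregate many past choices, including those of the user's 'friends', rather than a single rating.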
Procedia PDF Downloads 163
665 The Cultural Adaptation of a Social and Emotional Learning Program for an Intervention in Saudi Arabia's Preschools
Authors: Malak Alqaydhi
Abstract:
A problem in the Saudi Arabian education system is the lack of curriculum-based social and emotional learning (SEL) teaching practices, with the pedagogical concept of SEL yet to be practiced in the Kingdom of Saudi Arabia (KSA). Furthermore, the voices of teachers and parents have not been captured regarding the use of SEL, particularly in preschools. The importance of this research lies in helping to determine, with the input of teachers and mothers of preschoolers, the efficacy of a culturally adapted SEL program. The purpose of this research is to determine the most appropriate SEL intervention method to apply in the cultural context of the Saudi preschool classroom. The study will use a mixed-methods exploratory sequential research design, applying qualitative and quantitative approaches, including semi-structured interviews with teachers and parents of preschoolers and an experimental research approach. The research will proceed in four phases, beginning with a series of interviews with Saudi preschool teachers and mothers, whose voices and perceptions will guide the second phase: the selection and adaptation of a suitable SEL preschool program. The third phase will be the implementation of the intervention by the researcher in the preschool classroom environment, facilitated by the researcher's cultural proficiency and practical experience in Saudi Arabia. The fourth and final phase will be an evaluation to assess the effectiveness of the trialled SEL program among the preschool student participants. The significance of this research stems from its contribution to knowledge about culturally appropriate SEL in Saudi preschools and the opportunity it offers Saudi early childhood educators to consider implementing SEL programs. The findings may also inform the Saudi Ministry of Education and its curriculum designers about SEL programs, which could be beneficial to trial more widely in the Saudi preschool curriculum.
Keywords: social emotional learning, preschool children, Saudi Arabia, child behavior
Procedia PDF Downloads 156
664 Neural Network-based Risk Detection for Dyslexia and Dysgraphia in Sinhala Language Speaking Children
Authors: Budhvin T. Withana, Sulochana Rupasinghe
Abstract:
The problem of Dyslexia and Dysgraphia, two learning disabilities that affect reading and writing abilities, respectively, is a major concern for the educational system. Due to the complexity and uniqueness of the Sinhala language, these conditions are especially difficult to address for children who speak it. Traditional risk detection methods for Dyslexia and Dysgraphia frequently rely on subjective assessments, making broad coverage difficult and the process time-consuming. As a result, diagnoses may be delayed and opportunities for early intervention may be lost. The project was approached by developing a hybrid model that utilizes various deep learning techniques for detecting the risk of Dyslexia and Dysgraphia. Specifically, ResNet50, VGG16, and YOLOv8 were integrated to detect handwriting issues, and their outputs were fed into an MLP model along with several other inputs. The hyperparameters of the MLP model were fine-tuned using grid search cross-validation, which allowed the optimal values to be identified. This approach proved effective in accurately predicting the risk of Dyslexia and Dysgraphia, providing a valuable tool for early detection and intervention of these conditions. The ResNet50 model achieved an accuracy of 0.9804 on the training data and 0.9653 on the validation data. The VGG16 model achieved an accuracy of 0.9991 on the training data and 0.9891 on the validation data. The MLP model achieved an impressive training accuracy of 0.99918 and a testing accuracy of 0.99223, with a loss of 0.01371. These results demonstrate that the proposed hybrid model achieves a high level of accuracy in predicting the risk of Dyslexia and Dysgraphia.
Keywords: neural networks, risk detection system, Dyslexia, Dysgraphia, deep learning, learning disabilities, data science
Procedia PDF Downloads 114
663 Integrating Efficient Anammox with Enhanced Biological Phosphorus Removal Process Through Flocs Management for Sustainable Ultra-deep Nutrients Removal from Municipal Wastewater
Authors: Qiongpeng Dan, Xiyao Li, Qiong Zhang, Yongzhen Peng
Abstract:
The removal of nutrients from wastewater is of great significance for global wastewater recycling and sustainable reuse. Traditional nitrogen and phosphorus removal processes depend heavily on aeration and carbon-source input, which makes it difficult to meet low-carbon goals for energy saving and emission reduction. This study reports a proof-of-concept demonstration of integrating anammox and enhanced biological phosphorus removal (EBPR) through flocs management in a single-stage hybrid bioreactor (biofilms and flocs) for simultaneous nitrogen and phosphorus removal (SNPR). Excellent removal efficiencies of nitrogen (97.7±1.3%) and phosphorus (97.4±0.7%) were obtained in the treatment of low-C/N-ratio (3.0±0.5) municipal wastewater. Interestingly, with the loss of flocs, anammox bacteria (Ca. Brocadia) were highly enriched in the biofilms, with relative and absolute abundances reaching up to 12.5% and 8.3×10¹⁰ copies/g dry sludge, respectively. The anammox contribution to nitrogen removal also rose from 32.6±9.8% to 53.4±4.2%. Endogenous denitrification by flocs was shown to be the main contributor to both nitrite and nitrate reduction, and flocs loss significantly promoted nitrite flow towards anammox, facilitating AnAOB enrichment. Moreover, controlling the flocs' solids retention time at around 8 days could maintain a low poly-phosphorus level of 0.02±0.001 mg P/mg VSS in the flocs, effectively addressing the additional phosphorus removal burden imposed by the enrichment of phosphorus-accumulating organisms in biofilms. This study provides a simple and feasible strategy for integrating anammox and EBPR for SNPR in mainstream municipal wastewater.
Keywords: anammox process, enhanced biological phosphorus removal, municipal wastewater, sustainable nutrients removal
Procedia PDF Downloads 51
662 Evaluation of Fresh, Strength and Durability Properties of Self-Compacting Concrete Incorporating Bagasse Ash
Authors: Abdul Haseeb Wani, Shruti Sharma, Rafat Siddique
Abstract:
Self-compacting concrete is an engineered concrete that flows and de-airs without additional energy input. Such concrete requires a high slump, which can be achieved by adding superplasticizers to the mix. In the present work, bagasse ash is utilised as a partial replacement of cement in self-compacting concrete, which addresses both the land-disposal and environmental concerns related to bagasse ash. An experimental program was carried out to study the fresh, strength, and durability properties of self-compacting concrete made with bagasse ash. Mixes were prepared with four percentages (0, 5, 10 and 15) of bagasse ash as partial replacement of cement. The properties investigated were slump flow, V-funnel and L-box behaviour, compressive strength, splitting tensile strength, chloride-ion penetration resistance, and water absorption. Compressive and splitting tensile strength tests were conducted at the ages of 7 and 28 days. The rapid chloride-ion permeability test was carried out at the age of 28 days, and the water absorption test was carried out at the age of 7 days after an initial curing of 28 days. Test results showed an increase in the compressive and splitting tensile strength of specimens at replacement levels up to 10%, with a slight decrease at the 15% level. Resistance to chloride-ion penetration increased as the percentage of replacement was increased; the charge passed in all specimens containing bagasse ash was lower than that in the specimen without it. Water absorption decreased up to the 10% replacement level and increased at the 15% level.
Hence, it can be concluded that the optimum level of replacement of cement with bagasse ash in self-compacting concrete is 10%, at which the self-compacting concrete has satisfactory flow characteristics (as per the European guidelines), improved compressive and splitting tensile strength, and better durability properties compared to the control mix.
Keywords: bagasse ash, compressive strength, self-compacting concrete, splitting tensile strength
Procedia PDF Downloads 352
661 A Framework for Auditing Multilevel Models Using Explainability Methods
Authors: Debarati Bhaumik, Diptish Dey
Abstract:
Multilevel models, increasingly deployed in industries such as insurance, food production, and entertainment within functions such as marketing and supply chain management, need to be transparent and ethical. Applications usually result in binary classification within groups or hierarchies based on a set of input features. Using open-source datasets, we demonstrate that popular explainability methods, such as SHAP and LIME, consistently underperform in accuracy when interpreting these models: they fail to predict the order of feature importance, the magnitudes, and occasionally even the nature of the feature contribution (negative versus positive contribution to the outcome). Besides accuracy, the computational intractability of SHAP for binomial classification is a cause for concern. For transparent and ethical applications of these hierarchical statistical models, sound audit frameworks need to be developed. In this paper, we propose an audit framework for the technical assessment of multilevel regression models focusing on three aspects: (i) model assumptions and statistical properties, (ii) model transparency using different explainability methods, and (iii) discrimination assessment. To this end, we undertake a quantitative approach and compare intrinsic model methods with SHAP and LIME. The framework comprises a shortlist of KPIs for each of these three aspects, such as PoCE (Percentage of Correct Explanations) and MDG (Mean Discriminatory Gap) per feature, and a traffic-light risk assessment method is furthermore coupled to these KPIs. The audit framework will assist regulatory bodies in performing conformity assessments of AI systems that use multilevel binomial classification models at businesses. It will also help businesses deploying multilevel models to be future-proof and aligned with the European Commission's proposed Regulation on Artificial Intelligence.
Keywords: audit, multilevel model, model transparency, model explainability, discrimination, ethics
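A KPI of the kind the abstract names, PoCE, can be illustrated with a small sketch. The definition assumed here (share of features placed at the same importance rank by the explainability method as by the intrinsic model) is an illustration, not necessarily the authors' exact formula:

```python
def poce(reference_ranking, explained_ranking):
    """Percentage of Correct Explanations: fraction of features that the
    explainability method ranks at the same position as the intrinsic
    (reference) model's importance ranking."""
    correct = sum(r == e for r, e in zip(reference_ranking, explained_ranking))
    return 100.0 * correct / len(reference_ranking)

# Intrinsic model says importance order is x1 > x2 > x3 > x4;
# a SHAP-style explanation swaps the last two features.
reference = ["x1", "x2", "x3", "x4"]
explained = ["x1", "x2", "x4", "x3"]
score = poce(reference, explained)  # 50.0
```

A traffic-light scheme would then map ranges of this score (and of the other KPIs) to green, amber, or red risk levels.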
Procedia PDF Downloads 94
660 Heavy Vehicle Traffic Estimation Using Automatic Traffic Recorders/Weigh-In-Motion Data: Current Practice and Proposed Methods
Authors: Muhammad Faizan Rehman Qureshi, Ahmed Al-Kaisy
Abstract:
Accurate estimation of traffic loads is critical for pavement and bridge design, among other transportation applications. Given the disproportionate impact of heavier axle loads on pavement and bridge structures, truck and heavy vehicle traffic is expected to be a major determinant of traffic load estimation. Heavy vehicle traffic is also a major input to transportation planning and economic studies. The traditional method for estimating heavy vehicle traffic relies primarily on AADT estimation using Monthly Day-of-the-Week (MDOW) adjustment factors together with the percentage of heavy vehicles observed in statewide data collection programs. The MDOW factors are developed from daily and seasonal (or monthly) variation patterns for total traffic, which consists predominantly of passenger cars and other smaller vehicles. Therefore, while these factors may yield reasonable estimates for total traffic (AADT), applying them to heavy vehicle traffic may involve a great deal of approximation. This research assesses the approximation involved in estimating heavy vehicle traffic using MDOW adjustment factors for total traffic (the conventional approach) against three other methods: MDOW adjustment factors for total trucks (classes 5-13), for combination-unit trucks (classes 8-13), and adjustment factors for each vehicle class separately. Results clearly indicate that the conventional method was outperformed by the other three methods by a large margin. However, using the most detailed and data-intensive method (class-specific adjustment factors) does not necessarily yield a more accurate estimate of heavy vehicle traffic.
Keywords: traffic loads, heavy vehicles, truck traffic, adjustment factors, traffic data collection
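The estimation step being compared can be sketched as a simple scaling: a short-term count is multiplied by monthly and day-of-week adjustment factors, where the conventional approach borrows factors derived from total traffic while the proposed variants use truck-specific factors. All factor values below are illustrative, not from the study:

```python
def estimate_aadt(short_count, month_factor, dow_factor):
    """Scale a 24-h short-term count to an annual average daily traffic
    estimate using MDOW adjustment factors (each factor relates the
    annual average to the average for that month / day of week)."""
    return short_count * month_factor * dow_factor

truck_count = 480  # trucks counted on one weekday (hypothetical)

# Conventional: heavy-vehicle count scaled with TOTAL-traffic factors.
aadt_conventional = estimate_aadt(truck_count, 0.85, 0.95)

# Proposed: the same count scaled with truck-specific factors, which
# typically follow a flatter weekly pattern than passenger-car traffic.
aadt_truck_specific = estimate_aadt(truck_count, 0.92, 1.05)
```

The gap between the two estimates for the same count is exactly the approximation error the abstract quantifies.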
Procedia PDF Downloads 23
659 Parameters Identification and Sensitivity Study for Abrasive WaterJet Milling Model
Authors: Didier Auroux, Vladimir Groza
Abstract:
This work is part of the STEEP Marie Curie ITN project and focuses on the identification of unknown parameters of the proposed generic Abrasive WaterJet Milling (AWJM) PDE model, which appears as an ill-posed inverse problem. The need to study this problem comes from industrial milling applications, where the ability to predict and model the final surface with high accuracy is a primary task in the absence of any knowledge of the model parameters to be used. In this framework, we identify the model parameters by minimizing a cost function that measures the difference between experimental and numerical solutions. The adjoint approach, based on the corresponding Lagrangian, makes it possible to determine the unknowns of the AWJM model and the optimal values that could be used to reproduce the required trench profile. Due to the complexity of the nonlinear problem and the large number of model parameters, we use an automatic differentiation software tool (TAPENADE) for the adjoint computations. By adding noise to the artificial data, we show that the parameter identification problem is in fact highly unstable and strongly depends on the input measurements. Regularization terms can be used effectively to deal with the presence of data noise and to improve the correctness of the identification. Based on this approach, we present 2D and 3D results for the identification of the model parameters and for the surface prediction, both with self-generated data and with measurements obtained from real production. Considering different types of model and measurement errors allows us to obtain results acceptable for manufacturing and to expect proper identification of the unknowns.
This approach also allows us to extend the research to more complex cases, including a 3D time-dependent model with variations of the jet feed speed.
Keywords: abrasive waterjet milling, inverse problem, model parameters identification, regularization
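The identification strategy, minimizing a data-misfit cost with a regularization term, can be sketched on a toy one-parameter problem. The linear forward model, the Tikhonov penalty αp², and plain gradient descent below are illustrative substitutes for the AWJM PDE, the paper's regularization, and the TAPENADE-generated adjoint:

```python
def identify_parameter(xs, us, alpha=0.01, lr=0.01, steps=500):
    """Minimize J(p) = sum((p*x - u)^2) + alpha*p^2 by gradient descent.
    The quadratic penalty is a Tikhonov-type regularization term that
    stabilizes the fit when the observations u are noisy."""
    p = 0.0
    for _ in range(steps):
        # dJ/dp for the toy forward model u_model(x) = p * x
        grad = sum(2 * x * (p * x - u) for x, u in zip(xs, us)) + 2 * alpha * p
        p -= lr * grad
    return p

# Synthetic "measurements" from u = 2*x, perturbed slightly to stand in
# for measurement noise on the trench profile.
xs = [1.0, 2.0, 3.0]
us = [2.1, 3.9, 6.05]
p_hat = identify_parameter(xs, us)  # close to the true value 2.0
```

In the real problem the gradient of the cost comes from the adjoint (Lagrangian) computation rather than a closed form, but the outer minimization loop has the same shape.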
Procedia PDF Downloads 316
658 Online Multilingual Dictionary Using Hamburg Notation for Avatar-Based Indian Sign Language Generation System
Authors: Sugandhi, Parteek Kumar, Sanmeet Kaur
Abstract:
Sign Language (SL) is used by deaf people and by others who cannot speak but can hear, or who have a problem with spoken languages due to some disability. It is a visual gesture language that makes use of one or both hands, the arms, the face, and the body to convey meanings and thoughts. An SL automation system is an effective way to provide an interface for communicating with hearing people using a computer. In this paper, an avatar-based dictionary is proposed for a text-to-Indian Sign Language (ISL) generation system. This work also reviews the SL corpora that have become available for various SLs over the years. An ISL generation system requires a written form of SL, and certain techniques are available for writing SL. The system uses the Hamburg Sign Language Notation System (HamNoSys) and the Signing Gesture Mark-up Language (SiGML) for ISL generation. It is developed in PHP using Web Graphics Library (WebGL) technology for 3D avatar animation. A multilingual ISL dictionary is developed using HamNoSys for both the English and Hindi languages. This dictionary is used as a database associating signs with words or phrases of a spoken language. It provides an admin-panel interface to manage the dictionary, i.e., modification, addition, or deletion of a word. Through this interface, HamNoSys notations can be created and stored in a database, and these notations can be converted manually into their corresponding SiGML files. The system takes a natural language input sentence in English or Hindi and generates a 3D sign animation using an avatar. SL generation systems have potential applications in many domains, such as the healthcare sector, media, educational institutes, commercial sectors, and transportation services. This research will help researchers understand the various techniques used for writing SL and for building SL generation systems.
Keywords: avatar, dictionary, HamNoSys, hearing impaired, Indian Sign Language (ISL), sign language
Procedia PDF Downloads 230
657 The Roles of Parental Involvement in the Teaching-Learning Process of Students with Special Needs: Perceptions of Special Needs Education Teachers
Authors: Chassel T. Paras, Tryxzy Q. Dela Cruz, Ma. Carmela Lousie V. Goingco, Pauline L. Tolentino, Carmela S. Dizon
Abstract:
In implementing inclusive education, parental involvement is considered an irreplaceable contributing factor. Parental involvement is an indispensable aspect of the teaching-learning process and has a remarkable effect on students' academic performance. However, there are still differences in the viewpoints, expectations, and needs of parents and teachers that are not yet fully conveyed in their relationship; hence, the perceptions of SNED teachers are essential to their collaboration with parents. This qualitative study explored how SNED teachers perceive the roles of parental involvement in the teaching-learning process of students with special needs. To answer this question, one-on-one, face-to-face, semi-structured interviews were conducted with three SNED teachers in a selected public school in Angeles City, Philippines, that offers special needs education services. The gathered data were then analyzed using Interpretative Phenomenological Analysis (IPA). The results revealed four superordinate themes: (1) roles of parental involvement, (2) parental involvement opportunities, (3) barriers to parental involvement, and (4) parent-teacher collaboration practices. These results indicate that SNED teachers are aware of the roles and importance of parental involvement; however, despite parent-teacher collaboration, barriers still impede parental involvement. SNED teachers also acknowledge the large role of parents, who serve as main figures in the teaching-learning process of their children with special needs.
Lastly, these results can be used as input in developing a school-facilitated parental involvement framework that encompasses the contribution of SNED teachers in planning, developing, and evaluating parental involvement programs, which future researchers can also use in their studies.
Keywords: parental involvement, special needs education, teaching-learning process, teachers' perceptions, special needs education teachers, interpretative phenomenological analysis
Procedia PDF Downloads 112
656 Factors Affecting Entrepreneurial Behavior and Performance of Youth Entrepreneurs in Malaysia
Authors: Mohd Najib Mansor, Nur Syamilah Md. Noor, Abdul Rahim Anuar, Shazida Jan Mohd Khan, Ahmad Zubir Ibrahim, Badariah Hj Din, Abu Sufian Abu Bakar, Kalsom Kayat, Wan Nurmahfuzah Jannah Wan Mansor
Abstract:
This study focused on the behavior of youth entrepreneurs, especially entrepreneurial self-efficacy, and on performance in micro SMEs in Malaysia. Entrepreneurship development calls for support from various quarters, and above all the need exists to initiate a youth entrepreneurship culture and drive amongst the youth in society. Although backed by government and non-government organizations, micro-entrepreneurs still face challenges that greatly delay their progress and growth, and consequently their contribution to economic advancement. Micro-entrepreneurs are confronted with unique difficulties such as uncertainty, innovation, and evolution. Entrepreneurial characteristics such as the need for achievement, internal locus of control, risk-taking, and innovativeness have been recognized in the literature as highly associated with entrepreneurial behavior. The data in this study were obtained from the Department of Statistics, Malaysia. A random sample of 830 micro-entrepreneur respondents was drawn from 14 states. The study adopted a quantitative approach whereby a questionnaire was used to gather data, and multiple regression analysis was chosen as the method of analysis. The study provides insight into the factors affecting the entrepreneurial behavior and performance of youth entrepreneurs in micro SMEs. The findings showed that Malaysian youth entrepreneurs lack the entrepreneurial self-efficacy needed to accomplish greater success in their business ventures. The establishment of entrepreneurial schools, to expose youth to entrepreneurship from an early age, and the development of special training focused on the creation of business networks are recommended so that a continuous entrepreneurial culture is crafted.
Keywords: youth entrepreneurs, micro entrepreneurs, entrepreneurial self-efficacy, entrepreneurial performance
Procedia PDF Downloads 301
655 The Impact of Sedimentary Heterogeneity on Oil Recovery in Basin-plain Turbidite: An Outcrop Analogue Simulation Case Study
Authors: Bayonle Abiola Omoniyi
Abstract:
In turbidite reservoirs with volumetrically significant thin-bedded turbidites (TBTs), thin-pay intervals may be underestimated when calculating reserve volume because of the poor vertical resolution of conventional well logs. This paper demonstrates the strong control of bed-scale sedimentary heterogeneity on oil recovery using six facies distribution scenarios generated from outcrop data of the Eocene Itzurun Formation, Basque Basin (northern Spain). The variable net sand volume across these scenarios serves as the primary source of sedimentary heterogeneity, affecting the sandstone-mudstone ratio, sand and shale geometry and dimensions, lateral and vertical variations in bed thickness, and attribute indices. These attributes provided the input parameters for modeling the scenarios; the models are 20 m (65.6 ft) thick. Simulation of the scenarios reveals that oil production is markedly enhanced where the degree of sedimentary heterogeneity and the resultant permeability contrast are low, as exemplified by Scenarios 1, 2, and 3. In these scenarios, bed architecture encourages better apparent vertical connectivity across intervals of laterally continuous beds. By contrast, the low net-to-gross Scenarios 4, 5, and 6 show rapidly declining oil production rates and higher water cut, with more oil effectively trapped in low-permeability layers. These scenarios may possess enough lateral connectivity for injected water to sweep oil to the production well, but such sweep is achieved at the cost of high water production. It is therefore imperative to consider not only the net-to-gross threshold but also the facies stacking pattern and related attribute indices to better understand how to manage water production effectively for optimum oil recovery from basin-plain reservoirs.
Keywords: architecture, connectivity, modeling, turbidites
Procedia PDF Downloads 24
654 Energy and Carbon Footprint Analysis of Food Waste Treatment Alternatives for Hong Kong
Authors: Asad Iqbal, Feixiang Zan, Xiaoming Liu, Guang-Hao Chen
Abstract:
The water-food-energy nexus is a vital subject for achieving the sustainable development goals worldwide. Wastewater (WW) and food waste (FW) from municipal sources are primary contributors to a country's total waste streams. Along with the loss of these invaluable natural resources, their treatment systems also consume considerable energy and resource inputs, with a perceptible contribution to global warming. Hence, the global paradigm has evolved from simple pollution mitigation to resource recovery systems (RRS). In this study, the prospects of six alternative FW treatment scenarios for Hong Kong are quantitatively evaluated in terms of energy use and greenhouse emissions (GHEs) potential, using life cycle assessment (LCA). The scenarios considered are aerobic composting, anaerobic digestion (AD), combined AD and composting (ADC), co-disposal and treatment with wastewater (CoD-WW), incineration, and conventional landfilling as the base case. Results revealed that, in terms of GHE savings, all new scenarios performed significantly better than conventional landfilling, with ADC as the best case and incineration, AD alone, and CoD-WW ranked second, third, and fourth best, respectively. In terms of energy balance, composting was the worst case, while incineration ranked best and AD alone, ADC, and CoD-WW ranked second, third, and fourth best, respectively. However, these results are highly sensitive to boundary settings, e.g., the inclusion of biogenic carbon emissions and of waste collection and transportation, and to several other influential parameters. The study provides valuable insights and policy guidelines for local decision-makers and a generic modelling template for environmental impact assessment.
Keywords: food waste, resource recovery, greenhouse emissions, energy balance
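The comparison logic behind such an LCA ranking, net greenhouse emissions per scenario as direct emissions minus credits for recovered energy and materials, can be sketched as a simple balance. All numbers below are entirely illustrative, not the study's inventory data:

```python
def net_ghg(direct_emissions, energy_recovered, grid_factor, material_credit=0.0):
    """Net GHG (kg CO2-eq per tonne of food waste): direct process
    emissions minus the credit for displacing grid energy and for
    recovered materials such as compost."""
    return direct_emissions - energy_recovered * grid_factor - material_credit

scenarios = {
    # name: (direct kg CO2-eq/t, recovered energy kWh/t, material credit)
    "landfill":   (740.0,  60.0,  0.0),
    "AD+compost": (180.0, 220.0, 25.0),
    "incinerate": (420.0, 480.0,  0.0),
}
GRID = 0.7  # kg CO2-eq per kWh of displaced grid energy (assumed)

results = {name: net_ghg(d, e, GRID, m) for name, (d, e, m) in scenarios.items()}
best = min(results, key=results.get)  # lowest net GHG wins
```

A full LCA replaces these three numbers per scenario with a complete inventory and sensitivity ranges, but the ranking step is the same subtraction.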
Procedia PDF Downloads 107
653 Theoretical Analysis and Design Consideration of Screened Heat Pipes for Low-Medium Concentration Solar Receivers
Authors: Davoud Jafari, Paolo Di Marco, Alessandro Franco, Sauro Filippeschi
Abstract:
This paper summarizes the results of an investigation into heat pipe heat transfer for solar collector applications. The study aims to show the feasibility of a concentrating solar collector coupled with a heat pipe. Particular emphasis is placed on the capillary and boiling limits in capillary porous structures with different mesh numbers and wick thicknesses. A mathematical model of a cylindrical heat pipe is applied to study its behaviour when exposed to high heat input at the evaporator. The steady-state analytical model includes two-dimensional heat conduction in the heat pipe wall, the liquid flow in the wick, and the vapor hydrodynamics. A sensitivity analysis was conducted considering different design criteria and working conditions. Different wicks (mesh numbers 50, 100, 150, 200, 250, and 300) and different porosities (0.5, 0.6, 0.7, 0.8, and 0.9) with different wick thicknesses (0.25, 0.5, 1, 1.5, and 2 mm) are analyzed with water as the working fluid. Results show that it is possible to improve the heat transfer capability (HTC) of a heat pipe by selecting the appropriate wick thickness, effective pore radius, and section lengths for a given configuration, and that optimal design criteria exist (optimal wick thickness and evaporator, adiabatic, and condenser section lengths). It is shown that the boiling and wicking limits are connected and occur in dependence on each other. As different parts of the heat pipe's external surface collect different fractions of the total incoming insolation, the analysis of non-uniform heat flux distribution indicates that the peak heat flux is not an influential parameter. The parametric investigations aim to determine the working limits and thermal performance of heat pipes for medium-temperature solar collector applications.
Keywords: screened heat pipes, analytical model, boiling and capillary limits, concentrating collector
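The capillary limit referred to above follows from the pressure balance: the available capillary head 2σ/r_eff must cover the liquid and vapor pressure drops. A minimal sketch of the classical closed-form estimate (gravity neglected, Darcy flow in the wick, approximate water properties; a textbook relation, not the paper's full 2-D model):

```python
def capillary_limit(sigma, rho_l, h_fg, mu_l, K, A_w, L_eff, r_eff):
    """Maximum heat transport (W) before the capillary pressure
    2*sigma/r_eff can no longer drive the liquid return flow through
    the wick (Darcy flow, gravity and vapor pressure drop neglected)."""
    dp_cap = 2.0 * sigma / r_eff  # available capillary head, Pa
    return dp_cap * rho_l * h_fg * K * A_w / (mu_l * L_eff)

# Water at ~30 degC in a mesh wick; K, A_w, r_eff are assumed values.
q_max = capillary_limit(
    sigma=0.071,   # N/m, surface tension
    rho_l=996.0,   # kg/m^3, liquid density
    h_fg=2.43e6,   # J/kg, latent heat of vaporization
    mu_l=8.0e-4,   # Pa*s, liquid viscosity
    K=1.0e-10,     # m^2, wick permeability (assumed)
    A_w=3.0e-5,    # m^2, wick cross-sectional area (assumed)
    L_eff=0.25,    # m, effective transport length
    r_eff=5.0e-5,  # m, effective pore radius (assumed)
)  # on the order of 100 W for these inputs
```

A finer mesh shrinks r_eff (raising the capillary head) but also shrinks K (raising the liquid pressure drop), which is why an optimal wick exists.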
Procedia PDF Downloads 560
652 An Industrial Steady State Sequence Disorder Model for Flow Controlled Multi-Input Single-Output Queues in Manufacturing Systems
Authors: Anthony John Walker, Glen Bright
Abstract:
The challenge faced by manufacturers when producing custom products is that each product needs exact components. This can cause work-in-process instability due to the component matching constraints imposed on assembly cells. Clearing-type flow control policies have been used extensively to mediate server access among multiple arrival processes. Although the stability and performance of clearing policies have been well formulated and studied in the literature, the growth in arrival-to-departure sequence disorder for each arriving job across a serving resource is still an area for further analysis. In this paper, a closed-form industrial model is formulated that characterizes arrival-to-departure sequence disorder in stable manufacturing systems under a clearing-type flow control policy. Specifically addressed are the effects of sequence disorder imposed on a downstream assembly cell in terms of the work-in-process instability induced through component matching constraints. Results from a simulated manufacturing system show that the steady-state average sequence disorder in parallel upstream processing cells can be balanced in order to decrease downstream assembly system instability. Simulation results also show that the closed-form model accurately describes the growth and limiting behavior of the average sequence disorder between parts arriving at and departing from a manufacturing system flow-controlled via a clearing policy.
Keywords: assembly system constraint, custom products, discrete sequence disorder, flow control
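The quantity being modeled, arrival-to-departure sequence disorder, can be illustrated with a small sketch. The disorder metric below (average absolute displacement of each job between its arrival rank and departure rank) is an assumed illustration of the concept, not the paper's closed-form model:

```python
def sequence_disorder(arrival_order, departure_order):
    """Average absolute displacement of each job between its rank in
    the arrival sequence and its rank in the departure sequence."""
    depart_rank = {job: i for i, job in enumerate(departure_order)}
    disp = [abs(i - depart_rank[job]) for i, job in enumerate(arrival_order)]
    return sum(disp) / len(disp)

# Jobs 0..5 alternate between two parallel queues feeding one server;
# a clearing policy empties queue A (jobs 0, 2, 4) entirely before
# switching to queue B (jobs 1, 3, 5), reordering the departures.
arrivals = [0, 1, 2, 3, 4, 5]
departures = [0, 2, 4, 1, 3, 5]
disorder = sequence_disorder(arrivals, departures)  # 1.0
```

Downstream, a displaced job is one whose matching components arrive out of step, which is the mechanism behind the work-in-process instability described above.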
Procedia PDF Downloads 178
651 Creativity as a National System: An Exploratory Model towards Enhance Innovation Ecosystems
Authors: Oscar Javier Montiel Mendez
Abstract:
The link between knowledge, creativity, innovation, and entrepreneurship is well established, and the importance of national innovation systems (NIS) has been broadly emphasized; the NIS approach stresses that the flow of information and technology among people, organizations, and institutions is key to the innovation process. Understanding the linkages among the actors involved in innovation is relevant to NIS. Creativity is supposed to fuel NIS, but it is mainly considered at the personal, group, or organizational level, leaving aside a fourth level: as a national system. It is suggested that NIS takes creativity for granted, as an ex-ante stage already solved through mechanisms such as programs that nurture it at elementary and secondary schools, at universities, or through specific public or organizational programs; or worse, that the individual already has this competence, and that the elements of the NIS will communicate with each other in a way that leads to the creation of S-curves, with an impact on national systems and programs for entrepreneurship, clusters, and the economy. Yet creativity appears constantly, at any point in the NIS, as its key input. Through an initial, exploratory, focused, and refined literature review, based on Csikszentmihalyi's systemic model, Amabile's componential theory, Kaufman and Beghetto's 4C model, and the OECD's (Organisation for Economic Co-operation and Development) NIS model (expanded), a theoretical model of a national creativity system (NCS) is elaborated. The results suggest that establishing an NCS, something that appears not to have been previously addressed, as a strategic companion for a NIS, installing it not only as a national education strategy but as the NIS's foundation, and managing it and measuring its impact on the NIS, entrepreneurship, and the rest of the ecosystem, could make public policies more effective. It is also suggested that its implementation could become a significant factor in strengthening local, regional, and national economies.
Likewise, it should have a beneficial impact on the efforts of all the stakeholders involved and should help prevent some of the failures that NIS present.
Keywords: national creativity system, national innovation system, entrepreneurship ecosystem, systemic creativity
Procedia PDF Downloads 430
650 Microwave Freeze Drying of Fruit Foams for the Production of Healthy Snacks
Authors: Sabine Ambros, Mine Oezcelik, Evelyn Dachmann, Ulrich Kulozik
Abstract:
The nutritional quality and taste of dried fruit products are still often unsatisfactory and no longer meet current consumer trends. Dried foams from fruit puree could be an attractive alternative: thanks to their open-porous structure, a new sensory perception with a sudden and very intense aroma release could be generated. To make such high-quality fruit snacks affordable for the consumer, a gentle but at the same time fast drying process has to be applied. Therefore, microwave-assisted freeze drying of raspberry foams was investigated in this work and compared with the conventional freeze-drying technique in terms of nutritional parameters, such as antioxidative capacity, anthocyanin content, and vitamin C, and the physical parameters colour and wettability. The following process settings were applied: 0.01 kPa chamber pressure and a maximum temperature of 30 °C for both freeze and microwave freeze drying. The influence of microwave power levels between 1 and 5 W/g on the dried foams was investigated. Intermediate microwave power settings led to the highest nutritional values, a colour appearance comparable to the undried foam, and proper wettability, and process stability could also be guaranteed at these power levels. Through the volumetric energy input of the microwaves, the drying time could be reduced from 24 h in conventional freeze drying to about 6 h. The short drying times further resulted in an equally high retention of the above-mentioned parameters in both drying techniques. Hence, microwave-assisted freeze drying could accelerate the process compared to freeze drying and is therefore an interesting alternative drying technique which, at industrial scale, enables higher efficiency and higher product throughput.
Keywords: foam drying, freeze drying, fruit puree, microwave freeze drying, raspberry
Procedia PDF Downloads 341