Search results for: logistics network optimization
1487 Estimation of the Road Traffic Emissions and Dispersion in the Developing Countries Conditions
Authors: Hicham Gourgue, Ahmed Aharoune, Ahmed Ihlal
Abstract:
We present in this work our model of road traffic emissions (line sources) and of the dispersion of these emissions, named DISPOLSPEM (Dispersion of Poly Sources and Pollutants Emission Model). In its emission part, the model was designed to keep the bottom-up and top-down approaches consistent. It also generates emission inventories from a reduced set of input parameters, adapted to the conditions existing in Morocco and other developing countries. Although several simplifications are made, the performance of the model is preserved. A further important advantage of the model is that it allows the emission-rate uncertainty to be calculated with respect to each of the input parameters. In the dispersion part of the model, an improved line source model has been developed, implemented, and tested against a reference solution. It improves on previous line-source Gaussian plume formulas in accuracy without being too demanding in terms of computational resources. In the case study presented here, the biggest errors were associated with the ends of line source sections; these errors will be cancelled by adjacent sections of line sources during the simulation of a road network. In cases where the wind is parallel to the source line, combining discretized-source and analytical line-source formulas markedly reduces the error. Because this combination is applied only for a small number of wind directions, it should not excessively increase the calculation time.
Keywords: air pollution, dispersion, emissions, line sources, road traffic, urban transport
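The discretized line-source idea described above can be sketched numerically: split a finite road segment into many point sources, evaluate the standard Gaussian plume formula for each, and sum the contributions at a receptor. The emission rate, source height, and the fixed dispersion coefficients below are illustrative assumptions, not DISPOLSPEM values (real models make the coefficients grow with downwind distance).

```python
import math

def point_plume(q, u, x, y, z, h, sy, sz):
    """Gaussian plume concentration from one point source.

    q: emission rate (g/s), u: wind speed (m/s), (x, y, z): receptor position
    downwind/crosswind/vertical relative to the source, h: effective source
    height (m), sy, sz: horizontal and vertical dispersion coefficients (m).
    """
    if x <= 0:  # receptor upwind of the source: no contribution
        return 0.0
    lateral = math.exp(-y * y / (2 * sy * sy))
    vertical = (math.exp(-(z - h) ** 2 / (2 * sz * sz))
                + math.exp(-(z + h) ** 2 / (2 * sz * sz)))  # ground reflection
    return q / (2 * math.pi * u * sy * sz) * lateral * vertical

def line_source(q_per_m, u, receptor, line_y, x_src, n=200):
    """Discretize a crosswind line source at x = x_src, spanning
    y in [line_y[0], line_y[1]], into n point sources and sum their plumes."""
    xr, yr, zr = receptor
    y0, y1 = line_y
    dy = (y1 - y0) / n
    total = 0.0
    for i in range(n):
        ys = y0 + (i + 0.5) * dy          # midpoint of segment i
        q = q_per_m * dy                  # emission of this segment (g/s)
        total += point_plume(q, u, xr - x_src, yr - ys, zr, 0.5, 25.0, 12.0)
    return total

# Receptor 100 m downwind of a 200 m road segment, on the road's centreline,
# versus a receptor far off to the side of the segment
c_centre = line_source(0.01, 3.0, (100.0, 0.0, 1.5), (-100.0, 100.0), 0.0)
c_offset = line_source(0.01, 3.0, (100.0, 300.0, 1.5), (-100.0, 100.0), 0.0)
```

The concentration on the centreline dominates the far-offset receptor, as expected for a crosswind line source.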
Procedia PDF Downloads 442
1486 Coupling Static Multiple Light Scattering Technique With the Hansen Approach to Optimize Dispersibility and Stability of Particle Dispersions
Authors: Guillaume Lemahieu, Matthias Sentis, Giovanni Brambilla, Gérard Meunier
Abstract:
Static Multiple Light Scattering (SMLS) has been shown to be a straightforward technique for the characterization of colloidal dispersions without dilution, as multiply scattered light in backscattered and transmitted modes is directly related to the concentration and size of the scatterers present in the sample. Accordingly, the use of SMLS for stability measurement of various dispersion types has already been widely described in the literature. Indeed, starting from a homogeneous dispersion, the variation of backscattered or transmitted light can be attributed to destabilization phenomena, such as migration (sedimentation, creaming) or particle size variation (flocculation, aggregation). To investigate the dispersibility of colloidal suspensions further, an experimental set-up for "at the line" SMLS experiments has been developed to understand the impact of formulation parameters on particle size and dispersibility. The SMLS experiment is performed with a high acquisition rate (up to 10 measurements per second), without dilution, and under direct agitation. Using this experimental device, SMLS detection can be combined with the Hansen approach to optimize the dispersing and stabilizing properties of TiO₂ particles. The dispersibility and stability spheres generated are clearly separated, arguing that lower stability is not necessarily a consequence of poor dispersibility. Beyond this clarification, the combined SMLS-Hansen approach is a major step toward optimizing the dispersibility and stability of colloidal formulations by finding solvents with the best compromise between dispersing and stabilizing properties.
Such studies can be used to find better dispersion media and greener, cheaper solvents to optimize particle suspensions, reduce the content of costly stabilizing additives, or satisfy evolving product regulatory requirements in the various industrial fields using suspensions (paints & inks, coatings, cosmetics, energy).
Keywords: dispersibility, stability, Hansen parameters, particles, solvents
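The Hansen approach mentioned above ranks solvents by their distance from the particle's position in Hansen solubility parameter space. A minimal sketch of the standard distance and RED number follows; the (dD, dP, dH) values and the interaction radius are illustrative assumptions, not measured parameters for TiO₂ or any real solvent.

```python
import math

def hansen_distance(s1, s2):
    """Conventional Hansen distance Ra between two (dD, dP, dH) sets.
    The factor 4 on the dispersion term is part of the standard formula."""
    dd = s1[0] - s2[0]
    dp = s1[1] - s2[1]
    dh = s1[2] - s2[2]
    return math.sqrt(4 * dd * dd + dp * dp + dh * dh)

def red(solvent, solute, r0):
    """Relative Energy Difference: RED < 1 places the solvent inside the
    solute's interaction sphere of radius r0 (a 'good' solvent)."""
    return hansen_distance(solvent, solute) / r0

# Illustrative (dD, dP, dH) values in MPa^0.5 -- assumptions for the sketch
particle = (17.0, 8.0, 9.0)    # hypothetical particle-surface parameters
solvent_a = (16.8, 9.0, 8.2)   # close to the particle in Hansen space
solvent_b = (15.5, 16.0, 42.3) # far from it

red_a = red(solvent_a, particle, r0=7.0)
red_b = red(solvent_b, particle, r0=7.0)
```

Solvent A sits inside the hypothetical sphere (RED < 1) while solvent B falls well outside it, which is how candidate dispersion media would be screened.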
Procedia PDF Downloads 111
1485 A Non-Destructive Estimation Method for Internal Time in Perilla Leaf Using Hyperspectral Data
Authors: Shogo Nagano, Yusuke Tanigaki, Hirokazu Fukuda
Abstract:
Vegetables harvested early in the morning or late in the afternoon are valued in plant production, so the time of harvest is important. The biological functions known as circadian clocks have a significant effect on this harvest timing. The purpose of this study was to estimate the circadian clock non-destructively and so construct a method for determining a suitable harvest time. We took eight samples of green perilla (Perilla frutescens var. crispa) every 4 hours, six times over 1 day, and analyzed all samples at the same time. A hyperspectral camera was used to collect spectrum intensities at 141 different wavelengths (350–1050 nm). Calculation of correlations between the spectrum intensity of each wavelength and harvest time suggested the suitability of the hyperspectral camera for non-destructive estimation. However, even the most highly correlated wavelength had only a weak correlation, so we used machine learning to raise the accuracy of estimation and constructed a machine learning model to estimate the internal time of the circadian clock. Artificial neural networks (ANN) were used because they are an effective analysis method for large amounts of data. Using the estimation model resulted in an error between estimated and real times of 3 min, and the estimations were made in less than 2 hours. Thus, we successfully demonstrated this method of non-destructively estimating internal time.
Keywords: artificial neural network (ANN), circadian clock, green perilla, hyperspectral camera, non-destructive evaluation
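The wavelength-screening step described above (correlating each band's intensity with harvest time) can be sketched with a plain Pearson correlation. The two wavelengths and their intensity series below are hypothetical stand-ins for the 141 measured bands.

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: six sampling times (hours) and intensity at two bands
harvest_time = [0, 4, 8, 12, 16, 20]
intensity = {
    680: [0.91, 0.84, 0.77, 0.70, 0.62, 0.55],  # tracks time closely
    450: [0.40, 0.43, 0.39, 0.42, 0.41, 0.40],  # nearly flat
}

# Rank bands by |r| against harvest time, as a screening step before ML
ranked = sorted(intensity, key=lambda w: -abs(pearson(intensity[w], harvest_time)))
```

The informative band ranks first; in the study, even the best band's correlation was weak, motivating the ANN model on all bands jointly.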
Procedia PDF Downloads 299
1484 Multi-Level Air Quality Classification in China Using Information Gain and Support Vector Machine
Authors: Bingchun Liu, Pei-Chann Chang, Natasha Huang, Dun Li
Abstract:
Machine learning and data mining are two important tools for extracting useful information and knowledge from large datasets. In machine learning, classification is a widely used technique to predict qualitative variables and is generally preferred over regression from an operational point of view. Due to the enormous increase in air pollution in various countries, especially China, air quality classification has become one of the most important topics in air quality research and modelling. This study introduces a hybrid classification model based on information theory and the Support Vector Machine (SVM), using air quality data for four cities in China, namely Beijing, Guangzhou, Shanghai, and Tianjin, from Jan 1, 2014 to April 30, 2016. China's Ministry of Environmental Protection classifies daily air quality into six levels, namely Serious Pollution, Severe Pollution, Moderate Pollution, Light Pollution, Good, and Excellent, based on their respective Air Quality Index (AQI) values. Using information theory, information gain (IG) is calculated and feature selection is performed for both categorical features and continuous numeric features. The SVM algorithm is then trained on the selected features with cross-validation. The final evaluation reveals that the IG and SVM hybrid model performs better than SVM (alone), Artificial Neural Network (ANN), and K-Nearest Neighbours (KNN) models in terms of accuracy as well as complexity.
Keywords: machine learning, air quality classification, air quality index, information gain, support vector machine, cross-validation
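The information gain step of the hybrid model can be sketched in a few lines: IG is the drop in label entropy once a feature's value is known, and features are ranked by it before the SVM stage. The toy records below are invented for illustration (continuous features would first be discretized, and the SVM step is not shown).

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def info_gain(feature_values, labels):
    """IG(feature) = H(labels) - sum_v p(v) * H(labels | feature = v)."""
    n = len(labels)
    cond = 0.0
    for v in set(feature_values):
        subset = [l for f, l in zip(feature_values, labels) if f == v]
        cond += (len(subset) / n) * entropy(subset)
    return entropy(labels) - cond

# Hypothetical daily records: two categorical features and an AQI level label
season = ["winter", "winter", "winter", "summer", "summer", "summer"]
wind   = ["low", "high", "low", "high", "low", "high"]
aqi    = ["Severe", "Severe", "Severe", "Good", "Good", "Good"]

gains = {"season": info_gain(season, aqi), "wind": info_gain(wind, aqi)}
```

Here season perfectly separates the labels (IG equals the full label entropy of 1 bit) while wind barely helps, so the ranking would keep season for the SVM stage.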
Procedia PDF Downloads 235
1483 The Application of King IV by Rugby Clubs Affiliated to a Rugby Union in South Africa
Authors: Anouschka Swart
Abstract:
In 2023, sport faces a plethora of challenges to its integrity, including but not limited to match-fixing, corruption, and doping, that threaten both its commercial and public appeal. The continuous changes and commercialisation that have occurred within sport have led to a variety of consequences, resulting in the need for ethics to be revived, as in the past, to ensure sport is not in danger. In order to understand governance better, the Institute of Directors in Southern Africa outlined a process explaining all elements of corporate governance. This process describes a governing body's responsibilities as strategy, policy, oversight, and accountability. These responsibilities are further elaborated into 16 governing principles, which are highlighted as essential for all organisations in order to achieve and deliver effective governance outcomes. These outcomes are a good ethical culture, good performance, effective control, and legitimacy. The aim of the study was therefore to investigate the general state of governance within the clubs affiliated with a rugby union in South Africa, utilizing the King IV Code as the framework. The results indicated that the King IV Code principles are implemented by these rugby clubs to demonstrate commitment to corporate governance to both internal and external stakeholders. It is, however, evident that a similar report focused solely on sport is a necessity in the industry, as this would provide more clarity on sport-specific problems.
Keywords: South Africa, sport, King IV, responsibilities
Procedia PDF Downloads 71
1482 Reliability and Maintainability Optimization for Aircraft’s Repairable Components Based on Cost Modeling Approach
Authors: Adel A. Ghobbar
Abstract:
The airline industry is continuously challenged to safely increase the service life of aircraft within limited maintenance budgets. Operators are looking for the most qualified maintenance providers of aircraft components, offering the finest customer service. The component owner and maintenance provider offers an Abacus agreement (Aircraft Component Leasing) to increase the efficiency and productivity of its customer service. To improve customer service, the current focus on No Fault Found (NFF) units must shift to a focus on Early Failure (EF) units. Since EF units have a significant impact on customer satisfaction, their reliability needs to be increased at minimal cost, which is the goal of this paper. By analysing the reliability of EF units relative to NFF units, and in particular by performing a root cause analysis of EF units integrated with a cost analysis, using a failure mode analysis tool and a cost model, a set of EF maintenance improvements is derived. The data used for the investigation of the EF units were obtained from the Pentagon system, an Enterprise Resource Planning (ERP) system used by Fokker Services. The Pentagon system monitors components needing repair from Fokker aircraft owners, the Abacus exchange pool, and commercial customers. The data were selected on several criteria: time span, failure rate, and cost driver. Once the selected data had been acquired, the failure mode and root cause analysis of EF units was initiated. The failure analysis approach tool was implemented, resulting in the proposed failure solution for EF. This leads to specific EF maintenance improvements, which can be set up to decrease the number of EF units and, as a result, increase reliability. The EFs investigated over a ten-year period were found to have a significant reliability impact of 32% on the total of 23,339 unscheduled failures.
The EFs thus comprise almost one-third of the entire population.
Keywords: supportability, no fault found, FMEA, early failure, availability, operational reliability, predictive model
Procedia PDF Downloads 127
1481 Breast Cancer Survivability Prediction via Classifier Ensemble
Authors: Mohamed Al-Badrashiny, Abdelghani Bellaachia
Abstract:
This paper presents a classifier ensemble approach for predicting the survivability of breast cancer patients using the latest database version of the Surveillance, Epidemiology, and End Results (SEER) Program of the National Cancer Institute. The system consists of two main components: a feature selection component and a classifier ensemble component. The feature selection component divides the features in the SEER database into four groups and then searches for the most important features among the four groups, i.e. those that maximize the weighted average F-score of a given classification algorithm. The ensemble component uses three different classifiers, each of which models a different set of features from SEER chosen by the feature selection module. On top of them, another classifier gives the final decision based on the output decisions and confidence scores of the underlying classifiers. Different classification algorithms were examined; the best setup found uses decision tree, Bayesian network, and Naïve Bayes algorithms for the underlying classifiers and Naïve Bayes for the classifier ensemble step. The system outperforms all systems published to date when evaluated against the exact same SEER data (period 1973-2002). It achieves an 87.39% weighted average F-score, compared to 85.82% and 81.34% for the other published systems. By increasing the data size to cover the whole database (period 1973-2014), the overall weighted average F-score rises to 92.4% on the held-out unseen test set.
Keywords: classifier ensemble, breast cancer survivability, data mining, SEER
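The two-level architecture described above can be sketched as follows. The base classifiers here are trivial hand-written rules and the meta-step is a confidence-weighted vote rather than the paper's trained Naïve Bayes combiner; the point is only the structure: each base learner emits a label plus a confidence score, and a second level combines them into the final decision. All feature names and thresholds are invented for illustration.

```python
from collections import defaultdict

def combine(base_outputs):
    """Meta-step: sum confidence scores per predicted label and return the
    label with the largest total. Stand-in for a trained meta-classifier."""
    scores = defaultdict(float)
    for label, confidence in base_outputs:
        scores[label] += confidence
    return max(scores, key=scores.get)

# Stand-in base classifiers: each maps a feature dict to (label, confidence)
def clf_age(x):
    return ("survived", 0.7) if x["age"] < 50 else ("not_survived", 0.6)

def clf_stage(x):
    return ("survived", 0.9) if x["stage"] <= 2 else ("not_survived", 0.8)

def clf_size(x):
    return ("survived", 0.6) if x["tumor_mm"] < 20 else ("not_survived", 0.5)

def predict(x):
    return combine([clf(x) for clf in (clf_age, clf_stage, clf_size)])

patient = {"age": 62, "stage": 2, "tumor_mm": 15}
```

For this record, two base learners vote "survived" with high total confidence, so the ensemble overrules the age-based learner.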
Procedia PDF Downloads 329
1480 Financial Technology: The Key to Achieving Financial Inclusion in Developing Countries Post COVID-19 from an East African Perspective
Authors: Yosia Mulumba, Klaus Schmidt
Abstract:
Financial inclusion is considered a key pillar of development in most countries around the world. Access to affordable financial services in a country's economy can be a driver to overcome poverty and reduce income inequality, and thus increase economic growth. Nevertheless, the number of financially excluded people in developing countries continues to be very high. This paper explores the role of financial technology (fintech) as a key driver for achieving financial inclusion in developing countries after the COVID-19 pandemic, with an emphasis on four East African countries: Kenya, Tanzania, Uganda, and Rwanda. The paper is inspired by the positive disruption caused by the pandemic, which has compelled societies in East Africa to adapt and embrace financial technology innovations, specifically Mobile Money Services (MMS), to access financial services. MMS has been further migrated and integrated with other financial technology innovations such as mobile banking, micro savings and loans, and insurance, to mention but a few. These innovations have been adopted across key sectors such as commerce, health care, and agriculture. The paper highlights the Mobile Network Operators (MNOs) behind MMS, along with the numerous innovative products and services being offered to customers. It also highlights the regulatory framework under which these innovations are governed to ensure the safety of customers' funds.
Keywords: financial inclusion, financial technology, regulatory framework, mobile money services
Procedia PDF Downloads 146
1479 Assessment Power and Oscillation Damping Using the POD Controller and Proposed FOD Controller
Authors: Tohid Rahimi, Yahya Naderi, Babak Yousefi, Seyed Hossein Hoseini
Abstract:
Today’s modern interconnected power system is highly complex in nature, and reliability and security are among the most important requirements during its operation. Power and frequency oscillation damping mechanisms improve reliability. Because of the slow response of the power system stabilizer (PSS) to major faults such as three-phase short circuits, FACTS devices, which can control network conditions very quickly, are becoming popular. However, the capability of FACTS devices during a major fault can only be assessed when nonlinear models of the FACTS devices and power system equipment are applied. To this end, a model of a multi-machine power system with a FACTS controller is developed in MATLAB/SIMULINK using the Sim Power Systems (SPS) blockset. Among FACTS devices, the static synchronous series compensator (SSSC), which rapidly changes its reactance characteristic from inductive to capacitive, is an effective power flow controller. The tuning of controller parameters can be performed using different methods; here, the Genetic Algorithm (GA) is adopted for controller parameter tuning. In this paper, a POD controller is first used for power oscillation damping. At this stage, however, the frequency oscillations are not properly damped. Therefore, an FOD controller tuned using GA is applied, which damps out the frequency oscillations properly while keeping suitable power oscillation damping.
Keywords: power oscillation damping (POD), frequency oscillation damping (FOD), static synchronous series compensator (SSSC), genetic algorithm (GA)
Procedia PDF Downloads 476
1478 Parametric Evaluation for the Optimization of Gastric Emptying Protocols Used in Health Care Institutions
Authors: Yakubu Adamu
Abstract:
The aim of this research was to assess the factors contributing to the need for optimisation of the gastric emptying protocols used in Society of Nuclear Medicine and Molecular Imaging (SNMMI) procedures. The objective is to establish whether optimisation is possible and to provide supporting evidence for the current imaging protocols of the gastric emptying examination used in nuclear medicine. The research involved selected patients with 30 dynamic series for image processing using ImageJ; from these, the half-time and the retention fractions for the 60×1-minute, 5-minute, and 10-minute protocols and other sampling intervals were calculated. The gastric emptying clearance half-times of the study IDs were classified into normal, abnormally fast, and abnormally slow categories. In the normal category, representing 50% of the gastric emptying image IDs processed, the clearance half-time was within the range of 49.5 to 86.6 minutes of the mean counts. In the abnormally fast category, representing 30%, the clearance half-time fell between 21 and 43.3 minutes of the mean counts, and the abnormally slow category, representing 20%, had clearance half-times of 138.6 minutes of the mean counts. The results indicated that the retention fraction values calculated from the 1-, 5-, and 10-minute sampling curves and the retention fractions measured from the sampling curves of the study IDs had a normal retention fraction of <60% and decreased exponentially with time, as evidenced by low retention fraction ratios of <10% after 4 hours. Reducing the sampling did not change the assigned categories, suggesting that these values could feasibly be used instead of having to acquire the full set of images. The findings suggest that the current gastric emptying protocol can be optimized by acquiring fewer images.
The study recommended that gastric emptying studies be performed with imaging at a minimum of 0, 1, 2, and 4 hours after meal ingestion.
Keywords: gastric emptying, retention fraction, clearance half-time, optimisation, protocol
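Assuming monoexponential emptying, c(t) = c0·exp(-λt) (a simplification: solid-meal curves often also have a lag phase), the clearance half-time and the retention fraction at any later time follow directly from counts at two sampling points. The counts below are hypothetical.

```python
import math

def half_time(t1, c1, t2, c2):
    """Clearance half-time (same time units as t1, t2) from counts at two
    times, assuming monoexponential emptying c(t) = c0 * exp(-lambda * t)."""
    lam = math.log(c1 / c2) / (t2 - t1)
    return math.log(2) / lam

def retention(t, t_half):
    """Fraction of the meal still retained at time t for half-time t_half."""
    return math.exp(-math.log(2) * t / t_half)

# Hypothetical counts: 10000 at t = 0 min, 5000 at t = 60 min
t_half = half_time(0, 10000.0, 60, 5000.0)   # 60-minute half-time
r4h = retention(240, t_half)                 # retention after 4 hours
```

With a 60-minute half-time the 4-hour retention is 2^-4 = 6.25%, consistent with the abstract's observation of retention ratios below 10% at 4 hours for normal emptying.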
Procedia PDF Downloads 6
1477 Role of a Physical Therapist in Rehabilitation
Authors: Andrew Anis Fakhrey Mosaad
Abstract:
Objectives: Physiotherapy in the intensive care unit (ICU) improves patient outcomes. We aimed to determine the characteristics of physiotherapy practice and the critical barriers to applying physiotherapy in ICUs. Materials and Methods: A 54-item survey for determining the characteristics of physiotherapists and physiotherapy applications in the ICU was developed. The survey was sent electronically to potential participants through the Turkish Physiotherapy Association network. Sixty-five physiotherapists (47 F and 18 M; 23–52 years; ICU experience: 6.0±6.2 years) completed the survey. The data were analyzed using quantitative and qualitative methods. Results: The duration of ICU practice was 3.51±2.10 h/day. Positioning (90.8%), active exercises (90.8%), breathing exercises (89.2%), passive exercises (87.7%), and percussion (87.7%) were the most commonly used applications. The barriers were related to physiotherapists (low levels of employment and practice, lack of shifts); patients (unwillingness, instability, participation restriction); teamwork (lack of awareness and communication); equipment (inadequacy, non-priority of purchase); and legal procedures (reimbursement, lack of direct physiotherapy access, non-recognition of autonomy). Conclusion: The most common interventions were positioning, active, passive, and breathing exercises, and percussion. The critical barriers to physiotherapy are multifactorial and related to physiotherapists, patients, teams, equipment, and legal procedures. Physiotherapist employment, service maintenance, and multidisciplinary teamwork should be considered for physiotherapy effectiveness in ICUs.
Keywords: intensive care units, physical therapy, physiotherapy, exercises
Procedia PDF Downloads 102
1476 Key Factors for Stakeholder Engagement and Sustainable Development
Authors: Jo Rhodes, Bruce Bergstrom, Peter Lok, Vincent Cheng
Abstract:
The aim of this study is to determine key factors and processes for multinationals (MNCs) to develop an effective stakeholder engagement and sustainable development framework. A qualitative multiple-case approach was used, with a triangulation method (interviews, archival documents, and observations) adopted to collect data on three global firms (MNCs). Nine senior executives were interviewed for this study (three from each firm). An initial literature review was conducted to explore possible practices and factors (the deductive approach) for sustainable development. Interview data were analysed using NVivo to obtain appropriate nodes and themes for the framework. A comparison of the findings from the interview data with the themes and factors developed from the literature review, together with a cross-case comparison, was used to develop the final conceptual framework (the inductive approach). The results suggest that stakeholder engagement is a key mediator between the 'stakeholder network' (internal and external factors) and outcomes (corporate social responsibility, social capital, shared value, and sustainable development). Key internal factors such as human capital/talent, technology, culture, and leadership, and processes such as collaboration, knowledge sharing, and co-creation of value with stakeholders, were identified. These internal factors and processes must be integrated and aligned with external factors such as social, political, cultural, and environmental factors and NGOs to achieve effective stakeholder engagement.
Keywords: stakeholder, engagement, sustainable development, shared value, corporate social responsibility
Procedia PDF Downloads 513
1475 GBKMeans: A Genetic Based K-Means Applied to the Capacitated Planning of Reading Units
Authors: Anderson S. Fonseca, Italo F. S. Da Silva, Robert D. A. Santos, Mayara G. Da Silva, Pedro H. C. Vieira, Antonio M. S. Sobrinho, Victor H. B. Lemos, Petterson S. Diniz, Anselmo C. Paiva, Eliana M. G. Monteiro
Abstract:
In Brazil, the National Electric Energy Agency (ANEEL) establishes that electrical energy companies are responsible for measuring and billing their customers. Among these regulations, it is defined that a company must bill its customers within 27-33 days; if a relocation or a change of period is required, the consumer must be notified in writing in advance of the billing period. To make it easier to organize a workday's measurements, these companies create a reading plan. These plans consist of grouping customers into reading groups, which are visited by an employee responsible for measuring consumption and billing. Creating such a plan efficiently and optimally is a capacitated clustering problem with constraints related to homogeneity and compactness, that is, to the employee's workload and the geographical position of the consuming units. This planning is done manually by several experts with experience in the geographic formation of the region, which takes many days to complete, and because it is a human activity, there is no guarantee of finding the best plan. In this paper, GBKMeans is presented: a technique based on K-Means and genetic algorithms for creating capacitated clusters that respect the established constraints in an efficient and balanced manner, minimizing the cost of relocating consumer units and the time required to create the final plan. The results obtained by the presented method are compared with the current planning of a real city, showing an improvement of 54.71% in the standard deviation of the workload and 11.97% in the compactness of the groups.
Keywords: capacitated clustering, k-means, genetic algorithm, districting problems
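The genetic layer of a capacitated-clustering approach like the one above can be sketched as follows: a chromosome is a cluster-assignment vector, and the fitness combines compactness (distance to the group centroid) with a penalty for exceeding the workload capacity. This is a deliberately minimal sketch on synthetic points; the paper's K-Means refinement step, relocation costs, and real geographic data are omitted.

```python
import math
import random

random.seed(7)

# Synthetic instance: 12 reading units on a 10 x 10 km map, 3 reading groups,
# at most 4 units per group (the employee's workload limit)
POINTS = [(random.uniform(0, 10), random.uniform(0, 10)) for _ in range(12)]
K, CAPACITY, PENALTY = 3, 4, 100.0

def fitness(assign):
    """Sum of distances to each group's centroid, plus a large penalty
    for every unit by which a group exceeds its capacity (lower is better)."""
    cost = 0.0
    for c in range(K):
        members = [POINTS[i] for i, a in enumerate(assign) if a == c]
        if not members:
            continue
        cx = sum(p[0] for p in members) / len(members)
        cy = sum(p[1] for p in members) / len(members)
        cost += sum(math.dist(p, (cx, cy)) for p in members)
        cost += PENALTY * max(0, len(members) - CAPACITY)
    return cost

def crossover(a, b):
    cut = random.randrange(1, len(a))   # one-point crossover
    return a[:cut] + b[cut:]

def mutate(assign):
    child = assign[:]
    child[random.randrange(len(child))] = random.randrange(K)
    return child

pop = [[random.randrange(K) for _ in POINTS] for _ in range(30)]
initial_best = min(fitness(a) for a in pop)
for _ in range(150):
    pop.sort(key=fitness)
    parents = pop[:10]                  # elitist truncation selection
    pop = parents + [mutate(crossover(random.choice(parents),
                                      random.choice(parents)))
                     for _ in range(20)]
best = min(pop, key=fitness)
```

Elitism guarantees the best plan never degrades across generations; the capacity penalty steers the search toward groups that respect the workload limit.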
Procedia PDF Downloads 198
1474 Planning a Haemodialysis Process by Minimum Time Control of Hybrid Systems with Sliding Motion
Authors: Radoslaw Pytlak, Damian Suski
Abstract:
The aim of the paper is to provide a computational tool for planning a haemodialysis process. It is shown that optimization methods can be used to obtain the most effective treatment focused on removing both urea and phosphorus during the process. In order to achieve that, the IV–compartment model of phosphorus kinetics is applied. This kinetics model takes into account a rebound phenomenon that can occur during haemodialysis and results in a hybrid model of the process. Furthermore, vector fields associated with the model equations are such that it is very likely that using the most intuitive objective functions in the planning problem could lead to solutions which include sliding motions. Therefore, building computational tools for solving the problem of planning a haemodialysis process has required constructing numerical algorithms for solving optimal control problems with hybrid systems. The paper concentrates on minimum time control of hybrid systems since this control objective is the most suitable for the haemodialysis process considered in the paper. The presented approach to optimal control problems with hybrid systems is different from the others in several aspects. First of all, it is assumed that a hybrid system can exhibit sliding modes. Secondly, the system’s motion on the switching surface is described by index 2 differential–algebraic equations, and that guarantees accurate tracking of the sliding motion surface. Thirdly, the gradients of the problem’s functionals are evaluated with the help of adjoint equations. The adjoint equations presented in the paper take into account sliding motion and exhibit jump conditions at transition times. The optimality conditions in the form of the weak maximum principle for optimal control problems with hybrid systems exhibiting sliding modes and with piecewise constant controls are stated. The presented sensitivity analysis can be used to construct globally convergent algorithms for solving considered problems. 
The paper presents numerical results of solving the haemodialysis planning problem.
Keywords: haemodialysis planning process, hybrid systems, optimal control, sliding motion
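The sliding-motion phenomenon central to the paper appears in even the simplest hybrid system. The sketch below (not the paper's haemodialysis model) integrates dx/dt = -sign(x): naive Euler stepping would chatter back and forth across the switching surface x = 0, whereas the Filippov-style check, since both neighbouring vector fields (-1 for x > 0, +1 for x < 0) push toward the surface, locks the state onto it, which is exactly the sliding mode the paper's index-2 DAE formulation tracks accurately.

```python
def simulate(x0, dt=0.01, steps=400):
    """Explicit Euler integration of dx/dt = -sign(x), treating x = 0 as a
    sliding surface: once the state is within one step of the surface, the
    Filippov solution slides along it (here, x stays at 0)."""
    x = x0
    for _ in range(steps):
        if abs(x) <= dt:      # within one step of the surface: sliding mode
            x = 0.0           # surface dynamics: remain on x = 0
        else:
            x += dt * (-1.0 if x > 0 else 1.0)
    return x

final = simulate(1.0)
```

Trajectories from either side reach the surface in finite time and then stay on it, which is the behaviour a chattering naive integrator fails to reproduce.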
Procedia PDF Downloads 195
1473 Disaster Adaptation Mechanism and Disaster Prevention Adaptation Planning Strategies for Industrial Parks in Response to Climate Change and Different Socio-Economic Disasters
Authors: Jen-Te Pai, Jao-Heng Liu, Shin-En Pai
Abstract:
The impact of climate change has intensified in recent years, causing Taiwan to face more frequent and more serious natural disasters. It is therefore imperative for industrial park administrations and manufacturers to promote adaptation policies in response to climate change. On the other hand, with the rise of international terrorism, a terrorist attack would attract domestic and international media attention, especially given the strategic and economic status of science parks. It is thus necessary to formulate adaptation and mitigation strategies for both climate change and socio-economic disasters. After reviewing the literature on climate change, urban disaster prevention, vulnerability assessment, and risk communication, the study selected 62 industrial parks compiled by the Industrial Development Bureau of Taiwan's Ministry of Economic Affairs as the research object. This study explored the vulnerability of these industrial parks and assessed their disaster prevention and disaster relief functions in the face of natural and socio-economic disasters. Furthermore, the study examined planned adaptation by industrial park management and autonomous adaptation by the corporate institutions in each park. The conclusion of this study is that Taiwanese industrial parks with higher vulnerability to natural and socio-economic disasters should employ positive adaptive behaviours.
Keywords: adaptive behaviours, analytic network process, vulnerability, industrial parks
Procedia PDF Downloads 145
1472 Predicting Options Prices Using Machine Learning
Authors: Krishang Surapaneni
Abstract:
The goal of this project is to determine how to predict important aspects of options, including the ask price. We compare different machine learning models to find the best model and the best hyperparameters for this purpose and data set. Option pricing is a relatively new field, and it can be very complicated and intimidating, especially to inexperienced people, so we want to create a machine learning model that can predict important aspects of an option, which can aid future research. We tested multiple models and experimented with hyperparameter tuning, trying to find some of the best parameters for a machine learning model. We tested three different models: a random forest regressor, a linear regressor, and an MLP (multi-layer perceptron) regressor. The most important feature in this experiment is the ask price; this is what we were trying to predict. In the field of stock price prediction there is a large potential for error, so we cannot judge the models on whether they predict prices perfectly. Instead, we determined the accuracy of each model by finding the average percentage difference between the predicted and actual values on the testing data. The linear regression model performed worst, with an average percentage error of 17.46%. The MLP regressor had an average percentage error of 11.45%, and the random forest regressor had an average percentage error of 7.42%.
Keywords: finance, linear regression model, machine learning model, neural network, stock price
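The accuracy metric used above, the average percentage difference between predicted and actual values, is the mean absolute percentage error (MAPE). A minimal sketch with hypothetical ask prices and predictions:

```python
def mape(actual, predicted):
    """Mean absolute percentage error between two equal-length sequences."""
    errs = [abs(a - p) / abs(a) for a, p in zip(actual, predicted)]
    return 100.0 * sum(errs) / len(errs)

# Hypothetical ask prices and two models' predictions on a test set
actual  = [10.0, 20.0, 40.0, 80.0]
model_a = [11.0, 19.0, 42.0, 76.0]   # tighter predictions
model_b = [13.0, 16.0, 48.0, 64.0]   # looser predictions

err_a = mape(actual, model_a)   # 6.25
err_b = mape(actual, model_b)   # 22.5
```

The lower-MAPE model would be preferred, mirroring how the random forest regressor (7.42%) beat the MLP (11.45%) and linear regression (17.46%) in the study.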
Procedia PDF Downloads 76
1471 Classification of Barley Varieties by Artificial Neural Networks
Authors: Alper Taner, Yesim Benal Oztekin, Huseyin Duran
Abstract:
In this study, an Artificial Neural Network (ANN) was developed to classify barley varieties. For this purpose, the physical properties of barley varieties were determined and ANN techniques were used. The physical properties of 8 barley varieties grown in Turkey, namely thousand kernel weight, geometric mean diameter, sphericity, kernel volume, surface area, bulk density, true density, porosity, and colour parameters of the grain, were determined, and these properties were found to be statistically significant with respect to variety. Three ANN models, N-1, N-2, and N-3, were constructed and their performances compared. The best-fit model was N-1, which was designed with 11 input nodes, 2 hidden layers, and 1 output layer. Thousand kernel weight, geometric mean diameter, sphericity, kernel volume, surface area, bulk density, true density, porosity, and colour parameters of the grain were used as input parameters, and variety as the output parameter. R², Root Mean Square Error, and Mean Error for the N-1 model were found to be 99.99%, 0.00074, and 0.009%, respectively. All results obtained by the N-1 model were quite consistent with the real data. With this model, it would be possible to construct automation systems for classification and cleaning in flour mills.
Keywords: physical properties, artificial neural networks, barley, classification
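The N-1 topology (11 inputs, 2 hidden layers, 1 output layer) can be sketched as a plain feedforward pass. The hidden-layer widths, random weights, and sigmoid activation below are illustrative assumptions, not the trained N-1 model; a real classifier would learn the weights from the measured grain properties.

```python
import math
import random

random.seed(0)

def layer(inputs, weights, biases):
    """One fully connected layer with a sigmoid activation."""
    return [1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
            for ws, b in zip(weights, biases)]

def make_layer(n_in, n_out):
    """Random weights and biases for an n_in -> n_out layer (untrained)."""
    return ([[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_out)],
            [random.uniform(-1, 1) for _ in range(n_out)])

# 11 inputs (thousand kernel weight, diameter, sphericity, colour, ...)
# -> 2 hidden layers -> 8 outputs (one per variety); widths 6 are assumptions
w1, b1 = make_layer(11, 6)
w2, b2 = make_layer(6, 6)
w3, b3 = make_layer(6, 8)

def predict(features):
    h = layer(layer(layer(features, w1, b1), w2, b2), w3, b3)
    return max(range(8), key=lambda i: h[i])  # index of most likely variety

sample = [0.5] * 11   # a normalised 11-feature grain measurement (illustrative)
variety = predict(sample)
```

Each forward pass maps the 11 physical measurements to one of the 8 variety indices, which is the operation an automated flour-mill sorter would repeat per grain batch.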
Procedia PDF Downloads 179
1470 Assessing Carbon Stock and Sequestration of Reforestation Species on Old Mining Sites in Morocco Using the DNDC Model
Authors: Nabil Elkhatri, Mohamed Louay Metougui, Ngonidzashe Chirinda
Abstract:
Mining activities have left a legacy of degraded landscapes, prompting urgent efforts for ecological restoration. Reforestation holds promise as a potent tool to rehabilitate these old mining sites, with the potential to sequester carbon and contribute to climate change mitigation. This study focuses on evaluating the carbon stock and sequestration potential of reforestation species in the context of Morocco's mining areas, employing the DeNitrification-DeComposition (DNDC) model. The research is grounded in recognizing the need to connect theoretical models with practical implementation, ensuring that reforestation efforts are informed by accurate and context-specific data. Field data collection encompasses growth patterns, biomass accumulation, and carbon sequestration rates, establishing an empirical foundation for the study's analyses. By integrating the collected data with the DNDC model, the study aims to provide a comprehensive understanding of carbon dynamics within reforested ecosystems on old mining sites. The major findings reveal varying sequestration rates among different reforestation species, indicating the potential for species-specific optimization of reforestation strategies to enhance carbon capture. This research's significance lies in its potential to contribute to sustainable land management practices and climate change mitigation strategies. By quantifying the carbon stock and sequestration potential of reforestation species, the study serves as a valuable resource for policymakers, land managers, and practitioners involved in ecological restoration and carbon management. Ultimately, the study aligns with global objectives to rejuvenate degraded landscapes while addressing pressing climate challenges.
Keywords: carbon stock, carbon sequestration, DNDC model, ecological restoration, mining sites, Morocco, reforestation, sustainable land management
Procedia PDF Downloads 76
1469 Cell Line Screens Identify Biomarkers of Drug Sensitivity in Glioma Cancer
Authors: Noora Al Muftah, Reda Rawi, Richard Thompson, Halima Bensmail
Abstract:
Clinical responses to anticancer therapies are often restricted to a subset of patients. In some cases, mutated cancer genes are potent biomarkers of response to targeted agents. There is an urgent need to identify biomarkers that predict which patients are most likely to respond to treatment. Systematic efforts to correlate tumor mutational data with biologic dependencies may facilitate the translation of somatic mutation catalogs into meaningful biomarkers for patient stratification. To identify genomic features associated with drug sensitivity and uncover new biomarkers of sensitivity and resistance to cancer therapeutics, we screened and integrated a panel of several hundred cancer cell lines from different databases, combining mutation, DNA copy number, and gene expression data with the cell lines' responses to targeted and cytotoxic therapies, for drugs under clinical and preclinical investigation. We found that mutated cancer genes were associated with cellular response to most currently available glioma cancer drugs, and that some frequently mutated genes were associated with sensitivity to a broad range of therapeutic agents. By linking drug activity to the functional complexity of cancer genomes, systematic pharmacogenomic profiling in cancer cell lines provides a powerful biomarker discovery platform to guide rational cancer therapeutic strategies.
Keywords: cancer, gene network, Lasso, penalized regression, P-values, unbiased estimator
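The keywords point to Lasso penalized regression as the association tool. At the core of Lasso coordinate descent is the soft-thresholding operator, which zeroes out weak gene-drug associations and thereby selects a sparse set of candidate biomarkers. This is a minimal sketch of that operator, with hypothetical gene names and scores, not the study's pipeline:

```python
def soft_threshold(rho, lam):
    """Soft-thresholding operator used in Lasso coordinate descent:
    S(rho, lam) = sign(rho) * max(|rho| - lam, 0).
    Associations whose magnitude falls below the penalty lam are set
    exactly to zero, which is how Lasso yields a sparse biomarker set."""
    if rho > lam:
        return rho - lam
    if rho < -lam:
        return rho + lam
    return 0.0

# Toy per-gene association scores (hypothetical) and a penalty of 0.5:
scores = {"GENE_A": 1.3, "GENE_B": 0.2, "GENE_C": -0.9}
selected = {g: soft_threshold(r, 0.5)
            for g, r in scores.items() if soft_threshold(r, 0.5) != 0.0}
```

Here only GENE_A and GENE_C survive the penalty; GENE_B's weak association is shrunk to zero and dropped from the candidate list.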
Procedia PDF Downloads 409
1468 Influences of Thermal Treatments on Dielectric Behaviors of Carbon Nanotubes-BaTiO₃ Hybrids Reinforced Polyvinylidene Fluoride Composites
Authors: Benhui Fan, Fahmi Bedoui, Jinbo Bai
Abstract:
By incorporating carbon nanotube-BaTiO₃ hybrids (H-CNT-BT) with a core-shell structure, a better dispersion of CNTs can be achieved in a semi-crystalline polymeric matrix, polyvinylidene fluoride (PVDF). Carried by the BT particles, the CNTs interconnect easily, which helps to obtain an extremely low percolation threshold (fc). After thermal treatments, the dielectric constants (ε') of the samples further increase, depending on the conditions of the thermal treatment, such as annealing temperature, annealing duration and cooling method. Thus, in order to study the influence of thermal treatments on the composites' dielectric behaviors more comprehensively, in situ synchrotron X-ray measurements are used to detect the re-crystallization behavior of PVDF. Results of wide-angle X-ray diffraction (WAXD) and small-angle X-ray scattering (SAXS) show that after the thermal treatment, the content of the β polymorph (the polymorph with the highest ε' among all the polymorphs of PVDF's crystalline structure) nearly doubles at the CNT-PVDF interfacial region, and the thickness of the amorphous layers (La) in PVDF's long periods (Lp) shrinks by around 10 Å. The evolution of the CNT network possibly occurs during this La shrinkage, where strong interfacial polarization may arise and increase ε' at low frequency. Moreover, an increase in the thickness of the crystalline lamellae may also produce more orientational polarization and improve ε' at high frequency.
Keywords: dielectric properties, thermal treatments, carbon nanotubes, crystalline structure
Procedia PDF Downloads 324
1467 Machine Learning and Deep Learning Approach for People Recognition and Tracking in Crowd for Safety Monitoring
Authors: A. Degale Desta, Cheng Jian
Abstract:
Deep learning application in computer vision is rapidly advancing, giving it the ability to monitor the public and quickly identify potentially anomalous behaviour in crowd scenes. The purpose of the current work is therefore to improve the safety of people in crowd events by detecting panic behaviour, through the innovative idea of Aggregation of Ensembles (AOE), which makes use of pre-trained ConvNets and a pool of classifiers to find anomalies in video data with packed scenes. Because algorithms and architectures such as K-means, KNN, CNN, SVD, Faster R-CNN and YOLOv5 learn different levels of semantic representation from crowd videos, the proposed approach leverages an ensemble of variously fine-tuned convolutional neural networks (CNNs), allowing for the extraction of enriched feature sets. In addition to the above algorithms, a long short-term memory neural network forecasts future feature values, and a handcrafted feature that takes the peculiarities of the crowd into consideration helps to understand human behavior. Experiments are run on well-known datasets of panic situations to assess the effectiveness and precision of the suggested method. Results reveal that, compared to state-of-the-art methodologies, the system produces better and more promising results in terms of accuracy and processing speed.
Keywords: action recognition, computer vision, crowd detection and tracking, deep learning
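The aggregation step of an ensemble like AOE can be sketched as averaging per-frame anomaly scores from a pool of classifiers and flagging frames above a threshold. The member scores and threshold below are hypothetical, standing in for the fine-tuned CNN outputs the abstract describes:

```python
def aggregate_ensemble_scores(member_scores):
    """Average per-frame anomaly scores from a pool of ensemble members.
    member_scores: list of score lists, one list per classifier."""
    n_members = len(member_scores)
    n_frames = len(member_scores[0])
    return [sum(m[i] for m in member_scores) / n_members
            for i in range(n_frames)]

def flag_anomalies(scores, threshold):
    """Indices of frames whose aggregated score exceeds the threshold."""
    return [i for i, s in enumerate(scores) if s > threshold]

# Hypothetical scores from three fine-tuned CNN members over five frames:
members = [
    [0.1, 0.2, 0.9, 0.8, 0.1],
    [0.2, 0.1, 0.8, 0.9, 0.2],
    [0.0, 0.3, 0.7, 1.0, 0.0],
]
agg = aggregate_ensemble_scores(members)
panic_frames = flag_anomalies(agg, 0.5)  # frames 2 and 3 exceed 0.5
```

Averaging across members smooths out individual classifiers' false alarms, which is the usual motivation for aggregating an ensemble rather than trusting any single model.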
Procedia PDF Downloads 163
1466 Understanding Cognitive Fatigue From FMRI Scans With Self-supervised Learning
Authors: Ashish Jaiswal, Ashwin Ramesh Babu, Mohammad Zaki Zadeh, Fillia Makedon, Glenn Wylie
Abstract:
Functional magnetic resonance imaging (fMRI) is a neuroimaging technique that records neural activations in the brain by capturing the blood oxygen level in different regions based on the task performed by a subject. Given fMRI data, the problem of predicting the state of cognitive fatigue in a person has not been investigated to its full extent. This paper proposes tackling this issue as a multi-class classification problem by dividing the state of cognitive fatigue into six different levels, ranging from no-fatigue to extreme fatigue conditions. We built a spatio-temporal model that uses convolutional neural networks (CNN) for spatial feature extraction and a long short-term memory (LSTM) network for temporal modeling of 4D fMRI scans. We also applied a self-supervised method called MoCo (Momentum Contrast) to pre-train our model on the public BOLD5000 dataset and fine-tuned it on our labeled dataset to predict cognitive fatigue. Our novel dataset contains fMRI scans from Traumatic Brain Injury (TBI) patients and healthy controls (HCs) performing a series of N-back cognitive tasks. This method establishes a state-of-the-art technique to analyze cognitive fatigue from fMRI data and outperforms previous approaches to this problem.
Keywords: fMRI, brain imaging, deep learning, self-supervised learning, contrastive learning, cognitive fatigue
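MoCo-style contrastive pre-training optimizes an InfoNCE loss: the similarity between two views of the same scan (the positive pair) is pushed up relative to similarities against other samples (the negatives). This is a minimal sketch of that loss for a single query; the similarity values are illustrative:

```python
import math

def info_nce_loss(sim_positive, sims_negative, temperature=0.07):
    """InfoNCE loss used in MoCo-style contrastive pre-training:
    -log( exp(s+/t) / (exp(s+/t) + sum_i exp(s-_i/t)) ).
    sim_positive: similarity between two views of the same sample;
    sims_negative: similarities to other (negative) samples."""
    pos = math.exp(sim_positive / temperature)
    neg = sum(math.exp(s / temperature) for s in sims_negative)
    return -math.log(pos / (pos + neg))

# A well-separated positive pair yields a small loss; a poorly
# separated one yields a large loss, driving the encoder to improve.
low = info_nce_loss(0.9, [0.1, 0.0, -0.2])
high = info_nce_loss(0.2, [0.3, 0.1, 0.0])
```

Pre-training with this objective on BOLD5000 gives the CNN-LSTM encoder useful representations before the (much smaller) labeled fatigue dataset is used for fine-tuning.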
Procedia PDF Downloads 190
1465 GeneNet: Temporal Graph Data Visualization for Gene Nomenclature and Relationships
Authors: Jake Gonzalez, Tommy Dang
Abstract:
This paper proposes a temporal graph approach to visualize and analyze the evolution of gene relationships and nomenclature over time. An interactive web-based tool implements this temporal graph, enabling researchers to traverse a timeline and observe coupled dynamics in network topology and naming conventions. Analysis of a real human genomic dataset reveals the emergence of densely interconnected functional modules over time, representing groups of genes involved in key biological processes. For example, the antimicrobial peptide DEFA1A3 shows increased connections to related alpha-defensins involved in infection response. Tracking degree and betweenness centrality shifts over timeline iterations also quantitatively highlights the reprioritization of certain genes' topological importance as knowledge advances. Examination of the CNR1 gene encoding the cannabinoid receptor CB1 demonstrates changing synonymous relationships and consolidating naming patterns over time, reflecting the discovery of its unique functional role. The integrated framework interconnecting these topological and nomenclature dynamics provides richer contextual insights compared to isolated analysis methods. Overall, this temporal graph approach enables a more holistic study of knowledge evolution to elucidate complex biology.
Keywords: temporal graph, gene relationships, nomenclature evolution, interactive visualization, biological insights
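The centrality tracking described above can be sketched by comparing a node's degree across timeline snapshots. The edge lists below are hypothetical illustrations built around genes named in the abstract (the partner genes are assumptions, not the paper's data):

```python
def degree_centrality(edges, node):
    """Number of edges incident to a node in one timeline snapshot."""
    return sum(1 for a, b in edges if node in (a, b))

# Hypothetical snapshots of a gene-relationship graph at two time points:
snapshot_early = [("DEFA1A3", "DEFA1"), ("CNR1", "GPR12")]
snapshot_late = [("DEFA1A3", "DEFA1"), ("DEFA1A3", "DEFA3"),
                 ("DEFA1A3", "DEFA4"), ("CNR1", "GPR12")]

# DEFA1A3 gains connections to related alpha-defensins over time:
growth = (degree_centrality(snapshot_late, "DEFA1A3")
          - degree_centrality(snapshot_early, "DEFA1A3"))
```

A positive `growth` between snapshots is exactly the kind of topological-importance shift the tool surfaces as the timeline is traversed.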
Procedia PDF Downloads 61
1464 Entrepreneurial Support Ecosystem: Role of Research Institutes
Authors: Ayna Yusubova, Bart Clarysse
Abstract:
This paper explores the role of research institutes in the creation of a support ecosystem for new technology-based ventures. Previous literature introduced research institutes as part of business and knowledge ecosystems, but very few studies consider a research institute as an ecosystem that supports high-tech startups at every stage of development. Based on a resource-based view and a stage-based model of high-tech startup growth, this study aims to analyze how a research institute builds a startup support ecosystem by attracting different stakeholders in order to help startups overcome resource gaps. The paper is based on an in-depth case study of a public research institute that focuses on the development of the entrepreneurial ecosystem in a developed region. The analysis shows, first, that the idea generation stage of high-tech startups, which concerns the invention and development of a product or technology for commercialization, is associated with a lack of critical knowledge resources. Second, at the growth phase, which concerns market entrance, high-tech startups face challenges associated with the development of their business network. Accordingly, the study shows that the support ecosystem a research institute creates helps high-tech startups overcome resource gaps in order to achieve a successful transition from one phase of growth to the next.
Keywords: new technology-based firms, ecosystems, resources, business incubators, research institutes
Procedia PDF Downloads 261
1463 Optimization of Water Desalination System Powered by High Concentrated Photovoltaic Panels in Kuwait Climate Conditions
Authors: Adel A. Ghoneim
Abstract:
Desalination using solar energy is an interesting option, specifically in regions with abundant solar radiation, since such areas normally suffer a scarcity of clean water resources. Desalination is the procedure of eliminating dissolved minerals from seawater or brackish water to generate fresh water. In this work, a simulation program is developed to determine the performance of a reverse osmosis (RO) water desalination plant powered by high concentrated photovoltaic (HCPV) panels in Kuwait climate conditions. The objective of such a photovoltaic thermal system is to accomplish a double output, i.e., co-generation of both electricity and fresh water, which is applicable for rural regions with high solar irradiation. The suggested plan enables the design of an RO plant that does not depend on costly batteries or additional land and significantly reduces the government's cost of subsidizing water generation. Typical weather conditions for Kuwait are employed as input to the simulation program, which is utilized to optimize the system efficiency as well as the distillate water production. The areas and slopes of the HCPV modules are varied to attain maximum yearly power production. Maximum yearly distillate production and HCPV energy generation are found to correspond to HCPV modules facing south with a tilt of 27° (Kuwait latitude - 3°). The power needed to produce 1 l of clean drinking water ranged from 2 to 8 kWh/m³, based on the salinity of the feed water and the system operating conditions. Moreover, adopting HCPV systems avoids greenhouse gas emissions of about 1,128 tons of CO₂ annually. The present outcomes clearly illustrate the environmental advantages of a water desalination system powered by high concentrated photovoltaic systems in Kuwait climate conditions.
Keywords: desalination, high concentrated photovoltaic systems, reverse osmosis, solar radiation
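The tilt-angle optimization described above, sweeping HCPV module slopes to maximize yearly production, can be sketched with a toy yield model. The cosine-loss yield function is an illustrative assumption standing in for the full simulation program; only the reported optimum (tilt = latitude - 3°) is taken from the abstract:

```python
import math

LATITUDE = 30.0  # latitude implied by the abstract's "27° = latitude - 3°"

def yearly_yield(tilt_deg, optimum_deg):
    """Toy yearly-yield model: production falls off with the cosine of
    the deviation from the optimum tilt (stands in for the simulation)."""
    return math.cos(math.radians(tilt_deg - optimum_deg))

def best_tilt(optimum_deg, candidates):
    """Sweep candidate tilts and keep the one with highest modeled yield."""
    return max(candidates, key=lambda t: yearly_yield(t, optimum_deg))

optimum = LATITUDE - 3.0
tilt = best_tilt(optimum, range(0, 61))  # sweep 0°..60° in 1° steps
```

The real study sweeps module areas and slopes through the weather-driven simulation, but the search structure, evaluate each candidate and keep the best, is the same.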
Procedia PDF Downloads 142
1462 Omani Community in Digital Age: A Study of Omani Women Using Back Channel Media to Empower Themselves for Frontline Entrepreneurship
Authors: Sangeeta Tripathi, Muna Al Shahri
Abstract:
This research article presents the changing role and status of women in Oman. The transformation of women's status began with the regime of His Majesty Sultan Qaboos Bin Said in 1970. The Sultan has consistently sought to enable women in every way, for the balanced growth of the country. Forbidding the full face veil for women in public offices is one of the notable efforts toward their empowerment. Women's education is also increasing rapidly. Women are becoming familiar with new information and communication technologies, using social media applications such as WhatsApp, Instagram and Facebook for interaction and economic growth. Though some traditional and tribal boundaries remain, women are infused with courage and enjoy fair treatment and equal opportunities in different career positions. The study explores the changing mindset of young Omani women towards these traditional tribal boundaries, cultural heritage, business and career: how are young Omani women balancing work and social prestige? How are they preserving their cultural values, embracing new technologies and approaching social networks to enhance their economic power? This paper uncovers the hurdles they face when using the internet for their new enterprises. It also examines the prospects of online business in Oman. A mixed research methodology is applied to find the results.
Keywords: advertising, business, entrepreneurship, tribal barrier
Procedia PDF Downloads 305
1461 Parametric Urbanism: A Climate Responsive Urban Form for the MENA Region
Authors: Norhan El Dallal
Abstract:
The MENA region is a challenging, rapidly urbanizing region with a special profile: culturally, socially, economically and environmentally. Despite the diversity between the different countries of the MENA region, they all share similar urban challenges where extensive interventions are crucial. A climate-sensitive region such as the MENA region requires special attention for development, adaptation and mitigation. Integrating climatic and environmental parameters into the planning process to create a responsive urban form is the aim of this research, in which "Parametric Urbanism" serves as a tool to reach a more sustainable urban morphology. The main objective of the thesis is to parameterize the relation between climate and urban form in a detailed manner, relating the different passive approaches suitable for the MENA region to the design guidelines of every part of the planning phase. The next steps after the parameterization are various conceptual scenarios for network pattern and block subdivision generation based on computational models. These theoretical models could be applied to different climatic zones of the dense communities of the MENA region to achieve an energy-efficient neighborhood or city with respect to urban form, morphology, and urban planning pattern. A final critique of the theoretical model will then be conducted, showing the economic feasibility of the proposed solutions. Finally, some push and pull policies are proposed to help integrate these solutions into the planning process.
Keywords: parametric urbanism, climate responsive, urban form, urban and regional studies
Procedia PDF Downloads 481
1460 Multi-Stage Optimization of Local Environmental Quality by Comprehensive Computer Simulated Person as Sensor for Air Conditioning Control
Authors: Sung-Jun Yoo, Kazuhide Ito
Abstract:
In this study, a comprehensive computer simulated person (CSP), which integrates a computational human model (virtual manikin) and a respiratory tract model (virtual airway), was applied to the estimation of indoor environmental quality. Moreover, an inclusive prediction method was established by integrating computational fluid dynamics (CFD) analysis with the advanced CSP, which is combined with a physiologically-based pharmacokinetic (PBPK) model and an unsteady thermoregulation model for high-accuracy analysis targeting the micro-climate around the human body and the respiratory area. This comprehensive method can estimate not only contaminant inhalation but also the constant interaction in contaminant transfer between indoor spaces, i.e., the target area for indoor air quality (IAQ) assessment, and the respiratory zone for health risk assessment. This study focused on the usage of the CSP as an indoor air/thermal quality sensor, i.e., the application of the comprehensive model to the assessment of IAQ and thermal environmental quality. A demonstrative analysis was performed in order to examine the applicability of the comprehensive model to a heating, ventilation and air conditioning (HVAC) control scheme. The CSP was located at the center of a simple model room with dimensions of 3 m × 3 m × 3 m. Formaldehyde generated from the floor material was assumed as the target contaminant, and flow field, sensible/latent heat and contaminant transfer analyses in the indoor space were conducted using CFD simulation coupled with the CSP. In this analysis, thermal comfort was evaluated by thermoregulatory analysis, and respiratory exposure risks, represented by the adsorption flux/concentration at the airway wall surface, were estimated by PBPK-CFD hybrid analysis. These analysis results concerning IAQ and thermal comfort will be fed back to the HVAC control and could be used to find a suitable ventilation rate and energy requirement for the air conditioning system.
Keywords: CFD simulation, computer simulated person, HVAC control, indoor environmental quality
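The link between a contaminant source and the ventilation rate an HVAC controller must supply can be illustrated with the standard steady-state, well-mixed mass balance Q = E / (C_limit - C_outdoor). The formaldehyde numbers below are illustrative assumptions, not the study's CFD results; the CSP-coupled CFD analysis refines exactly this well-mixed simplification by resolving the micro-climate around the occupant:

```python
def required_ventilation_rate(emission_rate, c_limit, c_outdoor):
    """Steady-state, well-mixed mass balance: Q = E / (C_limit - C_out).
    emission_rate in mg/h, concentrations in mg/m3, result in m3/h."""
    if c_limit <= c_outdoor:
        raise ValueError("indoor limit must exceed outdoor concentration")
    return emission_rate / (c_limit - c_outdoor)

# Hypothetical floor-material formaldehyde source in a 3 m cube room:
E = 2.7        # mg/h emitted (assumption)
C_LIMIT = 0.1  # mg/m3 target indoor concentration (assumption)
C_OUT = 0.0    # clean outdoor air
Q = required_ventilation_rate(E, C_LIMIT, C_OUT)  # → 27.0 m3/h
```

A controller fed by a well-mixed estimate would hold Q at this value; feeding back CSP-based exposure at the breathing zone instead lets the HVAC system meet the same health target with a ventilation rate tuned to where the occupant actually is.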
Procedia PDF Downloads 361
1459 Different Goals and Strategies of Smart Cities: Comparative Study between European and Asian Countries
Authors: Yountaik Leem, Sang Ho Lee
Abstract:
In this paper, the different goals of smart cities and the ways to reach them, as shown in many countries during planning and implementation processes, are discussed. Each country has embedded ICTs (information and communication technologies) into urban space for its own purposes and in its own ways. For example, European countries tried to adopt technologies to reduce greenhouse gas emissions and counter global warming, while US-based global companies focused on ICT-mediated ways of life, such as EasyLiving of Microsoft™ and CoolTown of Hewlett-Packard™, during the last decade of the 20th century. In the North-East Asian countries, urban spaces with ICTs were developed on a large scale from the viewpoint of capitalism. The ubiquitous city, first introduced in Korea and named after Mark Weiser's concept of ubiquitous computing, pursued new urban development with advanced technologies and high-tech infrastructure, including wired and wireless networks. Japan has developed smart cities as comprehensive, technology-intensive cities that will lead the nation's other industries in the future. Not only the goals and strategies but also the new directions toward which smart cities are oriented are suggested at the end of the paper. Like the Finnish smart community whose slogan is 'one more hour a day for citizens,' the recent trend is moving toward the everyday lives and cultures of human beings, not capital gains or physical urban spaces.
Keywords: smart cities, urban strategy, future direction, comparative study
Procedia PDF Downloads 262
1458 Deep Feature Augmentation with Generative Adversarial Networks for Class Imbalance Learning in Medical Images
Authors: Rongbo Shen, Jianhua Yao, Kezhou Yan, Kuan Tian, Cheng Jiang, Ke Zhou
Abstract:
This study proposes a generative adversarial networks (GAN) framework to perform synthetic sampling in feature space, i.e., feature augmentation, to address the class imbalance problem in medical image analysis. A feature extraction network is first trained to convert images into feature space. Then the GAN framework incorporates adversarial learning to train a feature generator for the minority class by playing a minimax game with a discriminator. The feature generator then generates features for the minority class from arbitrary latent distributions to balance the data between the majority class and the minority class. Additionally, a data cleaning technique, i.e., Tomek links, is employed to clean up undesirable conflicting features introduced by the feature augmentation and thus establish well-defined class clusters for training. The experiment section evaluates the proposed method on two medical image analysis tasks, i.e., mass classification on mammograms and cancer metastasis classification on histopathological images. Experimental results suggest that the proposed method obtains superior or comparable performance relative to the state-of-the-art counterparts, improving accuracy by more than 1.5 percentage points over all of them.
Keywords: class imbalance, synthetic sampling, feature augmentation, generative adversarial networks, data cleaning
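The Tomek-link cleaning step can be sketched directly: a Tomek link is a pair of mutual nearest neighbours with different class labels, i.e., a pair sitting on the class boundary; removing such pairs (or their majority members) sharpens the class clusters after augmentation. The 2-D feature points below are a toy illustration, not the study's learned features:

```python
def euclidean(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def tomek_links(points, labels):
    """Find Tomek links: mutual nearest neighbours with different labels."""
    def nearest(i):
        return min((j for j in range(len(points)) if j != i),
                   key=lambda j: euclidean(points[i], points[j]))
    links = []
    for i in range(len(points)):
        j = nearest(i)
        # mutual nearest neighbours, conflicting labels, report pair once
        if nearest(j) == i and labels[i] != labels[j] and i < j:
            links.append((i, j))
    return links

# Toy 2-D feature space: one majority/minority pair overlaps at the origin.
points = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
labels = ["majority", "minority", "majority", "majority"]
links = tomek_links(points, labels)  # the conflicting pair (0, 1)
```

In the paper's pipeline this check runs in the learned feature space on the augmented training set; here it flags the overlapping pair while leaving the clean same-class pair at (5, 5) untouched.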
Procedia PDF Downloads 127