Search results for: double hurdle model
2956 Lithium Ion Supported on TiO2 Mixed Metal Oxides as a Heterogeneous Catalyst for Biodiesel Production from Canola Oil
Authors: Mariam Alsharifi, Hussein Znad, Ming Ang
Abstract:
Considering environmental issues and the shortage of conventional fossil fuel sources, biodiesel has emerged as a promising solution for shifting away from fossil-based fuels towards sustainable and renewable energy. It is synthesized by transesterification of vegetable oils or animal fats with an alcohol (methanol or ethanol) in the presence of a catalyst. This study focuses on synthesizing a highly efficient Li/TiO2 heterogeneous catalyst for biodiesel production from canola oil. In this work, lithium was immobilized onto TiO2 by a simple impregnation method. The catalyst was evaluated by the transesterification reaction in a batch reactor under moderate reaction conditions. To study the effect of Li concentration, a series of LiNO3 concentrations (20, 30, 40 wt. %) at different calcination temperatures (450, 600, 750 ºC) were evaluated. The Li/TiO2 catalysts were characterized by several spectroscopic and analytical techniques such as XRD, FT-IR, BET, TG-DSC and FESEM. The optimum values of lithium nitrate loading on TiO2 and calcination temperature are 30 wt. % and 600 ºC, respectively, yielding a high conversion of 98%. The XRD study revealed that the insertion of Li improved the catalyst efficiency without any alteration in the structure of TiO2. The best performance of the catalyst was achieved when using a methanol to oil ratio of 24:1 and 5 wt. % of catalyst loading, at a 65 ºC reaction temperature for 3 hours of reaction time. Moreover, the experimental kinetic data were compatible with the pseudo-first-order model, and the activation energy was 39.366 kJ/mol. The synthesized Li/TiO2 catalyst was applied to transesterify used cooking oil and exhibited a 91.73% conversion. The prepared catalyst has shown high catalytic activity to produce biodiesel from fresh and used oil under mild reaction conditions.
Keywords: biodiesel, canola oil, environment, heterogeneous catalyst, impregnation method, renewable energy, transesterification
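A minimal sketch of the reported pseudo-first-order kinetics combined with the Arrhenius equation: the activation energy is the paper's 39.366 kJ/mol, while the pre-exponential factor is an assumed value chosen only so that the illustrative conversion approaches the reported 98% at 3 hours.

```python
import math

R = 8.314          # J/(mol*K), universal gas constant
Ea = 39366.0       # J/mol, activation energy reported in the abstract
A = 1.6e6          # 1/h, assumed pre-exponential factor (not from the paper)

def rate_constant(T_kelvin):
    """Arrhenius equation: k = A * exp(-Ea / (R*T))."""
    return A * math.exp(-Ea / (R * T_kelvin))

def conversion(t_hours, T_kelvin):
    """Pseudo-first-order conversion: X = 1 - exp(-k*t)."""
    k = rate_constant(T_kelvin)
    return 1.0 - math.exp(-k * t_hours)

T = 65 + 273.15    # reaction temperature of 65 ºC
for t in (1, 2, 3):
    print(f"t = {t} h, X = {conversion(t, T):.3f}")
```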
Procedia PDF Downloads 176
2955 Memory Based Reinforcement Learning with Transformers for Long Horizon Timescales and Continuous Action Spaces
Authors: Shweta Singh, Sudaman Katti
Abstract:
The most well-known sequence models make use of complex recurrent neural networks in an encoder-decoder configuration. The model used in this research makes use of a transformer, which is based purely on a self-attention mechanism, without relying on recurrence at all. More specifically, encoders and decoders which make use of self-attention and operate on a memory are used. In this research work, results for various 3D visual and non-visual reinforcement learning tasks designed in the Unity software were obtained. Convolutional neural networks, more specifically the Nature CNN architecture, are used for input processing in visual tasks, and comparison with the standard long short-term memory (LSTM) architecture is performed for both visual tasks based on CNNs and non-visual tasks based on coordinate inputs. This research work combines the transformer architecture with the proximal policy optimization technique, used popularly in reinforcement learning for stability and better policy updates while training, especially for the continuous action spaces used in this research work. Certain tasks in this paper are long horizon tasks that carry on for a longer duration and require extensive use of memory-based functionalities like storage of experiences and choosing appropriate actions based on recall. The transformer, which makes use of memory and a self-attention mechanism in an encoder-decoder configuration, proved to have better performance when compared to LSTM in terms of exploration and rewards achieved. Such memory-based architectures can be used extensively in the field of cognitive robotics and reinforcement learning.
Keywords: convolutional neural networks, reinforcement learning, self-attention, transformers, unity
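For readers unfamiliar with the core operation involved, here is a minimal NumPy sketch of scaled dot-product self-attention, the mechanism the abstract refers to; the array shapes and random inputs are illustrative and do not reproduce the paper's architecture.

```python
import numpy as np

def self_attention(x, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence x of shape (T, d)."""
    q, k, v = x @ Wq, x @ Wk, x @ Wv          # project to queries, keys, values
    scores = q @ k.T / np.sqrt(k.shape[-1])   # (T, T) attention logits
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ v                         # weighted sum of values

rng = np.random.default_rng(0)
T, d = 8, 16                                   # sequence length and model width
x = rng.normal(size=(T, d))
Wq, Wk, Wv = (rng.normal(size=(d, d)) for _ in range(3))
print(self_attention(x, Wq, Wk, Wv).shape)     # -> (8, 16)
```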
Procedia PDF Downloads 137
2954 Development of Energy Benchmarks Using Mandatory Energy and Emissions Reporting Data: Ontario Post-Secondary Residences
Authors: C. Xavier Mendieta, J. J McArthur
Abstract:
Governments are playing an increasingly active role in reducing carbon emissions, and a key strategy has been the introduction of mandatory energy disclosure policies. These policies have resulted in a significant amount of publicly available data, providing researchers with a unique opportunity to develop location-specific energy and carbon emission benchmarks from this data set, which can then be used to develop building archetypes and inform urban energy models. This study presents the development of such a benchmark using the public reporting data. The data from Ontario’s Ministry of Energy for Post-Secondary Educational Institutions are used to develop a series of building archetype dynamic building loads and energy benchmarks to fill a gap in the currently available building database. This paper presents the development of a benchmark for college and university residences within ASHRAE climate zone 6 areas in Ontario using the mandatory disclosure energy and greenhouse gas emissions data. The methodology presented includes data cleaning, statistical analysis, and benchmark development, and lessons learned from this investigation are presented and discussed to inform the development of future energy benchmarks from this larger data set. The key findings from this initial benchmarking study are: (1) the importance of careful data screening and outlier identification to develop a valid dataset; (2) the key features used to develop a model of the data are building age, size, and occupancy schedules, and these can be used to estimate energy consumption; and (3) policy changes affecting primary energy generation significantly affected greenhouse gas emissions, and consideration of these factors was critical to evaluate the validity of the reported data.
Keywords: building archetypes, data analysis, energy benchmarks, GHG emissions
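A common way to implement the data screening and outlier identification step the authors highlight is interquartile-range filtering of energy-use intensity; the sketch below assumes a hypothetical pandas DataFrame with `floor_area` and `annual_kwh` columns, since the study's actual schema is not given.

```python
import pandas as pd

def screen_outliers(df: pd.DataFrame, k: float = 1.5) -> pd.DataFrame:
    """Drop records whose energy-use intensity falls outside k*IQR fences."""
    eui = df["annual_kwh"] / df["floor_area"]   # kWh per m^2
    q1, q3 = eui.quantile([0.25, 0.75])
    iqr = q3 - q1
    mask = eui.between(q1 - k * iqr, q3 + k * iqr)
    return df[mask]

# Hypothetical reporting records for residence buildings.
data = pd.DataFrame({
    "floor_area": [12000, 8000, 15000, 9000, 11000],
    "annual_kwh": [2.4e6, 1.56e6, 3.075e6, 9.9e6, 2.178e6],  # one implausible entry
})
clean = screen_outliers(data)
print(f"kept {len(clean)} of {len(data)} records")
```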
Procedia PDF Downloads 308
2953 An Assessment into Impact of Regional Conflicts upon Socio-Political Sustainability in Pakistan
Authors: Syed Toqueer Akhter, Muhammad Muzaffar Abbas
Abstract:
Conflicts in Pakistan are a result of a configuration of factors, which are directly related to the system of the state, the unstable regional setting, and the geo-strategic location of Pakistan at large. This paper examines the impact of regional conflict on the socio-political sustainability of Pakistan. The magnitude of the spillover from a conflicted region is similar in size to the equivalent increase in domestic conflict. Pakistan has gone to war three times with India; the border with India is named among the tensest borderlines of the world. Disagreements with India and the lack of dispute settlement mechanisms have negatively affected peace in the region, and the influx of illegal weapons and refugees from Afghanistan in the aftermath of the 9/11 incident has exacerbated internal conflict levels in Pakistan. Our empirical findings are based on data collected on regional conflict levels, regional trade, global trade, comparative defence capabilities of the region in contrast to Pakistan, and the government regime (autocratic, democratic) over 1972-2007. It is proposed in this paper that the intensity of domestic conflict is associated with conflict in the region, regional trade, global trade, and the government regime of Pakistan. The estimated model (OLS) implies that domestic conflict is affected positively and significantly by the long-term impact of conflict in the region. Also, if the defence capabilities of the region are better than those of Pakistan, this affects domestic conflict positively and significantly. Conflict in neighbouring countries is found to be a source of domestic conflict in Pakistan, whereas regional trade as well as the type of government regime in Pakistan lowered the intensity of domestic conflict significantly, while globalized trade implies a reduced risk of domestic conflict, though not significantly.
Keywords: conflict, regional trade, socio-political instability
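A minimal statsmodels sketch of the kind of OLS specification described, with a synthetic yearly series standing in for the 1972-2007 data; the variable names and generated numbers are assumptions, not the authors' dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 36  # yearly observations, 1972-2007
df = pd.DataFrame({
    "regional_conflict": rng.normal(5, 2, n),
    "regional_trade": rng.normal(10, 3, n),
    "global_trade": rng.normal(20, 5, n),
    "democracy": rng.integers(0, 2, n),     # 1 = democratic regime
})
# Synthetic outcome built to mimic the signs reported in the abstract.
df["domestic_conflict"] = (2.0 + 0.6 * df["regional_conflict"]
                           - 0.3 * df["regional_trade"]
                           - 0.1 * df["global_trade"]
                           - 1.0 * df["democracy"]
                           + rng.normal(0, 1, n))

model = smf.ols("domestic_conflict ~ regional_conflict + regional_trade"
                " + global_trade + democracy", data=df).fit()
print(model.summary().tables[1])
```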
Procedia PDF Downloads 322
2952 Towards a Multilevel System of Talent Management in Small and Medium-Sized Enterprises: French Context Exploration
Authors: Abid Kousay
Abstract:
Having appeared and developed essentially in large companies and multinationals, Talent Management (TM) in Small and Medium-Sized Enterprises (SMEs) has remained an under-explored subject to this day. Although the literature on TM in the Anglo-Saxon context is developing, the subject remains under-explored in other contexts, especially in France. Therefore, this article aims to address these shortcomings by contributing to TM issues, adopting a multilevel approach with the goal of reaching a global holistic vision of the interactions between the various levels at which TM is applied. A qualitative research study carried out within 12 SMEs in France, built on the methodological perspective of grounded theory, is used in order to go beyond description and to generate or discover a theory, or even a unified theoretical explanation. Our theoretical contributions are the results of the grounded theory, the fruit of context considerations and the dynamics of the multilevel approach. We aim firstly to determine the perception of talent and TM in SMEs. Secondly, we formalize TM in SMEs through the empowerment of all three levels in the organization (individual, collective, and organizational), and we generate a multilevel dynamic system model, highlighting the institutionalization dimension in SMEs and the managerial conviction characterized by the domination of the leader's role. Thirdly, this first study sheds light on the importance of rigorous implementation of TM in SMEs in France by directing CEOs, HR, and TM managers to focus on the elements upstream of TM implementation that influence the system internally. Indeed, our systematic multilevel approach reminds them of the importance of strategic alignment when translating TM policy into strategies and practices in SMEs.
Keywords: French context, institutionalization, talent, multilevel approach, talent management system
Procedia PDF Downloads 202
2951 Research on Level Adjusting Mechanism System of Large Space Environment Simulator
Authors: Han Xiao, Zhang Lei, Huang Hai, Lv Shizeng
Abstract:
A space environment simulator is a device for spacecraft tests. The KM8 large space environment simulator built in Tianjin Space City is the largest as well as the most advanced space environment simulator in China. A large deviation in spacecraft level will lead to abnormal operation of the thermal control devices in the spacecraft during the thermal vacuum test. In order to avoid thermal vacuum test failure, a level adjusting mechanism system was developed in the KM8 large space environment simulator as one of the most important subsystems. According to the level adjusting requirements of spacecraft thermal vacuum tests, a four-fulcrum adjusting model is established. By collecting data from level instruments and displacement sensors, stepping motors controlled by a PLC drive the four supporting legs to move simultaneously. In addition, a PID algorithm is used to control the temperature of the supporting legs and level instruments, which work for long periods in the vacuum, cold, and dark environment of the KM8 large space environment simulator during thermal vacuum tests. Based on the above methods, data acquisition and processing, analysis and calculation, real-time adjustment, and fault alarming of the level adjusting mechanism system are implemented. The level adjusting accuracy reaches 1 mm/m, and the carrying capacity is 20 tons. Debugging showed that the level adjusting mechanism system of the KM8 large space environment simulator can meet the thermal vacuum test requirements of the new generation of spacecraft. The performance and technical indicators of the level adjusting mechanism system, which provides important support for the development of spacecraft in China, are ahead of similar equipment worldwide.
Keywords: space environment simulator, thermal vacuum test, level adjusting, spacecraft, parallel mechanism
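A minimal sketch of the kind of discrete PID temperature loop the abstract describes; the gains, setpoint, and first-order plant model are illustrative assumptions rather than the KM8 system's actual parameters.

```python
# Discrete PID loop regulating leg temperature against a first-order plant.
kp, ki, kd = 5.0, 0.1, 0.5    # assumed controller gains
setpoint = 20.0                # target temperature, ºC (assumed)
ambient = -60.0                # cold vacuum-shroud surroundings (assumed)
dt = 1.0                       # control period, s

temp, integral, prev_error = ambient, 0.0, 0.0
for step in range(400):
    error = setpoint - temp
    derivative = (error - prev_error) / dt
    u = kp * error + ki * integral + kd * derivative
    power = max(0.0, min(u, 100.0))    # heater actuator limits
    if u == power:                     # anti-windup: integrate only when unsaturated
        integral += error * dt
    # First-order thermal plant: heating input minus leakage to surroundings.
    temp += dt * (0.1 * power - 0.05 * (temp - ambient))
    prev_error = error

print(f"leg temperature after {400 * dt:.0f} s: {temp:.1f} ºC")
```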
Procedia PDF Downloads 248
2950 Application of a Compact Wastewater Treatment Unit in a Rural Area
Authors: Mohamed El-Khateeb
Abstract:
Encompassing inventory, warehousing, and transportation management, logistics is a crucial predictor of firm performance. This has been extensively proven by the extant literature in business and operations management. Logistics is also a fundamental determinant of a country's ability to access international markets. Available studies in international and transport economics have shown that limited transport infrastructure and underperforming transport services can severely affect international competitiveness. However, the evidence lacks the overall impact of logistics performance, encompassing all inventory, warehousing, and transport components, on global trade. In order to fill this knowledge gap, the paper uses a gravitational trade model with 155 countries from all geographical regions between 2007 and 2018. Data on logistics performance is obtained from the World Bank's Logistics Performance Index (LPI). First, the relationship between logistics performance and a country’s total trade is estimated, followed by a breakdown by economic sector. Then, the analysis is disaggregated according to the level of technological intensity of traded goods. Finally, after evaluating the intensive margin of trade, the relevance of logistics infrastructure and services for the extensive trade margin is assessed. Results suggest that: (i) improvements in both logistics infrastructure and services are associated with export growth; (ii) manufactured goods can significantly benefit from these improvements, especially when both exporting and importing countries increase their logistics performance; (iii) the quality of logistics infrastructure and services becomes more important as traded goods are more technology-intensive; and (iv) improving the exporting country's logistics performance is essential in the intensive margin of trade, while enhancing the importing country's logistics performance is more relevant in the extensive margin.
Keywords: low-cost, recycling, reuse, solid waste, wastewater treatment
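A minimal sketch of the log-linear gravity specification such a study typically estimates, augmented with exporter and importer LPI scores; the synthetic data and variable names are assumptions for illustration, not the paper's actual dataset.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 500  # synthetic exporter-importer-year observations
df = pd.DataFrame({
    "gdp_exp": rng.lognormal(10, 1, n),
    "gdp_imp": rng.lognormal(10, 1, n),
    "distance": rng.lognormal(8, 0.5, n),
    "lpi_exp": rng.uniform(1.5, 4.5, n),   # exporter LPI score
    "lpi_imp": rng.uniform(1.5, 4.5, n),   # importer LPI score
})
df["trade"] = np.exp(0.8 * np.log(df["gdp_exp"]) + 0.7 * np.log(df["gdp_imp"])
                     - 1.0 * np.log(df["distance"])
                     + 0.4 * df["lpi_exp"] + 0.3 * df["lpi_imp"]
                     + rng.normal(0, 0.5, n))

gravity = smf.ols("np.log(trade) ~ np.log(gdp_exp) + np.log(gdp_imp)"
                  " + np.log(distance) + lpi_exp + lpi_imp", data=df).fit()
print(gravity.params.round(2))
```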
Procedia PDF Downloads 197
2949 Harnessing Deep-Level Metagenomics to Explore the Three Dynamic One Health Areas: Healthcare, Domiciliary and Veterinary
Authors: Christina Killian, Katie Wall, Séamus Fanning, Guerrino Macori
Abstract:
Deep-level metagenomics offers a useful technical approach to explore the three dynamic One Health axes: healthcare, domiciliary, and veterinary. There is currently limited understanding of the composition of complex biofilms, the natural abundance of AMR genes, and the occurrence of gene transfer in these ecological niches. By using a newly established small-scale complex biofilm model, COMBAT has the potential to provide new information on microbial diversity, antimicrobial resistance (AMR)-encoding gene abundance, and their transfer in complex biofilms of importance to these three One Health axes. Shotgun metagenomics has been used to sample the genomes of all microbes comprising the complex communities found in each biofilm source. A comparative analysis between untreated and biocide-treated biofilms is described. The basic steps include the purification of genomic DNA, followed by library preparation, sequencing, and finally, data analysis. The use of long-read sequencing facilitates the completion of metagenome-assembled genomes (MAGs). Samples were sequenced using a PromethION platform, and following quality checks, binning methods, and bespoke bioinformatics pipelines, we describe the recovery of individual MAGs to identify mobile genetic elements (MGEs) and the corresponding AMR genotypes that map to these structures. High-throughput sequencing strategies have been deployed to characterize these communities. Accurately defining the profiles of these niches is an essential step towards elucidating the impact of the microbiota on each niche biofilm environment and their evolution.
Keywords: COMBAT, biofilm, metagenomics, high-throughput sequencing
Procedia PDF Downloads 57
2948 Using Open Source Data and GIS Techniques to Overcome Data Deficiency and Accuracy Issues in the Construction and Validation of Transportation Network: Case of Kinshasa City
Authors: Christian Kapuku, Seung-Young Kho
Abstract:
An accurate representation of the transportation system serving the region is one of the important aspects of transportation modeling. Such representation often requires developing an abstract model of the system elements, which in turn requires a significant amount of data, surveys, and time. However, in some cases, such as in developing countries, data deficiencies and time and budget constraints do not always allow such accurate representation, leaving room for assumptions that may negatively affect the quality of the analysis. With the emergence of open source Internet data, especially in mapping technologies, as well as advances in Geographic Information Systems (GIS), opportunities to tackle these issues have arisen. Therefore, the objective of this paper is to demonstrate such an application through the practical case of developing the transportation network for the city of Kinshasa. GIS geo-referencing was used to construct the digitized map of Transportation Analysis Zones using available scanned images. Centroids were then dynamically placed at the center of activities using an activity density map. Next, the road network with its characteristics was built using OpenStreetMap data and other official road inventory data by intersecting their layers and cleaning up unnecessary links such as residential streets. The accuracy of the final network was then checked by comparing it with satellite images from Google and Bing. For validation, the final network was exported into Emme3 to check for potential network coding issues. Results show a high accuracy between the built network and satellite images, which can mostly be attributed to the use of open source data.
Keywords: geographic information system (GIS), network construction, transportation database, open source data
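As an illustration of the OpenStreetMap step, the sketch below uses the osmnx package to download and filter a drivable road network; the place query, the filtering rule, and the output file name are assumptions, since the paper's exact workflow is not specified.

```python
import osmnx as ox

# Download the drivable street network for the study area from OpenStreetMap.
G = ox.graph_from_place("Kinshasa, Democratic Republic of the Congo",
                        network_type="drive")

# Drop minor links (e.g., residential streets), mirroring the cleanup step.
minor = [(u, v, k) for u, v, k, d in G.edges(keys=True, data=True)
         if d.get("highway") == "residential"]
G.remove_edges_from(minor)

print(f"{len(G.nodes)} nodes, {len(G.edges)} edges after cleanup")
ox.save_graphml(G, "kinshasa_drive.graphml")  # for later GIS/Emme processing
```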
Procedia PDF Downloads 168
2947 The Effect of Corporate Governance on Financial Stability and Solvency Margin for Insurance Companies in Jordan
Authors: Ghadeer A. Al-Jabaree, Husam Aldeen Al-Khadash, M. Nassar
Abstract:
This study aimed at investigating the effect of a well-designed corporate governance system on the financial stability of insurance companies listed on the ASE. Further, this study provides a comprehensive model for evaluating and analyzing insurance companies' financial position and prospects, and for comparing the degree of application of corporate governance provisions among Jordanian insurance companies. In order to achieve the goals of the study, the whole population of 27 listed insurance companies was studied through the variables of board of directors, audit committee, internal and external auditors, board and management ownership, and block holders' identities. Statistical methods were applied using SPSS: descriptive statistical techniques such as means and standard deviations were used to describe the variables, while the F-test and ANOVA (analysis of variance) were used to test the hypotheses of the study. The study revealed a significant effect of the corporate governance variables (except for local companies not listed on the ASE) on financial stability, within control variables, especially the debt ratio (leverage). It also showed that concentration in motor third-party insurance does not have a significant effect on insurance companies' financial stability during the study period. Moreover, the study concludes that the global financial crisis affected the investment side of insurance companies, with an insignificant effect on the technical side. Finally, some recommendations were presented, such as enhancing the laws and regulations that support the appropriate application of corporate governance, working to activate transparency in the disclosure of financial statements, and focusing on supporting the technical provisions of the companies, rather than focusing only on the profit side.
Keywords: corporate governance, financial stability and solvency margin, insurance companies, Jordan
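A small scipy sketch of the one-way ANOVA / F-test step mentioned in the methodology; the three synthetic groups stand in for companies grouped by a governance attribute, an assumption made purely for illustration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Hypothetical financial-stability scores for companies grouped by
# governance quality (low / medium / high).
low = rng.normal(0.50, 0.10, 9)
medium = rng.normal(0.58, 0.10, 9)
high = rng.normal(0.65, 0.10, 9)

f_stat, p_value = stats.f_oneway(low, medium, high)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```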
Procedia PDF Downloads 490
2946 Revenue Management of Perishable Products Considering Freshness and Price Sensitive Customers
Authors: Onur Kaya, Halit Bayer
Abstract:
Global grocery and supermarket sales are among the largest markets in the world, and perishable products such as fresh produce, dairy, and meat constitute the biggest section of these markets. Due to their deterioration over time, the demand for these products depends highly on their freshness. They become totally obsolete after a certain amount of time, causing a high amount of wastage and decreases in grocery profits. In addition, customers are asking for higher product variety in perishable product categories, leading to less predictable demand per product and to more out-dating. Effective management of these perishable products is an important issue, since it is observed that billions of dollars’ worth of food is expired and wasted every month. We consider coordinated inventory and pricing decisions for perishable products with a time- and price-dependent random demand function. We use stochastic dynamic programming to model this system for both periodically-reviewed and continuously-reviewed inventory systems and prove certain structural characteristics of the optimal solution. We prove that the optimal ordering decision has a monotone structure and that the optimal price decreases over time. However, the optimal price changes non-monotonically with respect to inventory size. We also analyze the effect of different parameters on the optimal solution through numerical experiments. In addition, we analyze simple-to-implement heuristics, investigate their effectiveness, and extract managerial insights. This study gives valuable insights into the management of perishable products in order to decrease wastage and increase profits.
Keywords: age-dependent demand, dynamic programming, perishable inventory, pricing
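A compact backward-induction sketch of a finite-horizon pricing problem for a perishable item, in the spirit of the stochastic dynamic program described; the demand model (Poisson with a rate that falls with price and age), the price grid, and all parameter values are assumptions for illustration.

```python
import math
from functools import lru_cache

T = 5            # remaining shelf life in periods
N0 = 10          # starting inventory
PRICES = [4.0, 6.0, 8.0]
SALVAGE = 0.5    # value per unsold unit at expiry (assumed)

def demand_pmf(rate, n_max):
    """Poisson probabilities for demand 0..n_max, lumping the tail at n_max."""
    pmf = [math.exp(-rate) * rate**d / math.factorial(d) for d in range(n_max)]
    return pmf + [1.0 - sum(pmf)]

@lru_cache(maxsize=None)
def value(t, n):
    """Maximum expected revenue with t periods left and n units on hand."""
    if n == 0:
        return 0.0
    if t == 0:
        return SALVAGE * n
    best = -1.0
    for p in PRICES:
        rate = 8.0 * math.exp(-0.3 * p) * (t / T)  # freshness-dependent demand
        exp_rev = sum(prob * (p * min(d, n) + value(t - 1, n - min(d, n)))
                      for d, prob in enumerate(demand_pmf(rate, n)))
        best = max(best, exp_rev)
    return best

print(f"optimal expected revenue: {value(T, N0):.2f}")
```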
Procedia PDF Downloads 247
2945 Experimental Monitoring of the Parameters of the Ionosphere in the Local Area Using the Results of Multifrequency GNSS-Measurements
Authors: Andrey Kupriyanov
Abstract:
In recent years, much attention has been paid to the problems of ionospheric disturbances and their influence on the signals of global navigation satellite systems (GNSS) around the world. This is due to the increase in solar activity, the expansion of the scope of GNSS, the emergence of new satellite systems, the introduction of new frequencies, and many other factors. The influence of the Earth's ionosphere on the propagation of radio signals is an important factor in many applied fields of science and technology. The paper considers the application of the method of transionospheric sounding, using measurements of signals from global navigation satellite systems, to determine the TEC distribution and the scintillations of the ionospheric layers. To calculate these parameters, the International Reference Ionosphere (IRI) model, refined for the local area, is used. The organization of operational monitoring of ionospheric parameters is analyzed using several NovAtel GPStation6 base stations. This allows primary processing of GNSS measurement data, calculation of TEC, detection of scintillation events, modeling of the ionosphere using the obtained data, data storage, and ionospheric correction of measurements. As a result of the study, it was shown that the transionospheric sounding method can reconstruct the altitude distribution of electron concentration over different altitude ranges and provide operational information about the ionosphere, which is necessary for solving a number of practical problems in many application areas. Also, the use of multi-frequency, multi-system GNSS equipment and special software will allow achieving the specified accuracy and volume of measurements.
Keywords: global navigation satellite systems (GNSS), GPstation6, international reference ionosphere (IRI), ionosphere, scintillations, total electron content (TEC)
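For context, slant TEC is commonly derived from dual-frequency code (or carrier-phase) measurements via the standard dispersive relation; the sketch below applies that textbook formula to assumed GPS L1/L2 pseudorange values, not to data from the study.

```python
# Slant TEC from dual-frequency pseudoranges:
#   STEC = (1 / 40.3) * (f1^2 * f2^2 / (f1^2 - f2^2)) * (P2 - P1)
# with frequencies in Hz, ranges in meters, and TEC in electrons/m^2.
F1 = 1575.42e6   # GPS L1, Hz
F2 = 1227.60e6   # GPS L2, Hz
TECU = 1.0e16    # electrons/m^2 per TEC unit

def slant_tec(p1_m: float, p2_m: float) -> float:
    """Return slant TEC in TECU from L1/L2 pseudoranges in meters."""
    factor = (F1**2 * F2**2) / (40.3 * (F1**2 - F2**2))
    return factor * (p2_m - p1_m) / TECU

# Assumed example: the L2 pseudorange is 5.2 m longer than L1.
print(f"STEC = {slant_tec(20_000_000.0, 20_000_005.2):.1f} TECU")
```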
Procedia PDF Downloads 181
2944 Customer Churn Prediction by Using Four Machine Learning Algorithms Integrating Features Selection and Normalization in the Telecom Sector
Authors: Alanoud Moraya Aldalan, Abdulaziz Almaleh
Abstract:
A crucial component of maintaining a customer-oriented business, as in the telecom industry, is understanding the reasons and factors that lead to customer churn. Competition between telecom companies has greatly increased in recent years. It has become more important to understand customers’ needs in this strong telecom market, especially for those customers who are looking to change their service providers. So, churn prediction is now a mandatory requirement for retaining those customers, and machine learning can be utilized to accomplish this. Churn prediction has become a very important topic in machine learning classification for the telecommunications industry. Understanding the factors of customer churn and how customers behave is very important for building an effective churn prediction model. This paper aims to predict churn and identify factors of customers’ churn based on their past service usage history. Aiming at this objective, the study makes use of feature selection, normalization, and feature engineering. Then, this study compares the performance of four different machine learning algorithms on the Orange dataset: logistic regression, random forest, decision tree, and gradient boosting. Evaluation of the performance was conducted using the F1 score and ROC-AUC. The results of this study compare favorably with existing models. The results showed that gradient boosting with the feature selection technique performed best in this study, achieving a 99% F1-score and 99% AUC, and all other experiments achieved good results as well.
Keywords: machine learning, gradient boosting, logistic regression, churn, random forest, decision tree, ROC, AUC, F1-score
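A minimal scikit-learn sketch of the normalization, feature-selection, and gradient-boosting steps the abstract describes, evaluated with F1 and ROC-AUC; the synthetic data stands in for the Orange dataset, and the pipeline settings are assumptions rather than the study's tuned configuration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.metrics import f1_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=2000, n_features=30, n_informative=10,
                           weights=[0.8, 0.2], random_state=0)  # imbalanced churn
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

pipe = Pipeline([
    ("scale", StandardScaler()),               # normalization
    ("select", SelectKBest(f_classif, k=15)),  # feature selection
    ("model", GradientBoostingClassifier(random_state=0)),
])
pipe.fit(X_tr, y_tr)

pred = pipe.predict(X_te)
proba = pipe.predict_proba(X_te)[:, 1]
print(f"F1 = {f1_score(y_te, pred):.3f}, AUC = {roc_auc_score(y_te, proba):.3f}")
```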
Procedia PDF Downloads 134
2943 A Network Optimization Study of Logistics for Enhancing Emergency Preparedness in Asia-Pacific
Authors: Giuseppe Timperio, Robert De Souza
Abstract:
The combination of factors such as volatile climate change, rampant urbanization of risk-exposed areas, and political and social instabilities is creating an alarming basis for further growth in the number and magnitude of humanitarian crises worldwide. Given the unique features of the humanitarian supply chain, such as the unpredictability of demand in space, time, and geography, the spike in the number of requests for relief items in the first days after a calamity, the uncertain state of logistics infrastructures, and large volumes of unsolicited low-priority items, a proactive approach towards the design of disaster response operations is needed to achieve high agility in the mobilization of emergency supplies in the immediate aftermath of an event. This paper is an attempt in that direction, and it provides decision makers with crucial strategic insights for a more effective network design for disaster response. Decision sciences and ICT are integrated to analyse the robustness and resilience of a prepositioned network of emergency strategic stockpiles for a real-life case study of Indonesia, one of the most vulnerable countries in Asia-Pacific, with the model being built upon a rich set of quantitative data. To this aim, a network optimization approach was implemented, with several what-if scenarios being accurately developed and tested. The findings of this study are able to support decision makers facing challenges related to the resilience of disaster relief chains, particularly regarding the optimal configuration of supply chain facilities and optimal flows across the nodes, while considering the network structure from an end-to-end in-country distribution perspective.
Keywords: disaster preparedness, humanitarian logistics, network optimization, resilience
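A small networkx sketch of the kind of minimum-cost flow formulation used in such network optimization studies, with assumed stockpile capacities, demand points, and arc costs; it illustrates the modeling pattern, not the paper's actual Indonesian network.

```python
import networkx as nx

G = nx.DiGraph()
# Prepositioned stockpiles (supply) and affected districts (demand), assumed.
G.add_node("depot_A", demand=-80)   # negative demand = supply
G.add_node("depot_B", demand=-40)
G.add_node("district_1", demand=70)
G.add_node("district_2", demand=50)

# Arcs with unit transport cost and capacity (assumed values).
G.add_edge("depot_A", "district_1", weight=4, capacity=60)
G.add_edge("depot_A", "district_2", weight=6, capacity=60)
G.add_edge("depot_B", "district_1", weight=5, capacity=40)
G.add_edge("depot_B", "district_2", weight=3, capacity=40)

flow = nx.min_cost_flow(G)
print("cost:", nx.cost_of_flow(G, flow))
for src, dests in flow.items():
    for dst, units in dests.items():
        if units:
            print(f"{src} -> {dst}: {units} units")
```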
Procedia PDF Downloads 176
2942 A Context Aware Mobile Learning System with a Cognitive Recommendation Engine
Authors: Jalal Maqbool, Gyu Myoung Lee
Abstract:
Using smart devices for context-aware mobile learning is becoming increasingly popular. This has led to mobile learning technology becoming an indispensable part of today’s learning environments and platforms. However, some fundamental issues remain: namely, mobile learning still lacks the ability to truly understand human reaction and user behaviour. This is due to the fact that current mobile learning systems are passive and not aware of learners’ changing contextual situations; they rely on static information about mobile learners. In addition, current mobile learning platforms lack the capability to incorporate dynamic contextual situations into learners’ preferences. Thus, this thesis aims to address these issues by designing a context-aware framework which is able to sense a learner’s contextual situation, handle data dynamically, and use contextual information to suggest bespoke learning content according to the learner’s preferences. This is to be underpinned by a robust recommendation system which has the capability to perform these functions, thus providing learners with a truly context-aware mobile learning experience, delivering learning content using smart devices and adapting to learning preferences as and when required. In addition, the design of an algorithm for the recommendation engine has to be based in part on learner and application needs, personal characteristics, and circumstances, as well as being able to comprehend human cognitive processes, which would enable the technology to interact effectively and deliver mobile learning content which is relevant to the learner’s contextual situation. The concept of this proposed project is to provide a new method of smart learning, based on a capable recommendation engine, for providing an intuitive mobile learning model based on learner actions.
Keywords: aware, context, learning, mobile
Procedia PDF Downloads 245
2941 Near Optimal Closed-Loop Guidance Gains Determination for Vector Guidance Law, from Impact Angle Errors and Miss Distance Considerations
Authors: Karthikeyan Kalirajan, Ashok Joshi
Abstract:
An optimization problem is set up to maximize the terminal kinetic energy of a maneuverable reentry vehicle (MaRV). The target location and the impact angle are given as constraints. The MaRV uses an explicit guidance law called vector guidance. This law has two gains, which are taken as decision variables. The problem is to find the optimal values of these gains which will result in minimum miss distance and impact angle error. Using a simple 3DOF non-rotating flat earth model and the Lockheed Martin HP-MARV as the reentry vehicle, the nature of the solutions of the optimization problem is studied. This is achieved by carrying out a parametric study for a range of closed-loop gain values, and the corresponding impact angle error and miss distance values are generated. The results show that there are well-defined lower and upper bounds on the gains that result in a near optimal terminal guidance solution. It is found from this study that there exist common permissible regions (values of gains) where all constraints are met. Moreover, the permissible region lies between flat regions, and hence the optimization algorithm has to be chosen carefully. It is also found that only one of the gain values is independent and that the other, dependent gain value is related through a simple straight-line expression. Moreover, to reduce the computational burden of finding the optimal values of the two gains, a guidance law called Diveline guidance is discussed, which uses a single gain. The derivation of the Diveline guidance law from the vector guidance law is discussed in this paper.
Keywords: MaRV guidance, reentry trajectory, trajectory optimization, guidance gain selection
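The parametric study amounts to sweeping the two gains over a grid, evaluating miss distance and impact angle error for each pair, and extracting the region where both constraints are met. The sketch below shows that pattern with a stand-in analytic cost surface in place of the paper's 3DOF trajectory simulation, so the surface, thresholds, and gain ranges are all assumptions.

```python
import numpy as np

def evaluate(k1, k2):
    """Stand-in for a 3DOF trajectory run: returns (miss distance, angle error).
    An analytic surface is used here purely to illustrate the sweep."""
    miss = 50.0 * abs(k2 - (0.8 * k1 + 0.3)) + 2.0      # m
    angle_err = 4.0 * abs(k1 - 2.0) + 0.5               # deg
    return miss, angle_err

k1_grid = np.linspace(0.5, 4.0, 36)
k2_grid = np.linspace(0.5, 4.0, 36)
feasible = []
for k1 in k1_grid:
    for k2 in k2_grid:
        miss, angle_err = evaluate(k1, k2)
        if miss < 10.0 and angle_err < 5.0:   # both constraints met
            feasible.append((k1, k2))

print(f"{len(feasible)} feasible gain pairs out of {k1_grid.size * k2_grid.size}")
if feasible:
    k1s, k2s = zip(*feasible)
    # The feasible set clusters around a straight line, echoing the paper's
    # observation that only one gain is independent.
    slope, intercept = np.polyfit(k1s, k2s, 1)
    print(f"fitted relation: k2 = {slope:.2f}*k1 + {intercept:.2f}")
```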
Procedia PDF Downloads 429
2940 An Efficient Machine Learning Model to Detect Metastatic Cancer in Pathology Scans Using Principal Component Analysis Algorithm, Genetic Algorithm, and Classification Algorithms
Authors: Bliss Singhal
Abstract:
Machine learning (ML) is a branch of artificial intelligence (AI) in which computers analyze data and find patterns in the data. This study focuses on the detection of metastatic cancer using ML. Metastatic cancer is the stage at which cancer has spread to other parts of the body and is the cause of approximately 90% of cancer-related deaths. Normally, pathologists spend hours each day manually classifying whether tumors are benign or malignant. This tedious task contributes to metastases being mislabeled over 60% of the time and emphasizes the importance of being aware of human error and other inefficiencies. ML is a good candidate to improve the correct identification of metastatic cancer, potentially saving thousands of lives, and can also improve the speed and efficiency of the process, thereby taking fewer resources and less time. So far, the deep learning methodology of AI has been used in research to detect cancer. This study is a novel approach to determining the potential of using preprocessing algorithms combined with classification algorithms in detecting metastatic cancer. The study used two preprocessing algorithms, principal component analysis (PCA) and the genetic algorithm, to reduce the dimensionality of the dataset, and then used three classification algorithms, logistic regression, decision tree classifier, and k-nearest neighbors, to detect metastatic cancer in the pathology scans. The highest accuracy of 71.14% was produced by the ML pipeline comprising PCA, the genetic algorithm, and the k-nearest neighbors algorithm, suggesting that preprocessing and classification algorithms have great potential for detecting metastatic cancer.
Keywords: breast cancer, principal component analysis, genetic algorithm, k-nearest neighbors, decision tree classifier, logistic regression
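A condensed sketch of the pipeline's shape: PCA for dimensionality reduction, a tiny genetic algorithm selecting among the resulting components, and k-nearest neighbors as the classifier. The GA settings and synthetic data are assumptions; the real study worked on pathology scan features.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=600, n_features=50, n_informative=12,
                           random_state=0)
Z = PCA(n_components=20, random_state=0).fit_transform(X)  # step 1: PCA

def fitness(mask):
    """Cross-validated accuracy of k-NN on the selected components."""
    if not mask.any():
        return 0.0
    knn = KNeighborsClassifier(n_neighbors=5)
    return cross_val_score(knn, Z[:, mask], y, cv=3).mean()

# Step 2: minimal genetic algorithm over component-selection bit masks.
pop = rng.integers(0, 2, size=(12, Z.shape[1])).astype(bool)
for generation in range(15):
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-6:]]             # keep the fittest half
    children = []
    for _ in range(len(pop) - len(parents)):
        a, b = parents[rng.integers(0, len(parents), 2)]
        child = np.where(rng.random(Z.shape[1]) < 0.5, a, b)  # uniform crossover
        child ^= rng.random(Z.shape[1]) < 0.05                # mutation
        children.append(child)
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print(f"selected {best.sum()} components, accuracy = {fitness(best):.3f}")
```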
Procedia PDF Downloads 83
2939 More Precise: Patient-Reported Outcomes after Stroke
Authors: Amber Elyse Corrigan, Alexander Smith, Anna Pennington, Ben Carter, Jonathan Hewitt
Abstract:
Background and Purpose: Morbidity secondary to stroke is highly heterogeneous, but it is important to both patients and clinicians in post-stroke management and adjustment to life after stroke. Post-stroke morbidity, considered clinically and from the patient perspective, has been poorly measured. Patient-reported outcome measures (PROs) in morbidity assessment help address this knowledge gap. The primary aim of this study was to consider the association between PRO outcomes and stroke predictors. Methods: A multicenter prospective cohort study assessed 549 stroke patients at 19 hospital sites across England and Wales during 2019. Following a stroke event, demographic, clinical, and PRO measures were collected. The prevalence of morbidity within PRO measures was calculated with associated 95% confidence intervals. Predictors of domain outcome were calculated using a multilevel generalized linear model. Associated P-values and 95% confidence intervals are reported. Results: Data were collected from 549 participants, 317 men (57.7%) and 232 women (42.3%), with ages ranging from 25 to 97 (mean 72.7). PRO morbidity was high post-stroke; 93.2% of the cohort reported post-stroke PRO morbidity. Previous stroke, diabetes, and gender are associated with worse patient-reported outcomes across both the physical and cognitive domains. Conclusions: This large-scale multicenter cohort study illustrates the high proportion of morbidity in PRO measures. Further, we demonstrate key predictors of adverse outcomes (diabetes, previous stroke, and gender), congruent with clinical predictors. PROs have been demonstrated to be informative and useful in stroke care, with wider implications for the consideration of PROs in clinical management. Future longitudinal follow-up with PROs is needed to assess associations with long-term morbidity.
Keywords: morbidity, patient-reported outcome, PRO, stroke
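A minimal statsmodels sketch of a multilevel (mixed-effects) model with patients nested within hospital sites, as the methods describe; the formula, variable names, and synthetic data are assumptions, and a linear mixed model is used here as a simple stand-in for the study's multilevel generalized linear model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(5)
n, n_sites = 549, 19
df = pd.DataFrame({
    "site": rng.integers(0, n_sites, n),           # hospital site (random effect)
    "age": rng.uniform(25, 97, n),
    "female": rng.integers(0, 2, n),
    "diabetes": rng.integers(0, 2, n),
    "prev_stroke": rng.integers(0, 2, n),
})
site_effect = rng.normal(0, 2, n_sites)[df["site"]]
df["pro_score"] = (50 - 0.1 * df["age"] - 3 * df["diabetes"]
                   - 4 * df["prev_stroke"] - 2 * df["female"]
                   + site_effect + rng.normal(0, 5, n))

model = smf.mixedlm("pro_score ~ age + female + diabetes + prev_stroke",
                    data=df, groups=df["site"]).fit()
print(model.summary())
```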
Procedia PDF Downloads 131
2938 Application of the Urban Forest Credit Standard as a Tool for Compensating CO2 Emissions in the Metalworking Industry: A Case Study in Brazil
Authors: Marie Madeleine Sarzi Inacio, Ligiane Carolina Leite Dauzacker, Rodrigo Henriques Lopes Da Silva
Abstract:
The climate changes resulting from human activity have increased interest in more sustainable production practices to reduce and offset pollutant emissions. Brazil, with its vast areas capable of carbon absorption, holds a significant advantage in this context. However, to optimize the country's sustainable potential, it is important to establish a robust carbon market with clear rules for the eligibility and validation of projects aimed at reducing and offsetting greenhouse gas (GHG) emissions. In this study, our objective is to conduct a feasibility analysis through a case study to evaluate the implementation of an urban forest credit standard in Brazil, using the Urban Forest Credits (UFC) model implemented in the United States as a reference. The city of Ribeirão Preto, located in Brazil, was selected to assess the availability of green areas. With the CO2 emissions value from the metalworking industry, it was possible to analyze the information in the case study in the context of that activity. The QGIS software, which can connect to various types of geospatial databases, was used to map potential urban forest areas. Although the chosen municipality has little vegetative coverage, the mapping identified at least eight areas that fit the standard's definitions within the delimited urban perimeter. The outlook was positive, and the implementation of projects like Urban Forest Credits (UFC), adapted to the Brazilian reality, has great potential to benefit the country in the carbon market and contribute to achieving its greenhouse gas (GHG) emission reduction goals.
Keywords: carbon neutrality, metalworking industry, carbon credits, urban forestry credits
Procedia PDF Downloads 77
2937 Use of Triclosan-Coated Sutures Led to Cost Saving in Public and Private Setting in India across Five Surgical Categories: An Economical Model Assessment
Authors: Anish Desai, Reshmi Pillai, Nilesh Mahajan, Hitesh Chopra, Vishal Mahajan, Ajay Grover, Ashish Kohli
Abstract:
Surgical site infection (SSI) is a hospital-acquired infection of growing concern. This study presents the efficacy and cost-effectiveness of triclosan-coated sutures in reducing the burden of SSI in India. Methodology: A systematic literature search was conducted for the economic burden (1998-2018) of SSI and the efficacy of triclosan-coated sutures (TCS) vs. non-coated sutures (NCS) (2000-2018). PubMed Medline and EMBASE indexed articles were searched using MeSH terms or Emtree. Decision tree analysis was used to calculate the cost difference between TCS and NCS at private and public hospitals, respectively, for 7 surgical procedures. Results: The SSI ranges for Caesarean section (C-section), laparoscopic hysterectomy (L-hysterectomy), open hernia (O-Hernia), laparoscopic cholecystectomy (L-Cholecystectomy), coronary artery bypass graft (CABG), total knee replacement (TKR), and mastectomy were (3.77 to 24.2%), (2.28 to 11.7%), (1.75 to 60%), (1.71 to 25.58%), (1.6 to 18.86%), (1.74 to 12.5%), and (5.56 to 25%), respectively. The incremental cost (%) of TCS ranged from 0.1%-0.01% in private and from 0.9%-0.09% in public hospitals across all surgical procedures. Cost savings at median efficacy and SSI risk were 6.52%, 5.07%, 11.39%, 9.63%, 3.62%, 2.71%, and 9.41% for C-section, L-hysterectomy, O-Hernia, L-Cholecystectomy, CABG, TKR, and mastectomy in private hospitals, and 8.79%, 4.99%, 12.67%, 10.58%, 3.32%, 2.35%, and 11.83% in public hospitals, respectively. One-way sensitivity analysis showed that the efficacy of TCS and the SSI incidence of a particular surgical procedure were important determinants of cost savings. Conclusion: TCS sutures led to cost savings across all 7 surgeries in both private and public hospitals in India.
Keywords: cost savings, non-coated sutures, surgical site infection, triclosan-coated sutures
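The decision tree arithmetic behind such an analysis is a simple expected-cost comparison; the sketch below uses invented numbers for SSI risk, TCS relative-risk reduction, suture cost premium, and SSI treatment cost, so it shows only the structure of the calculation, not the study's inputs.

```python
def expected_cost(surgery_cost, suture_premium, ssi_risk, ssi_treatment_cost):
    """Expected cost per patient for one branch of the decision tree."""
    return surgery_cost + suture_premium + ssi_risk * ssi_treatment_cost

# Illustrative, invented inputs for one procedure.
surgery_cost = 100_000.0      # base procedure cost
ssi_risk_ncs = 0.10           # SSI probability with non-coated sutures
rr_reduction = 0.30           # assumed relative risk reduction with TCS
ssi_treatment = 80_000.0      # added cost of treating one SSI

ncs = expected_cost(surgery_cost, 0.0, ssi_risk_ncs, ssi_treatment)
tcs = expected_cost(surgery_cost, 500.0,
                    ssi_risk_ncs * (1 - rr_reduction), ssi_treatment)

saving = ncs - tcs
print(f"expected saving per patient with TCS: {saving:,.0f} "
      f"({100 * saving / ncs:.2f}%)")
```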
Procedia PDF Downloads 399
2936 COVID-19 Detection from Computed Tomography Images Using UNet Segmentation, Region Extraction, and Classification Pipeline
Authors: Kenan Morani, Esra Kaya Ayana
Abstract:
This study aimed to develop a novel pipeline for COVID-19 detection using a large and rigorously annotated database of computed tomography (CT) images. The pipeline consists of UNet-based segmentation, lung extraction, and a classification part, with the addition of optional slice removal techniques following the segmentation part. In this work, batch normalization was added to the original UNet model to produce a lighter model with better localization, which is then utilized to build a full pipeline for COVID-19 diagnosis. To evaluate the effectiveness of the proposed pipeline, various segmentation methods were compared in terms of their performance and complexity. The proposed segmentation method with batch normalization outperformed traditional methods and other alternatives, resulting in a higher dice score on a publicly available dataset. Moreover, at the slice level, the proposed pipeline demonstrated high validation accuracy, indicating the efficiency of predicting 2D slices. At the patient level, the full approach exhibited higher validation accuracy and macro F1 score compared to other alternatives, surpassing the baseline. The classification component of the proposed pipeline utilizes a convolutional neural network (CNN) to make final diagnosis decisions. The COV19-CT-DB dataset, which contains a large number of CT scans with various types of slices, rigorously annotated for COVID-19 detection, was utilized for classification. The proposed pipeline outperformed many other alternatives on the dataset.
Keywords: classification, computed tomography, lung extraction, macro F1 score, UNet segmentation
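To make the batch-normalization modification concrete, here is a minimal PyTorch sketch of a UNet-style double-convolution block with BatchNorm inserted after each convolution; the channel sizes are illustrative, and this is a generic UNet building block rather than the authors' exact architecture.

```python
import torch
import torch.nn as nn

class DoubleConv(nn.Module):
    """Two 3x3 convolutions, each followed by batch normalization and ReLU,
    the repeating encoder/decoder unit of a UNet."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.block(x)

# A single-channel 256x256 CT slice passing through the first encoder stage.
x = torch.randn(1, 1, 256, 256)
print(DoubleConv(1, 64)(x).shape)  # -> torch.Size([1, 64, 256, 256])
```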
Procedia PDF Downloads 134
2935 African Women in Power: An Analysis of the Representation of Nigerian Business Women in Television
Authors: Ifeanyichukwu Valerie Oguafor
Abstract:
Women have generally been categorized and placed along the chain of the business industry, sometimes highly regarded and at other times dismissed. The social construction of womanhood does not in every sense support a woman going into business, let alone succeeding in it, because it is believed to be a man’s world. In a typical patriarchal setting, a woman is expected to know nothing more than domestic roles. For some women, this is not the case, as they have been able to break these barriers and excel in business amidst these social settings and stereotypes. This study examines the media representation of Nigerian business women, using content analysis of TV interviews as media texts and framing analysis as an approach within qualitative methodology. The study further aims to analyse media frames of two Nigerian business women: Folorunsho Alakija, a business woman in the petroleum industry with a current net worth of 1.1 billion U.S. dollars, who emerged as the richest black woman in the world in 2014, and Mosunmola Abudu, a media magnate in Nigeria who launched Africa's first global black entertainment and lifestyle network in 2013. This study used seven predefined frames to analyse the representation of Nigerian business women in the media: the business woman, the myth of business women, the non-traditional woman, women in leading roles, the family woman, the religious woman, and the philanthropist woman. The analysis of the aforementioned frames in TV interviews with these women reveals that the media perpetually reproduce existing gender stereotypes and do not challenge patriarchy. Women face challenges in trying to succeed in business while trying to keep their homes stable. This study concludes that the media represent and reproduce gender stereotypes in spite of expectations that they would empower women. The media reduce these women’s success to something insignificant rather than presenting them as role models for women in society.
Keywords: representation of business women in the media, business women in Nigeria, framing in the media, patriarchy, women's subordination
Procedia PDF Downloads 162
2934 The Three-dimensional Response of Mussel Plaque Anchoring to Wet Substrates under Directional Tensions
Authors: Yingwei Hou, Tao Liu, Yong Pang
Abstract:
The paper explores the three-dimensional deformation of mussel plaques anchored to wet polydimethylsiloxane (PDMS) substrates under tensile stress applied at different angles. Mussel plaques, exhibiting natural adhesive structures, have attracted significant attention for their remarkable adhesion properties. Understanding their behavior under mechanical stress, particularly in a three-dimensional context, holds immense relevance for biomimetic material design and bio-inspired adhesive development. This study employed a novel approach to investigate the 3D deformation of PDMS substrates anchored by mussel plaques and subjected to controlled tension. Utilizing our customized stereo digital image correlation technique and mechanical analyses, we found that the distributions of displacement and resultant force on the substrate became concentrated under the plaque. Adhesion and suction mechanisms were analyzed for the mussel plaque-substrate system under tension until detachment. The experimental findings were compared with a model developed using finite element analysis, and the results provide new insights into mussels’ attachment mechanism. This research not only contributes to the fundamental understanding of biological adhesion but also holds promising implications for the design of innovative adhesive materials, with applications in fields such as medical adhesives, underwater technologies, and industrial bonding. The comprehensive exploration of mussel plaque behavior in three dimensions is important for advancements in biomimicry and materials science, fostering the development of adhesives that emulate nature's efficiency.
Keywords: adhesion mechanism, mytilus edulis, mussel plaque, stereo digital image correlation
Procedia PDF Downloads 57
2933 Nurse Practitioner Led Pediatric Primary Care Clinic in a Tertiary Care Setting: Improving Access and Health Outcomes
Authors: Minna K. Miller, Chantel. E. Canessa, Suzanna V. McRae, Susan Shumay, Alissa Collingridge
Abstract:
Primary care provides the first point of contact and access to health care services. For the pediatric population, the goal is to help healthy children stay healthy and to help those that are sick get better. Primary care facilitates regular well-baby/child visits; health promotion and disease prevention; investigation, diagnosis, and management of acute and chronic illnesses; health education; and consultation and collaboration with, and referral to, other health care professionals. There is a protective association between regular well-child care and preventable hospitalization. Further, low adherence to well-child care and poor continuity of care are independently associated with an increased risk of hospitalization. With a declining number of family physicians caring for children, and only a portion of pediatricians providing primary care services, it is becoming increasingly difficult for children and their families to access primary care. Nurse practitioners are in a unique position to improve access to primary care and improve health outcomes for children. Limited literature is available on the nurse practitioner role in primary care pediatrics. The purpose of this paper is to describe the development, implementation, and evaluation of a nurse practitioner-led pediatric primary care clinic in a tertiary care setting. Utilizing the participatory, evidence-based, patient-focused process for advanced practice nursing (PEPPA framework), this paper highlights the results of the initial needs assessment/gap analysis, the new service delivery model, the populations served, and outcome measures.
Keywords: access, health outcomes, nurse practitioner, pediatric primary care, PEPPA framework
Procedia PDF Downloads 497
2932 The Impact of Pediatric Cares, Infections and Vaccines on Community and People’s Lives
Authors: Nashed Atef Nashed Farag
Abstract:
Introduction: Reporting adverse events following vaccination remains a challenge. The WHO has mandated pharmacovigilance centers around the world to submit Adverse Events Following Immunization (AEFI) reports from different countries to a large electronic database of adverse drug event data called VigiBase. Despite sufficient information about AEFIs in VigiBase, these data are not available to the general public. However, the WHO has an alternative website called VigiAccess, an open-access website that serves as an archive for reported adverse reactions and AEFIs. The aim of the study was to establish a reporting model for a number of commonly used vaccines in the VigiAccess system. Methods: On February 5, 2018, VigiAccess was comprehensively searched for AEFI reports on the measles vaccine, oral polio vaccine (OPV), yellow fever vaccine, pneumococcal vaccine, rotavirus vaccine, meningococcal vaccine, tetanus vaccine, and tuberculosis vaccine (BCG). These are reports from all pharmacovigilance centers around the world since they joined the WHO Drug Monitoring Program. Results: After an extensive search, VigiAccess found 9,062 AEFIs for the measles vaccine, 185,829 AEFIs for the OPV vaccine, 24,577 AEFIs for the yellow fever vaccine, 317,208 AEFIs for the pneumococcal vaccine, 73,513 AEFIs for the rotavirus vaccine, 145,447 AEFIs for the meningococcal vaccine, 22,781 AEFIs for the tetanus vaccine, and 35,556 AEFIs for the BCG vaccine. Conclusion: The study found that among the eight vaccines examined, pneumococcal vaccines were associated with the highest number of AEFIs, while measles vaccines were associated with the fewest AEFIs.
Keywords: adverse events following immunization, adverse reactions, VigiAccess, adverse event reporting
Procedia PDF Downloads 73
2931 Geostatistical Simulation of Carcinogenic Industrial Effluent on the Irrigated Soil and Groundwater, District Sheikhupura, Pakistan
Authors: Asma Shaheen, Javed Iqbal
Abstract:
Water resources are being depleted due to the intrusion of industrial pollution. There are clusters of industries, including leather tanning, textiles, batteries, and chemicals, causing contamination. These industries use bulk quantities of water and discharge it with toxic effluents. The penetration of heavy metals through irrigation with industrial effluent has a toxic effect on soil and groundwater. There was a strong, significant positive correlation between all the heavy metals in the three media of industrial effluent, soil, and groundwater (P < 0.001). The metal-to-metal associations were supported by dendrograms using cluster analysis. The geospatial variability was assessed by using geographically weighted regression (GWR) and a pollution model to identify the simulation of carcinogenic elements in soil and groundwater. The principal component analysis identified the metal sources: factor 1, with 48.8% of the variation, has significant loadings for sodium (Na), calcium (Ca), magnesium (Mg), iron (Fe), chromium (Cr), nickel (Ni), lead (Pb), and zinc (Zn), reflecting the tannery effluent-based process. In soil and groundwater, the metals have significant loadings in factor 1, representing more than half of the total variation, with 51.3% and 53.6%, respectively, which showed that pollutants in soil and water were driven by industrial effluent. The cumulative eigenvalues for the three media were also found to be greater than 1, representing significant clustering of related heavy metals. The results showed that industrial processes are seeping toxic trace metals into the soil and groundwater. The poisonous heavy metal pollutants have turned fresh groundwater resources into unusable water. The scarcity of fresh water for irrigation and domestic use is becoming alarming.
Keywords: groundwater, geostatistical, heavy metals, industrial effluent
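A small scikit-learn sketch of the source-identification step: running PCA on a matrix of metal concentrations and inspecting the explained variance and factor loadings; the randomly generated concentration matrix is an assumption standing in for the field samples.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

metals = ["Na", "Ca", "Mg", "Fe", "Cr", "Ni", "Pb", "Zn"]
rng = np.random.default_rng(2)
# Synthetic effluent samples: one common source drives most metals.
source = rng.normal(size=(60, 1))
X = source @ rng.uniform(0.5, 1.5, (1, len(metals))) + rng.normal(0, 0.4, (60, 8))

pca = PCA(n_components=3)
scores = pca.fit_transform(StandardScaler().fit_transform(X))

print("explained variance (%):", (100 * pca.explained_variance_ratio_).round(1))
loadings = pd.DataFrame(pca.components_.T, index=metals,
                        columns=["factor1", "factor2", "factor3"])
print(loadings.round(2))  # high factor-1 loadings flag a shared source
```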
Procedia PDF Downloads 229
2930 Development of a Geomechanical Risk Assessment Model for Underground Openings
Authors: Ali Mortazavi
Abstract:
The main objective of this research project is to delve into the multitude of geomechanical risks associated with the various mining methods employed within the underground mining industry. Controlling the geotechnical design parameters and operational factors affecting the selection of suitable mining techniques for a given underground mining condition will be considered from a risk assessment point of view. Important geomechanical challenges will be investigated as appropriate and relevant to the commonly used underground mining methods. Given the complicated nature of the in-situ rock mass, and the complicated boundary conditions and operational complexities associated with the various underground mining methods, the selection of a safe and economic mining operation is of paramount significance. Rock failure at varying scales within underground mining openings is always a threat to mining operations and causes human and capital losses worldwide. Geotechnical design is a major component of all underground mine designs and essentially dominates the safety of an underground mine. With regard to the uncertainties that exist in rock characterization prior to mine development, there are always risks associated with inappropriate design as a function of mining conditions and the selected mining method. Uncertainty often results from the inherent variability of rock masses, which in turn is a function of both the geological materials and the in-situ rock mass conditions. The focus of this research is on developing a methodology which enables a geomechanical risk assessment of given underground mining conditions. The outcome of this research is a geotechnical risk analysis algorithm, which can be used as an aid in selecting the appropriate mining method as a function of mine design parameters (e.g., in-situ rock properties, design method, governing boundary conditions such as in-situ stress and groundwater, etc.).
Keywords: geomechanical risk assessment, rock mechanics, underground mining, rock engineering
Procedia PDF Downloads 147
2929 Accounting for Rice Productivity Heterogeneity in Ghana: The Two-Step Stochastic Metafrontier Approach
Authors: Franklin Nantui Mabe, Samuel A. Donkoh, Seidu Al-Hassan
Abstract:
Rice yields among agro-ecological zones are heterogeneous. Farmers, researchers, and policy makers are making frantic efforts to bridge the rice yield gaps between agro-ecological zones through the promotion of improved agricultural technologies (IATs). Farmers are also modifying these IATs and blending them with indigenous farming practices (IFPs) to form farmer innovation systems (FISs). Different metafrontier models have been used in estimating productivity performances and their drivers. This study used the two-step stochastic metafrontier model to estimate the productivity performances of rice farmers and their determining factors in the GSZ, FSTZ, and CSZ. The study used both primary and secondary data. Farmers in the CSZ are the most technically efficient. Technical inefficiencies of farmers are negatively influenced by age, sex, household size, years of education, extension visits, contract farming, access to improved seeds, access to irrigation, high rainfall amounts, less lodging of rice, and well-coordinated and synergized adoption of technologies. Although farmers in the CSZ are doing well in terms of rice yield, they still have the highest potential for increasing rice yield, since they had the lowest TGR. It is recommended that the government, through the Ministry of Food and Agriculture, development partners, and individual private companies, promote the adoption of IATs as well as educate farmers on how to coordinate and synergize the adoption of the whole package. The contract farming concept and agricultural extension intensification should be vigorously pursued to the letter.
Keywords: efficiency, farmer innovation systems, improved agricultural technologies, two-step stochastic metafrontier approach
Procedia PDF Downloads 269
2928 The Optimization of the Parameters for Eco-Friendly Leaching of Precious Metals from Waste Catalyst
Authors: Silindile Gumede, Amir Hossein Mohammadi, Mbuyu Germain Ntunka
Abstract:
Goal 12 of the 17 Sustainable Development Goals (SDGs) encourages sustainable consumption and production patterns. This necessitates achieving the environmentally safe management of chemicals and all wastes throughout their life cycle, and the proper disposal of pollutants and toxic waste. Fluid catalytic cracking (FCC) catalysts are widely used in refineries to convert heavy feedstocks to lighter ones. During the refining processes, the catalysts are deactivated and discarded as hazardous toxic solid waste. Spent catalysts (SCs) contain high-cost metals, and the recovery of metals from SCs is a tactical plan for supplying part of the demand for these substances and minimizing the environmental impacts. Leaching followed by solvent extraction has been found to be the most efficient method to recover valuable metals with high purity from spent catalysts. However, the use of inorganic acids during the leaching process causes a secondary environmental issue. Therefore, it is necessary to explore alternative leaching agents that are efficient, economical, and environmentally friendly. In this study, the waste catalyst was collected from a domestic refinery and was characterised using XRD, ICP, XRF, and SEM. Response surface methodology (RSM) and a Box-Behnken design were used to model and optimize the influence of some parameters affecting the acidic leaching process. The parameters selected in this investigation were the acid concentration, temperature, and leaching time. From the characterisation results, it was found that the spent catalyst contains high concentrations of vanadium (V) and nickel (Ni); hence this study focuses on the leaching of Ni and V using a biodegradable acid to eliminate the formation of secondary pollution.
Keywords: eco-friendly leaching, optimization, metal recovery, leaching
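A compact sketch of the RSM workflow described: a three-factor Box-Behnken design (acid concentration, temperature, leaching time, in coded -1/0/+1 units) is fitted with a second-order polynomial by least squares; the recovery responses are invented numbers used only to show the mechanics.

```python
import numpy as np

# Box-Behnken design for 3 factors in coded units, plus 3 center points.
design = np.array([
    [-1, -1, 0], [1, -1, 0], [-1, 1, 0], [1, 1, 0],
    [-1, 0, -1], [1, 0, -1], [-1, 0, 1], [1, 0, 1],
    [0, -1, -1], [0, 1, -1], [0, -1, 1], [0, 1, 1],
    [0, 0, 0], [0, 0, 0], [0, 0, 0],
], dtype=float)

# Invented metal-recovery responses (%) for each run.
y = np.array([62, 71, 68, 80, 58, 69, 66, 78, 60, 72, 65, 79, 75, 76, 74.5])

def quad_terms(X):
    """Full second-order model: 1, x_i, x_i^2, and pairwise interactions."""
    x1, x2, x3 = X.T
    return np.column_stack([np.ones(len(X)), x1, x2, x3,
                            x1**2, x2**2, x3**2,
                            x1*x2, x1*x3, x2*x3])

coef, *_ = np.linalg.lstsq(quad_terms(design), y, rcond=None)

# Evaluate the fitted surface on a coarse grid to locate the best settings.
grid = np.array([[a, b, c] for a in (-1, 0, 1)
                 for b in (-1, 0, 1) for c in (-1, 0, 1)], dtype=float)
pred = quad_terms(grid) @ coef
best = grid[np.argmax(pred)]
print(f"predicted optimum (coded units): {best}, recovery ~ {pred.max():.1f}%")
```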
Procedia PDF Downloads 68
2927 A Rationale to Describe Ambident Reactivity
Authors: David Ryan, Martin Breugst, Turlough Downes, Peter A. Byrne, Gerard P. McGlacken
Abstract:
An ambident nucleophile is a nucleophile that possesses two or more distinct nucleophilic sites that are linked through resonance and are effectively “in competition” for reaction with an electrophile. Examples include enolates, pyridone anions, and nitrite anions, among many others. Reactions of ambident nucleophiles and electrophiles are extremely prevalent at all levels of organic synthesis. The principle of hard and soft acids and bases (the “HSAB principle”) is most commonly cited in explanations of selectivity in such reactions. Although this rationale is pervasive in any discussion of ambident reactivity, the HSAB principle has received considerable criticism. As a result, the principle’s supplantation has become an area of active interest in recent years. This project focuses on developing a model for rationalizing ambident reactivity. Presented here is an approach that incorporates computational calculations and experimental kinetic data to construct Gibbs energy profile diagrams. The preferred site of alkylation of the nitrite anion with a range of ‘hard’ and ‘soft’ alkylating agents was established by ¹H NMR spectroscopy. Pseudo-first-order rate constants were measured directly by ¹H NMR reaction monitoring, and the corresponding second-order rate constants and Gibbs energies of activation were derived. These, in combination with computationally derived standard Gibbs energies of reaction, were sufficient to construct Gibbs energy wells. By representing the ambident system as a series of overlapping Gibbs energy wells, a more intuitive picture of ambident reactivity emerges. Here, previously unexplained switches in reactivity in reactions involving closely related electrophiles are elucidated.
Keywords: ambident, Gibbs, nucleophile, rates
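The step from a measured second-order rate constant to a Gibbs energy of activation is conventionally done with the Eyring equation, ΔG‡ = RT·ln(k_B·T/(h·k)); the sketch below applies it to an assumed rate constant and temperature, not to the study's measured values.

```python
import math

R = 8.314           # J/(mol*K), gas constant
KB = 1.380649e-23   # J/K, Boltzmann constant
H = 6.62607015e-34  # J*s, Planck constant

def gibbs_activation(k2: float, T: float) -> float:
    """Eyring equation: Gibbs energy of activation (kJ/mol) from a
    second-order rate constant k2 (1/(M*s)) at temperature T (K)."""
    return R * T * math.log(KB * T / (H * k2)) / 1000.0

# Assumed second-order rate constant at 25 ºC, for illustration only.
print(f"dG_act = {gibbs_activation(1.0e-3, 298.15):.1f} kJ/mol")
```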
Procedia PDF Downloads 86