Search results for: mathematical programming
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2485

415 Effects of Process Parameters on the Yield of Oil from Coconut Fruit

Authors: Ndidi F. Amulu, Godian O. Mbah, Maxwel I. Onyiah, Callistus N. Ude

Abstract:

The properties of coconut (Cocos nucifera) and its oil were evaluated in this work using standard analytical techniques. The analyses carried out include the proximate composition of the fruit, extraction of oil from the fruit using different process parameters, and physicochemical analysis of the extracted oil. The results showed the percentage (%) moisture, crude lipid, crude protein, ash, and carbohydrate content of the coconut as 7.59, 55.15, 5.65, 7.35, and 19.51, respectively. The oil from the coconut fruit was an odourless, yellowish liquid at room temperature (30 °C). The treatment combinations used (leaching time, leaching temperature and solute:solvent ratio) showed significant differences (P < 0.05) in the yield of oil from coconut flour. The oil yield ranged between 36.25% and 49.83%. Lipid indices of the coconut oil indicated the acid value (AV) as 10.05 NaOH/g of oil, free fatty acid (FFA) as 5.03%, saponification value (SV) as 183.26 mg KOH/g of oil, iodine value (IV) as 81.00 I₂/g of oil, peroxide value (PV) as 5.00 ml/g of oil and viscosity (V) as 0.002. The statistical package Minitab 16.0 was used for the regression analysis and analysis of variance (ANOVA), and to generate single-effect, interaction-effect and contour plots. The response, i.e. the yield of oil from the coconut flour, was used to develop a mathematical model that correlates the yield to the process variables studied. The conditions that gave the highest yield of coconut oil were a leaching time of 2 hrs, a leaching temperature of 50 °C and a solute/solvent ratio of 0.05 g/ml.
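
As a rough illustration of the regression and ANOVA step described above, the following Python sketch (a stand-in for the Minitab 16.0 workflow, with hypothetical data values rather than the study's measurements) fits a model relating oil yield to leaching time, leaching temperature and solute/solvent ratio:

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical placeholder data: leaching time (h), temperature (deg C),
# solute/solvent ratio (g/ml) and the measured oil yield (%).
data = pd.DataFrame({
    "time_h": [1, 1, 1, 2, 2, 2, 3, 3, 3],
    "temp_C": [40, 50, 60, 40, 50, 60, 40, 50, 60],
    "ratio":  [0.05, 0.10, 0.15, 0.10, 0.15, 0.05, 0.15, 0.05, 0.10],
    "yield_pc": [38.2, 41.5, 40.3, 44.7, 49.8, 46.1, 43.0, 45.6, 42.4],
})

# Main-effects model; interaction terms (e.g. time_h:temp_C) could be added to
# the formula to mirror the interaction-effect plots mentioned in the abstract.
model = smf.ols("yield_pc ~ time_h + temp_C + ratio", data=data).fit()
print(model.params)                      # fitted regression coefficients
print(sm.stats.anova_lm(model, typ=2))   # ANOVA table for the fitted model
```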

Keywords: coconut, oil-extraction, optimization, physicochemical, proximate

Procedia PDF Downloads 322
414 Computer-Aided Detection of Simultaneous Abdominal Organ CT Images by Iterative Watershed Transform

Authors: Belgherbi Aicha, Hadjidj Ismahen, Bessaid Abdelhafid

Abstract:

Interpretation of medical images benefits from anatomical and physiological priors to optimize computer-aided diagnosis applications. Segmentation of the liver, spleen and kidneys is regarded as a major primary step in the computer-aided diagnosis of abdominal organ diseases. In this paper, a semi-automated method is presented for abdominal organ segmentation in medical image data using mathematical morphology. Our proposed method is based on hierarchical segmentation and the watershed algorithm. In our approach, a powerful technique has been designed to suppress over-segmentation, based on the mosaic image and on the computation of the watershed transform. Our algorithm proceeds in two parts. In the first, we seek to improve the quality of the gradient-mosaic image. In this step, we propose a method for improving the gradient-mosaic image by applying an anisotropic diffusion filter followed by morphological filters. Thereafter, we proceed to the hierarchical segmentation of the liver, spleen and kidneys. To validate the proposed segmentation technique, we have tested it on several images. Our segmentation approach is evaluated by comparing our results with a manual segmentation performed by an expert. The experimental results are described in the last part of this work.

Keywords: anisotropic diffusion filter, CT images, morphological filter, mosaic image, simultaneous organ segmentation, the watershed algorithm

Procedia PDF Downloads 413
413 Getting Out of the Box: Tangible Music Production in the Age of Virtual Technological Abundance

Authors: Tim Nikolsky

Abstract:

This paper seeks to explore the different ways in which music producers choose to embrace various levels of technology based on musical values, objectives, affordability, access and workflow benefits. The current digital audio production workflow is questioned. Engineers and music producers of today are increasingly divorced from the tangibility of music production. Making music no longer requires you to reach over and turn a knob. Ideas of authenticity in music production are being redefined. Calculations from the mathematical algorithm with the pretty pictures are increasingly being chosen over hardware containing transformers and tubes. Are mouse clicks and movements equivalent or inferior to the master brush strokes we are seeking to conjure? We are making audio production decisions visually, by constantly looking at a screen rather than listening. Have we compromised our music objectives and values by removing the ‘hands-on’ nature of music making? DAW interfaces are making our musical decisions for us, not necessarily in our best interests. Technological innovation has presented opportunities as well as challenges for education. What do music production students actually need to learn in a formalised education environment, and to what extent do they need to know it? In this brave new world of omnipresent music creation tools, do we still need tangibility in music production? Interviews with prominent Australian music producers who work in a variety of fields will be featured in this paper; they will provide insight into these questions and move towards an understanding of how tangibility can be rediscovered in the next generation of music production.

Keywords: analogue, digital, digital audio workstation, music production, plugins, tangibility, technology, workflow

Procedia PDF Downloads 245
412 Load-Enabled Deployment and Sensing Range Optimization for Lifetime Enhancement of WSNs

Authors: Krishan P. Sharma, T. P. Sharma

Abstract:

Wireless sensor nodes are resource-constrained, battery-powered devices usually deployed in hostile and ill-disposed areas to cooperatively monitor physical or environmental conditions. Due to their limited power supply, the major challenge for researchers is to utilize their battery power for enhancing the lifetime of the whole network. Communication and sensing are the two major sources of energy consumption in sensor networks. In this paper, we propose a deployment strategy for enhancing the average lifetime of a sensor network by effectively utilizing communication and sensing energy to provide full coverage. The proposed scheme is based on the fact that, due to heavy relaying load, sensor nodes near the sink drain energy at a much faster rate than other nodes in the network and consequently die much earlier. To counter this imbalance, the proposed scheme finds optimal communication and sensing ranges according to the effective load at each node and uses a non-uniform deployment strategy with a comparatively high density of nodes near the sink. The probable relaying load factor at a particular node is calculated, and the optimal communication distance and sensing range for each sensor node are adjusted accordingly. Thus, sensor nodes are placed at locations that optimize energy use during network operation. A formal mathematical analysis for calculating the optimized locations is reported in the present work.

Keywords: load factor, network lifetime, non-uniform deployment, sensing range

Procedia PDF Downloads 350
411 Boundary Layer Flow of a Casson Nanofluid Past a Vertical Exponentially Stretching Cylinder in the Presence of a Transverse Magnetic Field with Internal Heat Generation/Absorption

Authors: G. Sarojamma, K. Vendabai

Abstract:

An analysis is carried out to investigate the effect of a magnetic field and a heat source on the steady boundary layer flow and heat transfer of a Casson nanofluid over a vertical cylinder stretching exponentially along its radial direction. Using a similarity transformation, the governing mathematical equations, with the boundary conditions, are reduced to a system of coupled, non-linear ordinary differential equations. The resulting system is solved numerically by the fourth-order Runge-Kutta scheme with the shooting technique. The influence of various physical parameters such as the Reynolds number, Prandtl number, magnetic field, Brownian motion parameter, thermophoresis parameter, Lewis number and the natural convection parameter is presented graphically and discussed for the non-dimensional velocity, temperature and nanoparticle volume fraction. Numerical data for the skin-friction coefficient, local Nusselt number and the local Sherwood number have been tabulated for various parametric conditions. It is found that the local Nusselt number is a decreasing function of the Brownian motion parameter Nb and the thermophoresis parameter Nt.
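
The coupled Casson-nanofluid equations themselves are not reproduced in the abstract; as a minimal, hedged illustration of the Runge-Kutta shooting technique mentioned above, the sketch below solves the classical Blasius boundary layer problem f''' + 0.5 f f'' = 0, f(0) = f'(0) = 0, f'(∞) = 1, as a stand-in boundary-value problem, shooting on the unknown wall value f''(0):

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

def blasius(eta, y):
    # y = [f, f', f''] for the Blasius equation f''' + 0.5*f*f'' = 0
    f, fp, fpp = y
    return [fp, fpp, -0.5 * f * fpp]

def residual(fpp0, eta_max=10.0):
    # Integrate with a guessed f''(0) and return the mismatch in f'(eta_max).
    sol = solve_ivp(blasius, [0.0, eta_max], [0.0, 0.0, fpp0],
                    method="RK45", rtol=1e-8, atol=1e-10)
    return sol.y[1, -1] - 1.0

# Shoot on f''(0) so that the far-field condition f'(inf) = 1 is satisfied.
fpp0 = brentq(residual, 0.1, 1.0)
print(f"f''(0) = {fpp0:.5f}")   # approx. 0.33206 for the Blasius problem
```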

Keywords: casson nanofluid, boundary layer flow, internal heat generation/absorption, exponentially stretching cylinder, heat transfer, brownian motion, thermophoresis

Procedia PDF Downloads 358
410 Determination of Surface Deformations with Global Navigation Satellite System Time Series

Authors: Ibrahim Tiryakioglu, Mehmet Ali Ugur, Caglar Ozkaymak

Abstract:

The development of GNSS technology has led to increasingly widespread and successful applications of GNSS surveys for monitoring crustal movements. However, multi-period GPS survey solutions have not been applied in monitoring vertical surface deformation. This study uses the long-term GNSS time series that are required to determine vertical deformations. In recent years, surface deformations that are parallel and semi-parallel to the Bolvadin fault have occurred in Western Anatolia. These surface deformations have continued to occur in the Bolvadin settlement area, which is located mostly on alluvial ground. Due to these surface deformations, a number of cracks in buildings located in the residential areas and breaks in underground water and sewage systems have been observed. In order to determine the amount of vertical surface deformation, two continuous GNSS stations have been established in the region. The stations have been operating since 2015 and 2017, respectively. In this study, GNSS observations from the two stations were processed with the GAMIT/GLOBK (GNSS Analysis Massachusetts Institute of Technology/GLOBal Kalman) program package to create coordinate time series. With the time series analyses, the GNSS stations' behaviour models (linear, periodic, etc.), the causes of these behaviours, and mathematical models were determined. The results of the time series analysis of these two GNSS stations show approximately 50-80 mm/yr of vertical movement.
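
As an illustration of the kind of behaviour model mentioned above (a linear velocity plus an annual periodic term), the following sketch fits such a model to a vertical coordinate series by least squares. The daily series is synthetic and the rate used to generate it is hypothetical; the real input would be the GAMIT/GLOBK position time series:

```python
import numpy as np

t = np.arange(0.0, 4.0, 1.0 / 365.25)                 # time in years (4 years, daily)
rng = np.random.default_rng(0)
up = -65.0 * t + 3.0 * np.sin(2 * np.pi * t) + rng.normal(0, 2.0, t.size)  # "up" in mm

# Design matrix: offset, linear trend, annual sine/cosine.
A = np.column_stack([np.ones_like(t), t,
                     np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
coeffs, *_ = np.linalg.lstsq(A, up, rcond=None)
offset, velocity, s1, c1 = coeffs
print(f"vertical velocity: {velocity:.1f} mm/yr, "
      f"annual amplitude: {np.hypot(s1, c1):.1f} mm")
```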

Keywords: Bolvadin fault, GAMIT, GNSS time series, surface deformations

Procedia PDF Downloads 130
409 Heat and Mass Transfer Modelling of Industrial Sludge Drying at Different Pressures and Temperatures

Authors: L. Al Ahmad, C. Latrille, D. Hainos, D. Blanc, M. Clausse

Abstract:

A two-dimensional finite volume axisymmetric model is developed to predict the simultaneous heat and mass transfers during the drying of industrial sludge. The simulations were run using COMSOL Multiphysics 3.5a. The input parameters of the numerical model were acquired from preliminary experimental work. The results make it possible to establish correlations describing the evolution of the various parameters as a function of the drying temperature and the sludge water content. The selection and coupling of the equations are validated based on the drying kinetics acquired experimentally over a temperature range of 45-65 °C and an absolute pressure range of 200-1000 mbar. The model, incorporating the heat and mass transfer mechanisms at different operating conditions, yields simulated values of temperature and water content. The simulated results are found to be concordant with the experimental values only at the first and last drying stages, where sludge shrinkage is insignificant. Simulated and experimental results show that sludge drying is favoured at high temperature and low pressure. As experimentally observed, the drying time is reduced by 68% for drying at 65 °C compared to 45 °C under 1 atm. At 65 °C, a 200-mbar absolute pressure vacuum leads to an additional reduction in drying time estimated at 61%. However, the drying rate is underestimated in the intermediate stage. This underestimation could be reduced by including in the model the shrinkage phenomena that occur during sludge drying.

Keywords: industrial sludge drying, heat transfer, mass transfer, mathematical modelling

Procedia PDF Downloads 103
408 Integration of Two Thermodynamic Cycles by Absorption for Simultaneous Production of Fresh Water and Cooling

Authors: Javier Delgado-Gonzaga, Wilfrido Rivera, David Juárez-Romero

Abstract:

Cooling and water purification are processes that have contributed to the economic and social development of the modern world. However, these processes require a significant amount of energy globally. Nowadays, absorption heat pumps are studied with great interest since they are capable of producing cooling and/or purifying water from low-temperature energy sources such as industrial waste heat or renewable energy. In addition, absorption heat pumps require negligible amounts of electricity for their operation and generally use working fluids that do not represent a risk to the environment. The objective of this work is to evaluate a system that integrates an absorption heat transformer and an absorption cooling system to produce fresh water and cooling from a low-temperature heat source. Both cycles operate with the working pair LiBr-H2O. The integration is possible through the interaction of the LiBr-H2O solution streams between the two cycles and also by recycling heat from the absorption heat transformer to the absorption cooling system. Mathematical models were developed to compare the performance of four different configurations. The results showed that the configuration in which the hottest LiBr-H2O solution streams preheated the coldest streams in the economizers of both cycles was the one that achieved the best performance. The interaction of the solution streams and the heat recycling analyzed in this work serves as a record of the possibilities of integration between absorption cycles for cogeneration.

Keywords: absorption heat transformer, absorption cooling system, water desalination, integrated system

Procedia PDF Downloads 54
407 Nutrition Budgets in Uganda: Research to Inform Implementation

Authors: Alexis D'Agostino, Amanda Pomeroy

Abstract:

Background: Resource availability is essential to effective implementation of national nutrition policies. To this end, the SPRING Project has collected and analyzed budget data from government ministries in Uganda, international donors, and other nutrition implementers to provide data, for the first time, on what funding is actually allocated to implement the nutrition activities named in the national nutrition plan. Methodology: USAID's SPRING Project used the Uganda Nutrition Action Plan (UNAP) as the starting point for the budget analysis. Thorough desk reviews of public budgets from government, donors, and NGOs were mapped to activities named in the UNAP and validated by key informants (KIs) across the stakeholder groups. By relying on nationally recognized and locally created documents, SPRING provided a familiar basis for discussions, increasing the credibility and local ownership of the findings. Among other things, the KIs validated the amount, source, and type (specific or sensitive) of funding. When only high-level budget data were available, KIs provided rough estimates of the percentage of allocations that were actually nutrition-relevant, allowing the creation of confidence intervals around some funding estimates. Results: After validating the data and narrowing in on estimates of funding to nutrition-relevant programming, researchers applied a formula to estimate overall nutrition allocations. In line with guidance from the SUN Movement and its three-step process, nutrition-specific funding was counted at 100% of its allocation amount, while nutrition-sensitive funding was counted at 25%. The vast majority of nutrition funding in Uganda is off-budget, with over 90 percent of all nutrition funding provided outside of the government system. Overall allocations are split nearly evenly between nutrition-specific and nutrition-sensitive activities. In FY 2013/14, the two-year study's baseline year, on- and off-budget funding for nutrition was estimated to be around 60 million USD. While the 60 million USD in allocations compares favorably to the 66 million USD estimate of the cost of the UNAP, not all activities are sufficiently funded. Activities with a focus on behavior change were the most underfunded. In addition, accompanying qualitative research suggested that donor funding for nutrition activities may shift government funding into other areas of work, making it difficult to estimate the sustainability of current nutrition investments. Conclusions: Beyond providing figures, these estimates can be used together with the qualitative results of the study to explain how and why these amounts were allocated for particular activities and not others, to examine the negotiation process that occurred, and to suggest options for improving the flow of finances to UNAP activities for the remainder of the policy tenure. By the end of the PBN study, several years of nutrition budget estimates will be available to compare changes in funding over time. Halfway through SPRING's work, there is evidence that country stakeholders have begun to feel ownership over the ultimate findings, and some ministries are requesting increased technical assistance in nutrition budgeting. Ultimately, these data can be used within organizations to advocate for more and improved nutrition funding and to improve the targeting of nutrition allocations.
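
The SUN-style weighting step described above (100% of nutrition-specific allocations, 25% of nutrition-sensitive ones) reduces to a simple calculation; the sketch below shows it with hypothetical allocation figures, not the study's Ugandan budget lines:

```python
# Count nutrition-specific allocations at 100% and nutrition-sensitive ones at 25%.
# All figures below are hypothetical placeholders, in USD.
specific_allocations  = [12_000_000, 8_500_000, 6_000_000]    # counted at 100%
sensitive_allocations = [40_000_000, 52_000_000, 30_000_000]  # counted at 25%

nutrition_total = sum(specific_allocations) + 0.25 * sum(sensitive_allocations)
print(f"estimated nutrition allocation: {nutrition_total / 1e6:.1f} million USD")
```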

Keywords: budget, nutrition, financing, scale-up

Procedia PDF Downloads 408
406 A Project Screening System for Energy Enterprise Based on Dempster-Shafer Theory

Authors: Woosik Jang, Seung Heon Han, Seung Won Baek

Abstract:

Natural gas (NG) is an energy resource in a few countries, and most NG producers do business in politically unstable countries. In addition, as 90% of the LNG market is controlled by a small number of international oil companies (IOCs) and national oil companies (NOCs), entry of latecomers into the market is extremely limited. To meet these challenges, project viability needs to be assessed based on limited information from a project screening perspective. However, the early stages of a project present the following difficulties: (1) What are the factors to consider? (2) How many professionals are needed to decide? (3) How can the best decision be made with limited information? To address this problem, this study proposes a model for evaluating LNG project viability based on the Dempster-Shafer theory (DST). A total of 11 indicators for analyzing the gas field, reflecting the characteristics of the LNG industry, and 23 indicators for analyzing the market environment were identified. The proposed model evaluates the LNG project based on the survey and provides, through DST, the uncertainty of the results as well as quantified results. Thus, the proposed model is expected to be able to support the decision-making process for gas field projects using quantitative results within a systematic framework, and it was developed as a stand-alone system to improve its usefulness in practice. Consequently, the amount of information and the mathematical approach are expected to improve the quality and opportunity of decision making for LNG projects for enterprises.
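
For readers unfamiliar with DST, the sketch below shows the core operation such a screening model relies on, Dempster's rule of combination, applied to two hypothetical mass functions (e.g., gas-field evidence and market evidence) over the frame {viable, not viable}. The mass values are illustrative, not the model's indicator results:

```python
from itertools import product

THETA = frozenset({"viable", "not_viable"})   # frame of discernment (total ignorance)

def combine(m1, m2):
    """Combine two mass functions (dict: frozenset -> mass) with Dempster's rule."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb                # mass assigned to conflicting evidence
    return {s: v / (1.0 - conflict) for s, v in combined.items()}

# Hypothetical expert mass assignments for two groups of indicators.
m_gas_field = {frozenset({"viable"}): 0.6, frozenset({"not_viable"}): 0.1, THETA: 0.3}
m_market    = {frozenset({"viable"}): 0.5, frozenset({"not_viable"}): 0.3, THETA: 0.2}

for subset, mass in combine(m_gas_field, m_market).items():
    print(set(subset), round(mass, 3))
```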

Keywords: project screen, energy enterprise, decision support system, Dempster-Shafer theory

Procedia PDF Downloads 310
405 Controlled Release of Glucosamine from Pluronic-Based Hydrogels for the Treatment of Osteoarthritis

Authors: Papon Thamvasupong, Kwanchanok Viravaidya-Pasuwat

Abstract:

Osteoarthritis affects many people worldwide. Local injection of glucosamine is one of the alternative treatment methods to replenish the natural lubrication of cartilage. However, repeated injections carry a risk of bacterial infection. Therefore, a drug delivery system is desired to reduce the frequency of injections. A hydrogel is one of the delivery systems that can control the release of drugs. Thermo-reversible hydrogels can be beneficial for drug delivery, especially via the local injection route, because this formulation changes from liquid to gel after entering the human body. Once the gel is in the body, it slowly releases the drug in a controlled manner. In this study, various formulations of Pluronic-based hydrogels were synthesized for the controlled release of glucosamine. One of the challenges of the Pluronic controlled release system is its fast dissolution rate. To overcome this problem, alginate and calcium sulfate (CaSO4) were added to the polymer solution. The characteristics of the hydrogels were investigated, including the gelation temperature, gelation time, hydrogel dissolution and glucosamine release mechanism. Finally, a mathematical model of glucosamine release from the Pluronic-alginate-hyaluronic acid hydrogel was developed. Our results have shown that crosslinking the Pluronic gel with alginate did not significantly extend the dissolution rate of the gel. Moreover, the gel dissolution profiles and the glucosamine release mechanisms were best described using the zeroth-order kinetic model, indicating that the release of glucosamine was primarily governed by gel dissolution.
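
Since the abstract identifies a zeroth-order release profile, the sketch below shows how such a model, Q(t) = k₀·t, can be fitted to cumulative release data; the time points and release percentages are hypothetical placeholders, not the study's measurements:

```python
import numpy as np
from scipy.stats import linregress

t_hours  = np.array([0, 2, 4, 6, 8, 12, 24])       # sampling times (h), hypothetical
released = np.array([0, 7, 15, 22, 30, 44, 86])    # cumulative glucosamine release (%)

fit = linregress(t_hours, released)                # zeroth-order model Q(t) = k0 * t
print(f"k0 = {fit.slope:.2f} %/h, R^2 = {fit.rvalue**2:.3f}")
# A zeroth-order mechanism is supported when this linear fit in t describes the data
# better than alternatives such as a Higuchi (sqrt(t)) or first-order model.
```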

Keywords: controlled release, drug delivery system, glucosamine, pluronic, thermoreversible hydrogel

Procedia PDF Downloads 242
404 Investigation of Crack Formation in Ordinary Reinforced Concrete Beams and in Beams Strengthened with Carbon Fiber Sheet: Theory and Experiment

Authors: Anton A. Bykov, Irina O. Glot, Igor N. Shardakov, Alexey P. Shestakov

Abstract:

This paper presents the results of experimental and theoretical investigations of the mechanisms of crack formation in reinforced concrete beams subjected to quasi-static bending. The boundary-value problem has been formulated in the framework of brittle fracture mechanics and solved using the finite-element method. Numerical simulation of the vibrations of an uncracked beam and of a beam with cracks of different sizes serves to determine the pattern of changes in the spectrum of eigenfrequencies observed during crack evolution. Experiments were performed on the sequential quasi-static four-point bending of the beam, leading to the formation of cracks in the concrete. At each loading stage, the beam was subjected to an impulse load to induce vibrations. Two stages of cracking were detected. At the first stage, a conservative process of deformation is realized. The second stage is active cracking, which is marked by a sharp change in eigenfrequencies. The boundary of the transition from one stage to the other is well registered. The vibration behaviour was examined for beams strengthened with carbon-fiber sheet before loading and at an intermediate stage of loading after the grouting of initial cracks. The obtained results show that the vibrodiagnostic approach is an effective tool for monitoring cracking and for assessing the quality of measures aimed at strengthening concrete structures.

Keywords: crack formation, experiment, mathematical modeling, reinforced concrete, vibrodiagnostics

Procedia PDF Downloads 272
403 Statistical Time-Series and Neural Architecture of Malaria Patients Records in Lagos, Nigeria

Authors: Akinbo Razak Yinka, Adesanya Kehinde Kazeem, Oladokun Oluwagbenga Peter

Abstract:

Time series data are sequences of observations collected over a period of time. Such data can be used to predict health outcomes, such as disease progression, mortality, hospitalization, etc. The statistical approach is based on mathematical models that capture the patterns and trends of the data, such as autocorrelation, seasonality, and noise, while neural methods are based on artificial neural networks, which are computational models that mimic the structure and function of biological neurons. This paper compared parametric and non-parametric time series models of patients treated for malaria in Maternal and Child Health Centres in Lagos State, Nigeria. The forecasting methods considered were linear regression, integrated moving average, ARIMA and SARIMA modelling for the parametric approach, while the Multilayer Perceptron (MLP) and Long Short-Term Memory (LSTM) networks were used for the non-parametric models. The performance of each method was evaluated using the Mean Absolute Error (MAE), R-squared (R2) and Root Mean Square Error (RMSE) as criteria to determine the accuracy of each model. The study revealed that the best performance in terms of error was found in the MLP, followed by the LSTM and ARIMA models. In addition, the Bootstrap Aggregating technique was used to make robust forecasts when there are uncertainties in the data.
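
A minimal sketch of this kind of parametric vs. non-parametric comparison is given below: it fits an ARIMA model and an MLP on a synthetic monthly series and reports MAE, RMSE and R². The series, model orders and hyperparameters are illustrative assumptions, and the LSTM from the abstract is omitted for brevity:

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

rng = np.random.default_rng(42)
t = np.arange(120)                                  # 10 years of monthly counts
series = 200 + 0.5 * t + 30 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 10, t.size)
train, test = series[:108], series[108:]

# Parametric: ARIMA(2,1,1) fitted on the training segment.
arima_fc = ARIMA(train, order=(2, 1, 1)).fit().forecast(steps=len(test))

# Non-parametric: MLP trained on lagged windows of the last 12 observations.
def windows(x, lag=12):
    X = np.array([x[i:i + lag] for i in range(len(x) - lag)])
    return X, x[lag:]

X_train, y_train = windows(train)
mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=5000,
                   random_state=0).fit(X_train, y_train)
history, mlp_fc = list(train[-12:]), []
for _ in range(len(test)):                          # recursive multi-step forecast
    pred = mlp.predict(np.array(history[-12:]).reshape(1, -1))[0]
    mlp_fc.append(pred)
    history.append(pred)

for name, fc in [("ARIMA", np.asarray(arima_fc)), ("MLP", np.array(mlp_fc))]:
    print(name, "MAE:", round(mean_absolute_error(test, fc), 2),
          "RMSE:", round(float(np.sqrt(mean_squared_error(test, fc))), 2),
          "R2:", round(r2_score(test, fc), 3))
```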

Keywords: ARIMA, bootstrap aggregation, MLP, LSTM, SARIMA, time-series analysis

Procedia PDF Downloads 34
402 Test Suite Optimization Using an Effective Meta-Heuristic BAT Algorithm

Authors: Anuradha Chug, Sunali Gandhi

Abstract:

Regression testing is a very expensive and time-consuming process carried out to ensure the validity of modified software. Due to the availability of insufficient resources to re-execute all the test cases in a time-constrained environment, work is ongoing to generate test data automatically without human effort. Many search-based techniques have been proposed to generate efficient, effective and optimized test data, so that the overall cost of software testing can be minimized. The generated test data should be able to uncover all potential lapses that exist in the software or product. Inspired by the natural food-searching behaviour of bats, the current study employed a meta-heuristic, search-based bat algorithm for optimizing the test data on the basis of certain parameters without compromising their effectiveness. Mathematical functions are also applied that can effectively filter out the redundant test data. As many as 50 Java programs were used to check the effectiveness of the proposed test data generation, and it was found that an 86% saving in testing effort can be achieved using the bat algorithm while covering 100% of the software code during testing. The bat algorithm was found to be more efficient in terms of simplicity and flexibility when the results were compared with other nature-inspired algorithms such as the Firefly Algorithm (FA), Hill Climbing Algorithm (HC) and Ant Colony Optimization (ACO). The output of this study would be useful to testers, as they can achieve 100% path coverage for testing with a minimum number of test cases.
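
A minimal sketch of the bat metaheuristic itself (frequency, loudness and pulse-rate updates in the style of Yang's original algorithm) is shown below, minimizing a generic fitness function. For test-data generation the fitness would be a coverage or branch-distance objective; all parameter values here are illustrative assumptions rather than the study's settings:

```python
import numpy as np

def bat_algorithm(fitness, dim=5, n_bats=20, n_iter=200,
                  fmin=0.0, fmax=2.0, alpha=0.9, gamma=0.9,
                  lower=-10.0, upper=10.0, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(lower, upper, (n_bats, dim))   # candidate solutions (test-data vectors)
    v = np.zeros((n_bats, dim))                    # velocities
    A = np.ones(n_bats)                            # loudness
    r = rng.uniform(0.0, 0.5, n_bats)              # pulse emission rates
    fit = np.array([fitness(b) for b in x])
    best, best_fit = x[fit.argmin()].copy(), fit.min()

    for t in range(1, n_iter + 1):
        for i in range(n_bats):
            f = fmin + (fmax - fmin) * rng.random()          # random frequency
            v[i] += (x[i] - best) * f
            cand = np.clip(x[i] + v[i], lower, upper)
            if rng.random() > r[i]:                          # local walk around the best bat
                cand = np.clip(best + 0.01 * A.mean() * rng.normal(size=dim), lower, upper)
            f_cand = fitness(cand)
            if f_cand <= fit[i] and rng.random() < A[i]:     # accept: get quieter, pulse more
                x[i], fit[i] = cand, f_cand
                A[i] *= alpha
                r[i] = 1.0 - np.exp(-gamma * t)
            if f_cand < best_fit:
                best, best_fit = cand.copy(), f_cand
    return best, best_fit

# Example: minimize a simple sphere function as a stand-in fitness objective.
best, best_fit = bat_algorithm(lambda z: float(np.sum(z ** 2)))
print("best fitness found:", round(best_fit, 6))
```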

Keywords: regression testing, test case selection, test case prioritization, genetic algorithm, bat algorithm

Procedia PDF Downloads 338
401 Designing Ecologically and Economically Optimal Electric Vehicle Charging Stations

Authors: Y. Ghiassi-Farrokhfal

Abstract:

The number of electric vehicles (EVs) is increasing worldwide. Replacing gas-fueled cars with EVs reduces carbon emissions. However, the extensive energy consumption of EVs stresses the energy systems, requiring non-green sources of energy (such as gas turbines) to compensate for the new energy demand caused by EVs. To make EVs an even greener solution for future energy systems, new EV charging stations are equipped with solar PV panels and batteries. This helps serve the energy demand of EVs with the green energy of solar panels. To ensure energy availability, solar panels are combined with batteries. The energy surplus at any point is stored in the batteries and is used when there is not enough solar energy to serve the demand. While EV charging stations equipped with solar panels and batteries are green and ecologically optimal, they might not be financially viable solutions due to battery prices. To make the system viable, we should size the battery economically and operate the system optimally. This is, in general, a challenging problem because of the stochastic nature of the EV arrivals at the charging station, the available solar energy, and the battery operating regime. In this work, we provide a mathematical model for this problem and we compute the return on investment (ROI) of such a system, which is designed to be ecologically and financially optimal. We also quantify the minimum required investment in terms of battery and solar panels, along with the operating strategy, to ensure that a charging station has enough energy to serve its EV demand at any time.

Keywords: solar energy, battery storage, electric vehicle, charging stations

Procedia PDF Downloads 194
400 An Analytical Approach to Assess and Compare the Vulnerability Risk of Operating Systems

Authors: Pubudu K. Hitigala Kaluarachchilage, Champike Attanayake, Sasith Rajasooriya, Chris P. Tsokos

Abstract:

Operating system (OS) security is a key component of computer security. Assessing and improving an OS's strength to resist vulnerabilities and attacks is a mandatory requirement, given the rate at which new vulnerabilities are discovered and attacks occur. The frequency and the number of different kinds of vulnerabilities found in an OS can be considered an index of its information security level. In the present study, five of the most widely used OSs, Microsoft Windows (Windows 7, Windows 8 and Windows 10), Apple's Mac and Linux, are assessed for their discovered vulnerabilities and the risk associated with each. Each discovered and reported vulnerability has an exploitability score assigned in the CVSS score of the National Vulnerability Database. In this study, the risk from vulnerabilities in each of the five operating systems is compared. The risk indexes used are developed based on the Markov model to evaluate the risk of each vulnerability. The statistical methodology and the underlying mathematical approach are described. Initially, parametric procedures were conducted and measured; however, violations of some statistical assumptions were observed, and therefore the need for non-parametric approaches was recognized. A total of 6838 recorded vulnerabilities were considered in the analysis. According to the risk associated with all the vulnerabilities considered, it was found that there is a statistically significant difference among the average risk levels of some operating systems, indicating that, according to our method and given the assumptions and limitations, some operating systems have been more risk-vulnerable than others. Relevant test results revealing a statistically significant difference in the risk levels of different OSs are presented.
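
As an illustration of the non-parametric comparison described above, the sketch below runs a Kruskal-Wallis H-test on per-vulnerability risk indexes grouped by operating system; the risk samples are randomly generated placeholders, not NVD-derived values:

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(1)
# Hypothetical per-vulnerability risk indexes (values in [0, 1]) for each OS.
risk = {
    "Windows 7":  rng.beta(2.0, 5.0, 300),
    "Windows 8":  rng.beta(2.2, 5.0, 300),
    "Windows 10": rng.beta(1.8, 5.0, 300),
    "Mac":        rng.beta(2.5, 5.0, 300),
    "Linux":      rng.beta(2.1, 5.0, 300),
}

stat, p = kruskal(*risk.values())
print(f"H = {stat:.2f}, p = {p:.4f}")
# A small p-value indicates a statistically significant difference in the
# distribution of risk levels among the operating systems.
```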

Keywords: cybersecurity, Markov chain, non-parametric analysis, vulnerability, operating system

Procedia PDF Downloads 156
399 Hydrogen-Fueled Micro-Thermophotovoltaic Power Generator: Flame Regimes and Flame Stability

Authors: Hosein Faramarzpour

Abstract:

This work presents the optimum operational conditions for a hydrogen-based micro-scale power source, using a verified mathematical model including fluid dynamics and reaction kinetics. Thereafter, the stable operational flame regime is pursued as a key factor in optimizing the design of micro-combustors. The results show that, with increasing velocities, four H2 flame regimes develop in the micro-combustor, namely: 1) a periodic ignition-extinction regime, 2) a steady symmetric regime, 3) a pulsating asymmetric regime, and 4) a steady asymmetric regime. The first regime, which appears at an inlet velocity of 0.8 m/s, is a periodic ignition-extinction regime characterized by counter flows and tulip-shaped flames. For flow velocities above 0.2 m/s, the flame shifts downstream, and the combustion regime switches to a steady symmetric flame where the temperature increases considerably due to the increased rate of incoming energy. A further increase in flow velocity up to 1 m/s leads to the formation of a pulsating asymmetric flame, which is associated with pulses in various flame properties such as temperature and species concentration. Ultimately, when the inlet velocity reaches 1.2 m/s, the last regime is observed, and a steady asymmetric flame appears.

Keywords: thermophotovoltaic generator, micro combustor, micro power generator, combustion regimes, flame dynamic

Procedia PDF Downloads 62
398 Design and Development of Optical Sensor Based Ground Reaction Force Measurement Platform for GAIT and Geriatric Studies

Authors: K. Chethana, A. S. Guru Prasad, S. N. Omkar, B. Vadiraj, S. Asokan

Abstract:

This paper describes the ab-initio design, development and calibration results of an Optical Sensor Ground Reaction Force Measurement Platform (OSGRFP) for gait and geriatric studies. The developed system employs an array of FBG sensors to measure the respective ground reaction forces along all three axes (X, Y and Z), which are perpendicular to each other. The novelty of this work is twofold. One aspect is its ability to resolve the tri-axial resultant forces during stance into the respective pure-axis loads, and the other is the applicability of the inherently advantageous FBG sensors, which are most suitable for biomechanical instrumentation. To validate the response of the FBG sensors installed in the OSGRFP and to measure the cross-sensitivity to forces applied in other directions, load sensors with indicators are used. Further, relevant mathematical formulations are presented for extracting the respective ground reaction forces from the wavelength shifts/strains of the FBG sensors on the OSGRFP. The results of this device have implications for understanding foot function, identifying issues in the gait cycle and measuring discrepancies between the left and right foot. The device also provides a method to quantify and compare the relative postural stability of different subjects under test, which has implications in post-surgical rehabilitation, geriatrics and optimizing training protocols for sports personnel.
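
The abstract does not reproduce its formulations, but a minimal, hedged sketch of a typical FBG processing chain is given below: wavelength shift to strain via Δλ/λ ≈ (1 − pₑ)ε, then strains on the three sensing axes mapped to forces through a calibration matrix. The photo-elastic coefficient and the calibration matrix values are illustrative assumptions, not the paper's calibration results:

```python
import numpy as np

lambda_B = 1550.0e-9        # Bragg wavelength (m), assumed
p_e = 0.22                  # typical effective photo-elastic coefficient, assumed

# Measured wavelength shifts on the three sensing axes (m), hypothetical values.
delta_lambda = np.array([0.030e-9, 0.012e-9, 0.004e-9])
strain = delta_lambda / (lambda_B * (1.0 - p_e))        # axial strains

# Illustrative calibration matrix mapping strains (microstrain) on the three
# sensing axes to forces (N); in practice it is obtained from known loads
# applied along each axis during calibration.
C = np.array([[12.0,  0.5,  0.3],
              [ 0.4, 11.5,  0.6],
              [ 0.2,  0.3, 15.0]])

forces = C @ (strain * 1e6)                             # Fx, Fy, Fz in N
print("Fx, Fy, Fz [N]:", np.round(forces, 1))
```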

Keywords: balance and stability, gait analysis, FBG applications, optical sensor ground reaction force platform

Procedia PDF Downloads 371
397 Interdisciplinarity as a Regular Pedagogical Practice in the Classrooms

Authors: Catarina Maria Neto Da Cruz, Ana Maria Reis D’Azevedo Breda

Abstract:

The world is changing and, consequently, young people need more sophisticated tools and skills to deal with the world's complexity. The Organisation for Economic Co-operation and Development Learning Framework 2030 suggests interdisciplinary knowledge as a principle for the future of education systems. In the Portuguese curricular document on the profile of students leaving compulsory education, critical thinking and creative thinking are pointed out as skills to be developed, which imply the interconnection of different knowledge, applying it in different contexts and learning areas. Unlike primary school teachers, teachers specialized in a specific area have more difficulty implementing interdisciplinary approaches in the classroom and, despite the effort, interdisciplinarity is not a common practice in schools. Statements like "Mathematics is everywhere" are unquestionable; however, many mathematics teachers show difficulties in presenting such evidence in their classes. Mathematical modelling and problems in real contexts are promising for the development of interdisciplinary pedagogical practices, and in Portugal there is a continuous training offer to contribute to the development of teachers in terms of their pedagogical approaches. But when teachers find themselves in the classroom, without support, do they feel able to implement interdisciplinary practices? In this communication, we try to approach this issue through a case study involving a group of mathematics teachers who attended training aimed at stimulating interdisciplinary practices in real contexts, namely related to the COVID-19 pandemic.

Keywords: education, mathematics, teacher training, interdisciplinarity

Procedia PDF Downloads 59
396 The Efficiency Analysis in the Health Sector: Marmara Region

Authors: Hale Kirer Silva Lecuna, Beyza Aydin

Abstract:

Health is one of the main components of human capital and sustainable development, and it is very important for economic growth. Health economics, an indisputable part of the science of economics, has five stages in general: health and development, financing of health services, economic regulation in health, allocation of resources, and efficiency of health services. A well-developed and efficient health sector plays a major role in increasing the level of development of countries. The most crucial pillars of the health sector are the hospitals, which are divided into public and private. The main purpose of hospitals is to provide more efficient services; therefore, the aim is to meet patients' satisfaction by increasing service quality. Health-related studies in Turkey date back to the Ottoman and Seljuk Empires. In the recent past, Turkey implemented 'Health Sector Transformation Programs' under different titles between 2003 and 2010. Our aim in this paper is to measure how effective these transformation programs are for the health sector, to see how much they have increased the efficiency of hospitals over the years, to examine the return on investment, to make comments and suggestions on the results, and to provide a new reference for the literature. Within this framework, the public and private hospitals in Balıkesir, Bilecik, Bursa, Çanakkale, Edirne, Istanbul, Kırklareli, Kocaeli, Sakarya, Tekirdağ and Yalova are examined using Data Envelopment Analysis (DEA) for the years between 2000 and 2019. DEA is a linear programming-based technique, which gives relatively good results in multivariate studies. DEA basically estimates an efficiency frontier and makes a comparison against it. Constant returns to scale and variable returns to scale are the two most commonly used DEA models, and both can be input- or output-oriented. To analyze the data, the number of personnel, number of specialist physicians, number of practitioners, number of beds and number of examinations are used as input variables, and the number of surgeries, the in-patient ratio and the crude mortality rate as output variables. Eleven hospitals belonging to the Marmara region were included in the study. These hospitals worked effectively in only 7 provinces (Balıkesir, Bilecik, Bursa, Edirne, İstanbul, Kırklareli, Yalova) in 2001, when no transformation program was implemented. After the transformation program was implemented, for example in 2014 and 2016, 10 hospitals (Balıkesir, Bilecik, Bursa, Çanakkale, Edirne, İstanbul, Kocaeli, Kırklareli, Tekirdağ, Yalova) were found to be effective. In 2015, ineffective results were observed for Sakarya, Tekirdağ and Yalova. However, since these values are closer to 1 after the transformation program, we can say that the transformation program has positive effects. For Sakarya alone, no effective results have been achieved in any year. Looking at the results in general, they show that the transformation program has a positive effect on the effectiveness of hospitals.
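
To make the DEA step concrete, the sketch below solves the input-oriented, constant-returns-to-scale (CCR) envelopment model with a linear programming solver; the hospital inputs and outputs are hypothetical placeholders rather than the Marmara-region data:

```python
import numpy as np
from scipy.optimize import linprog

# Rows = hospitals (DMUs); inputs: personnel, specialists, beds; outputs: surgeries, in-patient ratio.
X = np.array([[120, 30, 200], [150, 40, 260], [90, 25, 150], [200, 55, 300]], float)
Y = np.array([[3000, 0.75], [3400, 0.70], [2500, 0.80], [3800, 0.65]], float)
n, m = X.shape          # number of DMUs, number of inputs
s = Y.shape[1]          # number of outputs

def ccr_efficiency(o):
    # Variables z = [theta, lambda_1..lambda_n]; minimize theta.
    c = np.r_[1.0, np.zeros(n)]
    # Input constraints:  sum_j lambda_j * x_ij - theta * x_io <= 0
    A_in = np.hstack([-X[o].reshape(m, 1), X.T])
    # Output constraints: -sum_j lambda_j * y_rj <= -y_ro
    A_out = np.hstack([np.zeros((s, 1)), -Y.T])
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(m), -Y[o]]
    bounds = [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]     # efficiency score theta (1.0 means efficient)

for o in range(n):
    print(f"hospital {o + 1}: efficiency = {ccr_efficiency(o):.3f}")
```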

Keywords: data envelopment analysis, efficiency, health sector, Marmara region

Procedia PDF Downloads 103
395 Project HDMI: A Hybrid-Differentiated Mathematics Instruction for Grade 11 Senior High School Students at Las Piñas City Technical Vocational High School

Authors: Mary Ann Cristine R. Olgado

Abstract:

Diversity in the classroom might make it difficult to promote individualized learning, but differentiated instruction that caters to students' various learning preferences may prove to be beneficial. Hence, this study examined the effectiveness of Hybrid-Differentiated Mathematics Instruction (HDMI) in improving students' academic performance in Mathematics. It employed a quasi-experimental research design using a comparative analysis of two groups: the experimental and control groups. The learning styles of the students were identified using the Grasha-Riechmann Student Learning Style Scale (GRSLSS), which served as the basis for designing differentiated action plans in Mathematics. In addition, adapted survey questionnaires, pre-tests, and post-tests were used to gather information and were analyzed using descriptive and correlational statistics to find the relationship between variables. The experimental group received differentiated instruction for a month, while the control group received traditional teaching instruction. The study found that Hybrid-Differentiated Mathematics Instruction (HDMI) improved the academic performance of Grade 11 TVL students, with the experimental group performing better than the control group. The program effectively tailored the teaching methods to meet the diverse learning needs of the students, fostering a deeper understanding of mathematical concepts in Statistics & Probability, both within and beyond the classroom.

Keywords: differentiated instruction, hybrid, learning styles, academic performance

Procedia PDF Downloads 26
394 Gas Network Noncooperative Game

Authors: Teresa Azevedo Perdicoúlis, Paulo Lopes Dos Santos

Abstract:

The conceptualisation of the problem of network optimisation as a noncooperative game sets up a holistic interactive approach that brings together different network features (e.g., compressor stations, sources, and pipelines, in the gas context) whose optimisation objectives are different, so that a single optimisation procedure becomes possible without having to feed results from diverse software packages into each other. A mathematical model of this type, where independent entities take action, offers the ideal modularity and subsequent problem decomposition with a view to designing a decentralised algorithm to optimise the operation and management of the network. In a game framework, compressor stations and sources are understood as players which communicate through the network connectivity constraints, i.e., the pipeline model. That is, in a scheme similar to tâtonnement, the players appoint their best settings and then interact to check for network feasibility. The devolved degree of network infeasibility informs the players about the 'quality' of their settings, and this two-phase iterative scheme is repeated until a global optimum is obtained. Due to network transients, the optimisation needs to be assessed at different points of the control interval. For this reason, the proposed approach to optimisation has two stages: (i) the first stage computes along the period of optimisation in order to fulfil the requirement just mentioned; (ii) the second stage is initialised with the solution found at the first stage and computes at the end of the period of optimisation to rectify the solution found at the first stage. The proposed scheme is proven correct on an abstract prototype and three example networks.

Keywords: connectivity matrix, gas network optimisation, large-scale, noncooperative game, system decomposition

Procedia PDF Downloads 124
393 Wind Velocity Climate Zonation Based on Observation Data in Indonesia Using Cluster and Principal Component Analysis

Authors: I Dewa Gede Arya Putra

Abstract:

Principal Component Analysis (PCA) is a mathematical procedure that uses an orthogonal transformation to convert a set of possibly correlated variables into a set of components that are uncorrelated with each other. This can have an impact on the clustering of wind speed characteristics in Indonesia. This study uses 30 years of daily wind speed observations from the Site Meteorological Station network. Multicollinearity tests were also performed on all of these data before clustering with PCA. The results show that the four main components account for more than 80% of the total variance and are used for clustering. Division into clusters using Ward's method yielded 3 types of clusters. Cluster 1 covers the central part of Sumatra Island, northern Kalimantan, northern Sulawesi, and northern Maluku, with a climatological wind speed pattern that has no annual cycle and weak speeds throughout the year, ranging from 0 to 1.5 m/s. Cluster 2 covers the northern part of Sumatra Island, South Sulawesi, Bali, and northern Papua, with a climatological wind speed pattern that has annual cycle variations with low speeds ranging from 1 to 3 m/s. Cluster 3 covers the eastern part of Java Island, the Southeast Nusa Islands, and the southern Maluku Islands, with a climatological wind speed pattern that has annual cycle variations with high speeds ranging from 1 to 4.5 m/s.
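
A sketch of the PCA plus Ward's-method workflow is shown below on station-wise wind speed features; the feature matrix here is synthetic (stations by monthly mean wind speeds), whereas the real input would be the 30-year observation records:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.cluster import AgglomerativeClustering

rng = np.random.default_rng(7)
stations = 60
X = rng.gamma(shape=2.0, scale=1.2, size=(stations, 12))   # 12 monthly mean speeds (m/s), synthetic

X_std = StandardScaler().fit_transform(X)
pca = PCA(n_components=0.80)            # keep components explaining >= 80% of the variance
scores = pca.fit_transform(X_std)
print("components kept:", pca.n_components_,
      "explained variance:", round(float(pca.explained_variance_ratio_.sum()), 3))

labels = AgglomerativeClustering(n_clusters=3, linkage="ward").fit_predict(scores)
for k in range(3):
    print(f"cluster {k + 1}: {int(np.sum(labels == k))} stations")
```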

Keywords: PCA, cluster, Ward's method, wind speed

Procedia PDF Downloads 166
392 Multifluid Computational Fluid Dynamics Simulation for Sawdust Gasification inside an Industrial Scale Fluidized Bed Gasifier

Authors: Vasujeet Singh, Pruthiviraj Nemalipuri, Vivek Vitankar, Harish Chandra Das

Abstract:

For the correct prediction of thermal and hydraulic performance (bed voidage, suspension density, pressure drop, heat transfer, and combustion kinetics), one should incorporate the correct parameters in the computational fluid dynamics simulation of a fluidized bed gasifier. Given the scarcity of fossil fuels and the need to fulfil the energy demand of an increasing population, researchers need to shift their attention to alternatives to fossil fuels. The current research work focuses on the hydrodynamic behavior and gasification of sawdust inside a 2D industrial-scale FBG using the Eulerian-Eulerian multifluid model. The present numerical model is validated with experimental data. Further, this model is extended for the prediction of the gasification characteristics of sawdust by incorporating eight heterogeneous reactions (moisture release, volatile cracking, tar cracking, tar oxidation, char combustion, CO₂ gasification, steam gasification, and methanation) and five homogeneous reactions (oxidation of CO, CH₄ and H₂, and the forward and backward water gas shift (WGS) reactions). In the results section, the composition of the gasification products is analyzed, along with the hydrodynamics of the sawdust and sand phases, the heat transfer between the gas, sand and sawdust, and the rates of the different homogeneous and heterogeneous reactions along the height of the domain.

Keywords: devolatilization, Eulerian-Eulerian, fluidized bed gasifier, mathematical modelling, sawdust gasification

Procedia PDF Downloads 67
391 Seismic Assessment of an Existing Dual System RC Buildings in Madinah City

Authors: Tarek M. Alguhane, Ayman H. Khalil, M. N. Fayed, Ayman M. Ismail

Abstract:

The 15-storey RC building studied in this paper is representative of a modern building type constructed in Madinah City in Saudi Arabia about 10 years ago. These buildings mostly consist of a reinforced concrete skeleton, i.e., columns, beams and flat slabs, as well as shear walls in the stair and elevator areas, arranged so as to provide a resistance system for lateral loads (wind and earthquake loads). In this study, the dynamic properties of the 15-storey RC building were identified using ambient motions recorded at several spatially distributed locations within the building. After updating the mathematical models of the building with the experimental results, a three-dimensional pushover analysis (nonlinear static analysis) was carried out using the SAP2000 software, incorporating inelastic material properties for concrete, infill and steel. The effect of modelling the building with and without infill walls on the performance point, as well as on the capacity and demand spectra due to the earthquake design spectrum function for the Madinah area, has been investigated. The response modification factor (R) for the 15-storey RC building is evaluated from the capacity and demand spectra (ATC-40). The purpose of this analysis is to evaluate the expected performance of structural systems by estimating strength and deformation demands in design and comparing these demands to the available capacities at the performance levels of interest. The results are summarized and discussed.

Keywords: seismic assessment, pushover analysis, ambient vibration, modal update

Procedia PDF Downloads 365
390 Detection and Classification of Mammogram Images Using Principle Component Analysis and Lazy Classifiers

Authors: Rajkumar Kolangarakandy

Abstract:

Feature extraction and selection is the primary part of any mammogram classification algorithm. The choice of features, attributes or measurements has an important influence on any classification system. Discrete Wavelet Transformation (DWT) coefficients are one of the prominent features for representing images in the frequency domain. The features obtained after the decomposition of mammogram images using wavelet transformations have a high dimension. Even though the features are high-dimensional, they are highly correlated and redundant in nature. Dimensionality reduction techniques play an important role in selecting the optimum number of features from such high-dimensional, highly correlated data. PCA is a mathematical tool that reduces the dimensionality of the data while retaining most of the variation in the dataset. In this paper, a multilevel classification of mammogram images using reduced discrete wavelet transformation coefficients and lazy classifiers is proposed. The classification is accomplished at two different levels. At the first level, mammogram ROIs extracted from the dataset are classified as normal or abnormal. At the second level, all the abnormal mammogram ROIs are further classified into benign and malignant. A further classification is also accomplished based on the variation in structure and intensity distribution of the images in the dataset. The lazy classifiers KStar, IBL and LWL are used for classification. The classification results obtained with the reduced feature set are highly promising, and the results are also compared with the performance obtained without dimensionality reduction.
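
A rough sketch of this pipeline is shown below: wavelet decomposition of each ROI, PCA on the flattened coefficients, then an instance-based (lazy) classifier. KStar, IBL and LWL are Weka classifiers, so scikit-learn's k-nearest-neighbour classifier stands in for them here, and the ROIs and labels are synthetic placeholders:

```python
import numpy as np
import pywt
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(3)
rois = rng.random((100, 64, 64))          # 100 synthetic 64x64 ROIs (placeholders)
labels = rng.integers(0, 2, 100)          # 0 = normal, 1 = abnormal (level-1 task)

def dwt_features(img, wavelet="db4", level=2):
    # 2-D wavelet decomposition, flattened into one feature vector per ROI.
    coeffs = pywt.wavedec2(img, wavelet, level=level)
    arr, _ = pywt.coeffs_to_array(coeffs)
    return arr.ravel()

X = np.vstack([dwt_features(r) for r in rois])
clf = make_pipeline(PCA(n_components=20),           # dimensionality reduction
                    KNeighborsClassifier(n_neighbors=5))   # lazy-classifier stand-in
scores = cross_val_score(clf, X, labels, cv=5)
print("5-fold cross-validated accuracy:", round(float(scores.mean()), 3))
```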

Keywords: PCA, wavelet transformation, lazy classifiers, Kstar, IBL, LWL

Procedia PDF Downloads 313
389 Hybrid Intelligent Optimization Methods for Optimal Design of Horizontal-Axis Wind Turbine Blades

Authors: E. Tandis, E. Assareh

Abstract:

Designing the optimal shape of MW wind turbine blades has been carried out in a number of cases through evolutionary algorithms associated with mathematical modeling (Blade Element Momentum Theory). Evolutionary algorithms, among the optimization methods, enjoy many advantages, particularly in stability. However, they usually need a large number of function evaluations. Since there are a large number of local extremes, the optimization method has to find the global extreme accurately. The present paper introduces a new population-based hybrid algorithm called the Genetic-Based Bees Algorithm (GBBA). This algorithm is meant to design the optimal shape for MW wind turbine blades. The current method employs crossover and neighborhood searching operators taken from the Genetic Algorithm (GA) and the Bees Algorithm (BA), respectively, to provide a method with good performance in accuracy and convergence speed. Different blade designs, twenty-one to be exact, were considered based on the chord length, twist angle and tip speed ratio using GA results. They were compared with BA and GBBA optimum design results targeting the power coefficient and solidity. The results suggest that the final shape, obtained by the proposed hybrid algorithm, performs better than either BA or GA. Furthermore, the accuracy and convergence speed increase when the GBBA is employed.

Keywords: blade design, optimization, genetic algorithm, bees algorithm, genetic-based bees algorithm, large wind turbine

Procedia PDF Downloads 282
388 Fuzzy Optimization Multi-Objective Clustering Ensemble Model for Multi-Source Data Analysis

Authors: C. B. Le, V. N. Pham

Abstract:

In modern data analysis, multi-source data appears more and more in real applications. Multi-source data clustering has emerged as an important issue in the data mining and machine learning community. Different data sources provide information about different aspects of the data. Therefore, multi-source data linking is essential to improve clustering performance. However, in practice, multi-source data is often heterogeneous, uncertain, and large. This is considered a major challenge of multi-source data. Ensemble learning is a versatile machine learning model in which learning techniques can work in parallel on big data. Clustering ensembles have been shown to outperform any standard clustering algorithm in terms of accuracy and robustness. However, most of the traditional clustering ensemble approaches are based on a single-objective function and single-source data. This paper proposes a new clustering ensemble method for multi-source data analysis. The fuzzy optimized multi-objective clustering ensemble method is called FOMOCE. Firstly, a clustering ensemble mathematical model based on the structure of the multi-objective clustering function, multi-source data, and dark knowledge is introduced. Then, rules for extracting dark knowledge from the input data, clustering algorithms, and base clusterings are designed and applied. Finally, a clustering ensemble algorithm is proposed for multi-source data analysis. The experiments were performed on standard sample data sets. The experimental results demonstrate the superior performance of the FOMOCE method compared to existing clustering ensemble methods and multi-source clustering methods.

Keywords: clustering ensemble, multi-source, multi-objective, fuzzy clustering

Procedia PDF Downloads 144
387 High Accuracy Analytic Approximations for Modified Bessel Functions I₀(x)

Authors: Pablo Martin, Jorge Olivares, Fernando Maass

Abstract:

A method to obtain analytic approximations for special functions of interest in engineering and physics is described here. Each approximate function is valid for every positive value of the variable, and the accuracy is high and increases with the number of parameters to be determined. The general technique is shown through an application to the modified Bessel function of order zero, I₀(x). The form and the calculation of the parameters are performed with the simultaneous use of the power series and the asymptotic expansion. As in the Padé method, rational functions are used, but here they are combined with other elementary functions such as fractional powers, hyperbolic, trigonometric and exponential functions, and others. The elementary function is determined by considering that the approximate function should be a bridge between the power series and the asymptotic expansion. In the case of the I₀(x) function, two analytic approximations have already been determined. The simplest one is (1+x²/4)⁻¹/⁴(1+0.24273x²)cosh(x)/(1+0.43023x²). Its parameters were determined using the leading term of the asymptotic expansion and two coefficients of the power series, and the maximum relative error is 0.05. In the second case, two terms of the asymptotic expansion and four coefficients of the power series were used, and the maximum relative error is 0.001 at x ≈ 9.5. Approximations with much higher accuracy will also be shown. In conclusion, a new technique is described to obtain analytic approximations to some functions of interest in the sciences, such that they have high accuracy, they are valid for every positive value of the variable, they can be integrated and differentiated like the usual functions, and furthermore they can be calculated easily, even with a regular pocket calculator.
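
The simplest approximation quoted above can be checked numerically against SciPy's modified Bessel function; the short sketch below evaluates its relative error over a range of positive x (the abstract reports a maximum relative error of about 0.05):

```python
import numpy as np
from scipy.special import i0

def i0_approx(x):
    # Approximation quoted in the abstract:
    # I0(x) ~= (1 + x^2/4)^(-1/4) * (1 + 0.24273 x^2) * cosh(x) / (1 + 0.43023 x^2)
    return (1 + x**2 / 4) ** (-0.25) * (1 + 0.24273 * x**2) * np.cosh(x) / (1 + 0.43023 * x**2)

x = np.linspace(1e-6, 30.0, 20000)
rel_err = np.abs(i0_approx(x) - i0(x)) / i0(x)
print(f"maximum relative error on (0, 30]: {rel_err.max():.4f} at x = {x[rel_err.argmax()]:.2f}")
```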

Keywords: analytic approximations, mathematical-physics applications, quasi-rational functions, special functions

Procedia PDF Downloads 225
386 Automated Prediction of HIV-associated Cervical Cancer Patients Using Data Mining Techniques for Survival Analysis

Authors: O. J. Akinsola, Yinan Zheng, Rose Anorlu, F. T. Ogunsola, Lifang Hou, Robert Leo-Murphy

Abstract:

Cervical cancer (CC) is the 2nd most common cancer among women living in low- and middle-income countries, with no associated symptoms during its early stages. With the advancement of innovative medical research, numerous preventive measures are being utilized, but the incidence of cervical cancer cannot be curtailed by the application of screening tests alone. The mortality associated with invasive cervical cancer can be nipped in the bud through early-stage detection. This study selected an array of top feature selection techniques with the aim of developing a model that could validly identify the risk factors of cervical cancer. A retrospective clinic-based cohort study was conducted on 178 HIV-associated cervical cancer patients at Lagos University Teaching Hospital, Nigeria (U54 data repository) in April 2022. The outcome measure was the automated prediction of HIV-associated cervical cancer cases, while the predictor variables included demographic information, reproductive history, birth control, sexual history, and cervical cancer screening history for invasive cervical cancer. The proposed technique was assessed with the R and Python programming languages to produce the model, utilizing classification algorithms for the detection and diagnosis of cervical cancer. Four machine learning classification algorithms were used: Logistic Regression (LR), Decision Tree (DT), Random Forest (RF), and K-Nearest Neighbor (KNN). The data were split into training and testing sets in an 80:20 ratio, the numerical features were standardized, and hyperparameter tuning was carried out to train and test the models. Fitting features were selected for the detection and diagnosis of cervical cancer from the characteristics in the dataset using the contributions of the various selection methods for classifying cervical cancer into healthy or diseased status. The mean age of patients was 49.7±12.1 years, the mean age at pregnancy was 23.3±5.5 years, the mean age at first sexual experience was 19.4±3.2 years, and the mean BMI was 27.1±5.6 kg/m². A larger percentage of the patients were married (62.9%), and most of them had at least two sexual partners (72.5%). Age of patients (OR=1.065, p<0.001**), marital status (OR=0.375, p=0.011**), number of pregnancy live-births (OR=1.317, p=0.007**), and use of birth control pills (OR=0.291, p=0.015**) were found to be significantly associated with HIV-associated cervical cancer. With the top ten features (variables) considered in the analysis, RF achieved an overall model performance of 72.0% accuracy, 84.6% precision, 84.6% recall and an F1-score of 74.0%, while LR achieved 74.0% accuracy, 70.0% precision, 70.0% recall and an F1-score of 70.0%. The RF model identified 10 features predictive of developing cervical cancer. The age of patients was considered the most important risk factor, followed by the number of pregnancy live-births, marital status, and use of birth control pills. The study shows that data mining techniques could be used to identify women living with HIV at high risk of developing cervical cancer in Nigeria and other sub-Saharan African countries.
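
A minimal sketch of the modelling workflow described above (80:20 split, standardized numeric features, logistic regression and random forest evaluated with accuracy, precision, recall and F1) is given below; it uses a synthetic stand-in dataset, not the U54 clinical repository data:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, precision_score, recall_score, f1_score

# Synthetic stand-in: 178 patients, 10 candidate risk factors.
X, y = make_classification(n_samples=178, n_features=10, n_informative=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

models = {
    "LR": make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000)),
    "RF": RandomForestClassifier(n_estimators=300, random_state=0),
}
for name, model in models.items():
    y_pred = model.fit(X_tr, y_tr).predict(X_te)
    print(name,
          "acc:", round(accuracy_score(y_te, y_pred), 3),
          "prec:", round(precision_score(y_te, y_pred), 3),
          "rec:", round(recall_score(y_te, y_pred), 3),
          "f1:", round(f1_score(y_te, y_pred), 3))
# Feature-importance ranking (e.g., patient age first) would come from
# models["RF"].feature_importances_ when fitted on the real predictors.
```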

Keywords: associated cervical cancer, data mining, random forest, logistic regression

Procedia PDF Downloads 54