Search results for: optimal operating parameters
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 12943

10783 Optimization Approach to Integrated Production-Inventory-Routing Problem for Oxygen Supply Chains

Authors: Yena Lee, Vassilis M. Charitopoulos, Karthik Thyagarajan, Ian Morris, Jose M. Pinto, Lazaros G. Papageorgiou

Abstract:

With globalisation, the need to have better coordination of production and distribution decisions has become increasingly important for industrial gas companies in order to remain competitive in the marketplace. In this work, we investigate a problem that integrates production, inventory, and routing decisions in a liquid oxygen supply chain. The oxygen supply chain consists of production facilities, external third-party suppliers, and multiple customers, including hospitals and industrial customers. The product produced by the plants or sourced from the competitors, i.e., third-party suppliers, is distributed by a fleet of heterogeneous vehicles to satisfy customer demands. The objective is to minimise the total operating cost involving production, third-party, and transportation costs. The key decisions for production include production and inventory levels and the product amount from third-party suppliers. In contrast, the distribution decisions involve customer allocation, delivery timing, delivery amount, and vehicle routing. The optimisation of the coordinated production, inventory, and routing decisions is a challenging problem, especially when dealing with large-size problems. Thus, we present a two-stage procedure to solve the integrated problem efficiently. First, the problem is formulated as a mixed-integer linear programming (MILP) model by simplifying the routing component. The solution from the first-stage MILP model yields the optimal customer allocation, production and inventory levels, and delivery timing and amount. Then, we fix the previous decisions and solve the detailed routing problem. In the second stage, we propose a column generation scheme to address the computational complexity of the resulting detailed routing problem. A case study considering a real-life oxygen supply chain in the UK is presented to illustrate the capability of the proposed models and solution method. Furthermore, a comparison of the solutions from the proposed approach with the corresponding solutions provided by existing metaheuristic techniques (e.g., guided local search and tabu search algorithms) is presented to evaluate its efficiency.
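
To make the second-stage idea concrete, the sketch below illustrates a column-generation pricing step: given dual prices from a restricted master problem, candidate delivery routes with negative reduced cost are generated as new columns. The customers, costs, capacities, and dual values are hypothetical placeholders, not the authors' UK case-study data, and routes are fully enumerated only because the toy instance is tiny.

```python
import itertools
import math

# Hypothetical stage-1 output: customers fixed to a delivery day, with delivery amounts.
# Coordinates, demands, costs, and dual prices are illustrative only.
customers = {
    "hospital_A": {"xy": (2.0, 5.0), "amount": 4.0},
    "hospital_B": {"xy": (6.0, 1.0), "amount": 3.0},
    "industrial_C": {"xy": (8.0, 7.0), "amount": 6.0},
}
depot = (0.0, 0.0)
vehicle_capacity = 10.0
cost_per_km = 1.2                      # assumed transport cost rate
duals = {"hospital_A": 9.0, "hospital_B": 6.5, "industrial_C": 14.0}  # from the restricted master LP


def route_cost(stops):
    """Travel cost of depot -> stops -> depot (Euclidean distances)."""
    points = [depot] + [customers[c]["xy"] for c in stops] + [depot]
    return cost_per_km * sum(math.dist(a, b) for a, b in zip(points, points[1:]))


def pricing(duals):
    """Return candidate routes (columns) with negative reduced cost.

    Reduced cost = route cost - sum of duals of the customers served.
    In practice the pricing step is usually a resource-constrained shortest-path
    problem; full enumeration is only viable for tiny instances like this one.
    """
    columns = []
    names = list(customers)
    for r in range(1, len(names) + 1):
        for subset in itertools.permutations(names, r):
            load = sum(customers[c]["amount"] for c in subset)
            if load > vehicle_capacity:
                continue
            rc = route_cost(subset) - sum(duals[c] for c in subset)
            if rc < -1e-9:
                columns.append((rc, subset))
    return sorted(columns)


for rc, route in pricing(duals)[:5]:
    print(f"candidate route {route}: reduced cost = {rc:.2f}")
```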

Keywords: production planning, inventory routing, column generation, mixed-integer linear programming

Procedia PDF Downloads 112
10782 Thermo-Exergy Optimization of Gas Turbine Cycle with Two Different Regenerator Designs

Authors: Saria Abed, Tahar Khir, Ammar Ben Brahim

Abstract:

A thermo-exergy optimization of a gas turbine cycle with two different regenerator designs is established. A comparison was made between the performance of the two regenerators and their roles in improving the cycle efficiencies. The effect of operational parameters (the pressure ratio of the compressor, the ambient temperature, excess air, geometric parameters of the regenerators, etc.) on the thermal efficiencies, the exergy efficiencies, and the irreversibilities was studied using thermal balances and quantitative exergetic equilibrium for each component and for the whole system. The results are presented graphically using the EES software, followed by an appropriate discussion and conclusion.
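
As a rough illustration of the kind of cycle balance involved, the following sketch evaluates a cold-air-standard regenerative Brayton cycle for two assumed regenerator effectiveness values and estimates the regenerator irreversibility from the entropy it generates. All property values and efficiencies are illustrative assumptions, not the paper's EES model or regenerator geometries.

```python
import math

# Cold-air-standard regenerative Brayton cycle with assumed component efficiencies.
cp, gamma = 1.005, 1.4          # kJ/(kg K), for air
T0 = T1 = 298.15                # ambient / compressor inlet, K
T3 = 1300.0                     # turbine inlet temperature, K
rp = 10.0                       # compressor pressure ratio
eta_c, eta_t = 0.85, 0.88       # isentropic efficiencies (assumed)

def cycle(effectiveness):
    k = (gamma - 1.0) / gamma
    T2 = T1 * (1.0 + (rp**k - 1.0) / eta_c)            # compressor exit
    T4 = T3 * (1.0 - eta_t * (1.0 - rp**(-k)))          # turbine exit
    T5 = T2 + effectiveness * (T4 - T2)                  # air leaving regenerator (cold side)
    T6 = T4 - (T5 - T2)                                  # gas leaving regenerator (hot side)
    w_net = cp * ((T3 - T4) - (T2 - T1))                 # specific net work, kJ/kg
    q_in = cp * (T3 - T5)
    # entropy generated in the regenerator (constant pressure on each side)
    s_gen = cp * (math.log(T5 / T2) + math.log(T6 / T4))
    return w_net / q_in, T0 * s_gen                      # thermal efficiency, irreversibility kJ/kg

for eps in (0.70, 0.85):   # two regenerator designs with different effectiveness
    eta, irr = cycle(eps)
    print(f"effectiveness {eps:.2f}: eta_th = {eta:.3f}, regenerator irreversibility = {irr:.1f} kJ/kg")
```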

Keywords: exergy efficiency, gas turbine, heat transfer, irreversibility, optimization, regenerator, thermal efficiency

Procedia PDF Downloads 451
10781 Characterising the Performance Benefits of a 1/7-Scale Morphing Rotor Blade

Authors: Mars Burke, Alvin Gatto

Abstract:

Rotary-wing aircraft serve as indispensable components in the advancement of aviation, valued for their ability to operate in diverse and challenging environments without the need for conventional runways. This versatility makes them ideal for applications like environmental conservation, precision agriculture, emergency medical support, and rapid-response operations in rugged terrains. However, although highly maneuverable, rotary-wing platforms generally have lower aerodynamic efficiency than fixed-wing aircraft. This study seeks to improve aerodynamic performance by examining a 1/7th-scale rotor blade model with a NACA0012 airfoil using CROTOR software. The analysis focuses on optimal spanwise locations for separating the morphing and fixed blade sections at 85%, 90%, and 95% of the blade radius (r/R), with up to +20 degrees of twist incorporated into the design. Key performance metrics assessed include lift coefficient (CL), drag coefficient (CD), lift-to-drag ratio (CL/CD), Mach number, power, thrust coefficient, and Figure of Merit (FOM). Results indicate that the 0.90 r/R position is optimal for dividing the morphing and fixed sections, achieving a significant improvement of over 7% in both lift-to-drag ratio and FOM. These findings underscore the substantial impact that geometric modifications, through the inclusion of a morphing capability, can ultimately have on the overall performance of the rotor system and its rotational aerodynamics.
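
For readers unfamiliar with the hover metrics, the short sketch below shows how thrust coefficient, power coefficient, and Figure of Merit are typically computed from hover thrust and power; the rotor dimensions and the thrust/power values for the fixed and morphing cases are placeholders, not the CROTOR results.

```python
import math

# Illustrative hover-performance metrics for a small-scale rotor.
rho = 1.225            # air density, kg/m^3
R = 0.86               # rotor radius, m (assumed for a 1/7-scale blade)
omega = 170.0          # rotor speed, rad/s (assumed)
A = math.pi * R**2
tip_speed = omega * R

def coefficients(thrust_N, power_W):
    ct = thrust_N / (rho * A * tip_speed**2)
    cp = power_W / (rho * A * tip_speed**3)
    fom = ct**1.5 / (math.sqrt(2.0) * cp)   # figure of merit: ideal induced power / actual power
    return ct, cp, fom

# Hypothetical hover points at equal thrust; power values are invented for illustration.
cases = {"fixed blade": (95.0, 560.0), "morphing split at 0.90 r/R": (95.0, 520.0)}
for name, (thrust, power) in cases.items():
    ct, cp, fom = coefficients(thrust, power)
    print(f"{name}: CT = {ct:.5f}, CP = {cp:.6f}, FOM = {fom:.3f}")
```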

Keywords: rotary morphing, rotational aerodynamics, rotorcraft morphing, rotor blade, twist morphing

Procedia PDF Downloads 12
10780 Electrochemical Study of Prepared Cubic Fluorite Structured Titanium Doped Lanthanum Gallium Cerate Electrolyte for Low Temperature Solid Oxide Fuel Cell

Authors: Rida Batool, Faizah Altaf, Saba Nadeem, Afifa Aslam, Faisal Alamgir, Ghazanfar Abbas

Abstract:

Today, the need of the hour is to find alternative renewable energy resources in order to reduce the burden on fossil fuels and prevent alarming environmental degradation. The solid oxide fuel cell (SOFC) is considered a good alternative energy conversion device because it is environmentally benign and supplies energy on demand. The only drawback associated with SOFC is its high operating temperature. In order to reduce the operating temperature, different types of composite materials are prepared. In this work, a titanium-doped lanthanum gallium cerate (LGCT) composite is prepared through the co-precipitation method as an electrolyte and examined for low-temperature SOFCs (LTSOFCs). The structural properties are analyzed by X-ray diffractometry (XRD) and Fourier transform infrared (FTIR) spectrometry. The surface properties are investigated by scanning electron microscopy (SEM). The electrolyte LGCT is assigned the formula LGCTO₃ because it showed two phases, La.GaO and Ti.CeO₂. The average particle size is found to be (32 ± 0.9311) nm. An ionic conductivity of 0.073 S/cm is achieved at 650°C. Arrhenius plots are drawn to calculate the activation energy, which is found to be 2.96 eV. The maximum power density and current density achieved are 68.25 mW/cm² and 357 mA/cm², respectively, at 650°C with hydrogen. The prepared material shows excellent ionic conductivity at comparatively low temperature, which makes it a potentially good candidate for LTSOFCs.
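
The Arrhenius analysis mentioned above can be illustrated in a few lines: fitting ln(σT) against 1/T yields the activation energy from the slope. The conductivity values below are synthetic placeholders, not the measured LGCT data.

```python
import numpy as np

# Illustrative Arrhenius analysis for an oxide-ion electrolyte (synthetic data).
k_B = 8.617e-5                      # Boltzmann constant, eV/K
T = np.array([723.0, 773.0, 823.0, 873.0, 923.0])        # K (roughly 450-650 C)
sigma = np.array([0.004, 0.009, 0.019, 0.038, 0.073])     # S/cm (hypothetical)

# Fit ln(sigma*T) = ln(A) - Ea/(k_B * T)  ->  slope = -Ea/k_B
slope, intercept = np.polyfit(1.0 / T, np.log(sigma * T), 1)
Ea = -slope * k_B
print(f"estimated activation energy: {Ea:.2f} eV")
print(f"conductivity at 650 C (923 K): {sigma[-1]:.3f} S/cm")
```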

Keywords: solid oxide fuel cell, LGCTO₃, cerium composite oxide, ionic conductivity, low temperature electrolyte

Procedia PDF Downloads 108
10779 Theoretical Evaluation of Minimum Superheat, Energy and Exergy in a High-Temperature Heat Pump System Operating with Low GWP Refrigerants

Authors: Adam Y. Sulaiman, Donal F. Cotter, Ming J. Huang, Neil J. Hewitt

Abstract:

Suitable low global warming potential (GWP) refrigerants that conform to F-gas regulations are required to extend the operational envelope of high-temperature heat pumps (HTHPs) used for industrial waste heat recovery processes. The thermophysical properties and characteristics of these working fluids need to be assessed to provide a comprehensive understanding of operational effectiveness in HTHP applications. This paper presents the results of a theoretical simulation to investigate a range of low-GWP refrigerants and their suitability to supersede refrigerants HFC-245fa and HFC-365mfc. A steady-state thermodynamic model of a single-stage HTHP with an internal heat exchanger (IHX) was developed to assess system cycle characteristics at heat source temperatures between 50 and 80 °C and heat sink temperatures between 90 and 150 °C. A practical approach to maximize the operational efficiency was examined to determine the effects of regulating minimum superheat within the process and its subsequent influence on energetic and exergetic efficiencies. A comprehensive map of minimum superheat across the HTHP operating variables was used to assess specific tipping points in performance at 30 and 70 K temperature lifts. Based on initial results, the refrigerants HCFO-1233zd(E) and HFO-1336mzz(Z) were found to be closely aligned matches for refrigerants HFC-245fa and HFC-365mfc. The overall results show that effective performance for HCFO-1233zd(E) occurs between 5-7 K minimum superheat, and for HFO-1336mzz(Z) between 18-21 K, depending on the temperature lift. This work provides a method to optimize refrigerant selection based on operational indicators to maximize overall HTHP system performance.
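
A minimal sketch of the kind of cycle bookkeeping behind such a study is given below: a single-stage cycle with an IHX is evaluated at two assumed minimum-superheat settings. The specific enthalpies and isentropic efficiency are hypothetical placeholders, not property data for the refrigerants named in the paper.

```python
# Schematic single-stage heat-pump cycle with an internal heat exchanger (IHX).
# All specific enthalpies (kJ/kg) are hypothetical placeholders.

def heating_cop(h1, h2s, h3, ihx_gain, eta_is=0.75):
    """h1: evaporator outlet (with minimum superheat), h2s: isentropic discharge,
    h3: condenser outlet, ihx_gain: enthalpy transferred in the IHX (kJ/kg)."""
    h1b = h1 + ihx_gain                 # suction gas after the IHX
    w = (h2s - h1b) / eta_is            # actual specific compressor work
    h2 = h1b + w                        # actual discharge enthalpy
    q_cond = h2 - h3                    # heat delivered to the sink
    return q_cond / w

# Two hypothetical minimum-superheat settings at the same temperature lift.
for label, h1 in (("5 K superheat", 440.0), ("20 K superheat", 452.0)):
    print(f"{label}: heating COP = {heating_cop(h1, h2s=500.0, h3=300.0, ihx_gain=12.0):.2f}")
```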

Keywords: high-temperature heat pump, minimum superheat, energy & exergy efficiency, low GWP refrigerants

Procedia PDF Downloads 184
10778 Acoustic Emission Techniques in Monitoring Low-Speed Bearing Conditions

Authors: Faisal AlShammari, Abdulmajid Addali, Mosab Alrashed

Abstract:

It is widely acknowledged that bearing failures are the primary reason for breakdowns in rotating machinery. These failures are extremely costly, particularly in terms of lost production. Roller bearings are widely used in industrial machinery and need to be maintained in good condition to ensure the continuing efficiency, effectiveness, and profitability of the production process. The research presented here is an investigation of the use of acoustic emission (AE) to monitor bearing conditions at low speeds. Many machines, particularly large, expensive machines, operate at speeds below 100 rpm, and such machines are important to industry. However, the overwhelming proportion of studies have investigated the use of AE techniques for condition monitoring of higher-speed machines (typically several hundred rpm, or even higher). Few researchers have investigated the application of these techniques to low-speed machines (< 100 rpm). This paper addresses this omission and establishes which of the available AE techniques are suitable for the detection of incipient faults and the measurement of fault growth in low-speed bearings. The first objective of this programme was to assess the applicability of AE techniques to monitor low-speed bearings. It was found that the measured statistical parameters successfully monitored bearing conditions at low speeds (10-100 rpm). The second objective was to identify which commonly used statistical parameters derived from the AE signal (RMS, kurtosis, amplitude and counts) could identify the onset of a fault in the outer race. It was found that these parameters effectively identify the presence of a small fault seeded into the outer race. Also, it is concluded that rotational speed has a strong influence on the measured AE parameters but that they are entirely independent of the load under such load and speed conditions.
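
The statistical parameters named above are straightforward to compute; the sketch below evaluates RMS, kurtosis, amplitude, and threshold-crossing counts on a synthetic AE trace with one seeded burst. The sampling rate and threshold are assumptions, not the authors' acquisition settings.

```python
import numpy as np
from scipy.stats import kurtosis

# Synthetic 1 s AE record: background noise plus one short burst (a stand-in for a seeded fault).
fs = 1_000_000                                     # sample rate, Hz (assumed)
rng = np.random.default_rng(0)
signal = 0.01 * rng.standard_normal(fs)            # volts
signal[400_000:400_200] += 0.2 * np.hanning(200)   # synthetic AE burst

rms = np.sqrt(np.mean(signal**2))
kurt = kurtosis(signal, fisher=False)              # pure Gaussian noise gives ~3
amplitude = np.max(np.abs(signal))
threshold = 5.0 * rms
counts = int(np.sum((signal[1:] >= threshold) & (signal[:-1] < threshold)))  # threshold crossings

print(f"RMS = {rms:.4f} V, kurtosis = {kurt:.1f}, amplitude = {amplitude:.3f} V, counts = {counts}")
```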

Keywords: acoustic emission, condition monitoring, NDT, statistical analysis

Procedia PDF Downloads 248
10777 Optimal Pricing Based on Real Estate Demand Data

Authors: Vanessa Kummer, Maik Meusel

Abstract:

Real estate demand estimates are typically derived from transaction data. However, in regions with excess demand, transactions are driven by supply and therefore do not indicate what people are actually looking for. To estimate the demand for housing in Switzerland, search subscriptions from all important Swiss real estate platforms are used. These data do, however, suffer from missing information; for example, many users do not specify how many rooms they would like or what price they would be willing to pay. In economic analyses, it is often the case that only complete records are used. Usually, however, the proportion of complete records is rather small, which leads to most of the information being neglected. Moreover, the complete records may be strongly distorted, i.e., not representative of the whole sample. In addition, the reason that data is missing might itself also contain information, which is, however, ignored with that approach. An interesting issue is, therefore, whether, for economic analyses such as the one at hand, there is added value in using the whole data set with imputed missing values compared to using the usually small percentage of complete data (baseline). Also, it is interesting to see how different algorithms affect that result. The imputation of the missing data is done using unsupervised learning. Out of the numerous unsupervised learning approaches, the most common ones, such as clustering, principal component analysis, or neural network techniques, are applied. By training the model iteratively on the imputed data and, thereby, including the information of all data into the model, the distortion of the first training set (the complete data) vanishes. In a next step, the performance of the algorithms is measured. This is done by randomly creating missing values in subsets of the data, estimating those values with the relevant algorithms and several parameter combinations, and comparing the estimates to the actual data. After having found the optimal parameter set for each algorithm, the missing values are imputed. Using the resulting data sets, the next step is to estimate the willingness to pay for real estate. This is done by fitting price distributions for real estate properties with certain characteristics, such as the region or the number of rooms. Based on these distributions, survival functions are computed to obtain the functional relationship between characteristics and selling probabilities. Comparing the survival functions shows that estimates which are based on imputed data sets do not differ significantly from each other; however, the demand estimate that is derived from the baseline data does. This indicates that the baseline data set does not include all available information and is therefore not representative of the entire sample. Also, demand estimates derived from the whole data set are much more accurate than the baseline estimation. Thus, in order to obtain optimal results, it is important to make use of all available data, even though it involves additional procedures such as data imputation.
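
The evaluation step described above (randomly hiding known values, imputing them, and comparing with the truth) can be sketched as follows; the synthetic search-subscription matrix and the KNN imputer are illustrative stand-ins, not the Swiss platform data or the exact algorithms compared in the study.

```python
import numpy as np
from sklearn.impute import KNNImputer

# Synthetic "search subscription" matrix: rooms, maximum price, area.
rng = np.random.default_rng(1)
n = 500
rooms = rng.integers(1, 6, n).astype(float)
price = 1200.0 + 450.0 * rooms + rng.normal(0.0, 150.0, n)     # CHF/month, correlated with rooms
area = 25.0 + 22.0 * rooms + rng.normal(0.0, 8.0, n)            # m^2
X = np.column_stack([rooms, price, area])

# Randomly hide 20% of the price entries, keeping the truth aside for evaluation.
mask = rng.random(n) < 0.2
X_missing = X.copy()
X_missing[mask, 1] = np.nan

imputer = KNNImputer(n_neighbors=10)
X_imputed = imputer.fit_transform(X_missing)

rmse = np.sqrt(np.mean((X_imputed[mask, 1] - X[mask, 1]) ** 2))
print(f"imputation RMSE on hidden prices: {rmse:.1f} CHF")
```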

Keywords: demand estimate, missing-data imputation, real estate, unsupervised learning

Procedia PDF Downloads 285
10776 Bench Tests of Two-Stroke Opposed Piston Aircraft Diesel Engine under Propeller Characteristics Conditions

Authors: A. Majczak, G. Baranski, K. Pietrykowski

Abstract:

Due to the growing popularity of light aircraft, it has become necessary to develop aircraft engines for this type of construction. One engine configuration designed to increase efficiency and reduce weight is the opposed-piston engine. In such an engine, the combustion chamber is formed by two pistons moving in one cylinder. Engines of this type therefore run on a two-stroke cycle, so they have many advantages, such as high power and torque, high efficiency, and a favorable power-to-weight ratio. Tests of one of the available aircraft engines with an opposed-piston system fueled with diesel oil were carried out on an engine dynamometer equipped with an eddy current brake and the necessary measuring and testing equipment. In order to determine the basic parameters of the engine, the tests were carried out under partial load conditions for the following torque values: 40, 60, 80, 100 Nm. The rotational speed was changed from 1600 to 2500 rpm. Measurements were also taken at designated points of the propeller characteristics. During the tests, the engine torque, engine power, fuel consumption, intake manifold pressure, and oil pressure were recorded. On the basis of the measurements carried out for particular loads, the power curve and the hourly and specific fuel consumption curves were determined. Characteristics of charge pressure as a function of rotational speed, as well as power, torque, and hourly and specific fuel consumption curves for the propeller characteristics, were also prepared. The obtained characteristics make it possible to select the optimal points of engine operation.
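
The reduction from raw bench readings to the reported curves can be sketched as below: brake power and specific fuel consumption follow directly from torque, speed, and hourly fuel flow, while propeller-characteristic operating points follow an assumed square-law torque-speed relation. All numbers are placeholders, not the measured data from the tested engine.

```python
import math

# Hypothetical bench readings: (speed rpm, torque Nm, hourly fuel consumption kg/h)
measurements = [
    (1600, 80.0, 4.1),
    (1900, 80.0, 4.9),
    (2200, 80.0, 5.8),
    (2500, 80.0, 6.8),
]

for rpm, torque, fuel_kg_h in measurements:
    omega = 2.0 * math.pi * rpm / 60.0
    power_kW = torque * omega / 1000.0
    bsfc = 1000.0 * fuel_kg_h / power_kW          # brake specific fuel consumption, g/kWh
    print(f"{rpm} rpm: P = {power_kW:.1f} kW, BSFC = {bsfc:.0f} g/kWh")

# Propeller characteristic: required torque rises roughly with the square of speed,
# so designated operating points can be generated as T = k * n^2 (k assumed).
k = 80.0 / 2500.0**2
propeller_points = [(rpm, round(k * rpm**2, 1)) for rpm in (1600, 1900, 2200, 2500)]
print("propeller-law torque targets:", propeller_points)
```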

Keywords: aircraft, diesel, engine testing, opposed piston

Procedia PDF Downloads 154
10775 Covariate-Adjusted Response-Adaptive Designs for Semi-Parametric Survival Responses

Authors: Ayon Mukherjee

Abstract:

Covariate-adjusted response-adaptive (CARA) designs use the available responses to skew the treatment allocation in a clinical trial towards the treatment found at an interim stage to be best for a given patient's covariate profile. Extensive research has been done on various aspects of CARA designs with the patient responses assumed to follow a parametric model. However, the range of application for such designs is limited in real-life clinical trials, where the responses rarely fit a particular parametric form. On the other hand, robust estimates for the covariate-adjusted treatment effects are obtained under the parametric assumption. To balance these two requirements, designs are developed which are free from distributional assumptions about the survival responses, relying only on the assumption of proportional hazards for the two treatment arms. The proposed designs are developed by deriving two types of optimum allocation designs, and also by using a distribution function to link the past allocation, covariate and response histories to the present allocation. The optimal designs are based on biased coin procedures, with a bias towards the better treatment arm. These are the doubly-adaptive biased coin design (DBCD) and the efficient randomized adaptive design (ERADE). The treatment allocation proportions for these designs converge to the expected target values, which are functions of the Cox regression coefficients that are estimated sequentially. These expected target values are derived based on constrained optimization problems and are updated as information accrues with the sequential arrival of patients. The design based on the link function is derived using the distribution function of a probit model whose parameters are adjusted based on the covariate profile of the incoming patient. To apply such designs, the treatment allocation probabilities are sequentially modified based on the treatment allocation history, response history, previous patients' covariates and also the covariates of the incoming patient. Given this information, an expression is obtained for the conditional probability of allocating a patient to a treatment arm. Based on simulation studies, it is found that the ERADE is preferable to the DBCD when the main aim is to minimize the variance of the observed allocation proportion and to maximize the power of the Wald test for a treatment difference. However, the former procedure, being discrete, tends to be slower in converging towards the expected target allocation proportion. The link-function-based design achieves the highest skewness of patient allocation to the best treatment arm and thus ethically is the best design. Other comparative merits of the proposed designs have been highlighted and their preferred areas of application are discussed. It is concluded that the proposed CARA designs can be considered suitable alternatives to the traditional balanced randomization designs in survival trials in terms of the power of the Wald test, provided that response data are available during the recruitment phase of the trial to enable adaptations to the designs. Moreover, the proposed designs enable more patients to get treated with the better treatment during the trial, thus making the designs more ethically attractive to the patients. An existing clinical trial has been redesigned using these methods.

Keywords: censored response, Cox regression, efficiency, ethics, optimal allocation, power, variability

Procedia PDF Downloads 165
10774 Optimal 3D Deployment and Path Planning of Multiple Uavs for Maximum Coverage and Autonomy

Authors: Indu Chandran, Shubham Sharma, Rohan Mehta, Vipin Kizheppatt

Abstract:

Unmanned aerial vehicles are increasingly being explored as the most promising solution for disaster monitoring, assessment, and recovery. Current relief operations heavily rely on intelligent robot swarms to capture the damage caused, provide timely rescue, and create road maps for the victims. To perform these time-critical missions, efficient path planning that ensures quick coverage of the area is vital. This study aims to develop a technically balanced approach that provides maximum coverage of the affected area in minimum time using the optimal number of UAVs. A coverage trajectory is designed through area decomposition and task assignment. To perform an efficient and autonomous coverage mission, a solution to a TSP-based optimization problem using meta-heuristic approaches is designed to allocate waypoints to UAVs of different flight capacities. The study exploits multi-agent simulations like PX4-SITL and QGroundControl through the ROS framework and visualizes the dynamics of UAV deployment along different search paths in a 3D Gazebo environment. Through detailed theoretical analysis and simulation tests, we illustrate the optimality and efficiency of the proposed methodologies.
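
A minimal sketch of the waypoint allocation and routing idea is given below, using a greedy capacity-aware assignment and a nearest-neighbour TSP heuristic; the coordinates, capacities, and heuristic itself are simplifications for illustration, whereas the study relies on meta-heuristics within PX4-SITL/ROS simulations.

```python
import math

# Hypothetical waypoints from area decomposition and per-UAV waypoint capacities.
waypoints = [(1, 1), (2, 5), (4, 2), (6, 6), (7, 1), (8, 4), (3, 7), (9, 8)]
uav_capacity = {"uav_1": 5, "uav_2": 3}          # max waypoints per flight (assumed)
base = (0, 0)

def nearest_neighbour_tour(start, points):
    """Order a set of waypoints with the nearest-neighbour heuristic."""
    tour, remaining, current = [], list(points), start
    while remaining:
        nxt = min(remaining, key=lambda p: math.dist(current, p))
        tour.append(nxt)
        remaining.remove(nxt)
        current = nxt
    return tour

# Greedy allocation: farthest waypoints first, to the UAV with the most spare capacity.
ordered = sorted(waypoints, key=lambda p: math.dist(base, p), reverse=True)
assignment = {u: [] for u in uav_capacity}
for wp in ordered:
    u = max(uav_capacity, key=lambda k: uav_capacity[k] - len(assignment[k]))
    if len(assignment[u]) < uav_capacity[u]:
        assignment[u].append(wp)

for u, pts in assignment.items():
    tour = nearest_neighbour_tour(base, pts)
    length = sum(math.dist(a, b) for a, b in zip([base] + tour, tour))
    print(f"{u}: tour {tour}, length = {length:.1f}")
```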

Keywords: area coverage, coverage path planning, heuristic algorithm, mission monitoring, optimization, task assignment, unmanned aerial vehicles

Procedia PDF Downloads 215
10773 Preparation and Modeling Carbon Nanofibers as an Adsorbent to Protect the Environment

Authors: Maryam Ziaei, Saeedeh Rafiei, Leila Mivehi, Akbar Khodaparast Haghi

Abstract:

Carbon nanofibers possess properties that are rarely present in other types of carbon adsorbents, including a small cross-sectional area combined with a multitude of slit-shaped nanopores that are suitable for the adsorption of certain types of molecules. Because of these unique properties, such materials can be used for the selective adsorption of organic molecules. On the other hand, activated carbon fiber (ACF) has been widely applied as an effective adsorbent for micro-pollutants in recent years. ACF effectively adsorbs and removes a full spectrum of harmful substances. Although there are various methods of fabricating carbon nanofibres, electrospinning is perhaps the most versatile procedure. This technique has received great attention in recent decades because it is relatively simple, convenient, and low cost. Controlling the spinning process and achieving optimal conditions is important because it affects the fibres' physical properties, absorbency, and versatility for different industrial purposes. Modeling and simulation are suitable methods for achieving this. In this paper, activated carbon nanofibers were produced by electrospinning of a polyacrylonitrile solution. Stabilization, carbonization and activation of the electrospun nanofibers under optimized conditions were achieved, and mathematical modelling of the electrospinning process was done by focusing on the governing equations of electrified fluid jet motion (using FEniCS software). Experimental and theoretical results will be compared with each other in order to estimate the accuracy of the model. The simulation can provide the possibility of predicting essential parameters which affect the electrospinning process.

Keywords: carbon nanofibers, electrospinning, electrospinning modeling, simulation

Procedia PDF Downloads 287
10772 Calculational-Experimental Approach of Radiation Damage Parameters on VVER Equipment Evaluation

Authors: Pavel Borodkin, Nikolay Khrennikov, Azamat Gazetdinov

Abstract:

The problem of ensuring the integrity of VVER-type reactor equipment is now highly relevant in connection with the justification of the safety of NPP units and the extension of their service life to 60 years and more. First of all, it concerns old units with VVER-440 and VVER-1000. The justification of VVER equipment integrity depends on the reliability of the estimation of the degree of equipment damage. One of the mandatory requirements, providing the reliability of such estimation and also the evaluation of VVER equipment lifetime, is the monitoring of equipment radiation loading parameters. In this connection, there is a problem of justifying such normative parameters, used for the estimation of pressure vessel metal embrittlement, as the fluence and fluence rate (FR) of fast neutrons above 0.5 MeV. From the point of view of regulatory practice, a comparison of displacement per atom (DPA) and fast neutron fluence (FNF) above 0.5 MeV is of practical concern. In accordance with the Russian regulatory rules, neutron fluence F(E > 0.5 MeV) is a radiation exposure parameter used in steel embrittlement prediction under neutron irradiation. However, the DPA parameter is a more physically legitimate measure of neutron damage in Fe-based materials. If the DPA distribution in reactor structures is more conservative than the neutron fluence, this case should attract the attention of the regulatory authority. The purpose of this work was to show which radiation load parameters (fluence, DPA) on the VVER equipment should be under control, and to give reasonable estimations of such parameters over the volume of all equipment. The second task is to give a conservative estimation of each parameter, including its uncertainty. Results of recently obtained investigations allow testing of the conservatism of calculational predictions, and, as shown in the paper, the combination of ex-vessel measured data with calculated ones allows the assessment of unpredicted uncertainties that result from specific unique features of individual equipment of VVER reactors. Some results of calculational-experimental investigations are presented in this paper.

Keywords: equipment integrity, fluence, displacement per atom, nuclear power plant, neutron activation measurements, neutron transport calculations

Procedia PDF Downloads 157
10771 A Study on the Effect of Design Factors of Slim Keyboard’s Tactile Feedback

Authors: Kai-Chieh Lin, Chih-Fu Wu, Hsiang Ling Hsu, Yung-Hsiang Tu, Chia-Chen Wu

Abstract:

With the rapid development of computer technology, the design of computers and keyboards moves towards a trend of slimness. The change in mobile input devices directly influences users' behavior. Although multi-touch applications allow entering text through a virtual keyboard, the performance, feedback, and comfort of the technology are inferior to a traditional keyboard, and while manufacturers have launched mobile touch keyboards and projection keyboards, their performance has not been satisfactory. Therefore, this study discussed the design factors of slim pressure-sensitive keyboards. The factors were evaluated with an objective evaluation (accuracy and speed) and a subjective evaluation (operability, recognition, feedback, and difficulty) depending on the shape (circle, rectangle, and L-shaped), thickness (flat, 3 mm, and 6 mm), and force (35±10 g, 60±10 g, and 85±10 g) of the keys. Moreover, MANOVA and the Taguchi method (regarding signal-to-noise ratios) were applied to find the optimal level of each design factor. The research participants were divided into two groups by their typing speed (30 words/minute). Considering the multitude of variables and levels, the experiments were implemented using a fractional factorial design. A representative model of the research samples was established for input task testing. The findings of this study showed that participants with low typing speed primarily relied on vision to recognize the keys, while those with high typing speed relied on tactile feedback, which was affected by the thickness and force of the keys. In the objective and subjective evaluation, a combination of keyboard design factors that might result in higher performance and satisfaction was identified (L-shaped, 3 mm, and 60±10 g) as the optimal combination. The learning curve was analyzed to make a comparison with a traditional standard keyboard and to investigate the influence of user experience on keyboard operation. The research results indicated that the optimal combination provided input performance inferior to that of a standard keyboard. The results could serve as a reference for the development of related products in industry and for comprehensive application to touch devices and input interfaces that interact with people.
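
The larger-the-better signal-to-noise ratio used in such a Taguchi analysis can be sketched as follows; the replicate scores for the key-force levels are invented for illustration and are not the study's measurements.

```python
import numpy as np

def sn_larger_is_better(y):
    """Taguchi larger-the-better S/N ratio: -10 log10(mean(1/y^2))."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(1.0 / y**2))

# Hypothetical typing-accuracy replicates (%) for the three key-force levels.
force_levels = {"35 g": [88, 90, 86], "60 g": [94, 95, 93], "85 g": [91, 89, 90]}
for level, scores in force_levels.items():
    print(f"force {level}: S/N = {sn_larger_is_better(scores):.2f} dB")
```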

Keywords: input performance, mobile device, slim keyboard, tactile feedback

Procedia PDF Downloads 299
10770 Bone Mineral Density in Type 2 Diabetes Mellitus Postmenopausal Egyptian Female Patients: Correlation with Fetuin-A Level and Metabolic Parameters

Authors: Ahmed A. M. Shoaib, Heba A. Esaily, Mahmoud M. Emara, Eman A. E. Badr, Amany S. Khalifa, Mayada M. M. Abdel-Raizk

Abstract:

Background: DM is associated with metabolic bone diseases, osteoporosis, low-impact fractures, and falls in geriatric patients. Fetuin-A, a serum protein produced by the liver that promotes bone mineralization, is an independent risk factor for type 2 diabetes. Aim: Evaluation of fetuin-A level and bone mineral density in postmenopausal Egyptian female patients with type 2 diabetes mellitus and their correlation with each other and with other metabolic parameters. Patients and methods: Seventy postmenopausal female patients with type II diabetes and thirty postmenopausal females as controls were included in this study. Measurement of fetuin-A together with metabolic parameters, DXA in the wrist, hip and spine, ALP, CBC, FBS, PP2H, and HbA1c was done in all participants. Results: Fetuin-A level differed highly significantly (p < 0.001) between the diabetic and non-diabetic groups and was negatively correlated with BMD in the spine. No difference in BMD was found between the patient and control groups, while a significant negative correlation was found between FBS and hip BMD (p < 0.05) and between 2hpp and HbA1c with spine BMD in the diabetic group (p < 0.05). Osteoporosis represented 12.9% in the spine area and 7.2% in the hip and wrist areas in diabetic patients, while osteopenia was found in 58.5%, 57.1%, and 37.1% of diabetic patients in the spine, wrist, and hip, respectively. Conclusion: Type II diabetes cannot be considered a risk factor for osteoporosis, while glycemic parameters (FBS, 2hpp and HbA1c) and serum fetuin-A levels were correlated with BMD in diabetics. Good glycemic control can be protective against osteoporosis in elderly diabetic patients.

Keywords: fetuin-A, BMD, postmenopausal, DM type II

Procedia PDF Downloads 266
10769 Evaluation of the Power Generation Effect Obtained by Inserting a Piezoelectric Sheet in the Backlash Clearance of a Circular Arc Helical Gear

Authors: Barenten Suciu, Yuya Nakamoto

Abstract:

The power generation effect obtained by inserting a piezoelectric sheet in the backlash clearance of a circular arc helical gear is evaluated. This type of screw gear is preferred since, in comparison with the involute tooth profile, the circular arc profile leads to reduced stress-concentration effects and improved life of the piezoelectric film. Firstly, the geometry of the circular arc helical gear and the properties of the piezoelectric sheet are presented. Then, a description of the test rig, consisting of a right-hand thread gear meshing with a left-hand thread gear, and the voltage measurement procedure are given. After creating the three-dimensional (3D) model of the meshing gears in SolidWorks, they are 3D-printed in acrylonitrile butadiene styrene (ABS) resin. The variation of the generated voltage versus time, during a meshing cycle of the circular arc helical gear, is measured for various values of the center distance. Then, the change of the maximal, minimal, and peak-to-peak voltage versus the center distance is illustrated. The optimal center distance of the gear, to achieve voltage maximization, is found and its significance is discussed. These results prove that the contact pressure of the meshing gears can be measured and that electrical power can be generated by employing the proposed technique.

Keywords: circular arc helical gear, contact problem, optimal center distance, piezoelectric sheet, power generation

Procedia PDF Downloads 167
10768 Greenhouse Controlled with Graphical Plotting in Matlab

Authors: Bruno R. A. Oliveira, Italo V. V. Braga, Jonas P. Reges, Luiz P. O. Santos, Sidney C. Duarte, Emilson R. R. Melo, Auzuir R. Alexandria

Abstract:

This project aims to build a controlled greenhouse, that is, a structure in which a given, previously defined range of temperature values (°C), produced by the radiation emitted by an incandescent lamp, can be maintained, characterizing a kind of on-off control; the differential is the plotting of temperature-versus-time graphs assisted by MATLAB software via serial communication. In this way it is possible to connect the greenhouse to a computer and monitor its parameters. The control was implemented using a PIC 16F877A microcontroller, which converts analog signals to digital, performs serial communication through the MAX232 IC, and drives signal transistors. The language used to program the PIC is Basic. There is also a cooling system realized by two 12 V coolers mounted on the lateral structure, one used for ventilation and the other for exhausting air. An LM35DZ sensor is used to determine the temperature inside. Another mechanism used in the greenhouse construction comprises a reed switch and a magnet; their function is to recognize the door position, and a signal is sent to a buzzer when the door is open. There are also LEDs that help to identify the operating state of the greenhouse. To facilitate human-machine communication, an LCD display shows the real-time temperature and other information. Taking into account the limitations of the construction material and the current-conducting structure, the design operates without any major problems in a range of approximately 65 to 70 °C. The project is efficient in these conditions, that is, when one wishes to obtain information from a given material to be tested at temperatures that are not very high. The implementation of the greenhouse automation facilitates temperature control and provides a structure that fosters the correct environment for the most diverse applications.
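
Although the actual firmware runs in Basic on the PIC, the control idea can be sketched in a few lines of Python: on-off switching of the lamp with a hysteresis band around the setpoint, while each sample is logged for later plotting. The thermal model, setpoint, and band are assumptions for illustration.

```python
import random

# On-off (bang-bang) control with hysteresis, plus the log that would feed MATLAB plots.
setpoint, hysteresis = 67.0, 1.0      # deg C, within the reported 65-70 C range
temperature, ambient = 25.0, 25.0
lamp_on = False
log = []                              # (time s, temperature, lamp state)

for t in range(0, 600):               # one sample per second for 10 minutes
    # crude first-order thermal model with sensor noise (stand-in for the LM35DZ reading)
    heat_in = 1.5 if lamp_on else 0.0
    temperature += heat_in - 0.02 * (temperature - ambient) + random.uniform(-0.05, 0.05)

    if temperature >= setpoint + hysteresis:
        lamp_on = False               # switch the incandescent lamp off
    elif temperature <= setpoint - hysteresis:
        lamp_on = True                # switch it back on

    log.append((t, round(temperature, 2), lamp_on))

print(log[::120])                     # every 2 minutes; in the project this stream goes over serial
```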

Keywords: greenhouse, microcontroller, temperature, control, MATLAB

Procedia PDF Downloads 402
10767 Parametrical Simulation of Sheet Metal Forming Process to Control the Localized Thinning

Authors: Hatem Mrad, Alban Notin, Mohamed Bouazara

Abstract:

The sheet metal forming process has multiple successive steps, starting from sheet fixation to sheet removal. Often, after the forming operation, the sheet has defects requiring additional correction steps. For example, in the drawing process, the formed sheet may have several defects such as springback, localized thinning, and bends. All these defects are directly dependent on process, geometric, and material parameters. The prediction and elimination of these defects require the control of the most sensitive parameters. The present study is concerned with a reliable parametric study of the deep drawing process in order to control localized thinning. The proposed approach is based on the stochastic finite element method. In particular, the polynomial chaos expansion will be used to establish a reliable relationship between the input variables (process, geometric and material parameters) and the output variable (sheet thickness). The commercial software Abaqus is used to conduct the numerical finite element simulations. Automated parametric modification is provided by coupling a FORTRAN routine, a Python script, and Abaqus input files.
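
A minimal sketch of a non-intrusive polynomial chaos surrogate is shown below: finite-element runs (here replaced by a placeholder function) are fitted with a total-degree-2 Hermite basis by least squares. The inputs, response, and coefficients are illustrative only, not the Abaqus model of the study.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermeval

rng = np.random.default_rng(0)

def fe_simulator(friction, holder_force):
    """Placeholder for the finite-element thinning output (%); not the real model."""
    return 8.0 + 3.0 * friction + 1.5 * holder_force + 0.8 * friction * holder_force

# Standard-normal germ variables (the physical inputs would be mapped onto these).
n_samples = 200
xi = rng.standard_normal((n_samples, 2))
y = fe_simulator(xi[:, 0], xi[:, 1])

def he(n, x):
    """Probabilists' Hermite polynomial He_n evaluated at x."""
    c = np.zeros(n + 1)
    c[n] = 1.0
    return hermeval(x, c)

# Total-degree-2 basis: 1, He1(x1), He1(x2), He2(x1), He1(x1)He1(x2), He2(x2)
def basis(xi):
    x1, x2 = xi[:, 0], xi[:, 1]
    return np.column_stack([np.ones_like(x1), he(1, x1), he(1, x2),
                            he(2, x1), he(1, x1) * he(1, x2), he(2, x2)])

coeffs, *_ = np.linalg.lstsq(basis(xi), y, rcond=None)
print("PCE coefficients:", np.round(coeffs, 3))
print("PCE mean (first coefficient):", round(coeffs[0], 3))
# The response variance follows from the remaining coefficients and the basis norms.
```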

Keywords: sheet metal forming, reliability, localized thinning, parametric simulation

Procedia PDF Downloads 423
10766 Detection of Intravenous Infiltration Using Impedance Parameters in Patients in a Long-Term Care Hospital

Authors: Ihn Sook Jeong, Eun Joo Lee, Jae Hyung Kim, Gun Ho Kim, Young Jun Hwang

Abstract:

This study investigated intravenous (IV) infiltration using bioelectrical impedance in 27 hospitalized patients in a long-term care hospital. Impedance parameters showed significant differences before and after infiltration, as follows. First, the resistance (R) after infiltration significantly decreased compared to the initial resistance. This indicates that the IV solution flowing from the vein due to infiltration accumulates in the extracellular fluid (ECF). Second, the relative resistance at 50 kHz was 0.94 ± 0.07 in 9 subjects without infiltration and 0.75 ± 0.12 in 18 subjects with infiltration. Third, the magnitude of the reactance (Xc) decreased after infiltration. This is because the IV solution and blood components released from the vein tend to aggregate around the cell membranes (which act analogously to a linear/parallel circuit), thereby increasing the capacitance (Cm) of the cell membrane and reducing the magnitude of the reactance. Finally, the data points plotted in the R-Xc graph were distributed in the upper right before infiltration but in the lower left after infiltration. This indicates that the infiltration caused an accumulation of fluid or blood components in the epidermal and subcutaneous tissues, resulting in reduced resistance and reactance, thereby lowering the integrity of the cell membrane. Our findings suggest that bioelectrical impedance is an effective method for the detection of infiltration in a noninvasive and quantitative manner.

Keywords: intravenous infiltration, impedance, parameters, resistance, reactance

Procedia PDF Downloads 182
10765 Modeling Standpipe Pressure Using Multivariable Regression Analysis by Combining Drilling Parameters and a Herschel-Bulkley Model

Authors: Seydou Sinde

Abstract:

The aims of this paper are to formulate mathematical expressions that can be used to estimate the standpipe pressure (SPP). The developed formulas take into account the main factors that, directly or indirectly, affect the behavior of SPP values. Fluid rheology and well hydraulics are some of these essential factors. Mud plastic viscosity, yield point, flow power, consistency index, flow rate, and drillstring and annular geometries are represented by the frictional pressure (Pf), which is one of the input independent parameters and is calculated, in this paper, using the Herschel-Bulkley rheological model. Other input independent parameters include the rate of penetration (ROP), applied load or weight on the bit (WOB), bit revolutions per minute (RPM), bit torque (TRQ), and hole inclination and direction coupled in the hole curvature or dogleg (DL). The technique of repeating parameters and the Buckingham Pi theorem are used to reduce the input independent parameters to the dimensionless revolutions per minute (RPMd), the dimensionless torque (TRQd), and the dogleg, which is already in the dimensionless form of radians. Multivariable linear and polynomial regression techniques using PTC Mathcad Prime 4.0 are used to analyze and determine the exact relationships between the dependent parameter, which is SPP, and the remaining three dimensionless groups. Three models proved sufficiently satisfactory to estimate the standpipe pressure: multivariable linear regression model 1, containing three regression coefficients, for vertical wells; multivariable linear regression model 2, containing four regression coefficients, for deviated wells; and a multivariable polynomial quadratic regression model, containing six regression coefficients, for both vertical and deviated wells. Although linear regression model 2 (with four coefficients) is relatively more complex and contains an additional term compared to linear regression model 1 (with three coefficients), the former did not add significant improvements over the latter except for some minor values. Thus, the effect of the hole curvature or dogleg is insignificant and can be omitted from the input independent parameters without significant loss of accuracy. The polynomial quadratic regression model is considered the most accurate model due to its relatively higher accuracy for most of the cases. Data from nine wells in the Middle East were used to run the developed models, with satisfactory results provided by all of them, although the multivariable polynomial quadratic regression model gave the best and most accurate results. The development of these models is useful not only to monitor and predict, with accuracy, the values of SPP, but also to check, early on, the integrity of the well hydraulics and to take corrective actions should any unexpected problems appear, such as pipe washouts, jet plugging, excessive mud losses, fluid gains, kicks, etc.
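
The two ingredients described above can be sketched briefly: a Herschel-Bulkley estimate of the frictional pressure and a multivariable linear fit of SPP on the dimensionless groups. The rheology values, geometry, and field records below are placeholders, not the data from the nine Middle East wells.

```python
import numpy as np

# (i) Herschel-Bulkley model: tau = tau_y + K * gamma_dot**n
tau_y, K, n = 5.0, 0.6, 0.7            # Pa, Pa.s^n, flow behaviour index (assumed)
D, L, velocity = 0.1086, 3000.0, 1.5   # pipe ID (m), length (m), mean velocity (m/s)
gamma_dot = 8.0 * velocity / D         # Newtonian-equivalent wall shear rate (approximation)
tau_w = tau_y + K * gamma_dot**n
p_friction = 4.0 * tau_w * L / D        # Pa, laminar force balance on the pipe wall
print(f"frictional pressure estimate Pf = {p_friction / 1e5:.1f} bar")

# (ii) Multivariable linear fit: SPP = a0 + a1*Pf + a2*RPMd + a3*TRQd (form of model 1)
records = np.array([                    # hypothetical (Pf bar, RPMd, TRQd, SPP bar)
    [95.0, 0.8, 0.30, 128.0],
    [110.0, 1.0, 0.35, 147.0],
    [126.0, 1.2, 0.42, 166.0],
    [140.0, 1.4, 0.47, 184.0],
    [155.0, 1.5, 0.55, 201.0],
])
X = np.column_stack([np.ones(len(records)), records[:, :3]])
coeffs, *_ = np.linalg.lstsq(X, records[:, 3], rcond=None)
rmse = np.sqrt(np.mean((X @ coeffs - records[:, 3]) ** 2))
print("regression coefficients:", np.round(coeffs, 3), f"RMSE = {rmse:.2f} bar")
```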

Keywords: standpipe, pressure, hydraulics, nondimensionalization, parameters, regression

Procedia PDF Downloads 84
10764 An Artificial Intelligence Supported QUAL2K Model for the Simulation of Various Physiochemical Parameters of Water

Authors: Mehvish Bilal, Navneet Singh, Jasir Mushtaq

Abstract:

Water pollution puts people's health at risk, and it can also impact the ecology. For practitioners of integrated water resources management (IWRM), water quality modelling may be useful for informing decisions about pollution control (such as discharge permitting) or demand management (such as abstraction permitting). Mathematical simulation, which establishes an effective relation between pollutant sources, contaminant movement, and water quality, is regarded as one of the best tools for estimating the current pollutant load and the movement of contaminants. The current study involves the QUAL2K model, which includes manual simulation of the various physiochemical characteristics of water. To this end, various sensors could be installed for the automatic simulation of the various physiochemical characteristics of water. An artificial intelligence model has been proposed for the automatic simulation of water quality parameters. Water quality models have become an effective tool for identifying water contamination worldwide, as well as the ultimate fate and behavior of contaminants in the water environment. Water quality model research is primarily conducted in Europe and other industrialized first-world countries, where theoretical underpinnings and practical research are prioritized.

Keywords: artificial intelligence, QUAL2K, simulation, physiochemical parameters

Procedia PDF Downloads 104
10763 Modular Probe for Basic Monitoring of Water and Air Quality

Authors: Andrés Calvillo Téllez, Marianne Martínez Zanzarric, José Cruz Núñez Pérez

Abstract:

A modular system that performs basic monitoring of both water and air quality is presented. Monitoring is essential for environmental, aquaculture, and agricultural disciplines, where this type of instrumentation is necessary for data collection. The system uses low-cost components, which allow readings close to those obtained with high-cost probes. The probe collects readings such as the coordinates of the geographical position, as well as the time at which it records the target parameters being monitored. The modules or subsystems that make up the probe are: a global positioning (GPS) module, which provides the altitude, latitude, and longitude of the point where the reading is recorded; a real-time clock stage, which stamps the date and time; an SD memory module, which continuously stores the data; a data acquisition system; a central processing unit; and a power supply. For water quality, the system acquires conductivity, pressure, and temperature; for air, three types of gases were sensed: ammonia, carbon dioxide, and carbon monoxide. The information obtained allowed us to identify the schedule of changes in the parameters and the ideal conditions for the growth of microorganisms in the water.

Keywords: calibration, conductivity, datalogger, monitoring, real time clock, water quality

Procedia PDF Downloads 103
10762 A Suggestive Framework for Measuring the Effectiveness of Social Media: An Irish Tourism Study

Authors: Colm Barcoe, Garvan Whelan

Abstract:

Over the past five years, visits by American holidaymakers to Ireland have grown exponentially owing to the online strategies of Tourism Ireland, a destination marketer (DMO) with a meagre budget that is stretched by its understanding of best practices to maximise its monetary allowance. The suggested framework incorporates a range of key performance indicators (KPIs), such as financial, marketing, and operational indicators, that offer a scale of measurement from which the Irish DMO can monitor the success of each promotional campaign when targeting the US and Canada. These are presented not as final solutions but rather as suggestions based on empirical evidence obtained from both primary and secondary sources. This research combines the wisdom extracted through qualitative methodologies with the objective of understanding the processes that drive both emergent and agile strategies. The study extends the work relating to performance and examines the role of social media in the context of promoting Ireland to North America. Two main themes are identified and analysed in this investigation: the approach of the DMO when advocating Ireland as a brand, and the benefits of digital platforms set against a proposed scale of KPIs, such as destination marketing, brand positioning, and identity development. The key narrative of this analysis is the power of social media when capitalising upon marketing opportunities while operating on a relatively small budget. This will always be a relevant theme of discussion due to the responsibility of an organisation like Tourism Ireland operating under the constraints imposed by government funding. The overall conclusions of this research may help those concerned with implementing social media strategies to develop clearer models of measurement when promoting a destination to North America. The suggestions of this study will particularly benefit small and medium enterprises.

Keywords: destination marketing, framework, measure, performance

Procedia PDF Downloads 154
10761 Fabrication of Miniature Gear of Hastelloy X by WEDM Process

Authors: Bhupinder Singh, Joy Prakash Misra

Abstract:

This article provides information regarding the machining of Hastelloy-X by wire electro spark machining (WEDM). An experimental investigation has been carried out by varying the pulse-on time (TON), pulse-off time (TOFF), peak current (IP), and spark gap voltage (SV). The effect of these parameters on the material removal rate (MRR) is studied. The experiments are designed as per the Box-Behnken design (BBD) technique of response surface methodology (RSM). Analysis of variance (ANOVA) results indicate that TON, TOFF, IP, SV, and TON x IP are significant parameters that influence the MRR, and the MRR is higher at high discharge energy (HDE) and lower at low discharge energy (LDE). Furthermore, a miniature impeller and a miniature gear (OD ≤ 10 mm) are fabricated by WEDM under the optimized condition.

Keywords: advanced manufacturing, WEDM, super alloy, gear

Procedia PDF Downloads 225
10760 A Mixed-Integer Nonlinear Program to Optimally Pace and Fuel Ultramarathons

Authors: Kristopher A. Pruitt, Justin M. Hill

Abstract:

The purpose of this research is to determine the pacing and nutrition strategies which minimize completion time and carbohydrate intake for athletes competing in ultramarathon races. The model formulation consists of a two-phase optimization. The first-phase mixed-integer nonlinear program (MINLP) determines the minimum completion time subject to the altitude, terrain, and distance of the race, as well as the mass and cardiovascular fitness of the athlete. The second-phase MINLP determines the minimum total carbohydrate intake required for the athlete to achieve the completion time prescribed by the first phase, subject to the flow of carbohydrates through the stomach, liver, and muscles. Consequently, the second-phase model provides the optimal pacing and nutrition strategies for a particular athlete for each kilometer of a particular race. Validation of the model results over a wide range of athlete parameters against completion times for real competitive events suggests strong agreement. Additionally, the kilometer-by-kilometer pacing and nutrition strategies the model prescribes for a particular athlete suggest that unconventional approaches could result in lower completion times. Thus, the MINLP provides prescriptive guidance that athletes can leverage when developing pacing and nutrition strategies prior to competing in ultramarathon races. Given the highly variable topographical characteristics common to many ultramarathon courses and the potential inexperience of many athletes with such courses, the model provides valuable insight to competitors who might otherwise fail to complete the event due to exhaustion or carbohydrate depletion.

Keywords: nutrition, optimization, pacing, ultramarathons

Procedia PDF Downloads 189
10759 Computational Study of Blood Flow Analysis for Coronary Artery Disease

Authors: Radhe Tado, Ashish B. Deoghare, K. M. Pandey

Abstract:

The aim of this study is to estimate the effect of blood flow through the coronary artery in the human heart so as to assess coronary artery disease. Velocity, wall shear stress (WSS), strain rate, and wall pressure distribution are some of the important hemodynamic parameters that are non-invasively assessed with computational fluid dynamics (CFD). These parameters are used to identify the mechanical factors responsible for plaque progression and/or rupture in the left coronary artery (LCA). The initial step for the CFD simulations was the construction of a geometrical model of the LCA. A patient-specific artery model is constructed using computed tomography (CT) scan data with the help of MIMICS Research 19.0. For the CFD analysis, ANSYS FLUENT 14.5 is used. Hemodynamic parameters were quantified and flow patterns were visualized both in the absence and presence of coronary plaques. The wall pressure continuously decreased towards the distal segments and showed pressure drops in stenotic segments. Areas of high WSS and high flow velocities were found adjacent to plaque deposition.

Keywords: angiography, computational fluid dynamics (CFD), time-average wall shear stress (TAWSS), wall pressure, wall shear stress (WSS)

Procedia PDF Downloads 183
10758 Machine Learning Techniques for COVID-19 Detection: A Comparative Analysis

Authors: Abeer A. Aljohani

Abstract:

The spread of the COVID-19 virus has been one of the most extreme pandemics across the globe. It is also referred to as coronavirus, a contagious disease that continuously mutates into numerous variants. Currently, the B.1.1.529 variant, labeled omicron, has been detected in South Africa. The huge spread of COVID-19 disease has affected several lives and has placed exceptional pressure on healthcare systems worldwide. Also, everyday life and the global economy have been at stake. This research aims to predict COVID-19 disease in its initial stage to reduce the death count. Machine learning (ML) is nowadays used in almost every area. Numerous COVID-19 cases have produced a huge burden on hospitals as well as health workers. To reduce this burden, this paper predicts COVID-19 disease based on the symptoms and medical history of the patient. This research presents a unique architecture for COVID-19 detection using ML techniques integrated with feature dimensionality reduction. This paper uses a standard UCI dataset for predicting COVID-19 disease, comprising the symptoms of 5434 patients. This paper also compares several supervised ML techniques to the presented architecture. The architecture utilizes a 10-fold cross-validation process for generalization and the principal component analysis (PCA) technique for feature reduction. Standard parameters are used to evaluate the proposed architecture, including F1-score, precision, accuracy, recall, receiver operating characteristic (ROC), and area under the curve (AUC). The results show that decision tree, random forest, and neural networks outperform all other state-of-the-art ML techniques. This result can help effectively in identifying COVID-19 infection cases.
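
The described pipeline maps naturally onto a few lines of scikit-learn; the sketch below chains PCA with a random forest under 10-fold cross-validation. The random symptom matrix is only a stand-in for the UCI dataset of 5434 patients, so the numbers it prints are not the paper's results.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic binary symptom matrix and a label loosely driven by a few symptoms.
rng = np.random.default_rng(42)
n_patients, n_symptoms = 5434, 20
X = rng.integers(0, 2, size=(n_patients, n_symptoms)).astype(float)
y = (X[:, :4].sum(axis=1) + rng.normal(0, 0.8, n_patients) > 2).astype(int)

# Feature reduction with PCA followed by a supervised classifier, evaluated with 10-fold CV.
model = make_pipeline(PCA(n_components=10),
                      RandomForestClassifier(n_estimators=200, random_state=0))
scores = cross_val_score(model, X, y, cv=10, scoring="roc_auc")
print(f"10-fold AUC: {scores.mean():.3f} +/- {scores.std():.3f}")
```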

Keywords: supervised machine learning, COVID-19 prediction, healthcare analytics, random forest, neural network

Procedia PDF Downloads 92
10757 Farmers' Perspective on Soil Health in the Indian Punjab: A Quantitative Analysis of Major Soil Parameters

Authors: Sukhwinder Singh, Julian Park, Dinesh Kumar Benbi

Abstract:

Although soil health, which is recognized as one of the key determinants of sustainable agricultural development, can be measured by a range of physical, chemical and biological parameters, the widely used parameters include pH, electrical conductivity (EC), organic carbon (OC), plant-available phosphorus (P) and potassium (K). Soil health is largely affected by the occurrence of natural events or human activities and can be improved by various land management practices. A database of 120 soil samples collected from farmers' fields spread across three major agro-climatic zones of Punjab suggested that the average pH, EC, OC, P and K were 8.2 (SD = 0.75, Min = 5.5, Max = 9.1), 0.27 dS/m (SD = 0.17, Min = 0.072 dS/m, Max = 1.22 dS/m), 0.49% (SD = 0.20, Min = 0.06%, Max = 1.2%), 19 mg/kg soil (SD = 22.07, Min = 3 mg/kg soil, Max = 207 mg/kg soil) and 171 mg/kg soil (SD = 47.57, Min = 54 mg/kg soil, Max = 288 mg/kg soil), respectively. Region-wise, pH, EC and K were the highest in the south-western district of Ferozpur, whereas farmers in the north-eastern district of Gurdaspur had the best soils in terms of OC and P. The soils in the central district of Barnala had lower OC, P and K than the respective overall averages, while its soils were normal but skewed towards alkalinity. Besides agro-climatic conditions, the size of landholding and farmer education showed a significant association with the Soil Fertility Index (SFI), a composite index calculated using the aforementioned parameters' normalized weightage. All four stakeholder groups cited the current cropping patterns, burning of rice crop residue, and imbalanced use of chemical fertilizers as reasons for change in soil health. However, the current state of soil health in Punjab remains unclear, which calls for further investigation based on temporal data collected from the same fields to see the short- and long-term impacts of various crop combinations and varied cropping intensity levels on soil health.
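
One simple way to form such a composite index is a weighted sum of normalized parameters, sketched below; the weights, normalization rules, and sample values are assumptions for illustration, not the exact SFI construction used in the study.

```python
import numpy as np

# Hypothetical soil samples: pH, EC (dS/m), OC (%), P (mg/kg), K (mg/kg)
samples = np.array([
    [8.2, 0.27, 0.49, 19.0, 171.0],
    [7.4, 0.15, 0.80, 35.0, 210.0],
    [8.9, 0.90, 0.20,  6.0, 120.0],
])
weights = np.array([0.15, 0.10, 0.30, 0.25, 0.20])   # assumed relative importance

# Min-max normalisation; EC is treated as "lower is better" and pH as "closer to neutral is better".
mins, maxs = samples.min(axis=0), samples.max(axis=0)
norm = (samples - mins) / (maxs - mins)
norm[:, 0] = 1.0 - np.abs(samples[:, 0] - 7.0) / np.abs(samples[:, 0] - 7.0).max()
norm[:, 1] = 1.0 - norm[:, 1]

sfi = norm @ weights
for i, value in enumerate(sfi):
    print(f"sample {i + 1}: SFI = {value:.2f}")
```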

Keywords: soil health, Punjab agriculture, sustainability, soil fertility index

Procedia PDF Downloads 362
10756 Probability-Based Damage Detection of Structures Using Model Updating with Enhanced Ideal Gas Molecular Movement Algorithm

Authors: M. R. Ghasemi, R. Ghiasi, H. Varaee

Abstract:

The model updating method has received increasing attention in damage detection of structures based on measured modal parameters. Therefore, a probability-based damage detection (PBDD) procedure based on a model updating procedure is presented in this paper, in which a one-stage model-based damage identification technique based on the dynamic features of a structure is investigated. The presented framework uses a finite element updating method with a Monte Carlo simulation that considers the uncertainty caused by measurement noise. Enhanced ideal gas molecular movement (EIGMM) is used as the main algorithm for model updating. Ideal gas molecular movement (IGMM) is a multi-agent algorithm based on the movement of ideal gas molecules. Ideal gas molecules disperse rapidly in different directions and cover all the space inside a container; this is embodied in the high speed of the molecules and the collisions between them and with the surrounding barriers. In the IGMM algorithm, to reach optimal solutions, the initial population of gas molecules is randomly generated, and the governing equations related to the velocity of the gas molecules and the collisions between them are utilized. In this paper, an enhanced version of IGMM, which removes unchanged variables after a specified number of iterations, is developed. The proposed method is implemented on two numerical examples in the field of structural damage detection. The results show that the proposed method performs well and is competitive in the PBDD of structures.

Keywords: enhanced ideal gas molecular movement (EIGMM), ideal gas molecular movement (IGMM), model updating method, probability-based damage detection (PBDD), uncertainty quantification

Procedia PDF Downloads 277
10755 Genomic Prediction Reliability Using Haplotypes Defined by Different Methods

Authors: Sohyoung Won, Heebal Kim, Dajeong Lim

Abstract:

Genomic prediction is an effective way to measure the abilities of livestock for breeding based on genomic estimated breeding values, statistically predicted values from genotype data using best linear unbiased prediction (BLUP). Using haplotypes, clusters of linked single nucleotide polymorphisms (SNPs), as markers instead of individual SNPs can improve the reliability of genomic prediction, since the probability of a quantitative trait locus being in strong linkage disequilibrium (LD) with the markers is higher. To efficiently use haplotypes in genomic prediction, optimal ways to define haplotypes need to be found. In this study, 770K SNP chip data was collected from a Hanwoo (Korean cattle) population consisting of 2506 cattle. Haplotypes were first defined in three different ways using the 770K SNP chip data: haplotypes were defined based on 1) the length of haplotypes (bp), 2) the number of SNPs, and 3) k-medoids clustering by LD. To compare the methods in parallel, haplotypes defined by all methods were set to have comparable sizes; in each method, haplotypes defined to have an average number of 5, 10, 20 or 50 SNPs were tested respectively. A modified GBLUP method using haplotype alleles as predictor variables was implemented for testing the prediction reliability of each haplotype set. Also, the conventional genomic BLUP (GBLUP) method, which uses individual SNPs, was tested to evaluate the performance of the haplotype sets in genomic prediction. Carcass weight was used as the phenotype for testing. As a result, using haplotypes defined by all three methods showed increased reliability compared to conventional GBLUP. There were few differences in reliability between the different haplotype-defining methods. The reliability of genomic prediction was highest when the average number of SNPs per haplotype was 20 in all three methods, implying that haplotypes including around 20 SNPs can be optimal as markers for genomic prediction. When the number of alleles generated by each haplotype-defining method was compared, clustering by LD generated the fewest alleles. Using haplotype alleles for genomic prediction showed better performance, suggesting improved accuracy in genomic selection. The number of predictor variables was decreased when the LD-based method was used, while all three haplotype-defining methods showed similar performances. This suggests that defining haplotypes based on LD can reduce computational costs and allows efficient prediction. Finding optimal ways to define haplotypes and using the haplotype alleles as markers can provide improved performance and efficiency in genomic prediction.
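
A toy version of the haplotype-based prediction step is sketched below: haplotype-allele dosages form the design matrix and their effects are shrunk with ridge regression, the fixed-variance-ratio analogue of a modified GBLUP. The simulated genotypes and phenotypes are placeholders, not the 770K Hanwoo data.

```python
import numpy as np

rng = np.random.default_rng(7)
n_animals, n_blocks, alleles_per_block = 300, 40, 4

# Each haplotype block contributes allele dosages (0-2) per animal (two haplotypes per diploid).
Z = np.zeros((n_animals, n_blocks * alleles_per_block))
for b in range(n_blocks):
    draws = rng.integers(0, alleles_per_block, size=(n_animals, 2))
    for a in range(alleles_per_block):
        Z[:, b * alleles_per_block + a] = (draws == a).sum(axis=1)

true_effects = rng.normal(0.0, 2.0, Z.shape[1])
carcass_weight = 400.0 + Z @ true_effects + rng.normal(0.0, 25.0, n_animals)

train = rng.random(n_animals) < 0.8
lam = 50.0                                            # shrinkage = sigma_e^2 / sigma_a^2 (assumed)
A = Z[train].T @ Z[train] + lam * np.eye(Z.shape[1])
rhs = Z[train].T @ (carcass_weight[train] - carcass_weight[train].mean())
effects_hat = np.linalg.solve(A, rhs)
gebv = Z[~train] @ effects_hat                        # genomic estimated breeding values (validation set)

reliability_proxy = np.corrcoef(gebv, carcass_weight[~train])[0, 1]
print(f"validation correlation (prediction reliability proxy): {reliability_proxy:.2f}")
```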

Keywords: best linear unbiased predictor, genomic prediction, haplotype, linkage disequilibrium

Procedia PDF Downloads 141
10754 A Comparative Soft Computing Approach to Supplier Performance Prediction Using GEP and ANN Models: An Automotive Case Study

Authors: Seyed Esmail Seyedi Bariran, Khairul Salleh Mohamed Sahari

Abstract:

In multi-echelon supply chain networks, optimal supplier selection significantly depends on the accuracy of suppliers' performance prediction. Different methods of multi-criteria decision making, such as ANN, GA, fuzzy methods, AHP, etc., have previously been used to predict supplier performance, but the "black-box" characteristic of these methods is still a major concern to be resolved. Therefore, the primary objective of this paper is to implement an artificial intelligence-based gene expression programming (GEP) model and compare its prediction accuracy with that of ANN. A full factorial design with a 95% confidence interval is initially applied to determine the appropriate set of criteria for supplier performance evaluation. A test-train approach is then utilized for the ANN and GEP exclusively. The training results are used to find the optimal network architecture, and the testing data determine the prediction accuracy of each method based on the measures of root mean square error (RMSE) and the correlation coefficient (R²). The results of a case study conducted in Supplying Automotive Parts Co. (SAPCO), with more than 100 local and foreign supply chain members, revealed that, in comparison with ANN, gene expression programming shows a significant advantage in predicting supplier performance, as indicated by the respective RMSE and R-squared values. Moreover, using GEP, a mathematical function was also derived to address the issue of the ANN black-box structure in modeling the performance prediction.

Keywords: supplier performance prediction, ANN, GEP, automotive, SAPCO

Procedia PDF Downloads 419