Search results for: optimization algorithms
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4733

623 Regression-Based Approach for Development of a Cuff-Less Non-Intrusive Cardiovascular Health Monitor

Authors: Pranav Gulati, Isha Sharma

Abstract:

Hypertension and hypotension are known to have repercussions on the health of an individual, with hypertension contributing to an increased risk of cardiovascular diseases and hypotension resulting in syncope. This prompts the development of a non-invasive, non-intrusive, continuous and cuff-less blood pressure monitoring system to detect blood pressure variations and to identify individuals with acute and chronic heart ailments; because such devices are unavailable for practical daily use, it is difficult to screen and subsequently regulate blood pressure. The complexities which hamper the steady monitoring of blood pressure comprise the variations in physical characteristics from individual to individual and the postural differences at the site of monitoring. We propose to develop a continuous, comprehensive cardio-analysis tool based on reflective photoplethysmography (PPG). The proposed device, in the form of eyewear, captures the PPG signal and estimates the systolic and diastolic blood pressure using a sensor positioned near the temporal artery. This system relies on regression models based on the extraction of key points from a pair of PPG wavelets. The proposed system provides an edge over existing wearables because it allows uniform contact and pressure with the temporal site, in addition to minimal disturbance by movement. Additionally, the feature extraction algorithms enhance the integrity and quality of the extracted features by reducing unreliable data sets. We tested the system with 12 subjects, of which 6 served as the training dataset. For this, we measured the blood pressure using a cuff-based BP monitor (Omron HEM-8712) and at the same time recorded the PPG signal from our cardio-analysis tool. The complete test was conducted by using the cuff-based blood pressure monitor on the left arm while the PPG signal was acquired from the temporal site on the left side of the head. This acquisition served as the training input for the regression model on the selected features. The other 6 subjects were used to validate the model by conducting the same test on them. Results show that the developed prototype can robustly acquire the PPG signal and can therefore be used to reliably predict blood pressure levels.
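
As a rough illustration of the regression step, the sketch below fits a linear model from PPG-derived key points to cuff-calibrated systolic pressure. The feature names, the synthetic data, and the use of scikit-learn are illustrative assumptions, not the authors' actual pipeline.

```python
# Hypothetical sketch: regressing systolic BP on features extracted from a
# pair of PPG wavelets (feature set and data are invented for illustration).
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Placeholder training data: rows = cuff-calibrated recordings from the 6
# training subjects; columns = key points from the PPG wavelets (e.g.,
# peak-to-peak interval, systolic upstroke time, pulse width).
X_train = rng.normal(size=(60, 3))
y_train_sbp = 120 + 8 * X_train[:, 0] + rng.normal(scale=2, size=60)

sbp_model = LinearRegression().fit(X_train, y_train_sbp)

# Validation on held-out subjects mirrors the paper's 6/6 split.
X_val = rng.normal(size=(30, 3))
print(sbp_model.predict(X_val)[:5])
```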

Keywords: blood pressure, photoplethysmograph, eyewear, physiological monitoring

Procedia PDF Downloads 254
622 Biological Optimization following BM-MSC Seeding of Partially Demineralized and Partially Demineralized Laser-Perforated Structural Bone Allografts Implanted in Critical Femoral Defects

Authors: S. AliReza Mirghasemi, Zameer Hussain, Mohammad Saleh Sadeghi, Narges Rahimi Gabaran, Mohamadreza Baghaban Eslaminejad

Abstract:

Background: Despite the promising results shown by osteogenic cell-based demineralized bone matrix composites, they need to be optimized for grafts that act as structural frameworks in load-bearing defects. The purpose of this experiment is to determine the effect of bone marrow mesenchymal stem cell seeding on partially demineralized laser-perforated structural allografts implanted in critical femoral defects. Materials and Methods: P3 stem cells were used for graft seeding. Laser perforation in four rows of three holes was achieved. Cell-seeded grafts were incubated for one hour before they were implanted into the defect. We used four types of grafts: partially demineralized only (Donly), partially demineralized stem-cell-seeded (DST), partially demineralized laser-perforated (DLP), and partially demineralized laser-perforated stem-cell-seeded (DLPST). Histologic and histomorphometric analyses were performed at 12 weeks. Results: DLP grafts had the highest woven bone formation within graft limits, DLPST grafts remained intact, and the difference between Donly and DST was insignificant. At the interface, DLP and Donly had comparable osteogenesis, but DST was inferior. The interface in DLPST was almost replaced by distinct endochondral osteogenesis, with higher angiogenesis in the vicinity. DST and DLPST graft surfaces had extra vessel-ingrowth-like porosities, a sign of delayed resorption. Conclusion: This demonstrates that simple cell-based composites are not optimal and necessitates the supplementation of synergistic stipulations and surface changes.

Keywords: structural bone allograft, partial demineralization, laser perforation, mesenchymal stem cell

Procedia PDF Downloads 399
621 The Optimization of Sexual Health Resource Information and Services for Persons with Spinal Cord Injury

Authors: Nasrin Nejatbakhsh, Anita Kaiser, Sander Hitzig, Colleen McGillivray

Abstract:

Following spinal cord injury (SCI), many individuals experience anxiety in adjusting to their lives and to its impacts on their sexuality. Research has demonstrated that regaining sexual function is a very high priority for individuals with SCI. Despite this, sexual health is one of the least likely areas of focus in the rehabilitation of individuals with SCI. There is currently a considerable gap in appropriate education and resources that address the sexual health concerns and needs of people with spinal cord injury. Furthermore, the determinants of sexual health in individuals with SCI are poorly understood and thus poorly addressed. The purpose of this study was to improve current practices by informing a service delivery model that rehabilitation centers can adopt for appropriate delivery of their services. Methodology: We utilized qualitative methods in the form of a semi-structured interview containing open-ended questions to assess 1) sexual health concerns, 2) helpful strategies in current resources, 3) unhelpful strategies in current resources, and 4) barriers to obtaining sexual health information. In addition to the interviews, participants completed surveys to identify socio-demographic factors. Data gathered were coded and evaluated for emerging themes and subthemes through a ‘code-recode’ technique. Results: We have identified several robust themes that are important for SCI sexual health resource development. Through analysis of these themes and their subthemes, several important concepts have emerged that could provide agencies with helpful strategies for providing sexual health resources. Some of the important considerations are that services be anonymous, accessible, frequent, affordable, mandatory, casual, and supported by peers. Implications: By incorporating the perspectives of individuals with SCI, the findings from this study can be used to develop appropriate sexual health services and improve access to information through tailored, needs-based program development.

Keywords: spinal cord injury, sexual health, determinants of health, resource development

Procedia PDF Downloads 238
620 Selection of Optimal Reduced Feature Sets of Brain Signal Analysis Using Heuristically Optimized Deep Autoencoder

Authors: Souvik Phadikar, Nidul Sinha, Rajdeep Ghosh

Abstract:

In brainwave research using electroencephalogram (EEG) signals, finding the most relevant and effective feature set for identifying activities in the human brain remains a major challenge because of the random nature of the signals. The feature extraction method is a key issue in solving this problem. Finding features that give distinctive pictures for different activities and similar pictures for the same activity is very difficult, especially as the number of activities grows. Classifier accuracy depends on the quality of this feature set. Further, a larger number of features results in high computational complexity, while too few features compromise performance. In this paper, a novel idea for selecting an optimal feature set using a heuristically optimized deep autoencoder is presented. Using various feature extraction methods, a vast number of features are extracted from the EEG signals and fed to the deep autoencoder neural network. The autoencoder encodes the input features into a small set of codes. To avoid the vanishing-gradient problem and the need for normalization of the dataset, a meta-heuristic search algorithm is used to minimize the mean square error (MSE) between the encoder input and the decoder output. To reduce the feature set into a smaller one, 4 hidden layers are considered in the autoencoder network; hence it is called the Heuristically Optimized Deep Autoencoder (HO-DAE). In this method, no features are rejected; all the features are combined into the responses of the hidden layer. The results reveal that higher accuracy can be achieved using the optimal reduced features. The proposed HO-DAE is also compared with a regular autoencoder to test the performance of both. The performance of the proposed method is validated and compared with two other methods recently reported in the literature, which reveals that the proposed method is far better than the other two in terms of classification accuracy.
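
The sketch below illustrates the HO-DAE idea in miniature: a small autoencoder whose weights are tuned by a simple (1+1) evolution strategy, standing in for whatever meta-heuristic the authors actually used, to minimise the MSE between encoder input and decoder output. Layer widths and data are invented for illustration.

```python
# Minimal HO-DAE-style sketch: autoencoder weights tuned by a meta-heuristic
# (here a (1+1)-ES) rather than backpropagation, avoiding vanishing gradients.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 32))            # placeholder EEG-derived feature vectors

def init(widths):
    return [(rng.normal(scale=0.3, size=(a, b)), np.zeros(b))
            for a, b in zip(widths[:-1], widths[1:])]

def forward(params, x):
    h = x
    for i, (W, b) in enumerate(params):
        h = h @ W + b
        if i < len(params) - 1:           # linear output layer
            h = np.tanh(h)
    return h

def mse(params):
    return float(np.mean((forward(params, X) - X) ** 2))

# Widths are illustrative; the middle layer holds the reduced feature set.
params = init([32, 24, 8, 24, 32])
best = mse(params)
for _ in range(2000):                     # mutate weights, keep if MSE improves
    trial = [(W + rng.normal(scale=0.02, size=W.shape),
              b + rng.normal(scale=0.02, size=b.shape)) for W, b in params]
    if (m := mse(trial)) < best:
        params, best = trial, m
print("reconstruction MSE:", best)
```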

Keywords: autoencoder, brainwave signal analysis, electroencephalogram, feature extraction, feature selection, optimization

Procedia PDF Downloads 100
619 Probabilistic Building Life-Cycle Planning as a Strategy for Sustainability

Authors: Rui Calejo Rodrigues

Abstract:

Building refurbishing and maintenance is a major area of knowledge that is ultimately dispensed according to user/occupant criteria. Optimizing the service life of a building requires a special background, as it is one of those concepts that needs proficiency to be implemented. ISO 15686-2, Buildings and constructed assets - Service life planning - Part 2: Service life prediction procedures, specifies a factorial method based on deterministic data for the life span of building components. A deterministic approach has major consequences: users/occupants cannot readily perceive the end of a component's life span and simply act on deterministic periods, so costly and resource-consuming solutions fail to meet global targets of planet sustainability. If the estimated two billion conventional buildings in the world were submitted to a probabilistic method for service life planning rather than a deterministic one, an immense amount of resources would be saved. Since 1989, the research team now known as CEES (Center for Building in Service Studies) has developed a methodology based on the Monte Carlo method for a probabilistic approach to the life span of building components, cost, and service life care time spans. The research question deals with the importance of a probabilistic approach to building life planning compared with deterministic methods. The mathematical model developed for the probabilistic life span approach of buildings is presented, and experimental data are obtained to be compared with deterministic data. Assuming that a building's life cycle depends heavily on component replacement, this methodology allows conclusions on the global impact of fixed replacement methodologies, such as those resulting from the use of deterministic models. Major conclusions based on a conventional buildings estimate are presented and evaluated under a sustainability perspective.
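
A minimal sketch of the underlying comparison follows, with an invented lognormal life distribution and arbitrary parameters (not CEES data): a fixed deterministic replacement schedule versus Monte Carlo sampling of actual component failures.

```python
# Toy Monte Carlo comparison of deterministic vs probabilistic replacement
# planning for one building component (distribution parameters are invented).
import numpy as np

rng = np.random.default_rng(2)
horizon = 50                  # planning horizon, years
det_life = 15                 # deterministic service life from the factorial method
n_runs = 10_000

# Deterministic plan: replace on a fixed schedule regardless of condition.
det_replacements = horizon // det_life

# Probabilistic plan: component life is random; replace only at actual failure.
counts = np.empty(n_runs)
for i in range(n_runs):
    t, n = 0.0, 0
    while True:
        t += rng.lognormal(mean=np.log(det_life), sigma=0.3)
        if t > horizon:
            break
        n += 1
    counts[i] = n

print("deterministic replacements:", det_replacements)
print("expected replacements (probabilistic):", counts.mean())
```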

Keywords: building components life cycle, building maintenance, building sustainability, Monte Carlo simulation

Procedia PDF Downloads 193
618 Design and Analysis of Hybrid Morphing Smart Wing for Unmanned Aerial Vehicles

Authors: Chetan Gupta, Ramesh Gupta

Abstract:

Unmanned aerial vehicles of all sizes are prime targets for the wing morphing concept, as their lightweight structures demand high aerodynamic stability while traversing unsteady atmospheric conditions. In this research study, a hybrid morphing technology is developed to allow the trailing edge of the aircraft wing to alter its camber as a monolithic element rather than functioning as conventional appendages like flaps. Kinematic tailoring and actuation techniques involving shape memory alloys (SMAs) or piezoelectrics individually fall short of providing a simple solution to the conundrum of morphing aircraft wings. On the other hand, the negligible hysteresis of compliant mechanisms during actuation has shown higher levels of applicability and deliverability in morphing wings of even large aircraft. This research paper delves into designing a wing section model with a periodic, multi-stable compliant structure requiring lower orders of topological optimization. The design is subdivided into three smaller domains with external hyperelastic connections to achieve deflections ranging from -15° to +15° at the trailing edge of the wing. To facilitate this functioning, a hybrid actuation system is used, combining the larger bandwidth of piezoelectric macro-fibre composites with the relatively higher work densities of shape memory alloy wires. Finite element analysis is applied to optimize the piezoelectric actuation of the internal compliant structure. A coupled fluid-surface interaction analysis is conducted on the wing section during morphing to study the development of the velocity boundary layer at low Reynolds numbers of airflow.

Keywords: compliant mechanism, hybrid morphing, piezoelectrics, shape memory alloys

Procedia PDF Downloads 295
617 Formulation of Famotidine Solid Lipid Nanoparticles (SLN): Preparation, Evaluation and Release Study

Authors: Rachmat Mauludin, Nurmazidah

Abstract:

Background and purpose: Famotidine is an H2 receptor blocker. Oral absorption is rapid enough, but famotidine can be degraded by stomach acid, causing a dose reduction of up to 35.8% after 50 minutes. The drug also undergoes first-pass metabolism, which reduces its bioavailability to only 40-50%. To overcome these problems, solid lipid nanoparticles (SLNs) can be formulated as an alternative delivery system. SLNs are a lipid-based drug delivery technology with a 50-1000 nm particle size, in which the drug is incorporated into biocompatible lipids and the lipid particles are stabilized using appropriate stabilizers. When the particle size is 200 nm or below, lipid containing famotidine can be absorbed through the lymphatic vessels to the subclavian vein, so first-pass metabolism can be avoided. Method: Famotidine SLNs with various compositions of stabilizer were prepared using a high-speed homogenization and sonication method. Then, the particle size distribution, zeta potential, entrapment efficiency, particle morphology, and in vitro release profiles were evaluated. Optimization of the sonication time was also carried out. Result: Particle size measured by a particle size analyzer ranged from 114.6 to 455.267 nm. SLNs ultrasonicated for 5 minutes had smaller particle sizes than SLNs ultrasonicated for 10 or 15 minutes. Entrapment efficiencies of the SLNs ranged from 74.17% to 79.45%. The particle morphology of the SLNs was spherical and individually distributed. The release study of famotidine revealed that in acid medium, 28.89% to 80.55% of famotidine could be released after 2 hours, whereas in basic medium, 40.5% to 86.88% of famotidine was released in the same period. Conclusion: The best formula was the SLNs stabilized by 4% Poloxamer 188 and 1% Span 20, which had a particle size of 114.6 nm in diameter and 77.14% of the famotidine entrapped, with spherical and individually distributed particle morphology. The SLNs with the best drug release profile were those stabilized by 4% Eudragit L 100-55 and 1% Tween 80, which released 36.34% in pH 1.2 solution and 74.13% in pH 7.4 solution after 2 hours. The optimum sonication time was 5 minutes.

Keywords: famotidine, SLN, high speed homogenization, particle size, release study

Procedia PDF Downloads 841
616 Optimal Design of a PV/Diesel Hybrid System for Decentralized Areas through Economic Criteria

Authors: David B. Tsuanyo, Didier Aussel, Yao Azoumah, Pierre Neveu

Abstract:

An innovative concept called “Flexy-Energy” is being developed at 2iE. This concept aims to produce electricity at lower cost by smartly mixing the different available energy sources in accordance with the load profile of the region. Given the high solar irradiation and the fact that Diesel generators are massively used in sub-Saharan rural areas, PV/Diesel hybrid systems could be a good application of this concept and a good solution to electrify this region, provided they are reliable, cost-effective, and economically attractive to investors. The presentation of the developed approach is the aim of this paper. The PV/Diesel hybrid system designed is intended to produce electricity and/or heat from a coupling between Diesel gensets and PV panels without battery storage, while ensuring the substitution of gasoil by bio-fuels available in the area where the system will be installed. The optimal design of this system is based on its technical performance; the life cycle cost (LCC) and levelized cost of energy (LCOE) are developed and used as economic criteria. The net present value (NPV), the internal rate of return (IRR), and the discounted payback (DPB) are also evaluated according to dual electricity pricing (in sunny and non-sunny hours). The PV/Diesel hybrid system obtained is compared to standalone Diesel gensets. The approach carried out in this paper has been applied to Siby village in Mali (latitude 12°23'N, longitude 8°20'W) with 295 kWh as the daily demand. This approach provides the optimal physical characteristics (size of the components, number of components) and the dynamic characteristics in real time (number of Diesel generators on, their load rate, fuel specific consumptions, and PV penetration rate) of the system. The system obtained is only slightly cost-effective but could be improved with optimized tariffing strategies.
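
The economic criteria named above can be sketched as follows; the discount rate, cash flows, and costs are placeholder values, not the Siby case data.

```python
# Sketch of the economic criteria used to size the hybrid system:
# NPV, discounted payback, and LCOE (all inputs are illustrative).
def npv(rate, cashflows):
    """cashflows[0] is the (negative) investment at year 0."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def discounted_payback(rate, cashflows):
    cum = 0.0
    for t, cf in enumerate(cashflows):
        cum += cf / (1 + rate) ** t
        if cum >= 0:
            return t
    return None  # never pays back within the horizon

def lcoe(rate, capex, annual_opex, annual_kwh, years):
    costs = capex + sum(annual_opex / (1 + rate) ** t for t in range(1, years + 1))
    energy = sum(annual_kwh / (1 + rate) ** t for t in range(1, years + 1))
    return costs / energy

flows = [-60_000] + [9_000] * 20          # investment, then net yearly revenue
print(npv(0.08, flows), discounted_payback(0.08, flows))
print(lcoe(0.08, 60_000, 2_500, 295 * 365, 20))  # 295 kWh/day as in the paper
```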

Keywords: investments criteria, optimization, PV hybrid, sizing, rural electrification

Procedia PDF Downloads 425
615 Applications of Artificial Intelligence (AI) in Cardiac Imaging

Authors: Angelis P. Barlampas

Abstract:

The purpose of this study is to inform the reader about the various applications of artificial intelligence (AI) in cardiac imaging. AI is growing fast, and its role is crucial in medical specialties that use large amounts of digital data, which are very difficult or even impossible to be managed by human beings, especially doctors. Artificial intelligence (AI) refers to the ability of computers to mimic human cognitive function, performing tasks such as learning, problem-solving, and autonomous decision-making based on digital data. Whereas AI describes the concept of using computers to mimic human cognitive tasks, machine learning (ML) describes the category of algorithms that enable most current applications described as AI. Some of the current applications of AI in cardiac imaging are as follows. Ultrasound: automated segmentation of cardiac chambers across five common views, to consequently quantify chamber volumes/mass, ascertain ejection fraction, and determine longitudinal strain through speckle tracking; determine the severity of mitral regurgitation (accuracy > 99% for every degree of severity); identify myocardial infarction; distinguish between athlete's heart and hypertrophic cardiomyopathy, as well as restrictive cardiomyopathy and constrictive pericarditis; predict all-cause mortality. CT: reduce radiation doses; calculate the calcium score; diagnose coronary artery disease (CAD); predict all-cause 5-year mortality; predict major cardiovascular events in patients with suspected CAD. MRI: segmentation of cardiac structures and infarct tissue; calculation of cardiac mass and function parameters; distinguishing between patients with myocardial infarction and control subjects; potentially reducing costs, since it would preclude the need for gadolinium-enhanced CMR; predicting 4-year survival in patients with pulmonary hypertension. Nuclear imaging: classify normal and abnormal myocardium in CAD; detect locations with abnormal myocardium; predict cardiac death; ML was comparable to or better than two experienced readers in predicting the need for revascularization. AI emerges as a helpful tool in cardiac imaging and for doctors who cannot manage the ever-increasing demand for examinations such as ultrasound, computed tomography, MRI, or nuclear imaging studies.

Keywords: artificial intelligence, cardiac imaging, ultrasound, MRI, CT, nuclear medicine

Procedia PDF Downloads 61
614 GPU-Based Back-Projection of Synthetic Aperture Radar (SAR) Data onto 3D Reference Voxels

Authors: Joshua Buli, David Pietrowski, Samuel Britton

Abstract:

Processing SAR data usually requires constraints in extent in the Fourier domain as well as approximations and interpolations onto a planar surface to form an exploitable image. This results in a potential loss of data, requires several interpolative techniques, and restricts visualization to two-dimensional plane imagery. The data can be interpolated into a ground plane projection, with or without terrain as a component, all to better view SAR data in an image domain comparable to what a human would view and to ease interpretation. An alternate but computationally heavy method that makes use of more of the data is the basis of this research. Pre-processing of the SAR data is completed first (matched filtering, motion compensation, etc.), the data is then range compressed, and lastly, the contribution from each pulse is determined for each specific point in space by searching the time history data for the reflectivity values for each pulse, summed over the entire collection. This results in a per-3D-point reflectivity using the entire collection domain. New advances in GPU processing have finally allowed this rapid projection of acquired SAR data onto any desired reference surface (called backprojection). Mathematically, the computations are fast and easy to implement, despite limitations in SAR phase history data size and 3D point cloud size. Backprojection processing algorithms are embarrassingly parallel, since each 3D point in the scene has the same reflectivity calculation applied for all pulses, independent of all other 3D points and pulse data under consideration. Therefore, given the simplicity of the single backprojection calculation, the work can be spread across thousands of GPU threads, allowing for an accurate reflectivity representation of a scene. Furthermore, because reflectivity values are associated with individual three-dimensional points, a plane is no longer the sole permissible mapping base; a digital elevation model or even a cloud of points (collected from any sensor capable of measuring ground topography) can be used as a basis for the backprojection technique. This technique minimizes any interpolations and modifications of the raw data, maintaining maximum data integrity. This innovative processing will allow SAR data to be rapidly brought into a common reference frame for immediate exploitation and data fusion with other three-dimensional data and representations.
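
A NumPy stand-in for the per-voxel backprojection kernel is sketched below; the geometry, wavelength, and signal are synthetic, and the per-pulse loop marks exactly the computation that the GPU spreads across one thread per voxel.

```python
# Toy backprojection of range-compressed pulses onto 3D reference voxels.
import numpy as np

rng = np.random.default_rng(3)
n_pulses, n_bins = 64, 512
r0, dr = 4000.0, 4.0                     # range of first bin and bin spacing, m
wavelength = 0.03                        # X-band carrier, m (assumed)

# Synthetic platform track and range-compressed pulse data
platform = rng.normal(scale=500, size=(n_pulses, 3)) + np.array([0.0, 0.0, 5000.0])
rc_data = (rng.normal(size=(n_pulses, n_bins))
           + 1j * rng.normal(size=(n_pulses, n_bins)))

# Reference voxels: could equally be a DEM grid or any measured point cloud
voxels = rng.uniform(-50, 50, size=(1000, 3))

image = np.zeros(len(voxels), dtype=complex)
for p in range(n_pulses):                # each voxel independent -> one GPU thread
    r = np.linalg.norm(voxels - platform[p], axis=1)
    bins = np.clip(((r - r0) / dr).astype(int), 0, n_bins - 1)
    # phase-correct the sampled return and accumulate coherently per voxel
    image += rc_data[p, bins] * np.exp(4j * np.pi * r / wavelength)
print("peak |reflectivity|:", np.abs(image).max())
```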

Keywords: backprojection, data fusion, exploitation, three-dimensional, visualization

Procedia PDF Downloads 56
613 Kinetics of Sugar Losses in Hot Water Blanching of Water Yam (Dioscorea alata)

Authors: Ayobami Solomon Popoola

Abstract:

Yam is mainly a carbohydrate food grown in most parts of the world. It can be boiled, fried, or roasted for consumption in a variety of ways. Blanching is an established heat pre-treatment given to fruits and vegetables prior to further processing such as dehydration, canning, freezing, etc. The loss of soluble solids during blanching has been a great problem, because a considerable quantity of the water-soluble nutrients is inevitably leached into the blanching water. Without blanching, the high residual levels of reducing sugars after extended storage produce a dark, bitter-tasting product because of the Maillard reactions of reducing sugars at frying temperature. Measurement and prediction of such losses are necessary for economic efficiency in production and to establish the level of effluent treatment of the blanching water. This paper aims at resolving this problem by investigating the effects of cube size and temperature on the rate of diffusional losses of reducing sugars and total sugars during hot water blanching of water-yam. The study was carried out using four temperature levels (65, 70, 80, and 90 °C) and two cube sizes (0.02 m³ and 0.03 m³) at four time intervals (5, 10, 15, and 20 min). The data obtained were fitted to Fick's non-steady-state equation, from which diffusion coefficients (Da) were obtained. The Da values were subsequently fitted to an Arrhenius plot to obtain activation energies (Ea values) for the diffusional losses. The diffusion coefficients were independent of cube size and time but highly temperature dependent. The diffusion coefficients were ≥ 1.0 × 10⁻⁹ m²s⁻¹ for reducing sugars and ≥ 5.0 × 10⁻⁹ m²s⁻¹ for total sugars. The Ea values ranged from 68.2 to 73.9 kJ mol⁻¹ for reducing sugar losses and from 7.2 to 14.3 kJ mol⁻¹ for total sugar losses. Predictive equations for estimating the amounts of reducing sugars and total sugars as functions of blanching time of water-yam at various temperatures are also presented. These equations could be valuable in process design and optimization. However, the amounts of other soluble solids that might have leached into the water along with reducing and total sugars during blanching were not investigated in this study.
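
The Arrhenius step of the kinetics workflow can be sketched as follows; the Da values below are invented placeholders of the reported order of magnitude, used only to show the linear fit that recovers Ea.

```python
# Arrhenius fit: ln(Da) = ln(D0) - Ea/(R*T), linear in 1/T.
import numpy as np

R = 8.314                                          # J mol^-1 K^-1
T = np.array([65, 70, 80, 90]) + 273.15            # blanching temperatures, K
Da = np.array([1.0e-9, 1.3e-9, 2.1e-9, 3.2e-9])    # m^2 s^-1 (placeholder values)

slope, intercept = np.polyfit(1 / T, np.log(Da), 1)
Ea = -slope * R
print(f"Ea = {Ea / 1000:.1f} kJ/mol, D0 = {np.exp(intercept):.2e} m^2/s")
```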

Keywords: blanching, kinetics, sugar losses, water yam

Procedia PDF Downloads 149
612 Early Hypothyroidism after Radiotherapy for Nasopharyngeal Carcinoma

Authors: Nejla Fourati, Zied Fessi, Fatma Dhouib, Wicem Siala, Leila Farhat, Afef Khanfir, Wafa Mnejja, Jamel Daoud

Abstract:

Purpose: The reported incidence of radiation-induced hypothyroidism in nasopharyngeal cancer (NPC) ranges from 15% to 55%. In reported data, it is considered a common late complication of definitive radiation and is mainly observed 2 years after the end of treatment. The aim of this study was to evaluate the incidence of early hypothyroidism within 6 months after radiotherapy. Patients and methods: From June 2017 to February 2020, 35 patients treated with concurrent chemo-radiotherapy (CCR) for NPC were included in this prospective study. The median age was 49 years [23-68], with a sex ratio of 2.88. All patients received intensity-modulated radiotherapy (IMRT) at a dose of 69.96 Gy in 33 daily fractions with weekly cisplatin (40 mg/m²) chemotherapy. Thyroid-stimulating hormone (TSH) and free thyroxine (FT4) assays were performed before the start of radiotherapy and 6 months after. Different dosimetric parameters for the thyroid gland were reported: the volume (cc), the mean dose (Dmean), and the percentage of volume receiving more than 45 Gy (V45Gy). The Wilcoxon test was used to compare these parameters between patients with and without hypothyroidism. Results: At baseline, 5 patients (14.3%) had hypothyroidism and were excluded from the analysis. Of the remaining 30 patients, 9 (30%) developed hypothyroidism 6 months after the end of radiotherapy. The median thyroid volume was 10.3 cc [4.6-23]. The median Dmean and V45Gy were 48.3 Gy [43.15-55.4] and 74.8 [38.2-97.9], respectively. No significant difference was noted for any of the studied parameters. Conclusion: Early hypothyroidism occurring within 6 months after CCR for NPC seems to be a common complication (30%) that should be screened for. Good patient monitoring, with regular TSH and FT4 assays, makes it possible to treat hypothyroidism in the asymptomatic phase. This would be correlated with an improvement in the quality of life of these patients. The results of our study do not show a correlation between thyroid doses and the occurrence of hypothyroidism. This is probably related to the high doses received by the thyroid in our series. These findings encourage further optimization to limit thyroid doses and thus the risk of radiation-induced hypothyroidism.

Keywords: nasopharyngeal carcinoma, hypothyroidism, early complication, thyroid dose

Procedia PDF Downloads 116
611 Analysis of the Evolution of Landscape Spatial Patterns in Banan District, Chongqing, China

Authors: Wenyang Wan

Abstract:

The study of urban land use and landscape patterns is a current hotspot in the fields of planning and design, ecology, etc., and is of great significance for the construction of the overall humanistic ecosystem of the city and the optimization of the urban spatial structure. Banan District, as the main part of the eastern eco-city planning of Chongqing Municipality, is a high ground for highlighting the ecological characteristics of Chongqing, realizing an effective transformation of ecological value, and promoting the integrated development of urban and rural areas. The analytical methods of the land use transfer matrix (GIS) and landscape pattern indices (Fragstats) were used to study the characteristics and laws of the evolution of land use landscape patterns in Banan District from 2000 to 2020, providing a reference for Banan District in alleviating its ecological landscape contradictions. The results of the study show that: (1) Banan District is rich in land use types, of which cultivated land still accounted for 57.15% of the total landscape area in 2020, an absolute advantage in the land use structure of Banan District; (2) from 2000 to 2020, land use conversion in Banan District was characterized as cropland > woodland > grassland > shrubland > built-up land > water bodies > wetlands, with the conversion of cropland to built-up land being the largest; (3) from 2000 to 2020, the landscape elements of Banan District were distributed in a balanced way, and the landscape types were rich and diversified, but under the influence of human disturbance, the shapes of the landscape elements tended to be irregular, the dominant patches were scattered, and the patches had poor connectivity. It is recommended that in future regional ecological construction, the layout should be rationally optimized, the relationships between landscape components coordinated, the connectivity between landscape patches strengthened, and the degree of landscape fragmentation reduced.
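
A land use transfer matrix of the kind used here reduces to counting per-pixel class transitions between two co-registered classification rasters; the sketch below uses tiny invented grids and class codes.

```python
# Minimal land-use transfer matrix from two co-registered rasters.
import numpy as np

classes = ["cropland", "woodland", "built-up"]
lu_2000 = np.array([[0, 0, 1], [0, 2, 1], [0, 0, 2]])   # class index per pixel
lu_2020 = np.array([[0, 2, 1], [2, 2, 1], [0, 2, 2]])

k = len(classes)
transfer = np.zeros((k, k), dtype=int)
for a, b in zip(lu_2000.ravel(), lu_2020.ravel()):
    transfer[a, b] += 1            # rows: 2000 class, cols: 2020 class

print(transfer)   # off-diagonal cells are conversions, e.g. cropland -> built-up
```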

Keywords: land use transfer, landscape pattern evolution, GIS and Fragstats, Banan district

Procedia PDF Downloads 55
610 Effect of Wettability Alteration on Production Performance in Unconventional Tight Oil Reservoirs

Authors: Rashid S. Mohammad, Shicheng Zhang, Xinzhe Zhao

Abstract:

In tight oil reservoirs, wettability alteration has generally been considered an effective way to remove fracturing fluid retention on the surface of the fracture and consequently improve oil production. However, there is a lack of a reliable productivity prediction model showing the relationship between wettability and oil production in tight oil wells. In this paper, a new oil productivity prediction model of immiscible oil-water flow and miscible CO₂-oil flow accounting for wettability is developed. This mathematical model is established by considering two different length scales: the nanoporous network and the propped fractures. CO₂ flow diffusing in the nanoporous network and high-velocity non-Darcy flow in the propped fractures are considered by taking into account the effect of wettability alteration on capillary pressure and relative permeability. A laboratory experiment is also conducted here to validate this model. Laboratory experiments were designed to compare the water saturation profiles for different contact angles, revealing the fluid retention in rock pores that affects capillary force and relative permeability. Four kinds of brines with different concentrations were selected to create different contact angles. In water-wet porous media, as the system becomes more oil-wet, water saturation decreases. As a result, oil relative permeability increases. On the other hand, capillary pressure, which is the resistance to oil flow, increases as well. The change in oil production due to wettability alteration is the result of the combined changes in oil relative permeability and capillary pressure. The results indicate that wettability is a key factor for fracturing fluid retention removal and oil enhancement in tight reservoirs. By incorporating laboratory tests into a mathematical model, this work shows that the relationship between wettability and oil production is not a simple linear pattern but a parabolic one. Additionally, it can be used for a better understanding of the optimization design of fracturing fluids.

Keywords: wettability, relative permeability, fluid retention, oil production, unconventional and tight reservoirs

Procedia PDF Downloads 225
609 D-Wave Quantum Computing Ising Model: A Case Study for Forecasting of Heat Waves

Authors: Dmytro Zubov, Francesco Volponi

Abstract:

In this paper, the D-Wave quantum computing Ising model is used for forecasting positive extremes of daily mean air temperature. Forecast models are designed with two to five qubits, which represent 2-, 3-, 4-, and 5-day historical data, respectively. The Ising model's real-valued weights and dimensionless coefficients are calculated using daily mean air temperatures from 119 places around the world, as well as sea level data (Aburatsu, Japan). In comparison with current methods, this approach is better suited to predicting heat wave values because it does not require the estimation of a probability distribution from scarce observations. The proposed quantum computing forecast algorithm is simulated on a traditional computer architecture, with combinatorial optimization of the Ising model parameters, for the Ronald Reagan Washington National Airport dataset with 1-day lead time on the learning sample (1975-2010). Analysis of the forecast accuracy (ratio of successful predictions to total number of predictions) on the validation sample (2011-2014) shows that the Ising model with three qubits has 100% accuracy, which is quite significant as compared to other methods. However, the number of identified heat waves is small (only one out of nineteen in this case). The other models, with 2, 4, and 5 qubits, have 20%, 3.8%, and 3.8% accuracy, respectively. The presented three-qubit forecast model is applied to the prediction of heat waves at five other locations: Aurel Vlaicu, Romania (accuracy 28.6%); Bratislava, Slovakia (21.7%); Brussels, Belgium (33.3%); Sofia, Bulgaria (50%); and Akhisar, Turkey (21.4%). These predictions are not ideal, but they are not zero. They can be used independently or together with other predictions generated by different methods. The loss of human life, as well as the environmental, economic, and material damage from extreme air temperatures, could be reduced if some heat waves are predicted. Even a small success rate implies a large socio-economic benefit.
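
In miniature, an Ising forecast of this kind reads a prediction off a low-energy spin configuration. In the sketch below, the biases, couplings, and read-out convention are invented placeholders for the coefficients the paper fits from temperature records, and brute-force enumeration stands in for the D-Wave sampler.

```python
# Toy three-qubit Ising model: +1 = extreme heat day, -1 = not.
from itertools import product

h = {0: -0.2, 1: 0.1, 2: -0.4}                  # per-qubit biases (illustrative)
J = {(0, 1): 0.3, (1, 2): -0.5, (0, 2): 0.1}    # pairwise couplings (illustrative)

def energy(s):
    e = sum(h[i] * s[i] for i in h)
    e += sum(J[i, j] * s[i] * s[j] for i, j in J)
    return e

# A quantum annealer samples low-energy states; here we enumerate them.
ground = min(product([-1, 1], repeat=3), key=energy)
print(ground, "-> heat wave" if ground[-1] == 1 else "-> no heat wave")
```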

Keywords: heat wave, D-wave, forecast, Ising model, quantum computing

Procedia PDF Downloads 483
608 The Untreated Burden of Parkinson’s Disease: A Patient Perspective

Authors: John Acord, Ankita Batla, Kiran Khepar, Maude Schmidt, Charlotte Allen, Russ Bradford

Abstract:

Objectives: Despite the availability of treatment options, Parkinson's disease (PD) continues to impact heavily on a patient's quality of life (QoL), as many symptoms that bother the patient remain unexplored and untreated in clinical settings. The aims of this research were to understand the burden of PD symptoms from a patient perspective, particularly those which are the most persistent and debilitating, and to determine whether current treatments and treatment algorithms adequately focus on their resolution. Methods: A 13-question, online, patient-reported survey was created based on the MDS-Unified Parkinson's Disease Rating Scale (MDS-UPDRS) and symptoms listed on Parkinson's disease patient advocacy group websites, and then validated by 10 Parkinson's patients. In the survey, patients were asked to choose both their most common and their most bothersome symptoms, whether they had received treatment for those, and, if so, whether it had been effective in resolving those symptoms. Results: The most bothersome symptoms reported by the 111 participants who completed the survey were sleep problems (61%), feeling tired (56%), slowness of movements (54%), and pain in some parts of the body (49%). However, while 86% of patients reported receiving dopamine or dopamine-like drugs to treat their PD, far fewer reported receiving targeted therapies for additional symptoms. For example, of the patients who reported having sleep problems, only 33% received some form of treatment for this symptom. This was also true for feeling tired (30% received treatment for this symptom), slowness of movements (62%), and pain in some parts of the body (61%). Additionally, 65% of patients reported that the symptoms they experienced were not adequately controlled by the treatments they received, and 9% reported that their current treatments had no effect on their symptoms whatsoever. Conclusion: The survey outcomes highlight that the majority of patients involved in the study received treatment focused on their disease; however, symptom-based treatments were less well represented. Consequently, patient-reported symptoms such as sleep problems and feeling tired tended to receive more fragmented intervention than 'classical' PD symptoms, such as slowness of movement, even though they were reported as being amongst the most bothersome symptoms for patients. This research highlights the need to explore symptom burden from the patient's perspective and to offer customised treatment/support for both motor and non-motor symptoms to maximize patients' quality of life.

Keywords: survey, patient-reported symptom burden, unmet needs, Parkinson's disease

Procedia PDF Downloads 278
607 Enhancing Sewage Sludge Management through Integrated Hydrothermal Liquefaction and Anaerobic Digestion: A Comparative Study

Authors: Harveen Kaur Tatla, Parisa Niknejad, Rajender Gupta, Bipro Ranjan Dhar, Mohd. Adana Khan

Abstract:

Sewage sludge management presents a pressing challenge in the realm of wastewater treatment, calling for sustainable and efficient solutions. This study explores the integration of Hydrothermal Liquefaction (HTL) and Anaerobic Digestion (AD) as a promising approach to address the complexities associated with sewage sludge treatment. The integration of these two processes offers a complementary and synergistic framework, allowing for the mitigation of inherent limitations, thereby enhancing overall efficiency, product quality, and the comprehensive utilization of sewage sludge. In this research, we investigate the optimal sequencing of HTL and AD within the treatment framework, aiming to discern which sequence, whether HTL followed by AD or AD followed by HTL, yields superior results. We explore a range of HTL working temperatures, including 250°C, 300°C, and 350°C, coupled with residence times of 30 and 60 minutes. To evaluate the effectiveness of each sequence, a battery of tests is conducted on the resultant products, encompassing Total Ammonia Nitrogen (TAN), Chemical Oxygen Demand (COD), and Volatile Fatty Acids (VFA). Additionally, elemental analysis is employed to determine which sequence maximizes energy recovery. Our findings illuminate the intricate dynamics of HTL and AD integration for sewage sludge management, shedding light on the temperature-residence time interplay and its impact on treatment efficiency. This study not only contributes to the optimization of sewage sludge treatment but also underscores the potential of integrated processes in sustainable waste management strategies. The insights gleaned from this research hold promise for advancing the field of wastewater treatment and resource recovery, addressing critical environmental and energy challenges.

Keywords: Anaerobic Digestion (AD), aqueous phase, energy recovery, Hydrothermal Liquefaction (HTL), sewage sludge management, sustainability

Procedia PDF Downloads 55
606 Identification of Ideal Plain Sufu (Fermented Soybean Curds) Based on Ideal Profile Method and Assessment of the Consistency of Ideal Profiles Obtained from Consumers

Authors: Yan Ping Chen, Hau Yin Chung

Abstract:

The Ideal Profile Method (IPM) is a newly developed descriptive sensory analysis method conducted by consumers without previous training. To perform this test, the perceived and the ideal intensities in consumers' judgements of products' attributes, as well as their hedonic ratings, were collected for formulating an ideal product (the most liked one). In addition, Ideal Profile Analysis (IPA) was conducted to check the consistency of the ideal data at both the panel and consumer levels. In this test, 12 commercial plain sufus bought from the Hong Kong local market were tested by 113 consumers according to the IPM and rated on 22 attributes. Principal component analysis was used to profile the perceived and the ideal spaces of the tested products. The consistency of the ideal data was then checked by IPA. The results showed that most consumers shared a common ideal. It was observed that the sensory product space and the ideal product space were structurally similar: their first dimensions both opposed products with an intense fermented-related aroma to products with a less fermented-related aroma. The predicted ideal profile (with an estimated liking score around 7.0 on a 9.0-point scale) received a higher hedonic score than the tested products (average liking score around 6.0 on a 9.0-point scale). For the majority of consumers (95.2%), the stated ideal product was considered a potential ideal based on the R² coefficient value. Among all the tested products, sample 6 was the most popular, with a consumer liking percentage of around 30%. This product, with a less fermented and moldy flavour but a texture that melts more easily in the mouth, possessed a sensory profile close to the ideal product. This experiment validated that data from untrained consumers can provide useful guidance. The appreciated sensory characteristics could serve as a reference in the optimization of commercial plain sufu.
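
The product-space step can be sketched as a PCA of the perceived-intensity matrix, with the stated ideal projected into the same plane; the matrix dimensions mirror the study (12 sufus, 22 attributes), but the values below are random stand-ins for the panel data.

```python
# PCA of perceived product profiles with the ideal projected into the same space.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(6)
perceived = rng.uniform(1, 9, size=(12, 22))    # mean panel rating per product/attribute
ideal = rng.uniform(1, 9, size=22)              # mean stated ideal profile

pca = PCA(n_components=2)
product_space = pca.fit_transform(perceived)    # 12 products in the 2D sensory map
ideal_point = pca.transform(ideal.reshape(1, -1))

print(product_space[:3])
print("ideal product location:", ideal_point)
```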

Keywords: ideal profile method, product development, sensory evaluation, sufu (fermented soybean curd)

Procedia PDF Downloads 177
605 Improvement of the Geometry of Dental Bridge Framework through Automatic Program

Authors: Rong-Yang Lai, Jia-Yu Wu, Chih-Han Chang, Yung-Chung Chen

Abstract:

The dental bridge is one of the clinical methods for the treatment of missing teeth. A dental bridge is generally designed with two layers: an inner layer, the framework (zirconia), and an outer layer of porcelain fused to the framework restorations. The design of a conventional bridge is generally based on the antagonist tooth profile, so that the framework is evenly indented by an equal thickness from the outer contour. All-ceramic dental bridges made of zirconia have demonstrated remarkable potential to withstand the higher physiological occlusal loads in the posterior region, but it has been found that there is still a risk of all-ceramic bridge failure within five years. Thus, how to reduce the incidence of failure is still a problem to be solved. The objective of this study is therefore to develop mechanical designs for the all-ceramic dental bridge framework that reduce the stress and enhance fracture resistance under given loading conditions, using the finite element method. In this study, dental design software is used to design the dental bridge based on tooth CT images. After building the model, the Bi-directional Evolutionary Structural Optimization (BESO) algorithm, implemented in finite element software, was employed to analyze the finite element results and determine the distribution of the materials in the dental bridge; BESO searches for the optimum distribution of two different materials, namely porcelain and zirconia. Based on the previously calculated stress value of each element, when the element stress value is higher than the threshold value, the element is replaced by the framework material; when the difference in the maximum stress peak value is less than 0.1%, the calculation is complete. After completing the design of the dental bridge, the stress distribution of the whole structure is changed. BESO reduces the peak values of the principal stress by 10% in the outer-layer porcelain and avoids tensile stress failure.
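
The BESO loop described above can be sketched schematically as follows; the finite element solve is mocked by a fixed stress field, and the threshold and material factor are invented for illustration.

```python
# Schematic BESO loop: over-stressed elements switch from porcelain to
# zirconia until the peak stress stabilises within the 0.1% stopping rule.
import numpy as np

rng = np.random.default_rng(7)
n_elem = 500
base = rng.gamma(2.0, 10.0, size=n_elem)        # nominal per-element stress, MPa
is_zirconia = np.zeros(n_elem, dtype=bool)      # start with porcelain everywhere

def solve_stress(mask):
    """Stand-in for the FE solve: zirconia elements carry load at lower stress."""
    return base * np.where(mask, 0.6, 1.0)

threshold = 30.0                                # switch criterion, MPa (invented)
prev_peak = np.inf
while True:
    stress = solve_stress(is_zirconia)
    is_zirconia |= stress > threshold           # over-stressed -> framework material
    peak = stress.max()
    if abs(prev_peak - peak) / peak < 1e-3:     # the paper's 0.1% convergence rule
        break
    prev_peak = peak
print(f"zirconia fraction: {is_zirconia.mean():.2f}, peak stress: {peak:.1f}")
```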

Keywords: dental bridge, finite element analysis, framework, automatic program

Procedia PDF Downloads 269
604 Development of a Feedback Control System for a Lab-Scale Biomass Combustion System Using Programmable Logic Controller

Authors: Samuel O. Alamu, Seong W. Lee, Blaise Kalmia, Marc J. Louise Caballes, Xuejun Qian

Abstract:

The application of combustion technologies for the thermal conversion of biomass and solid wastes to energy has long been a major solution for the effective handling of wastes. Lab-scale biomass combustion systems have been observed to be economically viable and socially acceptable, but major concerns are the environmental impacts of the process and deviations of the temperature distribution within the combustion chamber. Both high and low combustion chamber temperatures may affect the overall combustion efficiency and gaseous emissions. Therefore, there is an urgent need to develop a control system that measures the deviations of chamber temperature from set target values, sends these deviations (which generate disturbances in the system) in the form of a feedback signal (as input), and controls operating conditions to correct the errors. In this research study, the major components of the feedback control system were determined, assembled, and tested. In addition, control algorithms were developed to actuate operating conditions (e.g., air velocity, fuel feeding rate) using ladder logic functions embedded in the Programmable Logic Controller (PLC). The developed control algorithm, having chamber temperature as a feedback signal, is integrated into the lab-scale swirling fluidized bed combustor (SFBC) to investigate the temperature distribution at different heights of the combustion chamber under various operating conditions. The air blower rates and the fuel feeding rates obtained from automatic control operations were correlated with manual inputs. There was no observable difference in the correlated results, indicating that the written PLC program functions were adequate for the experimental study of the lab-scale SFBC. The experimental results were analyzed to study the effect of air velocity, operating at 222-273 ft/min, and fuel feeding rate, at 60-90 rpm, on the chamber temperature. The developed temperature-based feedback control system was shown to be adequate for controlling the airflow and the fuel feeding rate for the overall biomass combustion process, as it helps to minimize the steady-state error.
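
The feedback logic that the ladder-logic program implements can be sketched in Python as below; the setpoint, proportional gains, and sign conventions are illustrative assumptions, while the actuator ranges mirror the 222-273 ft/min and 60-90 rpm windows studied.

```python
# One scan cycle of a proportional temperature feedback loop (illustrative).
def clamp(x, lo, hi):
    return max(lo, min(hi, x))

def control_step(chamber_temp, setpoint=850.0, kp_air=0.8, kp_fuel=0.15):
    error = setpoint - chamber_temp                          # feedback: deviation
    # Too hot (error < 0): push more cooling air, cut fuel; too cold: the opposite.
    air_fpm = clamp(247.0 - kp_air * error, 222.0, 273.0)    # blower, ft/min
    feeder_rpm = clamp(75.0 + kp_fuel * error, 60.0, 90.0)   # fuel feeder, rpm
    return air_fpm, feeder_rpm

print(control_step(820.0))   # below setpoint -> feeder speeds up within 60-90 rpm
```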

Keywords: air flow, biomass combustion, feedback control signal, fuel feeding, ladder logic, programmable logic controller, temperature

Procedia PDF Downloads 116
603 Semi-Pilot Biooxidation of Refractory Sulfide-Gold Ore Using Ferroplasma acidophilum: D-(+)-Sucrose as a Booster and Column Tests

Authors: Mohammad Hossein Karimi Darvanjooghi, Sara Magdouli, Satinder Kaur Brar

Abstract:

It has been reported that microorganisms' attachment to the surfaces of ore samples is a key factor influencing biooxidation in pretreatment for the recovery of gold in sulfide-bearing ores. In this research, the effect of D-(+)-Sucrose on the biooxidation of ore samples was studied in a semi-pilot experiment. The experiments were carried out in five separate jacketed columns (1 m height and 6 cm diameter) at a constant temperature of 37.5 °C and saturated humidity. The air flow rate and recycling solution flow rate were studied, and the optimum operating conditions are reported. The ore sample (0.49 ppm gold grade) was obtained from the Hammond Reef mine site and contained 15 wt.% pyrite, which included 98% of the gold according to micrograph images. The experiments were continued for up to 100 days, with air flow rates of 0.5, 1, 1.5, 2, and 3 L/min and recycling solution (containing 9K media and 0.4 wt.% D-(+)-Sucrose) flow rates of 5, 8, and 15 mL/hr. The results indicated that the addition of D-(+)-Sucrose increased bacterial activity, due to the overproduction of extracellular polymeric substance (EPS), by up to 95%, and for the condition in which the recycling solution and air flow rates were 8 mL/hr and 2 L/min, respectively, a maximum pyrite dissolution of 76% was obtained after 60 days. The results indicated that for air flow rates of 0.5, 1, 1.5, 2, and 3 L/min, the ratios of daily pyrite dissolution per daily solution lost were 0.025, 0.033, 0.031, 0.043, and 0.009 %-pyrite dissolution/mL-lost, respectively. The use of this microorganism and the addition of D-(+)-Sucrose will enhance the efficiency of gold recovery through a faster biooxidation process and lead to a decrease in the time and energy of operation toward the desired target; however, other parameters, including particle size distribution, agglomeration, aeration design, and the chemistry of the recycling solution, still need to be controlled and monitored to reach the optimum condition.

Keywords: column tests, biooxidation, gold recovery, Ferroplasma acidophilum, optimization

Procedia PDF Downloads 54
602 Control Strategy for a Solar Vehicle Race

Authors: Francois Defay, Martim Calao, Jean Francois Dassieu, Laurent Salvetat

Abstract:

Electric vehicles are a solution for reducing pollution using green energy. The Shell Eco-marathon provides rules designed to minimize battery use during the race. The use of a solar panel combined with efficient motor control and race strategy allows a 60 kg vehicle with one pilot to be driven using only solar energy in the best case. This paper presents a complete model of a solar vehicle used for the Shell Eco-marathon. The project, called Helios, is a cooperation between undergraduate students, academic institutes, and industrial partners. The prototype is an ultra-energy-efficient vehicle based on a one-square-meter solar panel and a custom-built brushless motor controller to optimize the electrical part. The vehicle is equipped with sensors and an embedded system to provide all the data in real time in order to evaluate the best strategy for the course. A complete model in Matlab/Simulink is used to test the optimal strategy for increasing the overall endurance. Experimental results are presented to validate the different parts of the model: mechanical, aerodynamic, electrical, and solar panel. The major finding of this study is to provide solutions for identifying the model parameters (rolling resistance coefficient, drag coefficient, motor torque coefficient, etc.) by means of experimental results combined with identification techniques. Once the coefficients are validated, the strategy to optimize the consumption and the average speed can be tested first in simulation before being implemented for the race. The paper describes all the simulation and experimental parts and provides results in order to optimize the global efficiency of the vehicle. This work was started four years ago, has involved many students in the experimental and theoretical parts, and has increased knowledge of energy-efficient electric vehicles.
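
The parameter-identification step can be sketched as a least-squares fit of a coast-down model; the mass, frontal area, and synthetic measurements below are placeholders for the actual Helios telemetry.

```python
# Identifying rolling-resistance (Crr) and drag (Cd) coefficients from
# coast-down deceleration data via nonlinear least squares.
import numpy as np
from scipy.optimize import curve_fit

m, g, rho, A = 60.0, 9.81, 1.2, 0.35     # mass kg, air density, frontal area m^2

def decel(v, crr, cd):
    """Coast-down deceleration: rolling resistance + aerodynamic drag."""
    return (crr * m * g + 0.5 * rho * cd * A * v**2) / m

v = np.linspace(3, 12, 30)               # measured speeds, m/s (synthetic)
a_meas = decel(v, 0.004, 0.25) + np.random.default_rng(8).normal(0, 0.005, 30)

(crr_fit, cd_fit), _ = curve_fit(decel, v, a_meas, p0=[0.01, 0.3])
print(f"Crr = {crr_fit:.4f}, Cd = {cd_fit:.3f}")
```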

Keywords: electrical vehicle, endurance, optimization, shell eco-marathon

Procedia PDF Downloads 244
601 Improvement in Drying Characteristics of Raisin by Carbonic Maceration: Process Optimization

Authors: Nursac Akyol, Merve S. Turan, Mustafa Ozcelik, Erdogan Kucukoner, Erkan Karacabey

Abstract:

Traditional raisin production is a long drying process under sunlight. During this procedure, grapes are exposed to environmental effects in addition to the adverse effects of the long drying period. Thus, there is a need to develop an alternative method applicable in place of the traditional one. To this end, the combination of a potential pretreatment (carbonic maceration, CM) with conventional oven drying was examined. CM was used in raisin production (grape drying) as a pretreatment process before oven drying. Pressure, temperature, and time were examined as CM application parameters. In conventional oven drying, temperature is the process variable. The aim was to find out how the CM and conventional drying processes affect the drying characteristics of grapes as well as their physical and chemical properties. For this purpose, the response surface method was used to determine both the effects of the variables and the optimum pretreatment and drying conditions. The optimum CM conditions for raisin production were a pressure of 0.3 MPa, an application temperature of 4 °C, and an application time of 8 hours. The optimized drying temperature was 77 °C. The results showed that applying CM before the drying process improved the drying characteristics. Drying took only 389 minutes for grapes pretreated by CM under optimum conditions, against 495 minutes for the control group dried only by the conventional drying process. According to these results, a decrease of 21% was achieved in the time required for raisin production. It was also observed that the samples dried under optimum conditions had physical properties similar to those of the control group. The raisins dried under optimum conditions were also in better condition in terms of some of the bioactive contents compared to the control groups. In light of all these results, CM shows important potential for the industrial drying of grape samples. The current study was financially supported by TUBITAK, Turkey (Project no: 116R038).
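
The response-surface step amounts to fitting a second-order model in the coded CM factors (pressure, temperature, time) and reading off the optimum; the design and response values below are invented for illustration.

```python
# Fitting a second-order response surface: y = b0 + linear + interaction
# + quadratic terms in three coded factors, by ordinary least squares.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(9)
# Coded levels (-1..1) for pressure, temperature, time; response = drying minutes
X = rng.uniform(-1, 1, size=(20, 3))
y = 450 - 40 * X[:, 0] - 25 * X[:, 1] + 30 * X[:, 0]**2 + rng.normal(0, 5, 20)

def design_matrix(X):
    cols = [np.ones(len(X))] + [X[:, i] for i in range(3)]
    cols += [X[:, i] * X[:, j] for i, j in combinations(range(3), 2)]
    cols += [X[:, i]**2 for i in range(3)]
    return np.column_stack(cols)

beta, *_ = np.linalg.lstsq(design_matrix(X), y, rcond=None)
print("fitted quadratic coefficients:", np.round(beta, 1))
```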

Keywords: drying time, pretreatment, response surface methodology, total phenolic

Procedia PDF Downloads 115
600 Experimental Analyses of Thermoelectric Generator Behavior Using Two Types of Thermoelectric Modules for Marine Application

Authors: A. Nour Eddine, D. Chalet, L. Aixala, P. Chessé, X. Faure, N. Hatat

Abstract:

Thermal power technologies such as the TEG (thermoelectric generator) arouse significant attention worldwide for waste heat recovery. Despite the potential benefits of marine application, thanks to the permanent heat sink provided by sea water, no significant studies on this application were to be found. In this study, a test rig was designed and built to test the performance of the TEG at engine operating points. The TEG device is built from commercially available materials for the sake of possible economical application. Two types of commercial TEM (thermoelectric module) were studied separately on the test rig. The engine data were extracted from a commercial Diesel engine, since it shares the same principles in terms of engine efficiency and exhaust as the marine Diesel engine. An open-circuit water cooling system was used to replicate the sea water cold source. The characterization tests showed that the silicon-germanium alloy TEMs proved remarkably reliable at all engine operating points, with no significant deterioration of performance even under severe variation in the hot source conditions. The performance of the bismuth-telluride alloys was 100% better than that of the first type of TEM, but they showed a deterioration in power generation when the air temperature exceeded 300 °C. The temperature distribution on the heat exchange surfaces revealed no useful combination of these two types of TEM at this tube length, since the surface temperature difference between both ends is no more than 10 °C. This study explored the prospects of using TEG technology for marine engine exhaust heat recovery. Although the results suggested insufficient power generation from the low-cost commercial TEMs used, the study provides valuable information for TEG device optimization, including the design of the heat exchanger and the choice of thermoelectric materials.

Keywords: internal combustion engine application, Seebeck, thermo-electricity, waste heat recovery

Procedia PDF Downloads 227
599 Optimizing PharmD Education: Quantifying Curriculum Complexity to Address Student Burnout and Cognitive Overload

Authors: Frank Fan

Abstract:

PharmD (Doctor of Pharmacy) education has confronted an increasing challenge: curricular overload, a phenomenon resulting from the expansion of curricular requirements as PharmD education strives to produce practice-ready graduates. The aftermath of the global pandemic has amplified the need for healthcare professionals, leading to a growing trend of assigning more responsibilities to them to address the global healthcare shortage. For instance, the pharmacist's role has expanded to include not only compounding and dispensing medication but also providing clinical services, including minor ailment management, patient counselling, and vaccination. Consequently, PharmD programs have responded by continually expanding their curricula and adding more requirements. While these changes aim to enhance the education and training of future professionals, they have also led to unintended consequences, including curricular overload, student burnout, and a potential decrease in program quality. To address this issue and ensure program quality, there is a growing need for evidence-based curriculum reform. This research integrates Cognitive Load Theory, emerging machine-learning algorithms within artificial intelligence (AI), and statistical approaches to develop a quantitative framework for optimizing curriculum design within the PharmD program at the University of Toronto, the largest PharmD program in Canada, providing quantification and measurement of issues that are currently discussed anecdotally rather than with data. The framework will serve as a guide for curriculum planners, administrators, and educators, aiding in the comprehension of how the pharmacy degree program compares to others within and beyond the field of pharmacy, and shedding light on opportunities to reduce the curricular load while maintaining quality and rigor. Given that pharmacists constitute the third-largest healthcare workforce, their education shares similarities and challenges with other health education programs; this evidence-based, data-driven curriculum analysis framework therefore holds significant potential for training programs in other healthcare professions, including medicine, nursing, and physiotherapy.
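
One way such a framework might quantify curricular redundancy, sketched below under stated assumptions, is to vectorize course descriptions with TF-IDF and score pairwise content overlap. The course codes, descriptions, and the choice of cosine similarity are hypothetical illustrations, not the study's actual pipeline.

```python
# Illustrative sketch (not the study's actual method): quantify content
# overlap between course descriptions with TF-IDF and cosine similarity,
# one possible machine-learning proxy for curricular redundancy/overload.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

courses = {  # hypothetical course codes and descriptions
    "PHM101": "pharmacokinetics drug absorption distribution metabolism",
    "PHM202": "clinical pharmacokinetics dosing renal hepatic metabolism",
    "PHM305": "patient counselling communication minor ailments vaccination",
}
names = list(courses)
tfidf = TfidfVectorizer().fit_transform(courses.values())
sim = cosine_similarity(tfidf)

# High pairwise scores would flag candidate courses for consolidation
for i in range(len(names)):
    for j in range(i + 1, len(names)):
        print(f"{names[i]} vs {names[j]}: overlap = {sim[i, j]:.2f}")
```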

Keywords: curriculum, curriculum analysis, health professions education, reflective writing, machine learning

Procedia PDF Downloads 47
598 Optimization of Acid Treatments by Assessing Diversion Strategies in Carbonate and Sandstone Formations

Authors: Ragi Poyyara, Vijaya Patnana, Mohammed Alam

Abstract:

When acid is pumped into damaged reservoirs for damage removal/stimulation, the acid flows unevenly into the formation because it preferentially travels into highly permeable regions rather than low-permeability regions, or, in general, along the path of least resistance. This can lead to poor zonal coverage and hence warrants diversion for effective acid placement. Diversion is, ideally, a reversible technique that temporarily reduces the permeability of high-permeability zones, thereby forcing the acid into lower-permeability zones. The uniqueness of each reservoir can pose several challenges to engineers attempting to devise optimum and effective diversion strategies. Diversion techniques include mechanical placement and/or chemical diversion of treatment fluids, sub-classified into ball sealers, bridge plugs, packers, particulate diverters, viscous gels, crosslinked gels, relative permeability modifiers (RPMs), and foams, and/or the use of placement techniques such as coiled tubing (CT) and the maximum pressure difference and injection rate (MAPDIR) methodology. It is not always appreciated that the effectiveness of diverters depends greatly on reservoir properties, such as formation type, temperature, reservoir permeability, and heterogeneity, and on physical well characteristics (e.g., completion type, well deviation, length of treatment interval, multiple intervals, etc.). This paper reviews the mechanisms by which each variety of diverter functions and discusses the effect of various reservoir properties on the efficiency of diversion techniques. Guidelines are recommended to help enhance productivity from zones of interest by choosing the best methods of diversion while pumping an optimized amount of treatment fluid. The success of an overall acid treatment often depends on the effectiveness of the diverting agents.

Keywords: diversion, reservoir, zonal coverage, carbonate, sandstone

Procedia PDF Downloads 410
597 Biorefinery Annexed to South African Sugar Mill: Energy Sufficiency Analysis

Authors: S. Farzad, M. Ali Mandegari, J. F. Görgens

Abstract:

The South African sugar industry, which has a significant impact on the national economy, is currently facing problems due to increasing energy prices and low global sugar prices. The available bagasse is already combusted in the low-efficiency boilers of the sugar mills, although bagasse is generally recognized as a promising feedstock for second-generation bioethanol production. The establishment of a biorefinery annexed to an existing sugar mill, producing biofuel and electricity, has been proposed and considered in this study as an alternative for the revitalization of the sugar industry. Since scale is an important issue in the feasibility of the technology, this study considers a typical sugar mill with a capacity of 300 tons of sugarcane per hour. The biorefinery simulation was carried out using Aspen Plus™ V8.6, in which the sugar mill's power and steam demands were included. Sugar mills in South Africa can be categorized as highly efficient, efficient, and inefficient, with steam consumption of 33, 40, and 60 tons of steam per 100 tons of cane, respectively, and an electric power demand of 10 MW; three corresponding scenarios were studied. The sugarcane bagasse and tops/trash are supplied to the biorefinery process, and the wastes/residues (mostly lignin) from the process are burnt in the CHP plant to produce steam and electricity for both the biorefinery and the sugar mill. For the efficient sugar mill, the CHP plant generated a 5 MW surplus of electric power, but the plant (biorefinery and sugar mill) was not energy self-sufficient owing to a 34 MW heat deficit. One of the advantages of a second-generation biorefinery is its low environmental impact and carbon footprint, so the plant should be self-sufficient in energy without using fossil fuels. For this reason, a portion of fresh bagasse must be sent to the CHP plant to meet the energy requirements. An optimization procedure was carried out to find the appropriate portion to be burnt in the combustor. As a result, 20% of the bagasse is re-routed to the combustor, which leads to surpluses of 5 tons of LP steam and 8.6 MW of electric power.
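
The bagasse split can be illustrated with a back-of-the-envelope steady-state energy balance, sketched below. The bagasse flow, heating value, and boiler efficiency are assumed placeholder values; the study's roughly 20% figure comes from the detailed Aspen Plus model, not from this simplification.

```python
# Simplified steady-state energy balance for sizing the bagasse fraction
# sent to the CHP combustor. All inputs are illustrative assumptions.
BAGASSE_FLOW_KG_S = 25.0      # assumed fresh bagasse availability (kg/s)
LHV_MJ_KG = 7.5               # assumed lower heating value of wet bagasse
BOILER_EFFICIENCY = 0.80      # assumed combined boiler/steam efficiency
HEAT_DEFICIT_MW = 34.0        # heat shortfall reported in the abstract

def required_fraction(deficit_mw: float) -> float:
    """Fraction of fresh bagasse to burn so CHP heat covers the deficit."""
    # Heat release if all bagasse were burnt (MJ/s = MW)
    full_burn_mw = BAGASSE_FLOW_KG_S * LHV_MJ_KG * BOILER_EFFICIENCY
    return deficit_mw / full_burn_mw

f = required_fraction(HEAT_DEFICIT_MW)
print(f"Bagasse fraction to combustor: {f:.0%}")  # ~23% with these inputs
```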

Keywords: biorefinery, sugarcane bagasse, sugar mill, energy analysis, bioethanol

Procedia PDF Downloads 462
596 Extraction, Recovery and Bioactivities of Chlorogenic Acid from Unripe Green Coffee Cherry Waste of Coffee Processing Industry

Authors: Akkasit Jongjareonrak, Supansa Namchaiya

Abstract:

Unripe green coffee cherry (UGCC) accounts for about 5% of the total raw-material weight received by the coffee bean production process and is, in general, sorted out and dumped as waste. UGCC is known to be rich in phenolic compounds such as caffeoylquinic acids, feruloylquinic acids, chlorogenic acid (CGA), etc. CGA is one of the potent bioactive compounds used in the nutraceutical and functional food industry. Therefore, this study aimed to optimize the extraction conditions for CGA from UGCC using an Accelerated Solvent Extractor (ASE). An ethanol/water mixture at various ethanol concentrations (50, 60, and 70% (v/v)) was used as the extraction solvent at elevated pressure (10.34 MPa) and temperatures (90, 120, and 150 °C). The recovery yield of UGCC crude extract, total phenolic content, CGA content, and some bioactivities of the UGCC extract were investigated. Using ASE at a lower temperature with a higher ethanol concentration provided a higher CGA content in the UGCC crude extract; the maximum CGA content was observed at 70% ethanol and 90 °C. Further purification of the UGCC crude extract gave a higher purity of CGA, with a purified CGA yield of 4.28% (w/w, of dried UGCC sample) containing 72.52% CGA equivalent. The antioxidant and antimicrobial activities of the purified CGA extract were determined. The purified CGA exhibited 2,2-diphenyl-1-picrylhydrazyl (DPPH) radical scavenging activity of 0.88 mg Trolox equivalent/mg purified CGA sample. Antibacterial activity against Escherichia coli was observed, with a minimum inhibitory concentration (MIC) of 3.12 mg/ml and a minimum bactericidal concentration (MBC) of 12.5 mg/ml. These results suggest that a high ethanol concentration and a low temperature under the elevated pressure of the ASE accelerate the extraction of CGA from UGCC. The purified CGA extract could be a promising alternative source of bioactive compounds for the nutraceutical and functional food industry.
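
A minimal sketch of scanning such a two-factor design for the CGA-maximizing condition follows. All yield values are hypothetical placeholders, consistent only with the reported trend (more ethanol and lower temperature give more CGA).

```python
# Sketch of scanning the 3 x 3 factorial design (ethanol % x temperature)
# for the condition giving maximum CGA content. Yields are hypothetical.
import itertools

ethanol_pct = [50, 60, 70]
temp_c = [90, 120, 150]
cga_mg_g = {  # hypothetical CGA contents (mg/g dry UGCC)
    (50, 90): 28.1, (50, 120): 25.4, (50, 150): 20.2,
    (60, 90): 33.6, (60, 120): 29.8, (60, 150): 23.5,
    (70, 90): 38.9, (70, 120): 32.7, (70, 150): 26.0,
}

# Pick the (ethanol, temperature) pair with the highest CGA content
best = max(itertools.product(ethanol_pct, temp_c), key=cga_mg_g.get)
print(f"Best condition: {best[0]}% ethanol at {best[1]} C "
      f"({cga_mg_g[best]} mg/g)")
```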

Keywords: bioactive, chlorogenic acid, coffee, extraction

Procedia PDF Downloads 244
595 Development of a Data-Driven Method for Diagnosing the State of Health of Battery Cells, Based on the Use of an Electrochemical Aging Model, with a View to Their Use in Second Life

Authors: Desplanches Maxime

Abstract:

Accurate estimation of the remaining useful life of lithium-ion batteries for electronic devices is crucial. Data-driven methodologies encounter challenges related to data volume and acquisition protocols, particularly in capturing a comprehensive range of aging indicators. To address these limitations, we propose a hybrid approach that integrates an electrochemical model with state-of-the-art data analysis techniques, yielding a comprehensive database. Our methodology involves infusing an aging phenomenon into a Newman model, leading to the creation of an extensive database capturing various aging states based on non-destructive parameters. This database serves as a robust foundation for subsequent analysis. Leveraging advanced data analysis techniques, notably principal component analysis and t-Distributed Stochastic Neighbor Embedding, we extract pivotal information from the data. This information is harnessed to construct a regression function using either random forest or support vector machine algorithms. The resulting predictor demonstrates a 5% error margin in estimating remaining battery life, providing actionable insights for optimizing usage. Furthermore, the database was built from the Newman model calibrated for aging and performance using data from a European project called Teesmat. The model was then initialized numerous times with different aging values, for instance, with varying thicknesses of SEI (Solid Electrolyte Interphase). This comprehensive approach ensures a thorough exploration of battery aging dynamics, enhancing the accuracy and reliability of our predictive model. Of particular importance is our reliance on the database generated through the integration of the electrochemical model. This database serves as a crucial asset in advancing our understanding of aging states. Beyond its capability for precise remaining life predictions, this database-driven approach offers valuable insights for optimizing battery usage and adapting the predictor to various scenarios. This underscores the practical significance of our method in facilitating better decision-making regarding lithium-ion battery management.
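
A compact sketch of the described pipeline, dimensionality reduction followed by ensemble regression, is given below on synthetic stand-in data. The feature set, the SEI-thickness range, and the remaining-life relationship are assumptions for illustration, not outputs of the calibrated Newman model.

```python
# Minimal sketch of the described approach: reduce simulated aging features
# with PCA, then regress remaining useful life with a random forest.
# Data here are synthetic stand-ins for the Newman-model database.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

rng = np.random.default_rng(0)
n = 500
sei_thickness = rng.uniform(5e-9, 5e-8, n)   # SEI thickness (m), assumed range
other_features = rng.normal(size=(n, 8))     # other non-destructive indicators
X = np.column_stack([sei_thickness, other_features])
# Synthetic "remaining life" (cycles) decreasing with SEI growth, plus noise
y = 500 + 3000 * (5e-8 - sei_thickness) / 4.5e-8 + rng.normal(0, 30, n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
pca = PCA(n_components=5).fit(X_tr)          # extract pivotal components
rf = RandomForestRegressor(n_estimators=300, random_state=0)
rf.fit(pca.transform(X_tr), y_tr)

err = mean_absolute_percentage_error(y_te, rf.predict(pca.transform(X_te)))
print(f"Mean absolute percentage error: {err:.1%}")
```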

Keywords: Li-ion battery, aging, diagnostics, data analysis, prediction, machine learning, electrochemical model, regression

Procedia PDF Downloads 54
594 Aerodynamic Design and Optimization of Vertical Take-Off and Landing Type Unmanned Aerial Vehicles

Authors: Enes Gunaltili, Burak Dam

Abstract:

Aviation history began with the Wright brothers' aircraft and has advanced steadily since. These advancements have allowed large aircraft to be complemented by small, unmanned air vehicles, which are the subject of this study. Aircraft today are broadly divided into two categories: rotary-wing and fixed-wing. Fixed-wing aircraft are generally used for transport, cargo, and military applications. Rotary-wing aircraft serve similar roles, but each type has its own advantages: rotary-wing aircraft can take off vertically and operate in confined areas, whereas they generally achieve shorter range than fixed-wing aircraft. One class of aircraft combines the characteristics of both types: the VTOL (vertical take-off and landing) aircraft. VTOLs are able to take off and land vertically and fly horizontally. VTOL aircraft generally achieve longer range than rotary-wing aircraft but shorter range than fixed-wing aircraft, offering a useful compromise between the two, along with many other advantages over both. Because of this, VTOLs are increasingly used in military, cargo, search and rescue, and mapping applications. Within this framework, this study addresses how a VTOL can be designed as a small unmanned aircraft system for search and rescue applications that exploits the advantages of fixed-wing and rotary-wing aircraft while eliminating their disadvantages. To answer this question, multidisciplinary design optimization (MDO), theoretical formulations, and simulations and modelling based on CFD (computational fluid dynamics) are used together as the design methodology to determine the design parameters and steps. In conclusion, based on tests and simulations at each design step, recommendations on how the VTOL aircraft should be designed, together with its advantages, disadvantages, and observations on the design parameters, are presented, and the resulting VTOL design is described along with its design parameters, advantages, and application areas.
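
To make the rotary/fixed-wing trade-off concrete, the sketch below compares hover power from actuator-disk (momentum) theory with fixed-wing cruise power for a small VTOL UAV. Every vehicle parameter is an illustrative assumption, not a design value from the study.

```python
# First-order sizing sketch for a small VTOL UAV: hover power from actuator
# disk (momentum) theory versus fixed-wing cruise power. All vehicle numbers
# are assumed for illustration.
import math

RHO = 1.225                  # sea-level air density (kg/m^3)
G = 9.81                     # gravitational acceleration (m/s^2)
MASS_KG = 8.0                # assumed take-off mass
ROTOR_RADIUS_M = 0.20        # assumed radius of each of 4 lift rotors
N_ROTORS = 4
L_OVER_D = 10.0              # assumed fixed-wing lift-to-drag ratio
CRUISE_SPEED_MS = 18.0       # assumed cruise speed
FOM = 0.7                    # assumed rotor figure of merit

weight = MASS_KG * G
disk_area = N_ROTORS * math.pi * ROTOR_RADIUS_M ** 2
# Ideal hover power from momentum theory, corrected by the figure of merit
p_hover = weight ** 1.5 / math.sqrt(2 * RHO * disk_area) / FOM
# Cruise power on the wing: required thrust ~= weight / (L/D)
p_cruise = weight / L_OVER_D * CRUISE_SPEED_MS

print(f"Hover power : {p_hover:7.0f} W")
print(f"Cruise power: {p_cruise:7.0f} W  (the fixed-wing advantage)")
```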

Keywords: airplane, rotary, fixed, VTOL, CFD

Procedia PDF Downloads 269