Search results for: full energy peak efficiency
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 16083

543 Digital Transformation in Fashion System Design: Tools and Opportunities

Authors: Margherita Tufarelli, Leonardo Giliberti, Elena Pucci

Abstract:

The fashion industry's interest in virtuality is linked, on the one hand, to the emotional and immersive possibilities of digital resources and the resulting languages and, on the other, to the greater efficiency that can be achieved throughout the value chain. The interaction between digital innovation and deep-rooted manufacturing traditions today translates into a paradigm shift for the entire fashion industry where, for example, the traditional values of industrial secrecy and know-how give way to experimentation in an open as well as participatory way, and the complete emancipation of virtual reality from actual 'reality'. The contribution aims to investigate the theme of digitisation in the Italian fashion industry, analysing its opportunities and the criticalities that have hindered its diffusion. There are two reasons why the most common approach in the fashion sector is still analogue: (i) the fashion product lives in close contact with the human body, so the sensory perception of materials plays a central role in both the use and the design of the product, but current technology is not able to restore the sense of touch; (ii) volumes are obtained by stitching flat surfaces that once assembled, given the flexibility of the material, can assume almost infinite configurations. Managing the fit and styling of virtual garments involves a wide range of factors, including mechanical simulation, collision detection, and user interface techniques for garment creation. After briefly reviewing some of the salient historical milestones in the resolution of problems related to the digital simulation of deformable materials and the user interface for the procedures for the realisation of the clothing system, the paper will describe the operation and possibilities offered today by the latest generation of specialised software. 
These include parametric avatars and a digital sartorial approach; drawing tools optimised for pattern making; materials modelled both for simulated physical behaviour and for aesthetic performance; tools for checking wearability; renderings; and tools and procedures useful to companies both for dialogue with prototyping software and machinery and for managing the archive and the variants to be made. The article demonstrates how developments in technology and digital procedures now make it possible to intervene at different stages of design in the fashion industry, in an integrated and additive process in which the constructed 3D models are usable both in the prototyping and communication of physical products and in exclusively digital uses within the new generation of virtual spaces. Mastering such tools requires the acquisition of specific digital skills alongside traditional skills for the design of the clothing system, but the benefits are manifold and applicable to different business dimensions. We are only at the beginning of the global digital transformation: the emergence of new professional figures and design dynamics leaves room for imagination, but in addition to applying digital tools to traditional procedures, traditional fashion know-how needs to be transferred into emerging digital practices to ensure the continuity of the technical-cultural heritage beyond the transformation.
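The mechanical simulation underlying virtual garments is, at its core, a mass-spring system integrated over time. A minimal sketch of that building block follows: one particle hanging from a fixed anchor by a spring, using the semi-implicit Euler scheme common in cloth solvers. All parameter values are illustrative and not drawn from any specific garment package.

```python
def simulate_hanging_spring(mass=0.01, k=50.0, rest_len=0.1,
                            damping=0.5, g=9.81, dt=1e-3, steps=20000):
    """One particle hanging from a fixed anchor (y = 0) by a spring:
    the elementary building block of mass-spring cloth solvers.
    Integrated with semi-implicit Euler for stability."""
    y, v = -rest_len, 0.0            # start at the spring's rest length
    for _ in range(steps):
        stretch = -y - rest_len      # positive when the spring is extended
        force = k * stretch - mass * g - damping * v
        v += (force / mass) * dt     # update velocity first (semi-implicit)
        y += v * dt                  # then position with the new velocity
    return y                         # settles at -(rest_len + m*g/k)
```

A full garment solver repeats this step over thousands of coupled particles and adds collision detection against the avatar, but the equilibrium logic per spring is the same.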

Keywords: digital fashion, digital technology and couture, digital fashion communication, 3D garment simulation

Procedia PDF Downloads 74
542 Migration as a Trigger Causing Change to Levant Literary Modernism

Authors: Aathira Peedikaparambil Somasundaran

Abstract:

The beginning of the 20th century marked the period when a new generation of Lebanese radicals sowed the seeds for the second phase of Levant literary modernism. During this era, Beirut fit every radical writer's criteria owing to its weakened censorship and political control, even though the prevalent political unsettlement meant it offered no protective womb for the development of literary modernism. The period examined in this paper coincides with the third stage of literary modernization, in which scholars applied Western-inspired critical techniques to better understand their own cultures; this internationally inspired critical analysis of native cultural stimulants raised questions among Arab freethinking intellectuals. Locals who ventured outside recognised the difference between the West's progress and their own nations' stagnation. Awareness of this 'gap of success' aroused ambition among journalists, authors, and proletarian revolutionaries who had studied in Europe and developed enlightened ideas. Some Middle Eastern authors and artists adopted current social and political frameworks only after discovering Western modernity. After learning about the upheavals happening in the West, these thinkers aspired to bring about equally drastic developments in their own countries' social, political, and cultural milieu. These occurrences illustrate the power of migration to alter the cultural and literary scene in the Levant. The paper intends to discuss the different effects of migration that contributed to Levant literary modernism.
The exploration of these factors as causes begins by addressing the politically influenced activism that has always been a relevant part of Beirut, and then delves into the psychological effects of migration on the individuals of the society, which might have induced an accommodability to alien thoughts and ideas over time as a coping mechanism. Nature and environmental stimuli, a common trigger for any creative output and often at their most influential during travel, will be identified and analysed to inspect the extent of their impact on the exchange of ideas that resulted in Levant modernism. The efficiency of both the stimulating component of travel and the diaspora of the indigenous, a by-product of travel, in catalysing modernism in the Levant has to be proven in order to understand how migration indirectly affected the transmission and adoption of ideas in Levant literature. The paper will revisit the events revolving around these key players and platforms like Shir to understand how Lebanese literature, tied down in poetry, drastically mutated under the leadership of Adonis, Yusuf al-Khal, and other pioneers of Levant literary modernism. The conclusion will identify the triggers that helped authors overcome personal and geographical barriers to unite the West and the Levant, and investigate the extent to which the bi-directional migration prompted a transformation in the local poetry. Consequently, the paper aims to shed light on the unique factors that provoked the shift in the literary scene of the twentieth-century Middle East.

Keywords: literature, modernism, Middle East, levant, Beirut

Procedia PDF Downloads 81
541 The Impact of Physical Activity for Recovering Cancer Patients

Authors: Martyn Queen, Diane Crone, Andrew Parker, Saul Bloxham

Abstract:

Rationale: There is a growing body of evidence that supports the use of physical activity during and after cancer treatment. However, activity levels for patients remain low. As more cancer patients are treated successfully, and treatment costs continue to escalate, physical activity may be a promising adjunct to a person-centred healthcare approach to recovery. Aim: The aim was to further understand how physical activity may enhance the recovery process for a group of mixed-site cancer patients. Objectives: The research investigated longitudinal changes in physical activity and perceived quality of life between two and six months after the exercise intervention. It also investigated the support systems that enabled patients to sustain these perceived changes. Method: The respondent cohort comprised 14 mixed-site cancer patients aged 43-70 (11 women, 3 men), who participated in a two-phase physical activity intervention that took place at a university in the South West of England. Phase 1 consisted of an eight-week structured physical activity programme; Phase 2 consisted of four months of non-supervised physical activity. Semi-structured interviews took place three times over six months with each participant. Grounded theory informed the data collection and analysis which, in turn, facilitated theoretical development. Findings: Our findings propose three theories on the impact of physical activity for recovering cancer patients: 1) Knowledge gained through a structured exercise programme can enable recovering cancer patients to independently sustain physical activity to four-month follow-up. 2) Sustaining physical activity for six months promotes positive changes in the quality of life indicators of chronic fatigue, self-efficacy, the ability to self-manage and energy levels.
3) Peer support from patients facilitates adherence to a structured exercise programme and support from a spouse, or life partner facilitates independently sustained physical activity to four-month follow-up. Conclusions: This study demonstrates that qualitative research can provide an evidence base that could be used to support future care plans for cancer patients. Findings also demonstrate that a physical activity intervention can be effective at helping cancer patients recover from the side effects of their treatment, and recommends that physical activity should become an adjunct therapy alongside traditional cancer treatments.

Keywords: physical activity, health, cancer recovery, quality of life, support systems, qualitative, grounded theory, person-centred healthcare

Procedia PDF Downloads 294
540 Pushover Analysis of Masonry Infilled Reinforced Concrete Frames for Performance-Based Design for Near-Field Earthquakes

Authors: Alok Madan, Ashok Gupta, Arshad K. Hashmi

Abstract:

Non-linear dynamic time-history analysis is considered the most advanced and comprehensive analytical method for evaluating the seismic response and performance of multi-degree-of-freedom building structures under earthquake ground motions. However, effective and accurate application of the method requires the implementation of advanced hysteretic constitutive models of the various structural components, including masonry infill panels. Sophisticated computational research tools that incorporate realistic hysteresis models for non-linear dynamic time-history analysis are not popular among practising engineers, as they are not only difficult to access but also complex and time-consuming to use. Moreover, commercial computer programs for structural analysis and design that are acceptable to practising engineers do not generally integrate advanced hysteretic models which can accurately simulate the hysteretic behavior of structural elements with a realistic representation of strength degradation, stiffness deterioration, energy dissipation and 'pinching' under cyclic load reversals in the inelastic range of behavior. In this scenario, 'push-over' or non-linear static analysis methods have gained significant popularity, as they offer a practical and efficient alternative for rationally evaluating seismic demands while avoiding the complexities and difficulties associated with non-linear dynamic time-history analysis. The present paper is based on an analytical investigation of the effect of the distribution of masonry infill panels over the elevation of planar masonry-infilled reinforced concrete (R/C) frames on the seismic demands, using capacity spectrum procedures implementing nonlinear static (pushover) analysis in conjunction with the response spectrum concept.
An important objective of the present study is to numerically evaluate the adequacy of the capacity spectrum method using pushover analysis for performance based design of masonry infilled R/C frames for near-field earthquake ground motions.
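The capacity spectrum procedure begins by converting each pushover-curve point (base shear, roof displacement) into spectral acceleration-displacement (ADRS) coordinates so the capacity curve can be overlaid on the demand spectrum. A hedged sketch of that conversion, with the modal quantities (alpha1, gamma1, phi_roof) assumed known from a prior modal analysis:

```python
def to_adrs(base_shear, roof_disp, total_mass, alpha1, gamma1, phi_roof):
    """Convert one pushover-curve point (base shear V, roof displacement)
    into spectral coordinates (Sa, Sd) on the ADRS plane, as done in the
    capacity spectrum method.  alpha1 is the first-mode mass coefficient,
    gamma1 the modal participation factor, phi_roof the first-mode shape
    value at the roof; all are assumed known from a modal analysis."""
    sa = base_shear / (alpha1 * total_mass)  # spectral (pseudo-)acceleration
    sd = roof_disp / (gamma1 * phi_roof)     # spectral displacement
    return sa, sd
```

Plotting the converted capacity curve against the demand spectrum in the same coordinates and locating their intersection yields the performance point, the quantity the study evaluates for each infill distribution.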

Keywords: nonlinear analysis, capacity spectrum method, response spectrum, seismic demand, near-field earthquakes

Procedia PDF Downloads 405
539 Modeling of the Fermentation Process of Enzymatically Extracted Annona muricata L. Juice

Authors: Calister Wingang Makebe, Wilson Agwanande Ambindei, Zangue Steve Carly Desobgo, Abraham Billu, Emmanuel Jong Nso, P. Nisha

Abstract:

Traditional liquid-state fermentation processes of Annona muricata L. juice can result in fluctuating product quality and quantity due to difficulties in control and scale-up. This work describes a laboratory-scale batch fermentation process to produce a probiotic from enzymatically extracted Annona muricata L. juice, modeled using the Doehlert design with incubation time, temperature, and enzyme concentration as independent extraction factors. It aimed at a better understanding of the traditional process as an initial step towards future optimization. Annona muricata L. juice was fermented with L. acidophilus (NCDC 291) (LA), L. casei (NCDC 17) (LC), and a blend of LA and LC (LCA) for 72 h at 37 °C. Experimental data were fitted to mathematical models (Monod, Logistic, and Luedeking-Piret) using MATLAB software to describe biomass growth, sugar utilization, and organic acid production. The optimal fermentation time, based on cell viability, was 24 h for LC and 36 h for LA and LCA. The model was particularly effective in estimating biomass growth, reducing sugar consumption, and lactic acid production. The values of the determination coefficient, R², were 0.9946, 0.9913 and 0.9946, while the residual sum of squared errors, SSE, was 0.2876, 0.1738 and 0.1589 for LC, LA and LCA, respectively. The growth kinetic parameters included the maximum specific growth rate, µm, of 0.2876 h⁻¹, 0.1738 h⁻¹ and 0.1589 h⁻¹, and the substrate saturation constant, Ks, of 9.0680 g/L, 9.9337 g/L and 9.0709 g/L, respectively, for LC, LA and LCA. For the stoichiometric parameters, the yield of biomass on utilized substrate (YXS) was 50.7932, 3.3940 and 61.0202, and the yield of product on utilized substrate (YPS) was 2.4524, 0.2307 and 0.7415 for LC, LA, and LCA, respectively. In addition, the maintenance energy parameter (ms) was 0.0128, 0.0001 and 0.0004 for LC, LA and LCA, respectively.
With the kinetic model proposed by Luedeking and Piret for the lactic acid production rate, the growth-associated and non-growth-associated coefficients were determined as 1.0028 and 0.0109, respectively. The model was demonstrated for batch growth of LA, LC, and LCA in Annona muricata L. juice. The present investigation validates the potential of an Annona muricata L.-based medium for the economical production of a probiotic product.
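The logistic growth and Luedeking-Piret equations named above can be integrated numerically. The paper used MATLAB; the sketch below is an illustrative Python equivalent with a simple forward-Euler scheme, reusing the growth-associated and non-growth-associated coefficients (1.0028 and 0.0109) reported in the abstract, while the remaining inputs are placeholder values.

```python
def ferment(mu_m, x_max, alpha, beta, x0=0.1, p0=0.0, t_end=72.0, dt=0.01):
    """Forward-Euler integration of logistic biomass growth with
    Luedeking-Piret product formation:
        dX/dt = mu_m * X * (1 - X / x_max)
        dP/dt = alpha * dX/dt + beta * X
    Returns final biomass X and product P (concentration units)."""
    x, p = x0, p0
    for _ in range(int(t_end / dt)):
        dx = mu_m * x * (1.0 - x / x_max) * dt  # logistic biomass increment
        p += alpha * dx + beta * x * dt          # growth- and non-growth-associated terms
        x += dx
    return x, p

# coefficients reported in the abstract: alpha = 1.0028, beta = 0.0109
```

With mu_m = 0.2876 h⁻¹ (the LC value) the biomass approaches its carrying capacity well before the end of a long run, which is the qualitative behaviour the fitted model describes.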

Keywords: L. acidophilus, L. casei, fermentation, modelling, kinetics

Procedia PDF Downloads 68
538 Optimization of Heat Insulation Structure and Heat Flux Calculation Method of Slug Calorimeter

Authors: Zhu Xinxin, Wang Hui, Yang Kai

Abstract:

Heat flux is one of the most important test parameters in the ground thermal protection test. The slug calorimeter is selected as the main sensor for measuring heat flux in arc wind tunnel tests due to its convenience and low cost. However, because of excessive lateral heat transfer and shortcomings of the calculation method, the heat flux measurement error of the slug calorimeter is large. In order to enhance measurement accuracy, the heat insulation structure and heat flux calculation method of the slug calorimeter were improved. The heat transfer model of the slug calorimeter was built according to the energy conservation principle. Based on the heat transfer model, an insulating sleeve with a hollow structure was designed, which greatly decreased lateral heat transfer. The slug, with its hollow insulating sleeve, was then encapsulated in a package shell. The improved insulation structure reduced heat loss and ensured that the heat transfer characteristics were almost the same during calibration and testing. The heat flux calibration test was carried out in an arc lamp system for heat flux sensor calibration, and the results show that the test accuracy and precision of the slug calorimeter are greatly improved. Meanwhile, a simulation model of the slug calorimeter was built, and the heat flux values in different temperature-rise time periods were calculated with it. The results show that extracting the temperature-rise-rate data as early as possible results in a smaller heat flux calculation error. The effect of different thermal contact resistances on the calculation error was then analyzed with the simulation model; the contact resistance between the slug and the insulating sleeve was identified as the main influencing factor. The direct comparison calibration correction method was proposed based on heat flux calibration only.
The numerical calculation correction method was proposed based on the heat flux calibration and the simulation model of the slug calorimeter, once the contact resistance between the slug and the insulating sleeve had been determined from the model. The simulation and test results show that both methods can greatly reduce the heat flux measurement error. Finally, the improved slug calorimeter was tested in the arc wind tunnel. Test results show that the repeatability accuracy of the improved slug calorimeter is less than 3%. The deviation of measurement values from different slug calorimeters is less than 3% in the same flow field. The deviation of measurement values between the slug calorimeter and a Gordon Gage is less than 4% in the same flow field.
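For an ideal slug with negligible lateral losses (the condition the improved insulating sleeve is designed to enforce), the energy-conservation model reduces to q = ρ·c·L·(dT/dt), evaluated from the early temperature-rise rate as the simulations above recommend. A minimal sketch; the copper properties in the comment are illustrative, not taken from the paper:

```python
def slug_heat_flux(rho, c, thickness, dT_dt):
    """Heat flux (W/m^2) absorbed by an ideal slug calorimeter, from the
    energy-conservation model q = rho * c * L * dT/dt, where rho is the
    slug density (kg/m^3), c its specific heat (J/(kg K)), L its
    thickness (m) and dT/dt the measured temperature-rise rate (K/s).
    Valid only while lateral heat losses are negligible."""
    return rho * c * thickness * dT_dt

# e.g. a 5 mm copper slug (rho ~ 8960 kg/m^3, c ~ 385 J/(kg K))
# heating at 10 K/s indicates roughly 172 kW/m^2
```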

Keywords: correction method, heat flux calculation, heat insulation structure, heat transfer model, slug calorimeter

Procedia PDF Downloads 118
537 Mathematical Modelling of Biogas Dehumidification by Using of Counterflow Heat Exchanger

Authors: Staņislavs Gendelis, Andris Jakovičs, Jānis Ratnieks, Aigars Laizāns, Dāvids Vardanjans

Abstract:

Dehumidification of biogas at biomass plants is very important to provide energy-efficient burning of the biomethane at the outlet. A few methods are widely used to reduce the water content in biogas, e.g. chiller/heat-exchanger-based cooling, the use of different adsorbents such as PSA, or a combination of such approaches. A quite different method of biogas dehumidification is offered and analyzed in this paper. The main idea is to direct the flow of biogas from the plant around it downwards, thus creating an additional insulation layer. As the temperature in the gas shell layer around the plant decreases from ~38°C to 20°C in the summer, or even to 0°C in the winter, condensation of water vapour occurs. The water at the bottom of the gas shell can be collected and drained away. In addition, another, upward shell layer is created after the condensate drainage point on the outer side to further reduce heat losses. Thus, a counterflow biogas heat exchanger is created around the biogas plant. This research work deals with numerical modelling of the biogas flow, taking into account heat exchange and condensation on cold surfaces. Different kinds of boundary conditions (air and ground temperatures in summer/winter) and various physical properties of the construction (insulation between layers, wall thickness) are included in the model to make it more general and useful for different biogas flow conditions. The complexity of this problem lies in the fact that the temperatures in the two channels are conjugated when the thermal resistance between the layers is low. The MATLAB programming language is used for multiphysics model development, numerical calculations and result visualization. An experimental installation on a biogas plant's vertical wall, with an additional two layers of polycarbonate sheets and controlled gas flow, was set up to verify the modelling results.
Gas flow at the inlet/outlet, temperatures between the layers, and humidity were controlled and measured during a number of experiments. Good correlation with the modelling results for the vertical wall section allows the developed numerical model to be used for estimating the parameters of the whole biogas dehumidification system. Numerical modelling of the counterflow heat exchanger system placed on the plant's wall for various cases allows the thicknesses of the gas layers and the insulation layer to be optimized to ensure the necessary dehumidification of the gas under different climatic conditions. Modelling a defined system configuration under known conditions helps to predict the temperature and humidity content of the biogas at the outlet.
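The condensation the scheme exploits can be bounded with a back-of-the-envelope calculation: cooling saturated gas from ~38°C to 20°C fixes an upper limit on the water removed per cubic metre, from the drop in saturation vapour density. A sketch using the Magnus approximation and the ideal-gas law for the vapour fraction (an assumption for illustration; the paper's MATLAB model is far more detailed):

```python
import math

def p_sat(t_c):
    """Saturation water-vapour pressure over water (Pa), Magnus formula."""
    return 610.94 * math.exp(17.625 * t_c / (t_c + 243.04))

def condensed_per_m3(t_in=38.0, t_out=20.0):
    """Upper-bound estimate of water (kg) condensed per m^3 of saturated
    biogas as it cools from t_in to t_out in the gas-shell layer,
    treating the vapour as an ideal gas."""
    r_v = 461.5  # specific gas constant of water vapour, J/(kg K)
    rho_in = p_sat(t_in) / (r_v * (t_in + 273.15))
    rho_out = p_sat(t_out) / (r_v * (t_out + 273.15))
    return rho_in - rho_out
```

For the summer case quoted in the abstract (38°C to 20°C) this comes out at roughly 0.03 kg of water per cubic metre of gas, which indicates why even passive shell cooling is worthwhile.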

Keywords: biogas dehumidification, numerical modelling, condensation, biogas plant experimental model

Procedia PDF Downloads 550
536 Digimesh Wireless Sensor Network-Based Real-Time Monitoring of ECG Signal

Authors: Sahraoui Halima, Dahani Ameur, Tigrine Abedelkader

Abstract:

DigiMesh technology represents a pioneering advancement in wireless networking, offering cost-effective and energy-efficient capabilities. Its inherent simplicity and adaptability facilitate the seamless transfer of data between network nodes, extending range and ensuring robust connectivity through autonomous self-healing mechanisms. In light of these advantages, this study introduces a medical platform built on DigiMesh wireless network technology, characterized by low power consumption, immunity to interference, and user-friendly operation. The primary application of this platform is the real-time, long-distance monitoring of electrocardiogram (ECG) signals, with the added capacity for simultaneous monitoring of ECG signals from multiple patients. The experimental setup comprises key components such as a Raspberry Pi, an E-Health Sensor Shield, and XBee DigiMesh modules. The platform is composed of multiple ECG acquisition devices, labeled Sensor Node 1 and Sensor Node 2, with a Raspberry Pi serving as the central hub (sink node). Two communication approaches are proposed: single-hop and multi-hop. In the single-hop approach, ECG signals are transmitted directly from a sensor node to the sink node through the XBee3 DigiMesh RF module, establishing peer-to-peer connections. This approach was tested in the first experiment to assess the feasibility of deploying wireless sensor networks (WSN). In the multi-hop approach, two sensor nodes communicate with the server (sink node) in a star configuration; this setup was tested in the second experiment. The primary objective of this research is to evaluate the performance of both single-hop and multi-hop approaches in diverse scenarios, including open areas and obstructed environments. Experimental results indicate the DigiMesh network's effectiveness in single-hop mode, with reliable communication over distances of approximately 300 meters in open areas.
In the multi-hop configuration, the network demonstrated robust performance across approximately three floors, even in the presence of obstacles, without the need for additional router devices. This study offers valuable insights into the capabilities of DigiMesh wireless technology for real-time ECG monitoring in healthcare applications, demonstrating its potential for use in diverse medical scenarios.
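A sensor node in such a platform must frame batches of ECG samples before handing them to the radio. The byte layout below is purely illustrative (node id, sequence number, sample count, then the samples) and is not the XBee/DigiMesh API frame format, which wraps whatever payload the application supplies:

```python
import struct

def frame_ecg_packet(node_id, seq, samples):
    """Pack a batch of ECG samples (uint16 ADC counts) into a compact
    binary payload: 1-byte node id, 2-byte sequence number, 1-byte
    sample count, then little-endian uint16 samples."""
    header = struct.pack('<BHB', node_id, seq, len(samples))
    return header + struct.pack('<%dH' % len(samples), *samples)

def parse_ecg_packet(frame):
    """Inverse of frame_ecg_packet, as run on the sink node."""
    node_id, seq, n = struct.unpack_from('<BHB', frame, 0)
    samples = list(struct.unpack_from('<%dH' % n, frame, 4))
    return node_id, seq, samples
```

The sequence number lets the sink detect drops when hops are added or obstacles degrade the link, which is exactly the behaviour the two experiments compare.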

Keywords: DigiMesh protocol, ECG signal, real-time monitoring, medical platform

Procedia PDF Downloads 81
535 Effects of Dietary Supplementation with Fermented Feed Mulberry (Morus alba L.) on Reproductive Performance and Fecal Microbiota of Pregnant Sows

Authors: Yuping Zhang, Teng Ma, Nadia Everaert, Hongfu Zhang

Abstract:

Supplying dietary fiber during gestation is known to improve the welfare of feed-restricted sows. However, whether high fiber supplementation during pregnancy can improve the performance of sows and their offspring depends on its type, amount, and source, in which solubility plays a key role. Insoluble fibers have been shown to increase the feed intake of sows in lactation, meet the needs of sows for milk production, and reduce weight and backfat loss, thus improving the performance of sows and their offspring. In this study, we investigated the effect of the addition of fermented feed mulberry (FFM), rich in insoluble fiber, throughout gestation on the performance of sows and their offspring, and explored possible mechanisms by determining serum hormones and fecal microbiota. The FFM diet contained 25.5% FFM (on a dry matter basis) and was compared with a control diet (CON; corn and soybean meal). The insoluble fiber contents of the FFM and CON diets were 29.3% and 19.1%, respectively. Twenty multiparous sows were allocated to each group and fed different feed allowances to ensure that all sows received the same digestible energy each day. After farrowing, all sows were fed the same lactation diet ad libitum. Serum estradiol, progesterone, blood glucose, and insulin levels at gestation days 0, 20, and 60 were tested, and the composition of, and differences in, the fecal microbiota at day 60 of gestation were analyzed. Fecal consistency was determined with the Bristol stool scale; sows scoring below 3 were counted as constipated. The results showed no impact of the FFM treatment on sows' backfat, bodyweight changes, blood glucose, serum estradiol and progesterone concentrations, litter size, or performance of the offspring (p > 0.05), except for a significant decrease in serum insulin at day 60 of gestation in the FFM group compared to the CON group (p < 0.01).
The FFM diet also significantly increased feed intake on the first, third, and 21st days of lactation (p < 0.01). The α- and β-diversity and abundance of the microbiota were significantly increased (p < 0.01) compared with the CON group. The abundances of Firmicutes and Bacteroidetes were significantly increased, while the abundances of Spirochetes, Proteobacteria, and Euryarchaeota were significantly reduced in the feces of the FFM group. We also analyzed the fecal microbiota of constipated vs. non-constipated sows and found that diversity and abundance also differed between these two groups (p < 0.01). The relationship between sow constipation and the microbiota merits further investigation.
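The α-diversity comparisons reported in such studies are typically based on indices such as Shannon's H'; the abstract does not state which index was used, so the sketch below is illustrative of the general computation from per-taxon read counts:

```python
import math

def shannon_index(counts):
    """Shannon alpha-diversity H' = -sum(p_i * ln p_i), computed from
    per-taxon read counts.  Higher H' means a more even, more diverse
    community; a single-taxon sample scores 0."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)
```

Comparing H' between groups (FFM vs. CON, or constipated vs. non-constipated) with an appropriate statistical test is what underlies a reported α-diversity difference.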

Keywords: fermented feed mulberry, reproductive performance, fecal flora, sow

Procedia PDF Downloads 155
534 Exploring the Potential of Bio-Inspired Lattice Structures for Dynamic Applications in Design

Authors: Axel Thallemer, Aleksandar Kostadinov, Abel Fam, Alex Teo

Abstract:

For centuries, the forming processes in nature have served as a source of inspiration for both architects and designers. It seems as if most human artifacts are based on ideas which stem from the observation of the biological world and its principles of growth. In fact, in the cultural history of Homo faber, materials have mostly been used in their solid state: from hand axe to computer mouse, the principle of employing matter has not changed since the first creation. Only recently in the scope of history, with the help of additive-generative fabrication processes through Computer Aided Design (CAD), have designers been enabled to deconstruct solid artifacts into an outer skin and an internal lattice structure. The intention behind this approach is to create a new topology which reduces resources and integrates functions into an additively manufactured component. However, looking at the currently employed lattice structures, it is very clear that their geometries have not been thoroughly designed, but rather taken from the basic-geometry libraries usually provided by the CAD software. In the study presented here, a group of 20 industrial design students created new and unique lattice structures using natural paragons as their models. The selected natural models comprise both the animate and inanimate world, with examples ranging from the spiraling of narwhal tusks, the off-shooting of mangrove roots, and the minimal surfaces of soap bubbles, up to the rhythmical arrangement of molecular geometry, as in the case of SiOC (carbon-rich silicon oxycarbide). This ideation process led to the design of a geometric cell, which served as the basic module for the lattice structure, whereby each cell was created in visual analogy to its respective natural model.
The spatial lattices were fabricated additively, mostly in [X]3 by [Y]3 by [Z]3 unit volumes, using selective powder-bed melting in polyamide with 50 mm (z-axis) and 100 µm resolution, and were subjected to mechanical testing of their elastic zone in a biomedical laboratory. The results demonstrate that additively manufactured lattice structures can acquire different properties when they are designed in analogy to natural models. Several of the lattices displayed the ability to store and return kinetic energy, while others revealed a structural failure mode which can be exploited for purposes where a controlled collapse of a structure is required. This discovery allows for various new applications of functional lattice structures within industrially created objects.
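The elastic-zone testing described above yields load-displacement records whose slope is the lattice's effective stiffness. A minimal least-squares sketch of that reduction (the actual test protocol and units are the laboratory's, not stated in the abstract):

```python
def stiffness(displacements, loads):
    """Least-squares slope of the load-displacement record restricted to
    the elastic zone, i.e. the effective stiffness of the lattice
    (units follow the inputs, e.g. N/mm)."""
    n = len(loads)
    sx, sy = sum(displacements), sum(loads)
    sxx = sum(d * d for d in displacements)
    sxy = sum(d * f for d, f in zip(displacements, loads))
    return (n * sxy - sx * sy) / (n * sxx - sx * sx)
```

Comparing this slope across cell designs, and noting where the record departs from linearity, is how energy-returning lattices are distinguished from those with an exploitable controlled-collapse failure.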

Keywords: bio-inspired, biomimetic, lattice structures, additive manufacturing

Procedia PDF Downloads 150
533 Creation of a Test Machine for the Scientific Investigation of Chain Shot

Authors: Mark McGuire, Eric Shannon, John Parmigiani

Abstract:

Timber harvesting increasingly involves mechanized equipment. This has increased the efficiency of harvesting but has also introduced worker-safety concerns. One such concern arises from the use of harvesters. During operation, harvesters subject saw chain to large dynamic mechanical stresses. These stresses can, under certain conditions, cause the saw chain to fracture. The high speed of harvester saw chain can cause the resulting open chain loop to fracture a second time due to the dynamic loads placed upon it as it travels through space. If a second fracture occurs, it can result in a projectile consisting of one to several chain links. This projectile is referred to as a chain shot. It has speeds similar to a bullet but typically has greater mass and is a significant safety concern. Numerous examples exist of chain shots penetrating bullet-proof barriers and causing severe injury and death. Improved harvester-cab barriers can help prevent injury; however, a comprehensive scientific understanding of chain shot is required to consistently reduce or prevent it. Obtaining this understanding requires a test machine with the capability to cause chain shot to occur under carefully controlled conditions and accurately measure the response. Worldwide, few such test machines exist. Those that do focus on validating the ability of barriers to withstand a chain shot impact rather than on obtaining a scientific understanding of the chain shot event itself. The purpose of this paper is to describe the design, fabrication, and use of a test machine capable of supporting a comprehensive scientific investigation of chain shot. The machine can test all commercially available saw chains and bars at chain tensions and speeds meeting and exceeding those typically encountered in harvester use, and accurately measure the corresponding key technical parameters. The test machine was constructed inside a standard shipping container.
This provides space for both an operator station and a test chamber. In order to contain the chain shot under any possible test conditions, the test chamber was lined with a base layer of AR500 steel followed by an overlay of HDPE. To accommodate varying bar orientations and fracture-initiation sites, the entire saw chain drive unit and bar mounting system is modular and capable of being located anywhere in the test chamber. The drive unit consists of a high-speed electric motor with a flywheel. Standard Ponsse harvester head components are used for bar mounting and chain tensioning. Chain lubrication is provided by a separate peristaltic pump. Chain fracture is initiated per ISO standard 11837. Measured parameters include shaft speed, motor vibration, bearing temperatures, motor temperature, motor current draw, hydraulic fluid pressure, chain force at fracture, and high-speed camera images. Results show that the machine is capable of consistently causing chain shot. Measurement output shows the fracture location and the force associated with fracture as a function of saw chain speed and tension. Use of this machine will result in a scientific understanding of chain shot and, consequently, improved products and greater harvester operator safety.
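The hazard described above can be put in physical terms: the kinetic energy of a chain-shot projectile follows directly from its mass and speed, E = ½mv². A sketch with illustrative values (the abstract gives no specific link masses or speeds):

```python
def chain_shot_energy(mass_g, speed_m_s):
    """Kinetic energy (J) of a chain-shot projectile of the given mass
    (grams) and speed (m/s): E = 0.5 * m * v^2.  Because energy grows
    with the square of speed, small increases in chain speed sharply
    raise the energy a barrier must absorb."""
    return 0.5 * (mass_g / 1000.0) * speed_m_s ** 2
```

A few saw-chain links at bullet-like speeds therefore carry energies comparable to small-arms rounds, which is consistent with the reported barrier penetrations.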

Keywords: chain shot, safety, testing, timber harvesters

Procedia PDF Downloads 153
532 Physical, Chemical and Mechanical Properties of Different Varieties of Jatropha curcas Cultivated in Pakistan

Authors: Mehmood Ali, Attaullah Khan, Md. Abul Kalam

Abstract:

Petroleum crude oil reserves are going to deplete in the future due to the consumption of fossil fuels in the transportation and energy generation sectors, which is increasing fossil fuel prices and causing environmental degradation issues such as climate change and global warming due to air pollution. Therefore, to tackle these issues, environmentally friendly fuels with lower emissions of toxic gases are a potential substitute. A non-edible vegetable oilseed crop, Jatropha curcas, of different origins (Malaysia, Thailand and India) was cultivated in Pakistan. The harvested seeds' physical, chemical and mechanical properties were measured; these properties influence the design parameters of post-harvesting machines for dehulling, storage bins, drying, oil extraction from seeds with a screw expeller, and the in-situ transesterification reaction to produce biodiesel fuel. The seed variety from Thailand compared better in its properties than the varieties from Malaysia and India. The seed yields of the three varieties, i.e. Malaysia, Thailand and India, were 829, 943 and 735 kg/acre/year respectively, while the oil extraction yield from the Thailand variety seed was higher (32.61% by wt.) than those of the Malaysia and India varieties (27.96 and 24.96% by wt. respectively). The physical properties investigated showed that the geometric mean diameters of seeds from the Malaysia, Thailand and India varieties were 11.350, 10.505 and 11.324 mm, while the sphericities were 0.656, 0.664 and 0.655. The bulk densities of the powdered seeds from the three varieties were 0.9697, 0.9932 and 0.9601 g/cm³, and the percentages passing obtained with a sieve test were 78.7, 87.1 and 79.3 respectively. The densities of the extracted oils from the Malaysia, Thailand and India varieties were 0.902, 0.898 and 0.902 g/mL, with corresponding kinematic viscosities of 54.50, 49.18 and 48.16 mm²/s respectively. 
The higher heating values (HHV) of the extracted oils from the Malaysia, Thailand and India seed varieties were measured as 40.29, 36.41 and 34.27 MJ/kg, while the HHVs of the de-oiled cakes from these varieties were 21.23, 20.78 and 17.31 MJ/kg respectively. The de-oiled cake, with its nutrients and carbon content, can be used as compost to enhance soil fertility for growing future Jatropha curcas oilseed crops, and it can also be used as a fuel for heating and cooking purposes. Moreover, the mechanical parameter micro Vickers hardness, measured with the seed positioned horizontally to the loading, was lowest for the Malaysia seed at 16.30 HV, compared with 25.2 and 18.7 HV for the Thailand and India varieties respectively. The fatty acid composition of the three varieties' seed oils showed the presence of C8-C22 fatty acids, required to produce good-quality biodiesel fuel. In terms of the physicochemical properties of the seeds and their extracted oil, the variety from Thailand was found to be better than the other two varieties.
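The geometric mean diameter and sphericity quoted above follow standard definitions for seed geometry. The sketch below applies those definitions to hypothetical axial dimensions; the abstract reports only the derived values, so the length, width, and thickness used here are assumptions.

```python
# Standard seed-geometry measures: geometric mean diameter Dg = (L*W*T)^(1/3)
# and sphericity = Dg / L (1.0 for a perfect sphere). The axial dimensions
# below are illustrative assumptions, not the measured Jatropha data.

def geometric_mean_diameter(length_mm: float, width_mm: float, thickness_mm: float) -> float:
    """Dg = (L * W * T) ** (1/3), in mm."""
    return (length_mm * width_mm * thickness_mm) ** (1.0 / 3.0)

def sphericity(length_mm: float, width_mm: float, thickness_mm: float) -> float:
    """Ratio of Dg to the longest axis."""
    return geometric_mean_diameter(length_mm, width_mm, thickness_mm) / length_mm

# Hypothetical seed dimensions (mm):
L, W, T = 17.0, 10.5, 8.0
print(f"Dg = {geometric_mean_diameter(L, W, T):.3f} mm")
print(f"sphericity = {sphericity(L, W, T):.3f}")
```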

Keywords: biodiesel, Jatropha curcas, mechanical property, physico-chemical properties

Procedia PDF Downloads 141
531 Optimizing Production Yield Through Process Parameter Tuning Using Deep Learning Models: A Case Study in Precision Manufacturing

Authors: Tolulope Aremu

Abstract:

This paper explores the use of deep learning methodology for optimizing production yield by tuning a few key process parameters in a manufacturing environment. The study explicitly addressed how to maximize production yield and minimize operational costs by utilizing advanced neural network models, specifically Long Short-Term Memory (LSTM) and Convolutional Neural Networks (CNN). These models were implemented using the Python-based frameworks TensorFlow and Keras. The research targets precision molding processes in which temperature ranges between 150°C and 220°C, pressure ranges between 5 and 15 bar, and material flow rate ranges between 10 and 50 kg/h; these critical parameters have a great effect on yield. A dataset of 1 million production cycles covering five continuous years was considered, with detailed logs of the exact parameter settings and yield output. The LSTM model captured time-dependent trends in the production data, while the CNN analyzed the spatial correlations between parameters. The models were trained in a supervised manner with an MSE loss function, optimized using the Adam optimizer. After 100 training epochs, the models achieved 95% accuracy in recommending optimal parameter configurations. Compared with the traditional RSM and DOE methods, production yield increased by 12%. Besides, the error margin was reduced by 8%, and the deep learning models delivered consistent product quality. The monetary value was around $2.5 million annually, saved from material waste, energy consumption, and equipment wear as a result of implementing the optimized process parameters. The system was deployed in an industrial production environment with a hybrid cloud setup: Microsoft Azure for data storage, and Google Cloud AI for model training and deployment. 
Real-time process monitoring and automatic parameter tuning depend on this cloud infrastructure. In summary, deep learning models, especially those employing LSTM and CNN, optimize production yield by fine-tuning process parameters. Future research will consider reinforcement learning to further enhance system autonomy and scalability across various manufacturing sectors.
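The supervised setup described above (process parameters in, yield out, MSE loss minimized by gradient steps) can be illustrated in miniature. The NumPy sketch below substitutes a plain linear model and synthetic data for the paper's Keras LSTM/CNN stack; the parameter ranges match the abstract, but the yield response and all other numbers are invented.

```python
import numpy as np

# Toy version of the supervised yield model: predict yield from
# (temperature, pressure, flow rate) by minimizing MSE with gradient
# descent. The paper's actual models are Keras LSTM/CNN networks; this
# only illustrates the training objective on synthetic data.

rng = np.random.default_rng(0)

# Synthetic cycles: temperature 150-220 C, pressure 5-15 bar, flow 10-50 kg/h.
n = 2000
X = rng.uniform([150, 5, 10], [220, 15, 50], size=(n, 3))
X_scaled = (X - X.mean(axis=0)) / X.std(axis=0)

# Assumed "true" yield response plus noise (purely illustrative).
true_w = np.array([0.8, 1.5, -0.5])
y = X_scaled @ true_w + 75.0 + rng.normal(0, 0.1, n)

w = np.zeros(3)
b = 0.0
lr = 0.1
for _ in range(500):
    err = X_scaled @ w + b - y
    w -= lr * (X_scaled.T @ err) / n   # d(MSE)/dw
    b -= lr * err.mean()               # d(MSE)/db

mse = float(np.mean((X_scaled @ w + b - y) ** 2))
print(f"final MSE: {mse:.4f}")
```

The fitted model can then be scanned over the admissible parameter box to recommend a configuration, which is the optimization step the abstract describes.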

Keywords: production yield optimization, deep learning, tuning of process parameters, LSTM, CNN, precision manufacturing, TensorFlow, Keras, cloud infrastructure, cost saving

Procedia PDF Downloads 34
530 Seismic Impact and Design on Buried Pipelines

Authors: T. Schmitt, J. Rosin, C. Butenweg

Abstract:

Seismic design of buried pipeline systems for energy and water supply is not only important for plant and operational safety, but in particular for the maintenance of supply infrastructure after an earthquake. Past earthquakes have shown the vulnerability of pipeline systems. After the Kobe earthquake in Japan in 1995, for instance, the water supply in some regions was interrupted for almost two months. The present paper discusses special issues of seismic wave impacts on buried pipelines, describes calculation methods, proposes approaches and gives calculation examples. Buried pipelines are exposed to different effects of seismic impacts. This paper considers the effects of transient displacement differences and the resulting stresses within the pipeline due to the wave propagation of the earthquake. Other effects are permanent displacements due to fault rupture displacements at the surface, soil liquefaction, landslides and seismic soil compaction. The presented model can also be used to calculate fault-rupture-induced displacements. Based on a three-dimensional finite element model, parameter studies are performed to show the influence of several parameters such as the incoming wave angle, wave velocity, soil depth and selected displacement time histories. In the computer model, the interaction between the pipeline and the surrounding soil is modeled with non-linear soil springs. A propagating wave is simulated, loading the pipeline point by point, independently in time and space. The resulting stresses are mainly caused by displacement differences between neighboring pipeline segments and by soil-structure interaction. The calculation examples focus on pipeline bends as the most critical parts. Special attention is given to the calculation of long-distance heat pipeline systems. Here, expansion bends are arranged at regular distances to accommodate movements of the pipeline due to high temperatures. 
Such expansion bends are usually designed with small bending radii, which in the event of an earthquake lead to high bending stresses at the cross-section of the pipeline. Therefore, von Kármán flexibility factors, as well as the stress intensification factors for curved pipe sections, must be taken into account. The seismic verification of the pipeline for wave propagation in the soil can be achieved by observing normative strain criteria. Finally, an interpretation of the results and recommendations are given, taking into account the most critical parameters.
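For a first estimate of the transient strains that the paper computes with its finite element model, simplified closed-form expressions of the Newmark type are commonly used: the peak axial ground strain for a wave traveling along a straight pipe is the peak ground velocity divided by the apparent wave speed, and the ground curvature is the peak acceleration divided by the wave speed squared. The sketch below applies these with assumed ground-motion and pipe values that are not from the paper.

```python
# Simplified Newmark-type estimates of transient strain imposed on a
# straight buried pipe by a passing seismic wave. The FE model with
# non-linear soil springs described above is far more detailed; all
# numerical values here are assumptions for illustration.

def axial_strain(peak_ground_velocity: float, wave_velocity: float) -> float:
    """Peak axial ground strain e = Vg / C for a wave along the pipe axis."""
    return peak_ground_velocity / wave_velocity

def bending_strain(peak_ground_accel: float, wave_velocity: float,
                   pipe_diameter: float) -> float:
    """Ground curvature k = Ag / C^2; outer-fiber strain = k * D / 2."""
    return peak_ground_accel / wave_velocity ** 2 * pipe_diameter / 2.0

vg = 0.4    # m/s, assumed peak ground velocity
ag = 3.0    # m/s^2, assumed peak ground acceleration
c = 500.0   # m/s, assumed apparent wave propagation velocity
d = 0.6     # m, assumed pipe diameter

print(f"axial strain:   {axial_strain(vg, c):.2e}")
print(f"bending strain: {bending_strain(ag, c, d):.2e}")
```

Such closed-form values are useful as a plausibility check on the strains delivered by the full soil-spring model.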

Keywords: buried pipeline, earthquake, seismic impact, transient displacement

Procedia PDF Downloads 188
529 Protective Role of Autophagy Challenging the Stresses of Type 2 Diabetes and Dyslipidemia

Authors: Tanima Chatterjee, Maitree Bhattacharyya

Abstract:

The global challenge of type 2 diabetes mellitus is a major health concern in this millennium, and researchers are continuously exploring new targets to develop novel therapeutic strategies. Type 2 diabetes mellitus (T2DM) is often coupled with dyslipidemia, increasing the risk of cardiovascular (CVD) complications. Enhanced oxidative and nitrosative stresses appear to be the major risk factors underlying insulin resistance, dyslipidemia, β-cell dysfunction, and T2DM pathogenesis. Autophagy is emerging as a promising defense mechanism against stress-mediated cell damage, regulating tissue homeostasis, cellular quality control, and energy production, and promoting cell survival. In this study, we attempted to explore the pivotal role of autophagy in T2DM subjects with or without dyslipidemia in peripheral blood mononuclear cells (PBMC) and insulin-resistant HepG2 cells, utilizing a flow cytometric platform, confocal microscopy, and molecular biology techniques such as western blotting, immunofluorescence, and real-time polymerase chain reaction. In the case of T2DM with dyslipidemia, a higher population of autophagy-positive cells was detected compared to patients with T2DM only, which might have resulted from higher stress. Autophagy was observed to be triggered both by oxidative and by nitrosative stress, a novel finding of our research. LC3 puncta were observed in peripheral blood mononuclear cells and at the periphery of HepG2 cells under the diabetic and diabetic-dyslipidemic conditions. Increased expression of ATG5, LC3B, and Beclin supported activation of the autophagic pathway in both PBMC and insulin-resistant HepG2 cells. Upon blocking autophagy with 3-methyladenine (3-MA), the apoptotic cell population increased significantly, as observed by caspase-3 cleavage and reduced expression of Bcl2. Autophagy was also evidenced to control the oxidative stress-mediated up-regulation of inflammatory markers such as IL-6 and TNF-α. 
To conclude, this study elucidates a protective role of autophagy in the case of diabetes mellitus with dyslipidemia. It may therefore have a significant impact on the development of new therapeutic strategies for diabetic-dyslipidemic subjects based on enhancing autophagic activity.

Keywords: autophagy, apoptosis, dyslipidemia, reactive oxygen species, reactive nitrogen species, Type 2 diabetes

Procedia PDF Downloads 131
528 The Impact of Climate Change on Sustainable Aquaculture Production

Authors: Peyman Mosberian-Tanha, Mona Rezaei

Abstract:

The aquaculture sector is the fastest growing food sector, with an annual growth rate of about 10%. The sustainability of aquaculture production, however, has been debated mainly in relation to the feed ingredients used for farmed fish. The industry has been able to decrease its dependency on marine-based ingredients in line with policies for more sustainable production. As a result, plant-based ingredients have increasingly been incorporated in aquaculture feeds, especially in feeds for salmonids, popular carnivorous species. The effect of these ingredients on salmonid health and performance has been widely studied. In most cases, plant-based diets are associated with varying degrees of health and performance issues across salmonids, depending partly on the inclusion levels of plant ingredients and the species in question. However, the aquaculture sector faces a further challenge: environmental change associated with climate change. Data from trials in salmonids subjected to environmental challenges of various types show adverse physiological responses, partly in relation to stress. To date, only a limited number of studies report the interactive effects of adverse environmental conditions and dietary regimens on salmonids. These studies have shown that adverse environmental conditions exacerbate the detrimental effect of plant-based diets on digestive function and health in salmonids, indicating an additional challenge for the aquaculture sector in growing sustainably. The adverse environmental conditions most often studied in farmed fish are changes in certain water quality parameters, such as oxygen and/or temperature, that are typically altered in response to climate change and, more specifically, global warming. 
In a challenge study, we observed that in fish fed a plant-based diet, the ability to absorb dietary energy was further reduced when the fish were reared under a low oxygen level. In addition, gut health in these fish was severely impaired. Other studies also confirm the adverse effect of environmental challenges on fish gut health. These effects on the digestive function and gut health of salmonids may result in lower disease resistance and weaker performance, with significant economic and ethical implications. Overall, various findings indicate the multidimensional negative effects of climate change, as a major environmental issue, on different sectors, including aquaculture production. Therefore, a comprehensive evaluation of different ways to cope with climate change is essential for planning more sustainable strategies in the aquaculture sector.

Keywords: aquaculture, climate change, sustainability, salmonids

Procedia PDF Downloads 189
527 An Empirical Study of Determinants Influencing Telemedicine Services Acceptance by Healthcare Professionals: Case of Selected Hospitals in Ghana

Authors: Jonathan Kissi, Baozhen Dai, Wisdom W. K. Pomegbe, Abdul-Basit Kassim

Abstract:

Protecting patients' digital information is a growing concern for healthcare institutions as people increasingly live their lives through telemedicine services. Telemedicine services are confronted with several determinants that hinder their successful implementation, especially in developing countries, and identifying the determinants that influence the acceptance of telemedicine services is likewise a problem for healthcare professionals. Despite the tremendous increase in telemedicine services, their adoption and use have been quite slow in some healthcare settings. It is generally accepted in today's globalizing world that the success of telemedicine services relies on users' satisfaction, and satisfying health professionals and patients is one of the crucial objectives of telemedicine success. This study investigates the determinants that influence health professionals' intention to utilize telemedicine services in clinical activities in a sub-Saharan African country in West Africa (Ghana). A hybridized model comprising health adoption models, including technology acceptance theory, diffusion of innovation theory, and protection motivation theory, was used to investigate these questions. The study was carried out in four government health institutions that apply and regulate telemedicine services in their clinical activities. A structured questionnaire was developed and used for data collection. Purposive and convenience sampling methods were used to select healthcare professionals from different medical fields for the study. The collected data were analyzed with a structural equation modeling (SEM) approach. 
All selected constructs showed a significant relationship with health professionals' behavioral intention in the direction expected from the prior literature, including perceived usefulness, perceived ease of use, management strategies, financial sustainability, communication channels, patient security threat, patient privacy risk, self-efficacy, actual service use, user satisfaction, and telemedicine systems security threat. Surprisingly, the user characteristics and response efficacy of health professionals were not significant in the hybridized model. The findings and insights from this research show that health professionals are pragmatic when making choices about technology applications and about their willingness to use telemedicine services; they are, however, anxious about the associated threats and coping appraisals. The significant constructs identified in the study may help to increase efficiency, service quality, quality patient care delivery, and user satisfaction among healthcare professionals. The implementation and effective utilization of telemedicine services in the selected hospitals can serve as a strategy to ease hardships in healthcare service delivery and to help extend universal health coverage to the whole populace. This study contributes to empirical knowledge by identifying the vital factors influencing health professionals' behavioral intention to adopt telemedicine services, and it will help healthcare stakeholders formulate better policies on telemedicine service usage.

Keywords: telemedicine service, perceived usefulness, perceived ease of use, management strategies, security threats

Procedia PDF Downloads 142
526 Rapid Plasmonic Colorimetric Glucose Biosensor via Biocatalytic Enlargement of Gold Nanostars

Authors: Masauso Moses Phiri

Abstract:

Frequent glucose monitoring is essential to the management of diabetes. Plasmonic enzyme-based glucose biosensors have the advantages of greater specificity, simplicity and rapidity. The aim of this study was to develop a rapid plasmonic colorimetric glucose biosensor based on the biocatalytic enlargement of gold nanostars (AuNS) guided by glucose oxidase (GOx). Gold nanoparticles 18 nm in diameter were synthesized using the citrate method. Using these as seeds, a modified seeded method was followed for the synthesis of monodispersed gold nanostars. Both the spherical and the star-shaped nanoparticles were characterized using ultraviolet-visible spectroscopy, agarose gel electrophoresis, dynamic light scattering, high-resolution transmission electron microscopy and energy-dispersive X-ray spectroscopy. The feasibility of a plasmonic colorimetric assay based on the growth of AuNS by silver coating in the presence of hydrogen peroxide was investigated through several control and optimization experiments. Conditions for sensitive detection, such as the concentrations of the detection solution in the presence of 20 µL AuNS, 10 mM 2-(N-morpholino)ethanesulfonic acid (MES), ammonia and hydrogen peroxide, were optimized. Using the optimized conditions, the glucose assay was developed by adding 5 mM GOx to the solution together with varying concentrations of glucose. Kinetic readings as well as color changes were recorded. The results showed that the absorbance values of the AuNS increased, and the absorption maxima blue-shifted, as the glucose concentration was raised. Control experiments indicated no growth of AuNS in the absence of GOx, glucose or molecular O₂, while increased glucose concentration led to enhanced growth of AuNS. Glucose could also be detected by the naked eye, with color development near complete in about 10 minutes. The kinetic readings, monitored at 450 and 560 nm, showed that the assay could discriminate between different glucose concentrations within about 50 seconds and was near complete at about 120 seconds. 
A calibration curve for the quantitative measurement of glucose was derived. The magnitudes of the wavelength shifts and the absorbance values increased concomitantly with glucose concentration up to 90 µg/mL, beyond which the response leveled off. The range of glucose concentrations producing a blue shift in the localized surface plasmon resonance (LSPR) absorption maximum was 10-90 µg/mL, and the limit of detection was 0.12 µg/mL. This enabled the construction of a direct, sensitive and cost-effective plasmonic colorimetric glucose assay using AuNS, with rapid naked-eye detection and great potential for technology transfer to point-of-care devices.
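The calibration-curve and detection-limit steps can be sketched as a linear fit of plasmon shift against concentration. The readings below are synthetic stand-ins, not the study's data, and the 3.3σ/slope rule is one common (ICH-style) definition of the limit of detection, not necessarily the one the authors used.

```python
import numpy as np

# Sketch: fit a linear calibration curve of LSPR shift vs. glucose
# concentration, then estimate a detection limit as 3.3 * sigma / slope.
# The shift values are invented for illustration.

conc = np.array([10, 20, 30, 45, 60, 75, 90], dtype=float)   # ug/mL
shift = np.array([2.1, 4.0, 6.2, 9.1, 12.0, 15.2, 17.9])     # nm, assumed

slope, intercept = np.polyfit(conc, shift, 1)

# Approximate the blank standard deviation by the fit residual scatter.
residuals = shift - (slope * conc + intercept)
sigma = residuals.std(ddof=2)

lod = 3.3 * sigma / slope
print(f"slope = {slope:.3f} nm/(ug/mL), LOD = {lod:.2f} ug/mL")
```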

Keywords: colorimetric, gold nanostars, glucose, glucose oxidase, plasmonic

Procedia PDF Downloads 153
525 Improving Contributions to the Strengthening of the Legislation Regarding Road Infrastructure Safety Management in Romania, Case Study: Comparison Between the Initial Regulations and the Clarity of the Current Regulations - Trends Regarding the Efficiency

Authors: Corneliu-Ioan Dimitriu, Gheorghe Frățilă

Abstract:

Romania and Bulgaria have high rates of road deaths per million inhabitants. Directive (EU) 2019/1936, known as the RISM Directive, has been transposed into national law by each Member State. The research focuses on the amendments made to Romanian legislation through Government Ordinance no. 3/2022, which aims to improve road infrastructure safety management. The aim of the research is two-fold: to sensitize the Romanian Government and decision-making entities to the need for an integrated and competitive management system, and to establish a safe and proactive mobility system that ensures efficient and safe roads. The research includes a critical analysis of European and Romanian legislation, as well as of subsequent normative acts related to road infrastructure safety management. Public data from European Union and national authorities are utilized, together with data from the Romanian Road Authority (ARR) and the Traffic Police database. The research methodology involves comparative analysis, criterion analysis, SWOT analysis, and the use of GANTT and WBS diagrams; Excel is employed to process the road accident databases of Romania and Bulgaria. Collaboration with Bulgarian specialists was established to identify common road infrastructure safety issues. The research concludes that the legislative changes have resulted in a relaxation of road safety management in Romania, leading to decreased control over certain management procedures; the amendments to primary and secondary legislation do not meet the current safety requirements for road infrastructure. The research highlights the need for legislative changes and a strengthened administrative capacity to enhance road safety, and it emphasizes regional cooperation and the exchange of best practices for effective road infrastructure safety management. The research contributes to the theoretical understanding of road infrastructure safety management by analyzing legislative changes and their impact on safety measures. 
It highlights the importance of an integrated and proactive approach in reducing road accidents and achieving the 'zero deaths' objective set by the European Union. Data collection involved accessing public data from the relevant authorities and using information from the Romanian Road Authority (ARR) and the Traffic Police database. The analysis procedures included a critical analysis of the legislation, a comparative analysis of the transpositions, criterion analysis, and the use of diagrams and tools such as SWOT, GANTT, WBS, and Excel. The research addresses the effectiveness of legislative changes in road infrastructure safety management in Romania and their impact on control over management procedures, and it explores the need for strengthened administrative capacity and regional cooperation in addressing road safety issues. It concludes that the legislative changes made in Romania have not strengthened road safety management, and it emphasizes the need for immediate action, legislative amendments, and enhanced administrative capacity. Collaboration with Bulgarian specialists and the exchange of best practices are recommended for effective road infrastructure safety management. The research provides valuable insights for policymakers and decision-makers in Romania.

Keywords: management, road infrastructure safety, legislation, amendments, collaboration

Procedia PDF Downloads 85
524 A Low-Cost Memristor Based on Hybrid Structures of Metal-Oxide Quantum Dots and Thin Films

Authors: Amir Shariffar, Haider Salman, Tanveer Siddique, Omar Manasreh

Abstract:

According to recent studies on metal-oxide memristors, researchers are working to improve the stability, endurance, and uniformity of resistive switching (RS) behavior in memristors. Specifically, the main challenge is to prevent abrupt ruptures of the memristor's filament during the RS process. To address this problem, we propose a low-cost hybrid structure of metal-oxide quantum dots (QDs) and thin films to control the formation of filaments in memristors. We aim to use metal-oxide quantum dots because of their unique electronic properties and quantum confinement, which may improve the resistive switching behavior. QDs have discrete energy spectra due to electron confinement in three-dimensional space, and because of Coulomb repulsion between electrons, only a few free electrons are contained in a quantum dot. This might guide the growth direction of the conducting filaments in the metal-oxide memristor. As a result, it is expected that QDs can improve the endurance and uniformity of RS behavior in memristors. Moreover, we use a hybrid structure of intrinsically n-type quantum dots and p-type thin films to introduce a potential barrier at the junction that can smooth the transition between the high and low resistance states. A bottom-up approach is used to fabricate the proposed memristor with different types of metal-oxide QDs and thin films. We synthesize QDs including zinc oxide, molybdenum trioxide, and nickel oxide, combined with spin-coated thin films of titanium dioxide, copper oxide, and hafnium dioxide. We employ fluorine-doped tin oxide (FTO) coated glass as the substrate for deposition and as the bottom electrode. The active layer, composed of one type of quantum dots and the opposite type of thin film, is then spin-coated onto the FTO. Lastly, circular gold electrodes are deposited through a shadow mask by electron-beam (e-beam) evaporation at room temperature. 
The fabricated devices are characterized using a probe station with a semiconductor parameter analyzer. The current-voltage (I-V) characteristics of each device are analyzed to determine the conduction mechanism. We evaluate the memristors' performance in terms of stability, endurance, and retention time to identify the optimal memristive structure. Finally, we assess the proposed hypothesis before proceeding to optimize the fabrication of the memristor.
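A common first step in the I-V analysis mentioned above is to fit the slope of log(I) versus log(V): a slope near 1 suggests ohmic conduction, while a slope near 2 suggests space-charge-limited conduction (SCLC). The sketch below runs this check on synthetic currents generated to follow those two regimes exactly; it is not the devices' measured data.

```python
import numpy as np

# Conduction-mechanism check for memristor I-V data: slope of the
# log-log characteristic. Currents below are synthetic, constructed to
# follow an ohmic (I ~ V) and an SCLC-like (I ~ V^2) regime.

v_low = np.linspace(0.05, 0.3, 20)
i_low = 1e-6 * v_low             # ohmic regime

v_high = np.linspace(0.5, 2.0, 20)
i_high = 4e-6 * v_high ** 2      # space-charge-limited regime

slope_low = np.polyfit(np.log10(v_low), np.log10(i_low), 1)[0]
slope_high = np.polyfit(np.log10(v_high), np.log10(i_high), 1)[0]
print(f"low-field slope: {slope_low:.2f}, high-field slope: {slope_high:.2f}")
```

On real device data the slopes deviate from these ideals, and the transition voltage between regimes is itself a useful fitting output.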

Keywords: memristor, quantum dot, resistive switching, thin film

Procedia PDF Downloads 125
523 A Proposal of a Strategic Framework for the Development of Smart Cities: The Argentinian Case

Authors: Luis Castiella, Mariano Rueda, Catalina Palacio

Abstract:

The world's rapid urbanisation represents an excellent opportunity to implement initiatives that are oriented towards a country's general development. However, this phenomenon has created considerable pressure on current urban models, pushing them toward crisis, and several factors usually associated with underdevelopment have been steadily rising as a result. Moreover, actions taken by public authorities have not been able to keep up with the speed of urbanisation, which has impeded them from meeting the demands of society and led them to respond with reactionary policies instead of coordinated, organised efforts. In contrast, the concept of a Smart City, which emerged around two decades ago, in principle represents a city that utilises innovative technologies to remedy the everyday issues of its citizens, empowering them with the newest available technology and information. The concept has come to adopt a wider meaning, including human and social capital, as well as productivity, economic growth, quality of life, environment and participative governance. These developments have also disrupted the management of institutions such as academia, which have become key in generating scientific advancements that can solve pressing problems and in forming a specialised class able to follow up on these breakthroughs. In this light, the Ministry of Modernisation of the Argentinian Nation has created a model rooted in the concept of a Smart City. The model considers all the dimensions at play in an urban environment, with careful monitoring of each sub-dimension, in order to establish the government's priorities and improve the effectiveness of its operations. In an attempt to improve the overall efficiency of the country's economic and social development, these focused initiatives have also encouraged citizen participation and the cooperation of the private sector, replacing short-sighted policies with coherent and organised ones. 
This process was developed gradually. The first stage consisted of building the model's structure; the second, of applying the method to specific case studies and verifying that the mechanisms used respected the desired technical and social aspects. Finally, the third stage consists of repeating and then comparing this experiment in order to measure the effects on the 'treatment group' over time. The first trial was conducted on 717 municipalities and evaluated the dimension of Governance. Results showed that levels of governmental maturity varied sharply with size: cities with fewer than 150,000 people had a strikingly lower level of governmental maturity than cities with more than 150,000 people. This analysis made important trends and target populations apparent, which enabled the public administration to focus its efforts and increase its probability of success. It also made it possible to cut costs and time and to create a dynamic framework in tune with the population's demands, improving quality of life through sustained efforts to develop the social and economic conditions within the territorial structure.
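A composite index of the kind named in the keywords is typically a weighted average of normalized sub-dimension scores. The sketch below illustrates that mechanics with min-max normalization; the sub-dimensions, scores, and weights are invented for illustration and are not taken from the Ministry's model.

```python
# Minimal composite-index sketch: min-max normalize each sub-dimension
# across municipalities, then take a weighted average per municipality.
# All names and numbers below are hypothetical.

def minmax(values):
    """Rescale a list of raw scores to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# Columns: municipalities A, B, C; rows: sub-dimension raw scores.
raw = {
    "digital_services": [12, 40, 25],
    "open_data":        [3, 9, 6],
    "participation":    [50, 80, 65],
}
weights = {"digital_services": 0.5, "open_data": 0.3, "participation": 0.2}

normalized = {k: minmax(v) for k, v in raw.items()}
index = [
    sum(weights[k] * normalized[k][i] for k in raw)
    for i in range(3)
]
print([round(x, 2) for x in index])
```

Ranking municipalities by such an index is one way to surface the trends and target populations the abstract describes.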

Keywords: composite index, comprehensive model, smart cities, strategic framework

Procedia PDF Downloads 178
522 Apatite Flotation Using Fruits' Oil as Collector and Sorghum as Depressant

Authors: Elenice Maria Schons Silva, Andre Carlos Silva

Abstract:

The growing demand for raw materials has increased mining activity. The mineral industry faces the challenge of processing more complex ores, with very small particles and low grades, together with constant pressure to reduce production costs and environmental impacts. Among the concentration methods of mineral processing, froth flotation deserves special attention: besides its great selectivity for different minerals, flotation is a highly efficient method for processing fine particles. The process is based on the minerals' surface physicochemical properties, and the separation is only possible with the aid of chemicals such as collectors, frothers, modifiers, and depressants. In order to use sustainable and eco-friendly reagents, oils extracted from three different vegetable species (pequi pulp, macauba nut and pulp, and Jatropha curcas) were studied and tested as apatite collectors. Since the oils are not soluble in water, an alkaline hydrolysis (saponification) was necessary before their contact with the minerals; the saponification was performed at room temperature. The tests with the new collectors were carried out at pH 9, and Flotigam 5806, a synthetic mix of fatty acids manufactured by Clariant and industrially adopted as an apatite collector, was used as the benchmark. In order to find a feasible replacement for cornstarch, the flour and the starch of a graniferous variety of sorghum were tested as depressants. Apatite samples were used in the flotation tests and were characterized by XRF (X-ray fluorescence), XRD (X-ray diffraction), and SEM/EDS (scanning electron microscopy with energy-dispersive spectroscopy). Zeta potential measurements were performed in the pH range from 3.5 to 12.5. A commercial cornstarch was used as the depressant benchmark. Four depressant dosages and pH values were tested, and a statistical test was used to verify the influence of pH, dosage, and starch type on the mineral recoveries. 
For dosages equal to or higher than 7.5 mg/L, pequi oil recovered almost all apatite particles. On the one hand, macauba pulp oil showed excellent results for all dosages, with more than 90% apatite recovery; on the other hand, with the nut oil, the highest recovery found was around 84%. Jatropha curcas oil was the second-best oil tested, with more than 90% of the apatite particles recovered at a dosage of 7.5 mg/L. Regarding the depressants, the lowest apatite recovery with sorghum starch was found at a dosage of 1,200 g/t and pH 11, resulting in a recovery of 1.99%. The apatite recovery under the same conditions was 1.40% for sorghum flour (approximately 30% lower). When compared with cornstarch under the same conditions, sorghum flour produced an apatite recovery 91% lower.

Keywords: collectors, depressants, flotation, mineral processing

Procedia PDF Downloads 153
521 Integration Process and Analytic Interface of Different Environmental Open Data Sets with Java/Oracle and R

Authors: Pavel H. Llamocca, Victoria Lopez

Abstract:

The main objective of our work is the comparative analysis of environmental data from Open Data repositories belonging to different governments, which requires integrating data from various sources. Nowadays, many governments intend to publish thousands of data sets for people and organizations to use, and the number of applications based on Open Data is consequently increasing. However, each government has its own procedures for publishing its data, which leads to a variety of data-set formats, because there are no international standards specifying the formats of Open Data sets. Due to this variety, we must build a data integration process able to handle all kinds of formats. Some software tools have been developed to support the integration process, e.g., Data Tamer and Data Wrangler. The problem with these tools is that they require a data scientist to take part in the integration process as a final step. In our case, we do not want to depend on a data scientist, because environmental data are usually similar and these processes can be automated by programming. The main idea of our tool is to build Hadoop procedures adapted to the data sources of each government in order to achieve automated integration. Our work focuses on environmental data such as temperature, energy consumption, air quality, solar radiation, wind speed, etc. For the past two years, the government of Madrid has been publishing its Open Data sets on environmental indicators in real time. Likewise, other governments (such as Andalucia or Bilbao) have published Open Data sets on the environment. All of those data sets have different formats, and our solution is able to integrate all of them; furthermore, it allows the user to perform and visualize analyses over the real-time data.
Once the integration task is done, all the data from any government share the same format, and the analysis process can be initiated in a computationally efficient way. The tool presented in this work therefore has two goals: 1. an integration process; and 2. a graphic and analytic interface. As a first approach, the integration process was developed using Java and Oracle, and the graphic and analytic interface with Java (JSP). However, in order to open up our software tool, as a second approach we also developed an implementation in the R language as a mature open-source technology. R is a powerful open-source programming language that allows us to process and analyze a huge amount of data with high performance, and R libraries such as shiny support building a graphic interface. A performance comparison between both implementations was made, and no significant differences were found. In addition, our work provides any developer with an official real-time integrated data set of environmental data in Spain so that they can build their own applications.
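The automated integration step described above can be sketched as a per-source field mapping into a common schema. The sketch below is illustrative only: the source names, field names, and date formats are assumptions, not the actual Madrid or Bilbao schemas, and a production version would be generated per government as the abstract describes.

```python
from datetime import datetime

# Per-source field mappings: each government publishes the same indicators
# under different names and formats (names here are hypothetical examples).
FIELD_MAPS = {
    "madrid": {"fecha": "timestamp", "temperatura": "temperature_c"},
    "bilbao": {"date": "timestamp", "temp": "temperature_c"},
}

# Each source also uses its own timestamp convention.
DATE_FORMATS = {"madrid": "%d/%m/%Y %H:%M", "bilbao": "%Y-%m-%dT%H:%M"}

def normalise(record, source):
    """Map one raw record from a given source into the common schema."""
    out = {"source": source}
    for raw_key, std_key in FIELD_MAPS[source].items():
        out[std_key] = record[raw_key]
    # Unify the timestamp and numeric representations across sources.
    out["timestamp"] = datetime.strptime(out["timestamp"], DATE_FORMATS[source])
    out["temperature_c"] = float(out["temperature_c"])
    return out

rows = [
    normalise({"fecha": "01/06/2019 12:00", "temperatura": "28.5"}, "madrid"),
    normalise({"date": "2019-06-01T12:00", "temp": "24.1"}, "bilbao"),
]
```

After this step, records from both sources are directly comparable, which is what enables the automated comparative analysis without a data scientist in the loop.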

Keywords: open data, R language, data integration, environmental data

Procedia PDF Downloads 315
520 Kinematic Modelling and Task-Based Synthesis of a Passive Architecture for an Upper Limb Rehabilitation Exoskeleton

Authors: Sakshi Gupta, Anupam Agrawal, Ekta Singla

Abstract:

An exoskeleton design for rehabilitation purposes encounters many challenges, including ergonomically acceptable wearing technology, architectural compatibility with human motion, actuation type, human-robot interaction, etc. In this paper, a passive architecture for an upper limb exoskeleton is proposed for assisting in rehabilitation tasks. Kinematic modelling is detailed for task-based kinematic synthesis of the wearable exoskeleton for self-feeding tasks. The exoskeleton architecture possesses expansion and torsional springs which are able to store and redistribute energy over the human arm joints. The elastic characteristics of the springs have been optimized to minimize the mechanical work of the human arm joints. A hybrid combination of a 4-bar parallelogram linkage and a serial linkage was chosen: the 4-bar parallelogram linkage with an expansion spring acts as a rigid structure providing the rotational degree of freedom (DOF) required for lowering and raising the arm, while the single linkage with a torsional spring allows the rotational DOF required for elbow movement. The focus of the paper is the kinematic modelling, analysis, and task-based synthesis framework for the proposed architecture, keeping in consideration the essential tasks of self-feeding and self-exercising during rehabilitation of a partially impaired person. Primary functional movements (activities of daily living, ADL) are routine activities that people attend to every day, such as cleaning, dressing, and feeding. We focus on the feeding process to make people independent with respect to feeding tasks. The tasks target post-surgery patients under rehabilitation with less than 40% weakness. The challenge addressed in this work is ensuring that the exoskeleton emulates the natural movement of the human arm. Human motion data are extracted through motion sensors for the targeted tasks of feeding and specific exercises.
The task-based synthesis procedure and framework are discussed for the proposed architecture. The results include the simulation of the architectural concept tracking the human-arm movements, together with the kinematic and static study parameters for a standard human weight. D-H parameters are used for kinematic modelling of the hybrid mechanism, and the model is used while performing task-based optimal synthesis with an evolutionary algorithm.
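As a rough illustration of the D-H modelling step, the forward kinematics of a serial chain can be sketched as below. The two-joint parameter table and link lengths are toy assumptions for a planar arm, not the paper's hybrid 4-bar/serial architecture or its actual dimensions.

```python
import numpy as np

def dh_matrix(theta, d, a, alpha):
    """Standard Denavit-Hartenberg homogeneous transform for one joint."""
    ct, st = np.cos(theta), np.sin(theta)
    ca, sa = np.cos(alpha), np.sin(alpha)
    return np.array([
        [ct, -st * ca,  st * sa, a * ct],
        [st,  ct * ca, -ct * sa, a * st],
        [0.0,      sa,       ca,      d],
        [0.0,     0.0,      0.0,    1.0],
    ])

def forward_kinematics(dh_table):
    """Chain the per-joint transforms; returns the base-to-end-effector pose."""
    T = np.eye(4)
    for params in dh_table:
        T = T @ dh_matrix(*params)
    return T

# Illustrative 2-DOF planar arm (link lengths in metres are assumptions):
# a shoulder rotation followed by an elbow rotation about parallel axes.
arm = [(np.pi / 2, 0.0, 0.30, 0.0),   # upper arm, 0.30 m
       (-np.pi / 2, 0.0, 0.25, 0.0)]  # forearm, 0.25 m
pose = forward_kinematics(arm)       # end effector at (0.25, 0.30, 0.0)
```

In a task-based synthesis loop, an evolutionary algorithm would repeatedly evaluate such a model against the recorded feeding trajectories while varying the architectural parameters.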

Keywords: passive mechanism, task-based synthesis, emulating human-motion, exoskeleton

Procedia PDF Downloads 138
519 Finite Element Study of Coke Shape Deep Beam to Column Moment Connection Subjected to Cyclic Loading

Authors: Robel Wondimu Alemayehu, Sihwa Jung, Manwoo Park, Young K. Ju

Abstract:

Following the aftermath of the 1994 Northridge earthquake, intensive research on beam-to-column connections was conducted, leading to the current design basis. The current design codes require the use of either a prequalified connection or a connection that passes the requirements of a large-scale cyclic qualification test prior to use in intermediate or special moment frames. The second alternative is expensive in terms of both money and time. On the other hand, the maximum beam depth in most of the prequalified connections is limited to 900 mm due to the reduced rotation capacity of deeper beams. However, for long-span beams the need to use deeper beams may arise. In this study, a beam-to-column connection detail suitable for deep beams is presented. The connection detail comprises a thicker, tapered beam flange adjacent to the beam-to-column connection. Within the thicker-tapered flange region, two reduced beam sections are provided with the objective of forming two plastic hinges within that region. In addition, the length, width, and thickness of the thicker-tapered flange region are proportioned in such a way that a third plastic hinge forms at its end. As a result, the total rotation demand is distributed over three plastic zones, making the detail suitable for deeper beams that have a lower rotation capacity at a single plastic hinge. The effectiveness of this connection detail is studied through finite element analysis. For the study, a beam with a depth of 1200 mm is used, and comparisons are made with the welded unreinforced flange-welded web (WUF-W) moment connection and the reduced beam section moment connection. The results show that the rotation capacity of a WUF-W moment connection is increased from 2.0% to 2.2% by applying the proposed moment connection detail.
Furthermore, the maximum moment capacity, energy dissipation capacity, and stiffness of the WUF-W moment connection are increased by up to 58%, 49%, and 32%, respectively. In contrast, applying the reduced beam section detail to the same WUF-W moment connection reduced the rotation capacity from 2.0% to 1.5%, while the maximum moment capacity and stiffness of the connection were reduced by 22% and 6%, respectively. The proposed connection develops three plastic hinge regions as intended, and it shows improved performance compared to both the WUF-W moment connection and the reduced beam section moment connection. Moreover, the achieved rotation capacity satisfies the minimum required for use in intermediate moment frames.

Keywords: connections, finite element analysis, seismic design, steel intermediate moment frame

Procedia PDF Downloads 166
518 Rigorous Photogrammetric Push-Broom Sensor Modeling for Lunar and Planetary Image Processing

Authors: Ahmed Elaksher, Islam Omar

Abstract:

Accurate geometric relation algorithms are imperative in Earth and planetary satellite and aerial image processing, particularly for high-resolution images used for topographic mapping. Most of these satellites carry push-broom sensors: optical scanners equipped with linear arrays of CCDs. Such sensors have been deployed on most Earth observation satellites (EOSs). In addition, the Lunar Reconnaissance Orbiter Camera (LROC) is equipped with two push-broom Narrow Angle Cameras (NACs) that provide 0.5 meter-scale panchromatic images over a 5 km swath of the Moon. The HiRISE carried by the Mars Reconnaissance Orbiter (MRO) and the HRSC carried by Mars Express (MEX) are examples of push-broom sensors that produce images of the surface of Mars. Sensor models developed in photogrammetry relate image space coordinates in two or more images with the 3D coordinates of ground features. Rigorous sensor models use the actual interior and exterior orientation parameters of the camera, unlike approximate models. In this research, we generate a generic push-broom sensor model to process imagery acquired through linear array cameras and investigate its performance, advantages, and disadvantages in generating topographic models for the Earth, Mars, and the Moon. We also compare and contrast the utilization, effectiveness, and applicability of available photogrammetric techniques and software packages with the developed model. We start by defining an image reference coordinate system to unify image coordinates from all three arrays. The transformation from an image coordinate system to the reference coordinate system involves a translation and three rotations. For any image point within the linear array, its image reference coordinates, the coordinates of the exposure center of the array in the ground coordinate system at the imaging epoch (t), and the corresponding ground point coordinates are related through the collinearity condition, which states that all three points must lie on the same line.
The rotation angles for each CCD array at epoch t are defined and included in the transformation model. The exterior orientation parameters of an image line, i.e., the coordinates of the exposure station and the rotation angles, are computed by a polynomial interpolation function in time (t), where t is the time at a certain epoch from a certain orbit position. Depending on the types of observations, coordinates and parameters may be treated as knowns or unknowns in various situations. The unknown coefficients are determined in a bundle adjustment. The orientation process starts by extracting the sensor position, orientation, and raw images from the Planetary Data System (PDS). The parameters of each image line are then estimated and imported into the push-broom sensor model. We also define tie points between image pairs to aid the bundle adjustment, determine the refined camera parameters, and generate highly accurate topographic maps. The model was tested on different satellite images such as IKONOS, QuickBird, WorldView-2, and HiRISE. It was found that the accuracy of our model is comparable to those of commercial and open-source software, the computational efficiency of the developed model is high, the model can be used in different environments with various sensors, and the implementation process requires comparatively little cost and effort.
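A minimal sketch of the per-line exterior orientation interpolation and the collinearity mapping is given below. It is deliberately simplified: a single position coordinate, a nadir-looking geometry without rotations, and made-up ephemeris values and units, standing in for the full three-rotation model and bundle adjustment described above.

```python
import numpy as np

# Assumed: exposure-station positions sampled at a few epochs from the
# ephemeris; a low-order polynomial in time t models each EO parameter,
# so any scan line's perspective centre can be interpolated.
epochs = np.array([0.0, 1.0, 2.0, 3.0])       # seconds (illustrative)
x_c = np.array([100.0, 160.0, 220.0, 280.0])  # along-track position (illustrative)

coeffs = np.polyfit(epochs, x_c, deg=2)       # fit Xc(t) ≈ a·t² + b·t + c

def exposure_centre_x(t):
    """Interpolated X coordinate of the perspective centre at epoch t."""
    return np.polyval(coeffs, t)

# Collinearity condition for a point in the linear array, reduced to the
# nadir-looking, rotation-free case: x_img = -f * (X - Xc) / (Z - Zc).
f, Z, Zc = 0.7, 0.0, 400.0                    # focal length, ground and orbit heights

def image_x(X_ground, t):
    """Image coordinate of a ground point as seen by the line imaged at t."""
    return -f * (X_ground - exposure_centre_x(t)) / (Z - Zc)
```

In the rigorous model, the same interpolation idea is applied to all six exterior orientation parameters per scan line, and the polynomial coefficients become unknowns in the bundle adjustment.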

Keywords: photogrammetry, push-broom sensors, IKONOS, HiRISE, collinearity condition

Procedia PDF Downloads 63
517 Predictive Maintenance: Machine Condition Real-Time Monitoring and Failure Prediction

Authors: Yan Zhang

Abstract:

Predictive maintenance is a technique to predict when an in-service machine will fail so that maintenance can be planned in advance. Analytics-driven predictive maintenance is gaining increasing attention in many industries such as manufacturing, utilities, and aerospace, along with the emerging demand for Internet of Things (IoT) applications and the maturity of technologies that support Big Data storage and processing. This study aims to build an end-to-end analytics solution that includes both real-time machine condition monitoring and machine learning based predictive analytics capabilities. The goal is to showcase a general predictive maintenance solution architecture, which suggests how the data generated by field machines can be collected, transmitted, stored, and analyzed. We use a publicly available aircraft engine run-to-failure dataset to illustrate the streaming analytics component and the batch failure prediction component. The contributions of this study fall into four aspects. First, we compare predictive maintenance problems from the view of the traditional reliability-centered maintenance field and from the view of IoT applications. When evolving into the IoT era, predictive maintenance has shifted its focus from ensuring reliable machine operations to improving production/maintenance efficiency via any maintenance-related tasks. It covers a variety of topics, including but not limited to failure prediction, fault forecasting, failure detection and diagnosis, and recommendation of maintenance actions after failure. Second, we review the state-of-the-art technologies that enable a machine/device to transmit data all the way to the Cloud for storage and advanced analytics. These technologies vary drastically, mainly based on the power source and functionality of the devices.
For example, a consumer machine such as an elevator uses completely different data transmission protocols compared to the sensor units in an environmental sensor network: the former may transfer data into the Cloud via WiFi directly, while the latter usually uses radio communication inherent to the network, and the data are stored in a staging data node before they can be transmitted into the Cloud when necessary. Third, we illustrate how to formulate a machine learning problem to predict machine faults/failures. By showing a step-by-step process of data labeling, feature engineering, model construction, and evaluation, we share the following experiences: (1) the specific data quality issues that have a crucial impact on predictive maintenance use cases; (2) how to train and evaluate a model when the training data contain inter-dependent records. Fourth, we review the tools available to build such a data pipeline that digests the data and produces insights, including tools for data injection, streaming data processing, machine learning model training, and job coordination/scheduling. In addition, we show the visualization tool that creates rich data visualizations for both real-time insights and prediction results. To conclude, there are two key takeaways from this study. (1) It summarizes the landscape and challenges of predictive maintenance applications. (2) It takes an example in aerospace with publicly available data to illustrate each component in the proposed data pipeline and showcases how the solution can be deployed as a live demo.
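The data-labeling step for a run-to-failure dataset can be sketched as below. The column names and values are illustrative assumptions in the style of the public aircraft engine data (one row per engine per operating cycle), not the study's actual schema.

```python
import pandas as pd

# Assumed layout: one row per engine ("unit") per operating cycle, with
# the last recorded cycle of each unit being the failure point.
df = pd.DataFrame({
    "unit":     [1, 1, 1, 2, 2],
    "cycle":    [1, 2, 3, 1, 2],
    "sensor_a": [520.1, 519.8, 519.2, 521.0, 520.4],
})

# Label each row with its remaining useful life (RUL): cycles left until
# that unit's last recorded cycle.
df["rul"] = df.groupby("unit")["cycle"].transform("max") - df["cycle"]

# A binary "fails within the next N cycles" label for classification.
N = 1
df["fail_soon"] = (df["rul"] <= N).astype(int)
```

Note that rows belonging to the same unit are inter-dependent, which is the issue raised above: train/test splits should be made by unit, never by randomly sampling rows.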

Keywords: Internet of Things, machine learning, predictive maintenance, streaming data

Procedia PDF Downloads 387
516 Spectrogram Pre-Processing to Improve Isotopic Identification to Discriminate Gamma and Neutron Sources

Authors: Mustafa Alhamdi

Abstract:

The industrial application of deep machine learning to classify gamma-ray and neutron events is investigated in this study. Identification using a convolutional neural network and a recurrent neural network showed a significant improvement in prediction accuracy in a variety of applications. The ability to identify the isotope type and activity from spectral information depends on the feature extraction methods, followed by classification. The features extracted from the spectrum profiles aim to find patterns and relationships that represent the actual spectrum energy in a low-dimensional space. Increasing the level of separation between classes in feature space improves the possibility of enhancing classification accuracy. The nonlinear feature extraction performed by a neural network involves a variety of transformations and mathematical optimization, while principal component analysis depends on linear transformations to extract features and subsequently improve the classification accuracy. In this paper, the isotope spectrum information has been preprocessed by finding the frequency components relative to time and using them as a training dataset. The Fourier transform implementation used to extract the frequency components has been optimized with a suitable windowing function. Training and validation samples of different isotope profiles interacting with a CdTe crystal have been simulated using Geant4. The readout electronic noise has been simulated by optimizing the mean and variance of a normal distribution. Ensemble learning, by combining the votes of many models, managed to improve the classification accuracy of the neural networks. Discriminating gamma and neutron events in a single prediction approach has shown high accuracy using deep learning. The paper's findings show the ability to improve the classification accuracy by applying the spectrogram preprocessing stage to the gamma and neutron spectra of different isotopes.
Tuning the deep machine learning models by hyperparameter optimization enhanced the separation in the latent space and provided the ability to extend the number of detected isotopes in the training database. Ensemble learning contributed significantly to improving the final prediction.
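The spectrogram preprocessing stage described above can be sketched as a windowed short-time Fourier transform. The window length, hop size, and toy sinusoid below are illustrative choices, not the study's actual detector parameters or windowing function.

```python
import numpy as np

def spectrogram(signal, win_len=64, hop=32):
    """Magnitude spectrogram via a windowed short-time Fourier transform.

    A Hann window tapers each frame to reduce spectral leakage; the frame
    length and hop size here are illustrative, not the paper's values.
    """
    window = np.hanning(win_len)
    frames = []
    for start in range(0, len(signal) - win_len + 1, hop):
        frame = signal[start:start + win_len] * window
        frames.append(np.abs(np.fft.rfft(frame)))
    return np.array(frames)  # shape: (num_frames, win_len // 2 + 1)

# Toy sinusoid standing in for a detector waveform (0.05 cycles/sample).
t = np.arange(1024)
sig = np.sin(2 * np.pi * 0.05 * t)
spec = spectrogram(sig)
```

The resulting time-frequency array is what would be fed to the convolutional/recurrent classifiers in place of the raw spectrum profile.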

Keywords: machine learning, nuclear physics, Monte Carlo simulation, noise estimation, feature extraction, classification

Procedia PDF Downloads 151
515 Enhanced Stability of Piezoelectric Crystalline Phase of Poly(Vinylidene Fluoride) (PVDF) and Its Copolymer upon Epitaxial Relationships

Authors: Devi Eka Septiyani Arifin, Jrjeng Ruan

Abstract:

As an approach to manipulating the performance of polymer thin films, epitaxial crystallization within polymer blends of poly(vinylidene fluoride) (PVDF) and its copolymer poly(vinylidene fluoride-trifluoroethylene) (P(VDF-TrFE)) was studied in this research, which involves the competition between phase separation and crystal growth of the constitutive semicrystalline polymers. The unique piezoelectric feature of the poly(vinylidene fluoride) crystalline phase derives from the packing of molecular chains in the all-trans conformation, which spatially arranges all the substituted fluorine atoms on one side of the molecular chain and the hydrogen atoms on the other side. Therefore, a net dipole moment is induced across the lateral packing of molecular chains. Nevertheless, due to the mutual repulsion among fluorine atoms, this all-trans molecular conformation is not stable and is ready to change above the Curie temperature, where thermal energy is sufficient to cause segmental rotation. This research attempts to explore whether the epitaxial interactions between the piezoelectric crystals and the crystal lattice of hexamethylbenzene (HMB) crystalline platelets are able to stabilize this metastable all-trans molecular conformation. As an aromatic crystalline compound, the melt of HMB was surprisingly found to be able to dissolve poly(vinylidene fluoride), resulting in a homogeneous eutectic solution. Thus, after quenching this binary eutectic mixture to room temperature, subsequent heating or annealing processes were designed to explore the involved phase separation and crystallization behavior. The phase transition behaviors were observed in situ by X-ray diffraction and differential scanning calorimetry (DSC). The molecular packing was observed via transmission electron microscopy (TEM), and the principles of electron diffraction were applied to study the internal crystal structure epitaxially developed within the thin films.
The obtained results clearly indicated the occurrence of heteroepitaxy of PVDF/P(VDF-TrFE) on the HMB crystalline platelets. Both the concentration of poly(vinylidene fluoride) and the mixing ratio of the two constitutive polymers have been adopted as influential factors for studying the competition between the epitaxial crystallization of PVDF and P(VDF-TrFE) on the HMB crystals. Furthermore, the involved epitaxial relationship is to be deciphered and studied as a potential factor capable of guiding the widespread development of the piezoelectric crystalline form.

Keywords: epitaxy, crystallization, crystalline platelet, thin film, mixing ratio

Procedia PDF Downloads 223
514 Association of Maternal Diet Quality Indices and Dietary Patterns during Lactation and the Growth of Exclusive Breastfed Infant

Authors: Leila Azadbakht, Maedeh Moradi, Mohammad Reza Merasi, Farzaneh Jahangir

Abstract:

Maternal dietary intake during lactation might affect the growth rate of an exclusively breastfed infant. The present study was conducted to evaluate the effect of maternal dietary patterns and diet quality during lactation on the growth of exclusively breastfed infants. Methods: 484 healthy lactating mothers and their infants were enrolled in this study, which was conducted in Iran; only exclusively breastfed infants were included. The dietary intake of lactating mothers was assessed using a validated and reliable semi-quantitative food frequency questionnaire. Diet quality indices such as the alternative Healthy Eating Index (HEI) and dietary energy density (DED) were created, along with scores for adherence to the Mediterranean, Nordic, and Dietary Approaches to Stop Hypertension (DASH) eating patterns. Anthropometric features of the infants (weight, height, and head circumference) were recorded at birth and at two and four months. Results: Weight, length, weight for height, and head circumference at two and four months were mostly in the normal range among infants whose mothers adhered more to the HEI during lactation (normal weight: 61%; normal height: 59%). The prevalence of stunting at four months of age among those whose mothers adhered more to the HEI was 31% lower than among those with the least adherence to the HEI. Mothers in the top tertile of HEI score had the lowest frequency of underweight infants (18% vs. 33%; P=0.03). The odds ratio of being overweight or obese at four months of age was lowest among infants whose mothers adhered more to the HEI (OR: 0.67 vs. 0.91; Ptrend=0.03). However, there was no significant association between mothers' adherence to the Mediterranean, DASH, or Nordic eating patterns and the growth of the infants (neither weight, height, nor head circumference).
Infant weight, length, weight for height, and head circumference at two and four months did not show significant differences among the tertile categories of mothers' DED. Conclusions: Higher diet quality indices and greater adherence of the lactating mother to the HEI (as an indicator of diet quality) may be associated with better growth indices of the breastfed infant, whereas the DED of the lactating mother does not appear to affect growth. Adherence to different dietary patterns such as the Mediterranean, DASH, or Nordic patterns had no differential effect on the growth indices of the infants; infant growth under breastfeeding appears largely robust to the mother's overall dietary pattern, although better diet quality might be associated with better growth.

Keywords: breastfeeding, growth, infant, maternal diet

Procedia PDF Downloads 209