Search results for: power efficiency
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4991

431 The Effects of Whole-Body Vibration Training on Jump Performance in Handball Athletes

Authors: Yen-Ting Wang, Shou-Jing Guo, Hsiu-Kuang Chang, Kenny Wen-Chyuan Chen, Alex J.Y. Lee

Abstract:

This study examined the effects of eight weeks of whole-body vibration training (WBVT) on vertical and decuple jump performance in handball athletes. Sixteen collegiate Level I handball athletes volunteered for this study and were divided equally into a control group (CG) and an experimental group (EG). During the study period, all athletes underwent the same handball-specific training, but the EG received additional WBVT (amplitude: 2 mm, frequency: 20 - 40 Hz) three times per week for eight consecutive weeks. Vertical jump performance was evaluated by the maximum height of the squat jump (SJ) and countermovement jump (CMJ). Single-factor ANCOVA was used to examine the differences in each parameter between the groups after training, with the pretest values as a covariate. Statistical significance was set at p < .05. After eight weeks of WBVT, the EG had significantly improved the maximal height of the SJ (40.92 ± 2.96 cm vs. 48.40 ± 4.70 cm, F = 5.14, p < .05) and the maximal height of the CMJ (47.25 ± 7.48 cm vs. 52.20 ± 6.25 cm, F = 5.31, p < .05). Eight weeks of additional WBVT can therefore improve vertical and decuple jump performance in handball athletes. Enhanced motor unit synchronization and firing rates, a facilitated stretch-shortening cycle of muscular contraction, and improved lower-extremity neuromuscular coordination could account for these enhancements.
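
As a rough illustration of the analysis described above, the sketch below runs a single-factor ANCOVA on post-training jump heights with the pretest values as a covariate; the data frame, column names (group, pre, post) and numbers are hypothetical placeholders, not the study's measurements.

```python
# Minimal ANCOVA sketch (hypothetical data, not the study's measurements):
# post-training jump height modelled with group as factor and pretest as covariate.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "group": ["EG"] * 8 + ["CG"] * 8,          # experimental vs. control
    "pre":   [41, 40, 42, 39, 43, 40, 41, 42,  # pretest SJ height (cm)
              40, 41, 39, 42, 40, 41, 43, 39],
    "post":  [48, 47, 49, 46, 50, 47, 48, 49,  # post-test SJ height (cm)
              41, 42, 40, 43, 41, 42, 44, 40],
})

model = smf.ols("post ~ pre + C(group)", data=df).fit()  # pretest as covariate
anova_table = sm.stats.anova_lm(model, typ=2)            # single-factor ANCOVA table
print(anova_table)                                       # F and p for C(group)
```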

Keywords: Muscle strength, explosive power, squat jump, and countermovement jump.

PDF Downloads: 2108
430 Retrieving Extended High Dynamic Range from Digital Negative Image - An Experiment on Architectural Photo Imaging

Authors: See Zi Siang, Khairul Hazrin Hashim, Harold Thwaites, Lee Xia Sheng, Ooi Wooi Har

Abstract:

This paper explores the development of an optimized method and apparatus for retrieving extended high dynamic range from a digital negative image. Architectural photo imaging can benefit from the high dynamic range imaging (HDRI) technique for preserving and presenting sufficient luminance in the shadow and highlight clipping areas of an image. The HDRI technique, which requires multiple exposure images as the source for HDRI rendering, may not be time-efficient during the acquisition process and post-processing stage, considering its numerous potential imaging variables and technical limitations during the multiple exposure process. This paper explores an experimental method and apparatus that aim to expand the dynamic range from a digital negative image in an HDRI environment. The method and apparatus are based on a single RAW image acquisition for use in HDRI post-processing. They cater to optimization in order to avoid or minimize the conventional HDRI photographic errors caused by different physical conditions during the photographing process and by the misalignment of multiple exposed image sequences. The study observes the characteristics and capabilities of the RAW image format, used as a digital negative, for the retrieval of extended high dynamic range in an HDRI environment.

Keywords: High Dynamic Range Image, Photography Workflow Optimization, Digital Negative Image, Architectural Image

PDF Downloads: 1606
429 Understanding the Selectional Preferences of the Twitter Mentions Network

Authors: R. Sudhesh Solomon, P. Y. K. L. Srinivas, Abhay Narayan, Amitava Das

Abstract:

Users in social networks either unicast or broadcast their messages. The at-mention is the popular way of unicasting on Twitter, whereas general tweeting can be considered a broadcasting method. Understanding the information flow and dynamics within a social network, and modeling the same, is a promising and open research area called information diffusion. This paper seeks an answer to a fundamental question: is the at-mention network, or the unicasting pattern in social media, purely random in nature, or is there a user-specific selectional preference? To answer this question, we present an empirical analysis of the sociological aspects of the Twitter mentions network within a social network community. To understand the sociological behavior, we analyze the values (Schwartz model: Achievement, Benevolence, Conformity, Hedonism, Power, Security, Self-Direction, Stimulation, Tradition and Universalism) of all the users. Empirical results suggest that value traits are indeed a salient cue to understanding how the mention-based communication network functions. For example, we notice that individuals possessing similar values unicast among themselves more often than with people of other value types. We also observe that tradition-oriented and self-directed people do not maintain very close relationships in the network with people of different value traits.

Keywords: Social network analysis, information diffusion, personality and values, Twitter Mentions Network.

PDF Downloads: 729
428 Suppression of Narrowband Interference in Impulse Radio Based High Data Rate UWB WPAN Communication System Using NLOS Channel Model

Authors: Bikramaditya Das, Susmita Das

Abstract:

A study on the suppression of interference using time domain equalizers is presented for a high data rate impulse radio (IR) ultra-wideband (UWB) communication system. Narrowband systems may cause interference with UWB devices, since UWB operates with very low transmission power over a large bandwidth. The SRake receiver improves system performance by equalizing signals from different paths, which enables the use of SRake receiver techniques in IR-UWB systems. However, the Rake receiver alone fails to suppress narrowband interference (NBI). A hybrid SRake-MMSE time domain equalizer is proposed to overcome this by taking into account the effects of both the number of Rake fingers and the number of equalizer taps; it also combats intersymbol interference. A semi-analytical approach and Monte Carlo simulation are used to investigate the BER performance of the SRake-MMSE receiver on IEEE 802.15.3a UWB channel models. A study on non-line-of-sight indoor channel models (CM3 and CM4) illustrates that the bit error rate performance of the SRake-MMSE receiver with NBI is better than that of the Rake receiver without NBI. We show that, for an MMSE equalizer operating at high SNRs, the number of equalizer taps plays a more significant role in suppressing interference.

Keywords: IR-UWB, UWB, IEEE 802.15.3a, NBI, data rate, bit error rate.

PDF Downloads: 1683
427 Structural Damage Detection via Incomplete Modal Data Using Output Data Only

Authors: Ahmed Noor Al-Qayyim, Barlas Ozden Caglayan

Abstract:

Structural failure is caused mainly by damage that often occurs in structures. Many researchers have focused on obtaining efficient tools to detect damage in structures at an early stage. Over the past decades, a subject that has received considerable attention in the literature is damage detection based on variations in the dynamic characteristics or response of structures. This study presents a new damage identification technique that detects the damage location for an incomplete structural system using output data only. The method indicates the damage from free vibration test data by using the 'Two Points Condensation (TPC)' technique, which creates a set of matrices by reducing the structural system to two-degree-of-freedom systems. The current stiffness matrices are obtained by optimizing the equation of motion using the measured test data, and are then compared with the original (undamaged) stiffness matrices. Large percentage changes in the matrices' coefficients indicate the location of the damage. The TPC technique is applied to experimental data from a simply supported steel beam model structure after inducing a thickness change in one element, where two cases are considered. The method detects the damage and determines its location accurately in both cases. In addition, the results illustrate that these changes in the stiffness matrix can be a useful tool for continuous monitoring of structural safety using ambient vibration data. Furthermore, its efficiency shows that this technique can also be used for large structures.
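
A minimal numerical sketch of the comparison step described above is given below; the 2x2 condensed stiffness matrices and their values are hypothetical placeholders, not the paper's identified matrices.

```python
# Sketch of comparing condensed stiffness matrices (hypothetical values).
import numpy as np

K_undamaged = np.array([[2.0e6, -1.0e6],
                        [-1.0e6, 2.0e6]])   # reference (undamaged) 2-DOF stiffness
K_current   = np.array([[1.6e6, -0.9e6],
                        [-0.9e6, 1.95e6]])  # identified from measured response

# Percentage change of each coefficient; large changes flag the damaged location.
percent_change = 100.0 * np.abs(K_current - K_undamaged) / np.abs(K_undamaged)
i, j = np.unravel_index(np.argmax(percent_change), percent_change.shape)
print(percent_change)
print(f"Largest change {percent_change[i, j]:.1f}% at coefficient ({i}, {j})")
```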

Keywords: Damage detection, two points condensation, structural health monitoring, signal processing, optimization.

PDF Downloads: 2686
426 Renewable Energy Industry Trends and Its Contributions to the Development of Energy Resilience in an Era of Accelerating Climate Change

Authors: A. T. Asutosh, J. Woo, M. Kouhirostami, M. Sam, A. Khantawang, C. Cuales, W. Ryor, C. Kibert

Abstract:

Climate change and global warming have grown to alarming proportions; therefore, the need for a shift in the conceptualization of energy production is paramount. Energy practices in the current situation remain largely unchanged: fossil fuels continue their prominence, at the expense of renewable sources. Despite this abundance, a large percentage of the world's population still has no access to electricity. There have been encouraging signs of a global movement from nonrenewable to renewable energy, but means to reverse climate change have remained elusive. Worldwide, organizations have put tremendous effort into innovation, and conferences and exhibitions act as platforms for a broad exchange of information regarding trends in the renewable energy field. The Solar Power International (SPI) conference and exhibition is a gathering of concerned activists, and probably the largest convention of its kind. This study investigates current developments in the renewable energy field, analyzing the means by which industry is being applied to the issue. In reviewing the 2019 SPI conference, it was found that innovations in recycling and in assessing the environmental impacts of solar products need critical attention. There is strong momentum in electrical storage, but a large gap exists in the development of security systems. This research focuses on solar energy, but the findings will be relevant to the entire renewable energy market.

Keywords: Climate change, renewable energy, solar, trends, research, SPI.

PDF Downloads: 1134
425 Simulation of Ammonia-Water Two Phase Flow in Bubble Pump

Authors: Jemai Rabeb, Benhmidene Ali, Hidouri Khaoula, Chaouachi Bechir

Abstract:

The diffusion-absorption refrigeration cycle consists of a generator bubble pump, an absorber, an evaporator and a condenser, and usually operates with ammonia/water/hydrogen or helium as the working fluid. The aim of this paper is to study the stability problem of a bubble pump; in fact, instability can cause a reduction in bubble pump efficiency. To achieve this goal, we have simulated the behaviour of two-phase flow in a bubble pump by using a drift flow model. The equations of the drift flow model are formulated for the transitional regime, non-adiabatic conditions and thermodynamic equilibrium between the liquid and vapour phases. Solving the equations allows the void fraction, the liquid and vapour velocities, the pressure and the mixing enthalpy to be determined. An ammonia-water mixture is used as the working fluid, with an ammonia mass fraction at the inlet of 0.6. The present simulation is carried out for a heating flux of 2 kW/m² to 5 kW/m², a bubble pump tube length of 1 m and an inner diameter of 2.5 mm. Simulation results reveal oscillations of the vapour and liquid velocities over time. The oscillations decrease with time and with heat flux. After sufficient time, a steady state is established, characterised by constant liquid velocity and void fraction values; the vapour velocity, however, does not show the same behaviour and keeps increasing even at steady state. Pressure drop oscillations are also studied.

Keywords: Bubble pump, drift flow model, instability, simulation.

PDF Downloads: 1073
424 Anomaly Detection in a Data Center with a Reconstruction Method Using a Multi-Autoencoders Model

Authors: Victor Breux, Jérôme Boutet, Alain Goret, Viviane Cattin

Abstract:

Early detection of anomalies in data centers is important to reduce downtime and the costs of periodic maintenance. However, there is little research on this topic and even less on the fusion of sensor data for the detection of abnormal events. The goal of this paper is to propose a method for anomaly detection in data centers by combining sensor data (temperature, humidity, power) and deep learning models. The model described in the paper uses one autoencoder per sensor to reconstruct the inputs. The autoencoders contain Long Short-Term Memory (LSTM) layers and are trained using the normal samples of the relevant sensors selected by correlation analysis. The difference signal between the input and its reconstruction is then used to classify the samples using feature extraction and a random forest classifier. The data measured by the sensors of a data center between January 2019 and May 2020 are used to train the model, while the data between June 2020 and May 2021 are used to assess it. The performance of the model is assessed a posteriori through the F1-score by comparing detected anomalies with the data center's history. The proposed model outperforms the state-of-the-art reconstruction method, which uses only one autoencoder taking multivariate sequences and detects an anomaly with a threshold on the reconstruction error, achieving an F1-score of 83.60% compared to 24.16%.
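
The sketch below illustrates the per-sensor reconstruction idea described above, assuming Keras and scikit-learn; the layer sizes, window shape and residual features are illustrative assumptions, not the authors' exact architecture.

```python
# Illustrative per-sensor LSTM autoencoder + residual-based classifier (assumed sizes).
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from sklearn.ensemble import RandomForestClassifier

def build_autoencoder(timesteps, n_features, latent_dim=16):
    inputs = keras.Input(shape=(timesteps, n_features))
    encoded = layers.LSTM(latent_dim)(inputs)                       # encoder
    repeated = layers.RepeatVector(timesteps)(encoded)              # bridge to decoder
    decoded = layers.LSTM(latent_dim, return_sequences=True)(repeated)
    outputs = layers.TimeDistributed(layers.Dense(n_features))(decoded)
    model = keras.Model(inputs, outputs)
    model.compile(optimizer="adam", loss="mse")
    return model

# One autoencoder per sensor, trained on normal windows only (windows: [N, T, F]);
# residual features (e.g. mean/max absolute error per window) feed a random forest.
def residual_features(model, windows):
    recon = model.predict(windows, verbose=0)
    err = np.abs(windows - recon)
    return np.stack([err.mean(axis=(1, 2)), err.max(axis=(1, 2))], axis=1)

# clf = RandomForestClassifier(n_estimators=100).fit(features, labels)
```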

Keywords: Anomaly detection, autoencoder, data centers, deep learning.

PDF Downloads: 719
423 Intelligent Assistive Methods for Diagnosis of Rheumatoid Arthritis Using Histogram Smoothing and Feature Extraction of Bone Images

Authors: SP. Chokkalingam, K. Komathy

Abstract:

Advances in the field of image processing envision a new era of evaluation techniques and application of procedures in various fields, one of which is the biomedical field for prognosis as well as diagnosis of diseases. Although this plethora of methods provides a wide range of options to select from, it also creates confusion in selecting the apt process and in finding which one is most suitable. Our objective is to use a series of techniques on bone scans so as to detect the occurrence of rheumatoid arthritis (RA) as accurately as possible. Among the existing techniques in the field, our proposed system tends to be more effective as it depends on new methodologies that have proved to be better and more consistent than others. Computer-aided diagnosis provides a more accurate and reliable rate of consistency that helps to improve the efficiency of the system. The image first undergoes histogram smoothing and specification, a morphing operation, boundary detection by an edge-following algorithm and finally image subtraction to determine the presence of rheumatoid arthritis in a more efficient and effective way. During preprocessing, noise is removed from the images; segmentation is used to find the region of interest, and histogram smoothing is applied to a specific portion of the images. Gray level co-occurrence matrix (GLCM) features such as mean, median, energy, correlation and bone mineral density (BMD) are then extracted and stored in a database. This dataset is trained with inflamed and non-inflamed values; with the help of a neural network, all new images are checked for their status, and rough set theory is applied for further feature reduction.
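
A minimal sketch of the feature-extraction step described above, assuming scikit-image; the particular feature list and the GLCM parameters (distance 1, angle 0) are illustrative choices, not the authors' exact configuration.

```python
# Sketch of extracting simple intensity statistics and GLCM texture features
# from a grayscale bone image (scikit-image); feature list is illustrative.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def texture_features(image_u8):
    """image_u8: 2-D uint8 grayscale array (e.g. a segmented region of interest)."""
    stats = {"mean": float(image_u8.mean()), "median": float(np.median(image_u8))}
    glcm = graycomatrix(image_u8, distances=[1], angles=[0],
                        levels=256, symmetric=True, normed=True)
    for prop in ("energy", "correlation", "contrast", "homogeneity"):
        stats[prop] = float(graycoprops(glcm, prop)[0, 0])
    return stats
```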

Keywords: Computer Aided Diagnosis, Edge Detection, Histogram Smoothing, Rheumatoid Arthritis.

PDF Downloads: 2473
422 A Concept of Rational Water Management at Local Utilities – The Use of RO for Water Supply and Wastewater Treatment/Reuse

Authors: N. Matveev, A. Pervov

Abstract:

Local utilities often face problems with local industrial wastes and stormwater disposal due to existing strict regulations. For many local industries, the problem of wastewater treatment and discharge into surface reservoirs cannot be solved through the use of conventional biological treatment techniques. Current discharge standards require very strict removal of a number of impurities such as ammonia, nitrates and phosphates; to reach this level of removal, expensive reagents and sorbents are used. The modern concept of rational water resources management requires the development of new efficient techniques that provide wastewater treatment and reuse. As RO membranes simultaneously reject all dissolved impurities such as BOD, TDS, ammonia and phosphates, they become very attractive for the direct treatment of wastewater without a biological stage. To treat wastewater, specially designed membrane "open channel" modules are used that do not possess "dead areas" that cause fouling or require pretreatment. A solution to the RO concentrate disposal problem is presented that consists of reducing the initial wastewater volume by a factor of 100; the concentrate is withdrawn from the membrane unit as sludge moisture. The efficient use of membrane RO techniques is connected with the salt balance in the water system. Thus, to provide high ecological efficiency of the developed techniques, all components of the water supply and wastewater discharge systems should be accounted for.

Keywords: Reverse osmosis, stormwater treatment, open-channel module, wastewater reuse.

PDF Downloads: 1951
421 Characterization and Predictors of Community Integration of People with Psychiatric Problems: Comparisons with the General Population

Authors: J. Cabral, C. Barreto Carvalho, C. da Motta, M. Sousa

Abstract:

Community integration is a construct that an increasing body of research has shown to have a significant impact on the wellbeing and recovery of people with psychiatric problems. However, there are few studies that explore which factors can be associated with and predict community integration. Moreover, community integration has been mostly studied in minority groups, and current literature on the definition and manifestation of community integration in the general population is scarcer. Thus, the current study aims to characterize community integration and explore possible predictor variables in a sample of participants with psychiatric problems (PP, N=183) and a sample of participants from the general population (GP, N=211). Results show that people with psychiatric problems present above-average values of community integration, but these are significantly lower than those of their healthy counterparts. It was also possible to observe that community integration does not vary with the sociodemographic characteristics of either group in this study. Correlation and multiple regression showed that, among the several variables the literature presents as relevant to the community integration process, only three emerged as having the most explanatory value for community integration in both groups: sense of community, basic needs satisfaction and submission. These results also show that those variables have increased explanatory power in the PP sample, which leads us to emphasize the need to address this issue in future studies and to increase the understanding of the factors that can be involved in the promotion of community integration, in order to devise more effective interventions in this field.

Keywords: Community integration, mental illness, predictors.

PDF Downloads: 1824
420 A Study of the Planning and Designing of the Built Environment under the Green Transit-Oriented Development

Authors: Wann-Ming Wey

Abstract:

In recent years, the problems of global climate change and natural disasters have drawn public concern and attention to environmental sustainability issues. Aside from the environmental planning efforts made for the human environment, Transit-Oriented Development (TOD) has been widely regarded as one of the solutions for sustainable city development. In order to be more consistent with sustainable urban development, the planning of the built environment adopted here is based on the concept of Green TOD, which combines TOD and Green Urbanism. Urban development under Green TOD encompasses design toward environmental protection, maximum enhancement of resources and energy-use efficiency, the use of technology to construct green buildings, and the linking of protected areas, natural ecosystems and communities. Green TOD not only provides a solution to urban traffic problems but also directs future urban development planning and design toward more sustainable and greener considerations. In this study, we use both the TOD and Green Urbanism concepts to study built environment planning and design. The Fuzzy Delphi Technique (FDT) is utilized to screen suitable criteria for Green TOD. Furthermore, the Fuzzy Analytic Network Process (FANP) and Quality Function Deployment (QFD) are then applied to evaluate the criteria and prioritize the alternatives. The study results can be regarded as future guidelines for built environment planning and design under Green TOD development in Taiwan.

Keywords: Green transit-oriented development, built environment, fuzzy Delphi technique, quality function deployment, fuzzy analytic network process.

PDF Downloads: 1508
419 Hospital Waste Management Practices: A Case Study in Iran

Authors: M. Farzadkia, S. Jorfi

Abstract:

Hospital waste is a category of waste consisting of infectious and non-infectious waste, both of which pose environmental and health risks. Special planning and management are therefore required because of their potential hazards. The lack of valid and comprehensive information regarding the generation and management of hospital waste in Iran is one of the most important problems in this field. This research aimed to evaluate hospital waste management efficiency in Karaj city, Iran. The four largest hospitals in Karaj city were selected for this cross-sectional study. Site observations and interviews with employees were carried out, and data were gathered using the hospital waste management questionnaire designed by the World Health Organization for developing countries. The collected data were analyzed using SPSS software. The average solid waste generated per bed was 2.78 kg, comprising 90% domestic waste and 10% infectious waste. Based on the quantitative analysis of general and infectious waste in these hospitals, food waste (37.39%) was the largest contributor to general waste, while textiles (28.06%) were the largest contributor to infectious waste. According to the information contained in the questionnaires, the main defects of waste management in these hospitals were inadequate staff in the waste management sector, poor disinfection of solid waste containers and temporary storage locations, and a lack of proper infectious waste treatment. According to the results of this research, waste management in these hospitals was far from optimal. In order to improve the existing conditions, the problems mentioned must be solved quickly, and planning for continuous monitoring of waste management in these hospitals should be established.

Keywords: Waste management, hospital wastes, solid wastes, Iran.

PDF Downloads: 2144
418 Use of Chlorophyll Meters to Assess In-Season Wheat Nitrogen Fertilizer Requirements in the Southern San Joaquin Valley

Authors: Brian H. Marsh

Abstract:

Nitrogen fertilizer is the most used and often the most mismanaged nutrient input. Nitrogen management has tremendous implications for crop productivity, quality and environmental stewardship, and sufficient nitrogen is needed for optimum yield and quality. Soil and in-season plant tissue testing for nitrogen status are time-consuming and expensive, so real-time sensing of plant nitrogen status can be a useful tool in managing nitrogen inputs. The objectives of this project were to assess the reliability of remotely sensed, non-destructive plant nitrogen measurements compared to wet chemistry data from sampled plant tissue, to develop in-season nitrogen recommendations based on remotely sensed data for improved nitrogen use efficiency, and to assess the potential for determining yield and quality from remotely sensed data. Very good correlations were observed between early-season remotely sensed crop nitrogen status and plant nitrogen concentrations and the subsequent in-season fertilizer recommendations. The transmittance/absorbance type meters gave the most accurate readings. The early in-season fertilizer recommendation would be to apply 40 kg of nitrogen per hectare plus 15 kg per hectare for each unit of difference measured with the SPAD meter between the crop and a reference area, or 25 kg plus 13 kg per hectare for each unit of difference measured with the CCM 200. Once the crop was sufficiently fertilized, meter readings became inconclusive and were of no benefit for determining nitrogen status, silage yield and quality, or grain yield and protein.
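
A small arithmetic sketch of the recommendation rule quoted above; the example meter readings are hypothetical, and the base/per-unit rates are simply those stated in the abstract.

```python
# Early-season N recommendation from the rates quoted above (kg N per hectare).
def n_recommendation(meter_reading_crop, meter_reading_reference, meter="SPAD"):
    diff = meter_reading_reference - meter_reading_crop   # unit difference vs. reference strip
    base, per_unit = (40, 15) if meter == "SPAD" else (25, 13)  # SPAD vs. CCM 200 rates
    return base + per_unit * max(diff, 0)

# Example: crop reads 3 SPAD units below the well-fertilized reference area.
print(n_recommendation(38.0, 41.0, meter="SPAD"))  # -> 85 kg N/ha
```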

Keywords: Wheat, nitrogen fertilization, chlorophyll meter.

PDF Downloads: 2192
417 Biometrical Comparison of Artemia urmiana Günther, 1899 (Crustacea: Anostraca) Cysts between Rainy and Drought Years (1994-2003/4) from Urmia Lake, Iran

Authors: A. Asem, N. Rastegar-Pouyani, P. De Los Rios, R. Manaffar, F. Mohebbi

Abstract:

Nowadays, biometrical characterizations of Artemia cysts are used as one of the most important factors in the study of Artemia populations and intraspecific particularity; these characters can also be used as economic indices. For example, high hatching efficiency is typically possible due to a small cyst diameter (a high number of cysts per gram); therefore, a small cyst diameter is to some extent an indicator of high cyst quality. This study covers a ten-year period including two different ecological conditions, rainy and drought. It is important from two aspects: it covers the alteration of A. urmiana over ten years, as well as its variation under the best and worst environmental conditions, during which salinity increased from 173.8 ppt in 1994 to 280.8 ppt in 2003/4. In this study, the biometrical raw data of Artemia urmiana cysts at seven stations in Urmia Lake in 1994 and their seven identical locations among the 26 stations studied in 2003/4 were reanalyzed and compared. Biometrical comparison of untreated and decapsulated cysts at each of the seven matching stations showed highly significant variation between 1994 and 2003/4. In all stations, the untreated and decapsulated cysts from 1994 were larger than the cysts from 2003/4, without exception; however, there was no consistent relationship between salinity and chorion thickness in Urmia Lake. According to the PCA analyses, the stations of the two studied years were clearly separated from each other along factor 1. In conclusion, the interaction between genetic and environmental factors can determine and explain variation in the range of cyst diameters in Artemia.

Keywords: Artemia urmiana, Biometry, Cyst, Urmia Lake

PDF Downloads: 3670
416 Flipped Classroom in Bioethics Education: A Blended and Interactive Online Learning Courseware that Enhances Active Learning and Student Engagement

Authors: Molly P. M. Wong

Abstract:

In this study, a blended and interactive e-learning Courseware that our team developed is introduced, and our team's experiences of how the e-learning Courseware and the flipped classroom benefit student learning in bioethics in the medical program are shared. This study is a continuation of a previously established study and provides a summary of the well-developed e-learning Courseware in a blended learning approach, together with an update on its efficiency and efficacy. First, a collection of animated videos capturing selected topics of bioethics and related ethical issues and dilemmas is introduced. Next, a selection of problem-based learning videos ("simulated doctor-patient role play") with pop-up questions and discussions is presented. Our findings demonstrate that these activities launched by the Courseware strongly engaged students in bioethics education and enhanced students' critical thinking and creativity. Moreover, the educational benefits of the online art exhibition, art jamming and competition are discussed, through which students could express bioethics through art and enrich their learning in medical research in an interactive, fun and entertaining way, strengthening their interest in bioethics. Furthermore, online survey questionnaires and focus group interviews were conducted. Our results indicate that implementing the e-learning Courseware with a flipped classroom in bioethics education enhanced both active learning and student engagement. In conclusion, our Courseware not only reinforces education in art, bioethics and medicine, but also benefits students' understanding and critical thinking regarding socio-ethical issues, and serves as a valuable learning tool in bioethics teaching and learning.

Keywords: Bioethics, courseware, e-learning, flipped classroom.

PDF Downloads: 481
415 Crashworthiness Optimization of an Automotive Front Bumper in Composite Material

Authors: S. Boria

Abstract:

In recent years, it has become possible to improve the crashworthiness of an automotive body structure from the very beginning of the design stage, thanks to the development of specific optimization tools. It is well known that finite element codes can help the designer investigate the crashing performance of structures under dynamic impact. Therefore, by coupling nonlinear mathematical programming procedures and statistical techniques with FE simulations, it is possible to optimize the design with a reduced number of analytical evaluations. In engineering applications, many optimization methods that are based on statistical techniques and utilize estimated models, called meta-models, are quickly spreading. A meta-model is an approximation of a detailed simulation model based on a dataset of inputs identified by the design of experiments (DOE); the number of simulations needed to build it depends on the number of variables. Among the various types of meta-modeling techniques, the Kriging method appears excellent in accuracy, robustness and efficiency compared to the others when applied to crashworthiness optimization. Therefore, such a meta-model was used in this work in order to improve the structural optimization of a bumper for a racing car in composite material subjected to frontal impact. The specific energy absorption represents the objective function to maximize, and the geometrical parameters subject to design constraints are the design variables. The LS-DYNA code was interfaced with the LS-OPT tool in order to find the optimized solution through the use of a domain reduction strategy. With the use of the Kriging meta-model, the crashworthiness characteristics of the composite bumper were improved.
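
To illustrate the Kriging meta-model idea in general terms (not the LS-OPT workflow itself), the sketch below fits a Gaussian process surrogate over a small DOE and searches it for the design maximizing predicted specific energy absorption; the design variables, bounds and SEA values are fictitious stand-ins for FE results.

```python
# Generic Kriging (Gaussian process) surrogate over a DOE; fictitious design
# variables and responses, used only to show the surrogate-then-search pattern.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
X_doe = rng.uniform([1.0, 20.0], [4.0, 60.0], size=(12, 2))   # e.g. wall thickness, taper angle
sea   = 30 - (X_doe[:, 0] - 2.5) ** 2 + 0.05 * X_doe[:, 1]    # pretend FE results (SEA, kJ/kg)

kriging = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X_doe, sea)

# Search the surrogate on a dense grid and pick the design maximizing predicted SEA.
grid = np.stack(np.meshgrid(np.linspace(1, 4, 50), np.linspace(20, 60, 50)), -1).reshape(-1, 2)
pred = kriging.predict(grid)
best = grid[np.argmax(pred)]
print("Predicted-best design:", best)
```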

Keywords: Composite material, crashworthiness, finite element analysis, optimization.

PDF Downloads: 1115
414 Silver Modified TiO2/Halloysite Thin Films for Decontamination of Target Pollutants

Authors: Dionisios Panagiotaras, Elias Stathatos, Dimitrios Papoulis

Abstract:

The sol-gel method has been used to fabricate nanocomposite films on glass substrates composed of halloysite clay mineral and nanocrystalline TiO2. The synthesis involves simple chemistry utilizing a nonionic surfactant molecule as a pore-directing agent, along with an acetic acid-based sol-gel route in the absence of water. Thermal treatment of the composite films at 450 °C ensures elimination of the organic material and leads to the formation of TiO2 nanoparticles on the surface of the halloysite nanotubes. Microscopy techniques and porosimetry methods were used to delineate the structural characteristics of the materials. The nanocomposite films produced are crack-free, and an active anatase crystal phase with small crystallite size was deposited on the halloysite nanotubes. The photocatalytic properties of the new materials were examined for the decomposition of the Basic Blue 41 azo dye in solution. These nanotechnology-based composite films show high efficiency for dye discoloration in spite of the different halloysite quantities and the small amount of halloysite/TiO2 catalyst immobilized on the glass substrates. Moreover, we examined the modification of the halloysite/TiO2 films with silver particles in order to improve their photocatalytic properties. Indeed, the presence of silver nanoparticles enhances the discoloration rate of Basic Blue 41 compared to the efficiencies obtained for unmodified films.

Keywords: Clay mineral, nanotubular Halloysite, Photocatalysis, Titanium Dioxide, Silver modification.

PDF Downloads: 2521
413 Method of Estimating Absolute Entropy of Municipal Solid Waste

Authors: Francis Chinweuba Eboh, Peter Ahlström, Tobias Richards

Abstract:

Entropy, as an outcome of the second law of thermodynamics, measures the level of irreversibility associated with any process, and the identification and reduction of irreversibility in the energy conversion process helps to improve the efficiency of the system. The entropy of pure substances, known as absolute entropy, is determined at an absolute reference point and is useful in the thermodynamic analysis of chemical reactions; however, municipal solid waste (MSW) is a structurally complicated material with unknown absolute entropy. In this work, an empirical model to calculate the absolute entropy of MSW based on the content of carbon, hydrogen, oxygen, nitrogen, sulphur and chlorine on a dry ash-free (daf) basis is presented. The proposed model was derived by statistical analysis from 117 relevant organic substances with known standard entropies, which represent the main constituents of MSW. The substances were divided into different waste fractions, namely food, wood/paper, textiles/rubber and plastics waste, and the standard entropies of each waste fraction and of the complete mixture were calculated. The correlation derived for the standard entropy of the complete waste mixture was s°MSW = 0.0101C + 0.0630H + 0.0106O + 0.0108N + 0.0155S + 0.0084Cl (kJ K⁻¹ kg⁻¹), and the present correlation can be used for estimating the absolute entropy of MSW from the elemental composition of the fuel within the ranges 10.3% ≤ C ≤ 95.1%, 0.0% ≤ H ≤ 14.3%, 0.0% ≤ O ≤ 71.1%, 0.0% ≤ N ≤ 66.7%, 0.0% ≤ S ≤ 42.1% and 0.0% ≤ Cl ≤ 89.7%. The model is also applicable to the efficient modelling of a combustion system in a waste-to-energy plant.
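
A direct implementation of the correlation quoted above is sketched below; the example composition is hypothetical, and the validity ranges are simply those stated in the abstract.

```python
# Implementation of the correlation quoted above (dry, ash-free mass %,
# result in kJ K^-1 kg^-1); the validity ranges are those stated in the abstract.
def absolute_entropy_msw(C, H, O, N, S, Cl):
    ranges = {"C": (10.3, 95.1, C), "H": (0.0, 14.3, H), "O": (0.0, 71.1, O),
              "N": (0.0, 66.7, N), "S": (0.0, 42.1, S), "Cl": (0.0, 89.7, Cl)}
    for name, (lo, hi, val) in ranges.items():
        if not lo <= val <= hi:
            raise ValueError(f"{name} = {val}% outside correlation range [{lo}, {hi}]%")
    return (0.0101 * C + 0.0630 * H + 0.0106 * O
            + 0.0108 * N + 0.0155 * S + 0.0084 * Cl)

# Example composition (hypothetical): C 50%, H 7%, O 40%, N 1.5%, S 0.5%, Cl 1%.
print(round(absolute_entropy_msw(50, 7, 40, 1.5, 0.5, 1.0), 3))  # ≈ 1.40 kJ/(K·kg)
```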

Keywords: Absolute entropy, irreversibility, municipal solid waste, waste-to-energy.

PDF Downloads: 1831
412 Prediction of Product Size Distribution of a Vertical Stirred Mill Based on Breakage Kinetics

Authors: C. R. Danielle, S. Erik, T. Patrick, M. Hugh

Abstract:

In the last decade, there has been an increase in demand for fine grinding due to the depletion of coarse-grained orebodies and an increase in the processing of finely disseminated minerals and complex orebodies. These ores have provided new challenges in concentrator design because fine and ultra-fine grinding is required to achieve acceptable recovery rates. Therefore, the correct design of a grinding circuit is important for minimizing unit costs and increasing product quality. The use of ball mills for grinding in fine size ranges is inefficient, and therefore vertical stirred grinding mills are becoming increasingly popular in the mineral processing industry due to their well-known high energy efficiency. This work presents a hypothesis of a methodology to predict the product size distribution of a vertical stirred mill using a Bond ball mill. The Population Balance Model (PBM) was used to empirically analyze the performance of a vertical mill and a Bond ball mill. The breakage parameters obtained for both grinding mills are compared to determine the possibility of predicting the product size distribution of a vertical mill based on the results obtained from the Bond ball mill. The biggest advantage of this methodology is that most mineral processing laboratories already have a Bond ball mill to perform the tests suggested in this study. Preliminary results show the possibility of predicting the performance of a laboratory vertical stirred mill using a Bond ball mill.

Keywords: Bond ball mill, population balance model, product size distribution, vertical stirred mill.

PDF Downloads: 1139
411 Economic Assessment of Green House for Cultivation of Float Based Seedling Production in India

Authors: Srinath Ramakkrushnan, Aswathaman Vijayan

Abstract:

In conventional seedling production, seedlings are grown in the open field under natural conditions, where they are susceptible to sudden changes in climate that affect their quality and yield. Quality seedlings are essential for good growth and performance of crops in the main field; they serve as a foundation for the economic returns to the farmer. Producing quality seedlings demands the use of hybrid seeds, as they have the ability to deliver better yield, greater uniformity, improved color, disease resistance, and so forth. Hybrid seed production poses major operational challenges, and its seed use efficiency plays an important role. Thus, in order to overcome the difficulties currently present in conventional seedling production and to use hybrid seeds efficiently, the Sustainability Cell of ITC Limited's Agri Business Division has conceptualized a novel seedling production unit for farmers in the West Godavari District of Andhra Pradesh. The "Green House based Float Seedling" methodology is a protected cultivation technique wherein the microclimate surrounding the plant/seedling body is controlled partially or fully as per the requirements of the species. This paper reports the techno-economic evaluation of a greenhouse for float-based seedling production, with experimental results obtained from the pilot implementation in the West Godavari District, Rajahmundry region of India.

Keywords: Economic Assessment, Float Seedling, Green House, ITC Limited, Payback period.

PDF Downloads: 4194
410 Performance Evaluation of Parallel Surface Modeling and Generation on Actual and Virtual Multicore Systems

Authors: Nyeng P. Gyang

Abstract:

Even though past, current and future trends suggest that multicore and cloud computing systems are increasingly prevalent and ubiquitous, this class of parallel systems is nonetheless underutilized in general, and barely used for research on employing parallel Delaunay triangulation for parallel surface modeling and generation in particular. The performances of actual/physical and virtual/cloud multicore systems at executing various algorithms, which implement various parallelization strategies of the incremental insertion technique of the Delaunay triangulation algorithm, were evaluated. T-tests were run on the data collected in order to determine whether differences in various performance metrics (including execution time, speedup and efficiency) were statistically significant. Results show that the actual machine is approximately twice as fast as the virtual machine at executing the same programs for the various parallelization strategies. The results, which furnish the scalability behaviors of the various parallelization strategies, also show that some of the differences between the performances of these systems, during different runs of the algorithms, were statistically significant. A few pseudo-superlinear speedup results, computed from the raw data collected, are not true superlinear speedup values. These pseudo-superlinear speedup values, which arise from one particular way of computing speedups, disappear and give way to asymmetric speedups, which are the accurate kind of speedups that occur in the experiments performed.
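
For reference, the speedup and efficiency metrics discussed above follow the standard definitions sketched below; the timings in the example are made-up placeholders, not the study's measurements.

```python
# Standard speedup/efficiency definitions used in parallel performance evaluation.
def speedup(t_serial, t_parallel):
    return t_serial / t_parallel

def efficiency(t_serial, t_parallel, n_cores):
    return speedup(t_serial, t_parallel) / n_cores

t1, t8 = 120.0, 18.5          # seconds on 1 core vs. 8 cores (hypothetical)
print(speedup(t1, t8))        # ≈ 6.49
print(efficiency(t1, t8, 8))  # ≈ 0.81
```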

Keywords: Cloud computing systems, multicore systems, parallel Delaunay triangulation, parallel surface modeling and generation.

PDF Downloads: 871
409 Automatic Classification of Lung Diseases from CT Images

Authors: Abobaker Mohammed Qasem Farhan, Shangming Yang, Mohammed Al-Nehari

Abstract:

Pneumonia is a kind of lung disease that creates congestion in the chest, and such pneumonic conditions can lead to loss of life due to the severity of the congestion. Pneumonic lung disease is caused by viral pneumonia, bacterial pneumonia, or COVID-19-induced pneumonia. Early prediction and classification of such lung diseases help reduce the mortality rate. We propose an automatic Computer-Aided Diagnosis (CAD) system in this paper using a deep learning approach. The proposed CAD system takes as input raw computerized tomography (CT) scans of the patient's chest and automatically predicts the disease class. We designed a Hybrid Deep Learning Algorithm (HDLA) to improve accuracy and reduce processing requirements. The raw CT scans are first pre-processed to enhance their quality for further analysis. We then apply a hybrid model that consists of automatic feature extraction and classification. We propose a robust 2D Convolutional Neural Network (CNN) model to extract automatic features from the pre-processed CT image. This CNN model assures feature learning with extremely effective 1D feature extraction for each input CT image. The outcome of the 2D CNN model is then normalized using the min-max technique. The second step of the proposed hybrid model concerns training and classification using different classifiers. Simulation outcomes using a publicly available dataset prove the robustness and efficiency of the proposed model compared to state-of-the-art algorithms.

Keywords: CT scans, COVID-19, deep learning, image processing, pneumonia, lung disease.

PDF Downloads: 571
408 Precision Grinding of Titanium (Ti-6Al-4V) Alloy Using Nanolubrication

Authors: Ahmed A. D. Sarhan, Hong Wan Ping, M. Sayuti

Abstract:

In the current era of competitive machinery production, industries place more emphasis on product quality and cost reduction whilst abiding by pollution-prevention policies. In attempting to address these concerns, industries are aware that the effectiveness of existing lubrication systems must be improved to achieve power-efficient and pollution-preventing machining processes. As such, this research studies a plausible solution to the issue in grinding titanium alloy (Ti-6Al-4V) by using nanolubrication as an alternative to flood grinding. The aim of this research is to evaluate the optimum conditions of grinding force and surface roughness using a minimum quantity lubrication (MQL) system to deliver nano-oil at different levels of weight concentration of silicon dioxide (SiO2) mixed with normal mineral oil. The Taguchi Design of Experiments (DoE) method is carried out using a standard Taguchi orthogonal array L16(4³) to find the optimized combination of the weight concentration of the SiO2 mixture, nozzle orientation and MQL pressure. Surface roughness and grinding force are also analyzed using the signal-to-noise (S/N) ratio to determine the best level of each factor tested. Consequently, the best combination of parameters is tested for a period of time, and the results are compared with the conventional grinding methods of dry and flood conditions. The results show a positive performance of MQL nanolubrication.
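
Since both responses (surface roughness and grinding force) are smaller-the-better characteristics, the Taguchi S/N analysis mentioned above typically uses the formula sketched below; the replicate values in the example are hypothetical, not the experiment's data.

```python
# Smaller-the-better signal-to-noise ratio, as used in Taguchi analysis for
# responses such as surface roughness and grinding force (hypothetical replicates).
import numpy as np

def sn_smaller_is_better(replicates):
    y = np.asarray(replicates, dtype=float)
    return -10.0 * np.log10(np.mean(y ** 2))

# Example: surface roughness Ra (µm) replicates for one L16 trial.
print(round(sn_smaller_is_better([0.42, 0.45, 0.40]), 2))
```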

Keywords: Grinding, MQL, precision grinding, Taguchi optimization, titanium alloy.

PDF Downloads: 1874
407 Sleep Scheduling Schemes Based on Location of Mobile User in Sensor-Cloud

Authors: N. Mahendran, R. Priya

Abstract:

Mobile cloud computing (MCC) combined with wireless sensor network (WSN) technology is attracting increasing attention from research scholars because it combines the data-gathering ability of sensors with the data-processing capacity of the cloud. This approach overcomes the limitations in data storage capacity and computational ability of sensor nodes. Finally, the stored data are sent to mobile users when they send a request. Most integrated sensor-cloud schemes fail to observe the following criteria: 1) mobile users request specific data from the cloud based on their present location; 2) power consumption matters, since most sensor nodes are equipped with non-rechargeable batteries and are mostly deployed in hazardous and remote areas. This paper focuses on the above observations and introduces an approach known as the collaborative location-based sleep scheduling (CLSS) scheme. Both the awake and asleep status of each sensor node is dynamically devised by schedulers, and the scheduling is done purely based on the mobile users' current location; in this manner, a large amount of energy consumption is avoided in the WSN. CLSS comprises two different methods: the CLSS1 scheme provides lower energy consumption, while CLSS2 provides scalability and robustness for the integrated WSN.

Keywords: Sleep scheduling, mobile cloud computing, wireless sensor network, integration, location, network lifetime.

PDF Downloads: 968
406 Saving Lives: Alternative Approaches to Reducing Gun Violence

Authors: Angie M. Wolf, Angie Del Prado Lippman, DeVone Boggan, Caroline Glesmann, Estivaliz Castro

Abstract:

This paper highlights an innovative and nontraditional violence prevention program that is making a noticeable impact in what was once one of the country’s most violent communities. With unique and tailored strategies, the Operation Peacemaker Fellowship, established in Richmond, California, combines components of evidence-based practices with a community-oriented focus on relationships and mentoring to fill a gap in services and increase community safety. In an effort to highlight these unique strategies and provide a blueprint for other communities with violent crime problems, the authors of this paper hope to clearly delineate how one community is moving forward with vanguard approaches to invest in the lives of young men who once were labeled their community’s most violent, even most deadly, youth. The impact of this program is evidenced through the fellows’ own voices as they illuminate the experience of being in the Fellowship. In interviews, fellows describe how participating in this program has transformed their lives and the lives of those they love. The authors of this article spent more than two years researching this Fellowship program in order to conduct an evaluation of it and, ultimately, to demonstrate how this program is a testament to the power of relationships and love combined with evidence-based practices, consequently enriching the lives of youth and the community that embraces them.

Keywords: Community violence, firearm violence, interventions for violent crime, violence prevention.

PDF Downloads: 1972
405 Application of Griddization Management to Construction Hazard Management

Authors: Lingzhi Li, Jiankun Zhang, Tiantian Gu

Abstract:

Hazard management that can prevent fatal accidents and property losses is a fundamental process during the construction stage of buildings. However, due to a lack of safety supervision resources and to operational pressures, the conduct of hazard management is poor and ineffective in China. In order to improve the quality of construction safety management, it is critical to explore the use of information technologies to ensure that the hazard management process is efficient and effective. After exploring the existing problems of construction hazard management in China, this paper develops a griddization management model for construction hazard management. First, following the knowledge grid infrastructure, the griddization computing infrastructure for construction hazard management is designed, which includes five layers: the resource entity layer, information management layer, task management layer, knowledge transformation layer and application layer. This infrastructure serves as the technical support for realizing grid management. Second, this study divides construction hazards into grids at the city, district and construction site levels according to grid principles. Last, a griddization management process including hazard identification, assessment and control is developed. Meanwhile, all stakeholders of construction safety management, such as owners, contractors, supervision organizations and government departments, should take the corresponding responsibilities in this process. Finally, a case study based on actual construction hazard identification, assessment and control is used to validate the effectiveness and efficiency of the proposed griddization management model. The advantage of this model is that it realizes information sharing and cooperative management between the various safety management departments.

Keywords: Construction hazard, grid management, griddization computing, process.

PDF Downloads: 1567
404 A Family Cars' Life Cycle Cost (LCC)-Oriented Hybrid Modelling Approach Combining ANN and CBR

Authors: Xiaochuan Chen, Jianguo Yang, Beizhi Li

Abstract:

Design for cost (DFC) is a method that reduces life cycle cost (LCC) from the designers' perspective. The multiple domain features mapping (MDFM) methodology was introduced in DFC; using MDFM, design features can be used to estimate the LCC. From the DFC perspective, the design features of family cars were obtained, such as overall dimensions, engine power and emission volume. At the conceptual design stage, the cars' LCC was estimated using the back propagation (BP) artificial neural network (ANN) method and case-based reasoning (CBR). Hamming space was used to measure the similarity among cases in the CBR method, and the Levenberg-Marquardt (LM) algorithm and a genetic algorithm (GA) were used in the ANN. The differences between the CBR and ANN LCC estimation models are discussed. Each method used separately has its shortcomings; by combining ANN and CBR, improved accuracy was obtained. First, the ANN was used to select the design features that affect LCC. Then, the LCC estimation results of the ANN were used to raise the accuracy of LCC estimation in the CBR method. Third, the ANN was used to estimate LCC errors and to correct the errors in the CBR estimation results when the accuracy was insufficient. Finally, economy family cars and a sport utility vehicle (SUV) were given as LCC estimation cases using this hybrid approach combining ANN and CBR.
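
A minimal sketch of the CBR retrieval step described above, using Hamming similarity over discretized design features; the feature codes and LCC values are fictitious examples, not the paper's case base.

```python
# Sketch of CBR retrieval using Hamming similarity over discretized design
# features (feature codes and LCC values below are fictitious examples).
def hamming_similarity(case_a, case_b):
    matches = sum(1 for a, b in zip(case_a, case_b) if a == b)
    return matches / len(case_a)

# Each case: (discretized design-feature vector, known life cycle cost).
case_base = [
    (("small", "1.4L", "low-emission"),  14_500),
    (("medium", "2.0L", "low-emission"), 18_200),
    (("suv", "2.5L", "high-emission"),   24_900),
]

query = ("medium", "2.0L", "high-emission")
best_case = max(case_base, key=lambda c: hamming_similarity(query, c[0]))
print("Retrieved LCC estimate:", best_case[1])   # nearest case's LCC as the estimate
```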

Keywords: Case-based reasoning, life cycle cost (LCC), artificial neural networks (ANN), family cars.

PDF Downloads: 1952
403 Assisted Prediction of Hypertension Based on Heart Rate Variability and Improved Residual Networks

Authors: Yong Zhao, Jian He, Cheng Zhang

Abstract:

Cardiovascular disease resulting from hypertension poses a significant threat to human health, and early detection of hypertension can potentially save numerous lives. Traditional methods for detecting hypertension require specialized equipment and are often incapable of capturing continuous blood pressure fluctuations. To address this issue, this study starts by analyzing the principle of heart rate variability (HRV) and introduces the use of sliding windows and power spectral density (PSD) techniques to analyze both the temporal and frequency domain features of HRV. Subsequently, a hypertension prediction network that relies on HRV is proposed, combining ResNet, attention mechanisms, and a multi-layer perceptron. The network leverages a modified ResNet18 to extract frequency domain features, while employing an attention mechanism to integrate temporal domain features, thus enabling auxiliary hypertension prediction through the multi-layer perceptron. The proposed network is trained and tested using the publicly available SHAREE dataset from PhysioNet. The results demonstrate that the network achieves a high prediction accuracy of 92.06% for hypertension, surpassing traditional models such as K-Nearest Neighbor (KNN), Bayes, logistic regression, and a traditional Convolutional Neural Network (CNN).
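
The sketch below illustrates the sliding-window and PSD feature step described above on an evenly resampled RR-interval series; the network itself is omitted, the window/step lengths and resampling rate are assumptions, and the LF/HF band limits are the conventional 0.04-0.15 Hz and 0.15-0.4 Hz.

```python
# Sketch of sliding-window HRV features with Welch PSD (scipy); window length,
# step and 4 Hz resampling rate are illustrative assumptions.
import numpy as np
from scipy.signal import welch

def band_power(freqs, psd, lo, hi):
    mask = (freqs >= lo) & (freqs < hi)
    return np.trapz(psd[mask], freqs[mask])

def hrv_windows(rr_resampled, fs=4.0, win_s=300, step_s=60):
    """rr_resampled: RR intervals evenly resampled at fs Hz (1-D array-like)."""
    rr = np.asarray(rr_resampled, dtype=float)
    win, step = int(win_s * fs), int(step_s * fs)
    feats = []
    for start in range(0, len(rr) - win + 1, step):
        seg = rr[start:start + win]
        freqs, psd = welch(seg, fs=fs, nperseg=min(256, win))
        lf = band_power(freqs, psd, 0.04, 0.15)   # low-frequency power
        hf = band_power(freqs, psd, 0.15, 0.40)   # high-frequency power
        feats.append([seg.mean(), seg.std(), lf, hf, lf / (hf + 1e-12)])
    return np.array(feats)
```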

Keywords: Feature extraction, heart rate variability, hypertension, residual networks.

PDF Downloads: 166
402 Control of Airborne Aromatic Hydrocarbons over TiO2-Carbon Nanotube Composites

Authors: Joon Y. Lee, Seung H. Shin, Ho H. Chun, Wan K. Jo

Abstract:

Polyvinyl acetate (PVA)-based titania (TiO2)-carbon nanotube composite nanofibers (PVA-TCCNs) with various PVA-to-solvent ratios, and PVA-based TiO2 composite nanofibers (PVA-TNs), were synthesized using an electrospinning process followed by thermal treatment. The photocatalytic activities of these nanofibers in the degradation of airborne monocyclic aromatics under visible-light irradiation were examined. This study focuses on the application of these photocatalysts to the degradation of the target compounds at sub-part-per-million indoor air concentrations. The characteristics of the photocatalysts were examined using scanning electron microscopy, X-ray diffraction, ultraviolet-visible spectroscopy, and Fourier-transform infrared spectroscopy. For all the target compounds, the PVA-TCCNs showed photocatalytic degradation efficiencies superior to those of the reference PVA-TN. Specifically, the average photocatalytic degradation efficiencies for benzene, toluene, ethyl benzene, and o-xylene (BTEX) obtained using the PVA-TCCNs with a PVA-to-solvent ratio of 0.3 (PVA-TCCN-0.3) were 11%, 59%, 89%, and 92%, respectively, whereas those observed using the PVA-TNs were 5%, 9%, 28%, and 32%, respectively. PVA-TCCN-0.3 displayed the highest photocatalytic degradation efficiency for BTEX, suggesting the presence of an optimal PVA-to-solvent ratio for the synthesis of PVA-TCCNs. The average photocatalytic efficiencies for BTEX decreased from 11% to 4%, 59% to 18%, 89% to 37%, and 92% to 53%, respectively, when the flow rate was increased from 1.0 to 4.0 L min⁻¹. In addition, the average photocatalytic efficiencies for BTEX decreased from 11% to ~0%, 59% to 3%, 89% to 7%, and 92% to 13%, respectively, when the input concentration increased from 0.1 to 1.0 ppm. The prepared PVA-TCCNs were effective for the purification of airborne aromatics at indoor concentration levels, particularly when the operating conditions were optimized.

Keywords: Mixing ratio, nanofiber, polymer, reference photocatalyst.

PDF Downloads: 2228