Search results for: cost optimization condition assessment
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6128

458 Evaluation of Gingival Hyperplasia Caused by Medications

Authors: Ilma Robo, Saimir Heta, Greta Plaka, Vera Ostreni

Abstract:

Purpose: Drug-induced gingival hyperplasia is an uncommon pathology encountered during routine work in dental units. The purpose of this paper is to present the clinical appearance of gingival hyperplasia caused by medications. Three classes of medications are known to cause hyperplasia, and the clinical cases encountered and included in this study were compared against data from the literature. Materials and Methods: A total of 311 patients were screened, of whom 182 met the inclusion criteria and were included in the study. After each patient's history was recorded and it was confirmed that the patient was aware of a chronic illness treated with drugs known to cause gingival hypertrophy, a clinical examination of the oral cavity was performed and the hyperplasia was assessed by vertical and horizontal evaluation according to the periodontal indexes. Results: From the data collected during the study, 97% of patients with gingival hyperplasia were treated with nifedipine. In 84% of patients treated with the selected medicines, the gingival hyperplasia had been present in the oral cavity for more than 1 year and 1 month. According to the GOI, about 21% of patients were in the first rank of this index, 52% in the second, 24% in the third and 3% in the fourth. According to the horizontal growth index of gingival hyperplasia, grade 1 included about 61% of patients and grade 2 about 39%. The bacterial index divided patients by grade: grade 0 - 8.2%, grade 1 - 32.4%, grade 2 - 14% and grade 3 - 45.1%. Conclusions: The highest percentage of drug-induced gingival hyperplasia was associated with nifedipine administered systemically for a duration of more than 1 year.

Keywords: Drug gingival hyperplasia, horizontal growth index, vertical growth index.

457 Assessment of Urban Heat Island through Remote Sensing in Nagpur Urban Area Using Landsat 7 ETM+ Satellite Images

Authors: Meenal Surawar, Rajashree Kotharkar

Abstract:

The Urban Heat Island (UHI) has become a prominent urban environmental concern in developing cities. To study the UHI effect in the Indian context, the Nagpur urban area is explored in this paper using Landsat 7 ETM+ satellite images through Remote Sensing and GIS techniques. The paper examines the effect of the LU/LC pattern on daytime Land Surface Temperature (LST) variation, which contributes to UHI formation within the Nagpur urban area. Supervised LU/LC classification was carried out in ENVI 5 to study urban change detection. Change detection was complemented by the Normalized Difference Vegetation Index (NDVI) to understand the proportion of vegetative cover with respect to the built-up ratio. Spectral radiance from the thermal band of the satellite images was processed to calibrate LST. Representative areas, selected on the basis of the urban built-up and vegetation classification, were used for observation of point LST. Across the Nagpur urban area, LST increases as building density increases and vegetation cover decreases, thereby causing the UHI effect. UHI intensity increased gradually by 0.7°C from 2000 to 2006, whereas a drastic increase of 1.8°C was observed during the period 2006 to 2013. Within the Nagpur urban area, the UHI effect formed due to the increase in building density and the decrease in vegetative cover.
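
The NDVI and LST calibration steps described above can be reproduced directly from the Landsat 7 ETM+ bands. The Python sketch below is a minimal illustration, assuming band 3 (red), band 4 (NIR) and band 6 (thermal) digital-number arrays; the gain/bias values are placeholders for the coefficients normally read from the scene metadata, the thermal constants K1 = 666.09 and K2 = 1282.71 are the published ETM+ values, and an emissivity correction would still be needed to go from brightness temperature to LST.

import numpy as np

def ndvi(nir, red):
    # NDVI = (NIR - Red) / (NIR + Red); bands 4 and 3 for Landsat 7 ETM+
    nir, red = nir.astype(float), red.astype(float)
    return (nir - red) / np.where((nir + red) == 0, 1, nir + red)

def brightness_temperature(dn_band6, gain=0.067087, bias=-0.06709):
    # DN -> spectral radiance (W m-2 sr-1 um-1); gain/bias come from the scene metadata
    radiance = gain * dn_band6.astype(float) + bias
    # Radiance -> at-sensor brightness temperature using the published ETM+ constants
    K1, K2 = 666.09, 1282.71
    return K2 / np.log(K1 / radiance + 1.0)   # Kelvin

# Example with tiny synthetic DN arrays (real inputs are full ETM+ scenes)
red = np.array([[40, 60], [55, 70]]); nir = np.array([[80, 75], [90, 65]])
b6 = np.array([[120, 128], [135, 140]])
print(ndvi(nir, red))
print(brightness_temperature(b6))   # Kelvin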

Keywords: Land use, land cover, land surface temperature, remote sensing, urban heat island.

456 The Use of Palm Kernel Shell and Ash for Concrete Production

Authors: J. E. Oti, J. M. Kinuthia, R. Robinson, P. Davies

Abstract:

This work reports the potential of using Palm Kernel (PK) ash and shell as partial substitutes for Portland Cement (PC) and coarse aggregate in the development of mortar and concrete. PK ash and shell are agro-waste materials from palm oil mills, and their disposal is an environmental problem of concern. PK ash has pozzolanic properties that enable it to partially replace cement, and it plays an important role in the strength and durability of concrete; its use in concrete would alleviate the increasing challenges of scarcity and the high cost of cement. In order to investigate the PC replacement potential of PK ash, three types of PK ash were produced at varying temperatures (350-750°C) and used to replace up to 50% of the PC. The PK shell was used to replace up to 100% of the coarse aggregate in order to study its aggregate replacement potential. The testing programme included material characterisation and the determination of compressive strength, tensile splitting strength and chemical durability in aggressive sulfate-bearing exposure conditions. The 90-day compressive results showed a significant strength gain (up to 26.2 N/mm2). Portland cement and conventional coarse aggregate had a significantly higher influence on strength gain than the equivalent PK ash and PK shell. The chemical durability results demonstrated that, after a prolonged period of exposure, significant strength losses were observed in all the concretes. This phenomenon is attributed to changes in concrete morphology, inhibition of the reaction species and the final disruption of the aggregate-cement paste matrix.

Keywords: Sustainability, Concrete, mortar, Palm kernel shell, compressive strength, consistency.

455 Assessment of Drought-Tolerant Maize Hybrids at Grain Growth Stage in a Mediterranean Area

Authors: Ayman El Sabagh, Celaleddin Barutçular, Hirofumi Saneoka

Abstract:

Drought is one of the most serious problems posing a grave threat to cereal production, including maize. Improving the drought-stress tolerance of maize poses a great challenge as the global need for food and bio-energy increases. Thus, the current study was planned to explore the variation among hybrids and determine the performance of target traits of maize hybrids at the grain growth stage under drought conditions during 2014 in Adana, Turkey, under Mediterranean climate conditions. Maize hybrids (Sancia, Indaco, 71May69, Aaccel, Calgary, 70May82, 72May80) were evaluated under two regimes (irrigated and water stress). Results revealed that grain yield and yield traits were negatively affected by water stress compared with normal irrigation, and the maximum biological yield and harvest index were recorded under normal irrigation. Significant differences were observed among hybrids with respect to yield and yield traits in the current research. Based on the results, grain weight had more effect on grain yield than grain number during the grain filling stage under water stress conditions. In this respect, according to its low drought susceptibility index (smaller grain yield losses), the hybrid Indaco was more stable in grain number and grain weight. Consequently, it may be concluded that this hybrid could be recommended for use in future breeding programs for the production of drought-tolerant hybrids.
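
The drought susceptibility index mentioned above is commonly computed with the Fischer and Maurer formulation, in which each hybrid's relative yield loss under stress is scaled by the drought intensity of the whole trial. The abstract does not state the formula used, so the Python sketch below is an illustration on hypothetical yields rather than the study's data.

import statistics

def drought_susceptibility_index(yield_stress, yield_irrigated):
    # Fischer & Maurer (1978) DSI per hybrid; lower DSI = smaller relative yield loss.
    # D is the drought intensity index of the whole trial.
    D = 1 - statistics.mean(yield_stress) / statistics.mean(yield_irrigated)
    return [(1 - ys / yp) / D for ys, yp in zip(yield_stress, yield_irrigated)]

# Hypothetical grain yields (t/ha) for seven hybrids under stress vs. irrigation
ys = [5.1, 6.0, 4.8, 5.5, 5.9, 5.2, 5.0]
yp = [8.2, 7.9, 8.5, 8.1, 8.4, 8.0, 7.8]
print(drought_susceptibility_index(ys, yp))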

Keywords: Drought susceptibility index, grain filling, grain yield, maize, water stress.

454 On-Line Geometrical Identification of Reconfigurable Machine Tool using Virtual Machining

Authors: Alexandru Epureanu, Virgil Teodor

Abstract:

One of the main research directions in the CAD/CAM machining area is the reduction of machining time. Feedrate scheduling is one of the advanced techniques that keeps the uncut chip area constant and, as a consequence, keeps the main cutting force constant. There are two main ways to optimize the feedrate. The first consists of cutting force monitoring, which requires complex equipment for force measurement and then setting the feedrate according to the cutting force variation. The second is to optimize the feedrate by keeping the material removal rate constant for the given cutting conditions. This paper proposes a new approach using an extended database that replaces the system model. The feedrate schedule is determined based on the identification of the reconfigurable machine tool and on determining the feed value with respect to the uncut chip section area, the contact length between tool and blank, and the geometrical roughness. The first stage consists of monitoring the blank and the tool to determine their actual profiles. The next stage is the determination of the programmed tool path that yields the target profile of the piece. The graphic representation environment models the tool and blank regions and then positions the tool model relative to the blank model according to the programmed tool path. For each of these positions, the geometrical roughness value, the uncut chip area and the contact length between tool and blank are calculated. Each of these parameters is compared with its admissible value and the feed value is set accordingly. This approach has the following advantages: the prediction of cutting force is possible even for complex cutting processes; the real cutting profile, which deviates from the theoretical profile, is taken into account; the blank-tool contact length can be limited; and the programmed tool path can be corrected so that the target profile is obtained. Applying this method yields data sets that allow feedrate scheduling such that the uncut chip area, and consequently the cutting force, is constant, which allows the machine tool to be used more efficiently and the machining time to be reduced.
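
The per-position decision described above (compute uncut chip area, tool-blank contact length and geometric roughness for the current feed, compare with the admissible values, and back the feed off until all three are satisfied) can be sketched as a simple loop. The Python below is only an illustration: the per-position model functions and the numbers are hypothetical placeholders for the values the virtual-machining database would supply.

def schedule_feed(positions, f_nominal, a_max, l_max, rz_max, step=0.9):
    # For each programmed tool position, reduce the feed until the predicted uncut
    # chip area, tool-blank contact length and geometric roughness are all admissible.
    feeds = []
    for pos in positions:
        f = f_nominal
        while True:
            a, l, rz = pos["area"](f), pos["contact"](f), pos["roughness"](f)
            if a <= a_max and l <= l_max and rz <= rz_max:
                break
            f *= step                      # back off the feed and re-check
        feeds.append(f)
    return feeds

# Toy per-position models: chip area and contact length assumed to grow with feed
positions = [{"area": lambda f, k=k: k * f,
              "contact": lambda f, k=k: 2.0 * k * f,
              "roughness": lambda f: f ** 2 / 3.2}
             for k in (0.8, 1.1, 1.5)]
print(schedule_feed(positions, f_nominal=0.3, a_max=0.35, l_max=0.8, rz_max=0.02))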

Keywords: Reconfigurable machine tool, system identification, uncut chip area, cutting conditions scheduling.

453 A Perceptually Optimized Foveation Based Wavelet Embedded Zero Tree Image Coding

Authors: A. Bajit, M. Nahid, A. Tamtaoui, E. H. Bouyakhf

Abstract:

In this paper, we propose a Perceptually Optimized Foveation based Embedded ZeroTree Image Coder (POEFIC) that applies perceptual weighting to the wavelet coefficients before the SPIHT encoding algorithm, in order to reach a targeted bit rate with improved perceptual quality for a given bit rate and a fixation point that determines the region of interest (ROI). The paper also introduces a new objective quality metric based on a psychovisual model that integrates the properties of the HVS, which plays an important role in our POEFIC quality assessment. Our POEFIC coder is based on a vision model that incorporates various masking effects of human visual system (HVS) perception. Thus, our coder weights the wavelet coefficients according to that model and attempts to increase the perceptual quality for a given bit rate and observation distance. The perceptual weights for all wavelet subbands are computed based on 1) foveation masking, to remove or reduce considerable high frequencies from peripheral regions; 2) luminance and contrast masking; and 3) the contrast sensitivity function (CSF), to achieve the perceptual decomposition weighting. The new perceptually optimized codec has the same complexity as the original SPIHT technique. The experimental results show that our coder achieves very good performance in terms of quality measurement.
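
The CSF part of the subband weighting can be illustrated numerically. The sketch below uses the Mannos and Sakrison contrast sensitivity curve, which is only one common CSF model (the abstract does not specify which CSF implementation approach the authors adopt), and maps each decomposition level to an assumed centre frequency under a placeholder viewing condition; both the model choice and the numbers are assumptions for illustration.

import numpy as np

def csf_mannos_sakrison(f_cpd):
    # Contrast sensitivity at spatial frequency f_cpd (cycles/degree),
    # Mannos-Sakrison model; one of several CSF formulations in use.
    return 2.6 * (0.0192 + 0.114 * f_cpd) * np.exp(-(0.114 * f_cpd) ** 1.1)

# Placeholder mapping: centre frequency of each of 5 DWT levels, assuming the
# display and viewing distance put the image Nyquist frequency at 32 cycles/degree.
nyquist = 32.0
centres = np.array([1.5 * nyquist / 2 ** (level + 1) for level in range(5)])
weights = csf_mannos_sakrison(centres)
print(weights / weights.max())   # normalised per-level perceptual weights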

Keywords: DWT, linear-phase 9/7 filter, Foveation Filtering, CSF implementation approaches, 9/7 Wavelet JND Thresholds and Wavelet Error Sensitivity WES, Luminance and Contrast masking, standard SPIHT, Objective Quality Measure, Probability Score PS.

452 Evaluating the Appropriateness of Passive Techniques Used in Achieving Thermal Comfort in Buildings: A Case of Lautech College of Health Sciences, Ogbomoso

Authors: Ilelabayo I. Adebisi, Yetunde R. Okeyinka, Abdulrasaq K. Ayinla

Abstract:

Architectural design is a complex process, especially when user comfort, building sustainability and energy efficiency need to be addressed. The current energy challenge, and the search for environments that give users greater physiological and psychological comfort in this part of the world, have led researchers to constantly explore the concept of passive design techniques. Passive techniques are design strategies used to regulate building indoor climates and improve user comfort without the use of energy-driven devices. This paper describes and analyses the significance of passive techniques for indoor climates and their impact on the thermal comfort of building users, using LAUTECH College of Health Sciences, Ogbomoso as a case study. The study assesses the appropriateness of the passive strategies used in achieving comfort in the buildings, with a view to evaluating their adequacy and effectiveness and establishing how comfortable the building users are. The assessment was carried out through a field survey and questionnaires, and the findings revealed that strategies such as orientation, spacing, courtyards, window positioning and choice of landscape are inadequate, while only fins and roof overhangs are adequate. The findings also revealed that 72% of building occupants experience heat discomfort in their spaces and hence feel the urge to get fresh air from outside during work hours. The Mahoney tables were used to provide appropriate architectural design recommendations to guide future designers in the study area.

Keywords: Energy challenge, passive cooling, techniques, thermal comfort, users comfort.

451 An Experimental Procedure for Design and Construction of Monocopter and Its Control Using Optical and GPS-Aided AHRS Sensors

Authors: A. Safaee, M. S. Mehrabani, M. B. Menhaj, V. Mousavi, S. Z. Moussavi

Abstract:

The monocopter is a single-wing rotary flying vehicle with hovering capability. It consists of two dynamic parts, and greater efficiency can be expected than for other micro UAVs because of the extended wing area relative to the fuselage. Low cost and a simple mechanism, in comparison with vehicles such as helicopters, are its most important characteristics. A previous paper introduced the final system; in this paper, the experimental design process of the monocopter and its control algorithm are investigated in general, the editorial errors of the previous article are corrected and some ambiguities of translation are resolved. Initially, by constructing several prototypes and carrying out many flight tests, the main design parameters of this air vehicle were obtained from experimental measurements, and the final monocopter required for this project was constructed. After construction, in order to design, implement and test the control algorithms, a simple optical system was first used to determine the heading angle. After numerous tests on the test stand, the control algorithm was designed and the timing of the control inputs was adjusted. Other control parameters of the system were then tuned in flight tests. Eventually, the final control system was designed and implemented using the AHRS sensor, and the final operational tests were performed successfully.

Keywords: Monocopter, Flap, Heading Angle, AHRS, Cyclic, Photo Diode.

450 Effective Dose and Size Specific Dose Estimation with and without Tube Current Modulation for Thoracic Computed Tomography Examinations: A Phantom Study

Authors: S. Gharbi, S. Labidi, M. Mars, M. Chelli, F. Ladeb

Abstract:

The purpose of this study is to reduce the radiation dose for chest CT examinations by adding Tube Current Modulation (TCM) to a standard CT protocol. A scan of an anthropomorphic male Alderson phantom was performed on a 128-slice scanner. The effective dose (ED) in both scans, with and without mAs modulation, was estimated by multiplying the Dose Length Product (DLP) by a conversion factor; the results were compared with those obtained with the CT-Expo software. The size-specific dose estimate (SSDE) values were obtained by multiplying the volume CT dose index (CTDIvol) by a size conversion factor related to the phantom's effective diameter. Objective assessment of image quality was performed with Signal-to-Noise Ratio (SNR) measurements in the phantom, and SPSS software was used for data analysis. Results showed that, with CARE Dose 4D included, ED was lowered by 48.35% and 51.51% according to the DLP and CT-Expo estimates, respectively. ED ranged between 7.01 mSv and 6.6 mSv for the standard protocol, and between 3.62 mSv and 3.2 mSv with TCM. Similar results were found for SSDE: the dose was 16.25 mGy without TCM and 48.8% lower with TCM. The SNR values were significantly different (p=0.03<0.05); the highest was measured on images acquired with TCM and reconstructed with filtered back projection (FBP). In conclusion, this study demonstrates the potential of the TCM technique to reduce SSDE and ED while conserving image quality, with a high diagnostic reference level, for thoracic CT examinations.
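
Both dose estimates in the abstract are simple products: ED = k * DLP and SSDE = f_size * CTDIvol. The sketch below shows the arithmetic, assuming the commonly used adult chest conversion coefficient k of about 0.014 mSv/(mGy.cm) and the AAPM Report 204 exponential fit for the 32 cm phantom as the size-dependent factor; the input numbers are illustrative and are not the phantom values measured in the study.

import math

def effective_dose(dlp_mGy_cm, k=0.014):
    # ED (mSv) = k * DLP; k ~ 0.014 mSv/(mGy.cm) is the usual adult chest coefficient
    return k * dlp_mGy_cm

def ssde(ctdi_vol_mGy, effective_diameter_cm):
    # SSDE = f_size * CTDIvol; f_size is the AAPM Report 204 fit for the 32 cm phantom
    f_size = 3.704369 * math.exp(-0.03671937 * effective_diameter_cm)
    return f_size * ctdi_vol_mGy

print(effective_dose(480.0))   # ~6.7 mSv for an illustrative DLP of 480 mGy.cm
print(ssde(8.0, 28.0))         # ~10.6 mGy for CTDIvol 8 mGy, effective diameter 28 cm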

Keywords: Anthropomorphic phantom, computed tomography, CT-expo, radiation dose.

449 Urban Search and Rescue and Rapid Field Assessment of Damaged and Collapsed Building Structures

Authors: Abid I. Abu-Tair, Gavin M. Wilde, John M. Kinuthia

Abstract:

Urban Search and Rescue (USAR) is a functional capability that has been developed to allow the United Kingdom Fire and Rescue Service to deal with ‘major incidents’ primarily involving structural collapse. The nature of the work undertaken by USAR means that staying out of a damaged or collapsed building structure is not usually an option for search and rescue personnel. As a result, there is always a risk that they themselves could become victims. For this paper, a systematic and investigative review using desk research was undertaken to explore the role which structural engineering can play in assisting search and rescue personnel to conduct structural assessments when in the field. The focus is on how search and rescue personnel can assess damaged and collapsed building structures, not just in terms of structural damage that may be encountered, but also in relation to structural stability. Natural disasters, accidental emergencies, acts of terrorism and other extreme events can vary significantly in nature and ferocity, and can cause a wide variety of damage to building structures. It is not possible, or even realistic, to provide search and rescue personnel with definitive guidelines and procedures to assess damaged and collapsed building structures as there are too many variables to consider. However, understanding what implications damage may have upon the structural stability of a building structure will enable search and rescue personnel to better judge and quantify risk from a life-safety standpoint. It is intended that this will allow search and rescue personnel to make informed decisions and ensure every effort is made to mitigate risk, so that they themselves do not become victims.

Keywords: Damaged and collapsed building structures, life safety, quantifying risk, search and rescue personnel, structural assessments in the field.

448 Bridging the Mental Gap between Convolution Approach and Compartmental Modeling in Functional Imaging: Typical Embedding of an Open Two-Compartment Model into the Systems Theory Approach of Indicator Dilution Theory

Authors: Gesine Hellwig

Abstract:

Functional imaging procedures for the non-invasive assessment of tissue microcirculation are in high demand, but they require a mathematical approach describing the trans- and intercapillary passage of tracer particles. Up to now, two theoretical and, so far, distinct concepts have been established for tracer kinetic modeling of contrast agent transport in tissue: pharmacokinetic compartment models, which are usually written as coupled differential equations, and the indicator dilution theory, which can be generalized, in accordance with the theory of linear time-invariant (LTI) systems, by using a convolution approach. Based on mathematical considerations, it can be shown that also in the case of an open two-compartment model, well known from functional imaging, the concentration-time course in tissue is given by a convolution, which allows a separation of the arterial input function from a system function (the impulse response function) summarizing the available information on tissue microcirculation. For this reason, it is possible to integrate the open two-compartment model into the system-theoretic concept of indicator dilution theory (IDT), and results known from IDT remain valid for the compartment approach. Given the long history of applications of compartmental analysis, similar solutions of the so-called forward problem, even in a more general context, can already be found in the extensive literature of the seventies and early eighties. Nevertheless, to this day, within the field of biomedical imaging (although not from the mathematical point of view) there seems to be a gap between the two approaches, which the author would like to bridge by an exemplary analysis of this well-known model.
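
Written out, the relation described above is the convolution below; this is a minimal LaTeX rendering of the standard result, with the amplitudes and rate constants left generic because the abstract does not state the particular exchange rates of the model.

C_T(t) \;=\; (c_a * h)(t) \;=\; \int_0^t c_a(\tau)\, h(t-\tau)\,\mathrm{d}\tau,
\qquad
h(t) \;=\; A\, e^{-\alpha t} + B\, e^{-\beta t},

where c_a(t) is the arterial input function and h(t) the tissue impulse response (system function); for an open two-compartment model, A, B, \alpha and \beta are algebraic combinations of the compartmental exchange rate constants.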

Keywords: Functional imaging, Tracer kinetic modeling, LTI system, Indicator dilution theory / convolution approach, Two-compartment model.

447 Human Digital Twin for Personal Conversation Automation Using Supervised Machine Learning Approaches

Authors: Aya Salama

Abstract:

Digital Twin has emerged as a compelling research area, capturing the attention of scholars over the past decade. It finds applications across diverse fields, including smart manufacturing and healthcare, offering significant time and cost savings. Notably, it often intersects with other cutting-edge technologies such as Data Mining, Artificial Intelligence, and Machine Learning. However, the concept of a Human Digital Twin (HDT) is still in its infancy and requires further demonstration of its practicality. HDT takes the notion of Digital Twin a step further by extending it to living entities, notably humans, who are vastly different from inanimate physical objects. The primary objective of this research was to create an HDT capable of automating real-time human responses by simulating human behavior. To achieve this, the study delved into various areas, including clustering, supervised classification, topic extraction, and sentiment analysis. The paper successfully demonstrated the feasibility of HDT for generating personalized responses in social messaging applications. Notably, the proposed approach achieved an overall accuracy of 63%, a highly promising result that could pave the way for further exploration of the HDT concept. The methodology employed Random Forest for clustering the question database and matching new questions, while K-nearest neighbor was utilized for sentiment analysis.
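
The abstract assigns Random Forest to matching incoming questions against the clustered question database and K-nearest neighbours to sentiment analysis. The sketch below is a minimal scikit-learn illustration of that division of labour; the toy messages, topic labels, sentiment labels and TF-IDF featurisation are assumptions for the example, not the paper's data or pipeline.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier

# Hypothetical toy data standing in for the chat-history question database
questions = ["how are you", "want to grab lunch", "did you finish the report",
             "are you free tonight", "where is the meeting"]
topic_ids = [0, 1, 2, 1, 2]             # cluster/topic label of each stored question
sentiments = ["neutral", "positive", "neutral", "positive", "neutral"]

vec = TfidfVectorizer().fit(questions)
X = vec.transform(questions)

# Random Forest matches an incoming question to a topic cluster,
# KNN assigns a sentiment label, mirroring the roles described in the abstract.
topic_clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, topic_ids)
sent_clf = KNeighborsClassifier(n_neighbors=3).fit(X, sentiments)

q = vec.transform(["are we still on for lunch"])
print(topic_clf.predict(q), sent_clf.predict(q))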

Keywords: Human Digital twin, sentiment analysis, topic extraction, supervised machine learning, unsupervised machine learning, classification and clustering.

446 Juxtaposition of the Past and the Present: A Pragmatic Stylistic Analysis of the Short Story “Too Much Happiness” by Alice Munro

Authors: Inas Hussein

Abstract:

Alice Munro is a Canadian short-story writer who has been regarded as one of the greatest writers of fiction. Owing to her great contribution to fiction, she was the first Canadian woman and the only short-story writer ever to be awarded the Nobel Prize for Literature, in 2013. Her literary works include collections of short stories and one book published as a novel. Her stories concentrate on the human condition and human relationships as seen through the lens of daily life. The setting in most of her stories is her native Canada: small towns much like the one where she grew up. Her writing style is not only realistic but is also characterized by autobiographical, historical and regional features. The aim of this research is to analyze one of the key stylistic devices often adopted by Munro in her fiction, the juxtaposition of the past and the present, with reference to the title story in Munro's short story collection Too Much Happiness. The story under exploration is a brief biography of the Russian mathematician and novelist Sophia Kovalevsky (1850-1891), the first woman to be appointed as a professor of Mathematics at a European university, in Stockholm. Thus, the story has a historical protagonist and is set on the European continent. Munro dramatizes the severe historical and cultural constraints that hindered the career of the protagonist. A pragmatic stylistic framework is adopted, and the qualitative analysis is supported by textual reference. The stylistic analysis reveals that the juxtaposition of the past and the present is one of the distinctive features that characterize the author; in a typical Munrovian manner, the protagonist often moves between the units of time: the past, the present and, sometimes, the future. Munro's style is simple and direct but cleverly constructed and densely complicated by the presence of deeper layers and stories within the story. The findings of the research reveal that the story under investigation merits reading and analyzing. It is recommended that this story and other stories by Munro be analyzed to further explore the features of her art and style.

Keywords: Alice Munro, Too Much Happiness, juxtaposition of past and present, pragmatic stylistics.

445 Toward Understanding and Testing Deep Learning Information Flow in Deep Learning-Based Android Apps

Authors: Jie Zhang, Qianyu Guo, Tieyi Zhang, Zhiyong Feng, Xiaohong Li

Abstract:

The widespread popularity of mobile devices and the development of artificial intelligence (AI) have led to the widespread adoption of deep learning (DL) in Android apps. Compared with traditional Android apps (traditional apps), deep learning based Android apps (DL-based apps) need to use more third-party application programming interfaces (APIs) to complete complex DL inference tasks. However, existing methods (e.g., FlowDroid) for detecting sensitive information leakage in Android apps cannot be directly used on DL-based apps, as they have difficulty detecting third-party APIs. To solve this problem, we design DLtrace, a new static information flow analysis tool that can effectively recognize third-party APIs. With our proposed trace and detection algorithms, DLtrace can also efficiently detect privacy leaks caused by sensitive APIs in DL-based apps. Additionally, we propose two formal definitions to deal with the common polymorphism and anonymous inner-class problems in the Android static analyzer. Using DLtrace, we summarize the non-sequential characteristics of DL inference tasks in DL-based apps and the specific functionalities provided by DL models for such apps. We conducted an empirical assessment with DLtrace on 208 popular DL-based apps in the wild and found that 26.0% of the apps suffered from sensitive information leakage. Furthermore, DLtrace outperformed FlowDroid in detecting and identifying third-party APIs. The experimental results demonstrate that DLtrace extends FlowDroid in understanding DL-based apps and detecting security issues therein.

Keywords: Mobile computing, deep learning apps, sensitive information, static analysis.

444 Mathematical Modeling of Wind Energy System for Designing Fault Tolerant Control

Authors: Patil Ashwini, Archana Thosar

Abstract:

This paper presents a mathematical model of a wind energy system that is useful for designing fault-tolerant control. Large-capacity wind energy systems are vital to serving the demand for power. These systems are installed offshore, where unplanned service is very costly. Whenever a fault occurs between two planned services, the system may stop working abruptly, which might even lead to complete failure of the system. To enhance reliability and availability and to reduce the maintenance cost of wind turbines, fault-tolerant control systems are essential. Designing any control system requires an appropriate mathematical model. In this paper, the two-mass model is modified to account for frequent mechanical faults such as misalignments in the drive train and faults in the gears and bearings; these faults are subject to a wear process and cause frictional losses, and the paper incorporates them into the mathematics of the wind energy system. The work is further extended to study how variations of the parameters, namely the generator inertia constant, spring constant, viscous friction coefficient and gear ratio, affect the pole-zero plot, which is related to the physical design of the wind turbine. The behavior of the wind turbine during drive train faults is simulated and briefly discussed.
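
For reference, the unmodified two-mass drive-train model that the paper starts from is commonly written as below; this is the generic textbook form (the symbols are chosen here, not taken from the paper), and the fault-dependent friction losses the authors introduce would enter as additional torque terms.

J_r\,\dot{\omega}_r \;=\; T_a - K_s\,\theta_\Delta - B_s\,\dot{\theta}_\Delta,
\qquad
J_g\,\dot{\omega}_g \;=\; \frac{K_s\,\theta_\Delta + B_s\,\dot{\theta}_\Delta}{N_g} - T_g,
\qquad
\dot{\theta}_\Delta \;=\; \omega_r - \frac{\omega_g}{N_g},

where J_r and J_g are the rotor and generator inertias, K_s the shaft (spring) stiffness, B_s the viscous friction coefficient of the shaft, N_g the gear ratio, T_a the aerodynamic torque and T_g the generator torque.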

Keywords: Mathematical model of wind energy system, stability analysis, shaft stiffness, viscous friction coefficient, gear ratio, generator inertia, fault tolerant control.

443 Environmental and Technical Modeling of Industrial Solid Waste Management Using Analytical Network Process: A Case Study of Gilan, Iran

Authors: D. Nouri, M.R. Sabour, M. Ghanbarzadeh Lak

Abstract:

Proper management of residues originating from industrial activities is considered one of the serious challenges faced by industrial societies, due to their potential hazards to the environment. Common disposal methods for industrial solid wastes (ISWs) encompass various combinations of individual management options, i.e. recycling, incineration, composting and sanitary landfilling. The procedure used to evaluate and nominate the best practical methods should be based on environmental, technical, economic and social assessments. In this paper, an environmental-technical assessment model is developed using the analytical network process (ANP) to facilitate decision making for the ISWs generated in Gilan province, Iran. Using the results of surveys performed on industrial units located in Gilan, the various groups of solid wastes in the research area were characterized, and four different ISW management scenarios were studied. The evaluation was conducted using the above-mentioned model in the Super Decisions software (version 2.0.8). The results indicate that the best ISW management scenario for Gilan province consists of recycling the metal industry residues, composting the putrescible portion of the ISWs, combusting paper, wood, fabric and polymeric wastes with energy recovery in the incineration plant, and finally landfilling the rest of the waste stream together with rejected materials from the recycling and compost production plants and ashes from the incineration unit.

Keywords: Analytical Network Process, Disposal Scenario, Gilan Province, Industrial Waste.

442 Effect of Fill Material Density under Structures on Ground Motion Characteristics Due to Earthquake

Authors: Ahmed T. Farid, Khaled Z. Soliman

Abstract:

Due to limited areas and the excessive cost of land for projects, backfilling has become necessary. Backfilling is also carried out to level uneven depths or to raise site construction levels, especially near coastal regions. Therefore, the backfill soil materials used under the foundations of structures should be investigated with regard to their effect on ground motion characteristics, especially in regions subjected to earthquakes. In this research, a 60-meter thickness of sandy fill material placed above a fixed 240 meters of natural clayey soil underlain by rock formation was used to predict the modified ground motion characteristics at the foundation level. The effects of three different compaction states of the fill material on the recorded earthquake are compared in terms of peak ground acceleration, time history and spectral acceleration values. The three densities of the compacted fill material used in the study were very loose, medium dense and very dense sand deposits, respectively. The SHAKE computer program was used to perform this study, with strong earthquake records having a Peak Ground Acceleration (PGA) of 0.35 g used in the analysis. It was found that higher compaction of the fill material thickness has a significant effect on attenuating the earthquake ground motion at the surface layer of the fill material, near the foundation level. It is recommended to consider the fill material characteristics in the design of foundations subjected to seismic motion. Future studies should analyze different fill and natural soil deposits under different seismic conditions.

Keywords: Fill, material, density, compaction, earthquake, PGA.

441 Simulation of an Auto-Tuning Bicycle Suspension Fork with Quick Releasing Valves

Authors: Y. C. Mao, G. S. Chen

Abstract:

The bicycle is not as large as a motorcycle or automobile, yet it constitutes a complicated dynamic system. People's requirements for comfort, controllability and safety grow higher as research and development technologies improve. The shock absorber affects vehicle suspension performance enormously: it absorbs vibration energy and releases it at a suitable time, keeping the wheel in proper contact with the road surface and maintaining chassis stability. Suspension design for mountain bicycles is more difficult than for city bikes, since it encounters dynamic variations in road and loading conditions. Riders need a stiff damper when they push hard on the pedals while climbing, but a soft damper when they descend downhill. Various switchable shock absorbers are available on the market; however, riders have to switch them manually among soft, hard and locked positions. This study proposes a novel bicycle shock absorber design that provides automatic, smooth tuning of the damping coefficient, from a predetermined lower bound to a theoretically unlimited value. An automatic quick-releasing valve is included in the design so that it can release the peak pressure when the suspension fork runs into a square-wave-type obstacle, preventing damage to the chassis and injury to the rider. The design achieves the automatic tuning process through innovative plunger valve and fluidic passage arrangements, without any electronic devices. Theoretical modelling of the damper and spring is established in this study, the design parameters of the valves and fluidic passages are determined, and the relations between design parameters and shock absorber performance are discussed. The analytical results provide guidance for shock absorber manufacture.

Keywords: Modelling, Simulation, Bicycle, Shock Absorber, Damping, Releasing Valve

440 Opportunities for Precision Feed in Apiculture for Managing the Efficacy of Feed and Medicine

Authors: John Michael Russo

Abstract:

Honeybees are important to our food system and continue to suffer from high rates of colony loss. Precision feed has brought many benefits to livestock cultivation and these should transfer to apiculture. However, apiculture has unique challenges. The objective of this research is to understand how principles of precision agriculture, applied to apiculture and feed specifically, might effectively improve state-of-the-art cultivation. The methodology surveys apicultural practice to build a model for assessment. First, a review of apicultural motivators is made. Feed method is then evaluated. Finally, precision feed methods are examined as accelerants with potential to advance the effectiveness of feed practice. Six important motivators emerge: colony loss, disease, climate change, site variance, operational costs, and competition. Feed practice itself is used to compensate for environmental variables. The research finds that the current state-of-the-art in apiculture feed focuses on critical challenges in the management of feed schedules which satisfy requirements of the bees, preserve potency, optimize environmental variables, and manage costs. Many of the challenges are most acute when feed is used to dispense medication. Technology such as RNA treatments have even more rigorous demands. Precision feed solutions focus on strategies which accommodate specific needs of individual livestock. A major component is data; they integrate precise data with methods that respond to individual needs. There is enormous opportunity for precision feed to improve apiculture through the integration of precision data with policies to translate data into optimized action in the apiary, particularly through automation.

Keywords: Apiculture, precision apiculture, RNA varroa treatment, honeybee feed applications.

439 Novel Use of a Quality Assurance Tool for Integrating Technology to HSE

Authors: Ragi Poyyara, Vivek V., Ashish Khaparde

Abstract:

The product development process (PDP) in the Technology group plays a very important role in the launch of any product. While a manufacturing process encourages the use of certain measures to reduce health, safety and environmental (HSE) risks on the shop floor, the PDP concentrates on the use of Geometric Dimensioning and Tolerancing (GD&T) to develop a flawless design. Furthermore, the PDP distributes and coordinates activities between different departments such as marketing, purchasing, and manufacturing. However, it is seldom realized that the PDP makes a significant contribution to developing a product that reduces HSE risks by encouraging the Technology group to use effective GD&T. GD&T is a precise communication tool that uses a set of symbols, rules, and definitions to mathematically define parts to be manufactured. It is a quality assurance method widely used in the oil and gas sector. Traditionally it is used to ensure the interchangeability of a part without affecting its form, fit, and function. Parts that do not meet these requirements are rejected during quality audits. This paper discusses how the Technology group integrates this quality assurance tool into the PDP and how the tool plays a major role in helping the HSE department towards its goal of eliminating HSE incidents. The PDP involves a thorough risk assessment and establishes a method to address those risks during the design stage. An illustration shows how GD&T helped reduce safety risks by ergonomically improving assembly operations. A brief discussion explains how tolerances provided on a part help prevent finger injury. This tool has enabled the Technology group to produce fixtures, which are used daily in operations as well as in manufacturing. By applying GD&T to create good fits, HSE risks are mitigated for operating personnel. Both customers and service providers benefit from reduced safety risks.

Keywords: HSE, PDP, GD&T, risks.

438 Automated Monitoring System to Support Investigation of Contributing Factors of Work-Related Disorders and Accidents

Authors: Erika R. Chambriard, Sandro C. Izidoro, Davidson P. Mendes, Douglas E. V. Pires

Abstract:

Work-related illnesses and disorders have been a constant aspect of work. Although their nature has changed over time, from musculoskeletal disorders to illnesses related to the psychosocial aspects of work, their impact on the lives of workers remains significant. Despite significant efforts worldwide to protect workers, the disparity between changes in work legislation and actual benefits for workers' health has created a significant economic burden for social security and health systems around the world. In this context, this study aims to propose, test and validate a modular prototype that allows work environment aspects to be assessed, monitored and better controlled. The main focus is to provide a historical record of working conditions and the means for workers to obtain comprehensible and useful information regarding their work environment and the legal limits of occupational exposure to different types of environmental variables, as a means to improve the prevention of work-related accidents and disorders. We show that the developed prototype provides useful and accurate information regarding work environmental conditions, validating it against standard occupational hygiene equipment. We believe the proposed prototype is a cost-effective and adequate approach to work environment monitoring that could help elucidate the links between work and occupational illnesses, and that different industry sectors, as well as developing countries, could benefit from its capabilities.

Keywords: Arduino prototyping, occupational health and hygiene, work environment, work-related disorders prevention.

437 Operation Planning of Concrete Box Girder Bridge by 4D CAD Visualization Techniques

Authors: Mohammad Rohani, Gholamali Shafabakhsh, Abdolhosein Haddad, Ehsan Asnaashari

Abstract:

Visual simulation has emerged as a key planning tool in the built environment because it enables architects, engineers and project managers to visualize the evolution of the construction process before the project actually commences. This provides an efficient technology for reducing time and cost through planning and controlling resources, machines and materials. With the development of infrastructure projects and massive civil constructions such as bridges, urban tunnels and highways, and given the sensitivity of their construction operations, it is necessary to apply proper planning methods. Implementing visual techniques in the management of construction projects can provide a sound foundation for projects with many activities and repetitive items. The purpose of this paper is therefore to develop visual simulation management techniques for infrastructure projects such as highway bridges by the use of four-dimensional computer-aided design (4D CAD) models. This project simulates the operational assembly line for box-girder concrete bridges, which makes it possible to optimize the sequence and interaction of project activities and to minimize unintended conflicts prior to the project start. In this paper, after introducing the various planning methods based on building information models and concrete bridges in highways, an executive case study is presented and a visual technique (4D CAD) is applied to the case. In the final step, user feedback on interacting with this system is evaluated according to six criteria.

Keywords: 4D application area, Box-Girder concrete bridges, CAD model, visual planning.

436 Memorabilia of Suan Sunandha through Interactive User Interface

Authors: Nalinee Sophatsathit

Abstract:

The objectives of the Memorabilia of Suan Sunandha project are to develop a general knowledge presentation about the historical royal garden through interactive graphic simulation techniques and to employ high-functionality context in enhancing interactive user navigation. The approach provides a non-intrusive display of relevant history in response to situational context. The user navigates through a virtual-reality campus consisting of new and restored buildings. A flashback presentation of historical information in the form of photos, paintings, and textual descriptions is displayed as the user passes each building. To keep the presentation lively, the graphical simulation is created as a serendipity game play so that the user can both learn from and enjoy the educational tour. The benefits of this human-computer interaction development are twofold. First, a lively presentation technique and situational context modeling are developed that entail a usable paradigm for combining knowledge and information presentation. Second, cost-effective training and promotion, for both internal personnel and public visitors to learn about and keep informed of this historical royal garden, can be furnished without the need for a dedicated public relations service. Future improvements in graphic simulation and ability-based display can extend this work to be more realistic, user-friendly, and informative for all.

Keywords: Interactive user navigation, high-functionality context, situational context, human-computer interaction.

435 Bridge Health Monitoring: A Review

Authors: Mohammad Bakhshandeh

Abstract:

Structural Health Monitoring (SHM) is a crucial and necessary practice that plays a vital role in ensuring the safety and integrity of critical structures, and in particular, bridges. The continuous monitoring of bridges for signs of damage or degradation through Bridge Health Monitoring (BHM) enables early detection of potential problems, allowing for prompt corrective action to be taken before significant damage occurs. Although all monitoring techniques aim to provide accurate and decisive information regarding the remaining useful life, safety, integrity, and serviceability of bridges, understanding the development and propagation of damage is vital for maintaining uninterrupted bridge operation. Over the years, extensive research has been conducted on BHM methods, and experts in the field have increasingly adopted new methodologies. In this article, we provide a comprehensive exploration of the various BHM approaches, including sensor-based, non-destructive testing (NDT), model-based, and artificial intelligence (AI)-based methods. We also discuss the challenges associated with BHM, including sensor placement and data acquisition, data analysis and interpretation, cost and complexity, and environmental effects, through an extensive review of relevant literature and research studies. Additionally, we examine potential solutions to these challenges and propose future research ideas to address critical gaps in BHM.

Keywords: Structural health monitoring, bridge health monitoring, sensor-based methods, machine-learning algorithms, model-based techniques, sensor placement, data acquisition, data analysis.

434 Assessment of Landfill Pollution Load on Hydroecosystem by Use of Heavy Metal Bioaccumulation Data in Fish

Authors: Gintarė Sauliutė, Gintaras Svecevičius

Abstract:

Landfill leachates contain a number of persistent pollutants, including heavy metals, which can spread through ecosystems and accumulate in fish, most of which are classified as top consumers of trophic chains. Fish are freely swimming organisms, but owing to their species-specific ecological and behavioral properties they often prefer the most suitable biotopes and therefore do not necessarily avoid harmful substances or environments. That is why it is necessary to evaluate the dispersion of persistent pollutants in a hydroecosystem using fish tissue metal concentrations. In hydroecosystems of hybrid type (e.g. river-pond-river), the distance from the pollution source can be a useful indicator of such metal distribution. The studies were carried out in the hybrid-type ecosystem neighboring the Kairiai landfill, which is located 5 km east of the Šiauliai City. Metal concentrations were measured in the tissues (gills, liver, and muscle) of two ecologically different types of fish according to their feeding characteristics: benthophagous (gibel carp, roach) and predatory (northern pike, perch). A number of mathematical models (linear, non-linear, using log and other transformations) were applied in order to identify the most satisfactory description of the interdependence between fish tissue metal concentration and the distance from the pollution source. Only the log-multiple regression model revealed the pattern that the distance from the pollution source is closely and positively correlated with metal concentration in all of the predatory fish tissues studied (gills, liver, and muscle).
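
A log-multiple regression of the kind referred to above fits the logarithm of tissue metal concentration against distance from the pollution source (plus any further covariates). The NumPy sketch below shows the single-predictor version on hypothetical numbers, purely to illustrate the transformation and the positive correlation reported; the values are not the study's measurements.

import numpy as np

# Hypothetical data: distance from the leachate outflow (km) and metal
# concentration in predatory-fish liver (mg/kg); not the study's measurements.
distance = np.array([0.3, 0.8, 1.5, 2.4, 3.6, 5.0])
conc = np.array([0.9, 1.1, 1.6, 2.0, 2.9, 3.8])

# Log-multiple regression of the form log(C) = b0 + b1 * distance
X = np.column_stack([np.ones_like(distance), distance])
b, *_ = np.linalg.lstsq(X, np.log(conc), rcond=None)
r = np.corrcoef(distance, np.log(conc))[0, 1]
print(f"intercept={b[0]:.3f}, slope={b[1]:.3f}, r={r:.2f}")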

Keywords: Bioaccumulation in fish, heavy metals, hydroecosystem, landfill leachate, mathematical model.

433 Evaluating the Sustainability of Agriculture by Indicators Appropriate to the Area of Ban Phaeo District, Samut Sakorn Province, Thailand

Authors: N. Talisa, K. Rungsarid, P. Chakrit

Abstract:

The objectives of the research are to study the existing agricultural patterns and to evaluate the sustainability of agriculture in its economic, social and environmental aspects. The sample consisted of 30 households representing the agriculturist group of Ban Phaeo district, Samut Sakorn province, selected by purposive sampling. The tools used were interview forms together with Rapid Rural Appraisal (RRA) and Participatory Rural Appraisal (PRA). The information collected was analyzed using content analysis and descriptive statistics, and was then used to assess sustainability at the household and village levels. The results can be concluded as follows. Agricultural patterns: for most farms the main crop was fruit trees, with supplementary crops planted around the plots or added in the trenches; the trenches hold the water used for cultivation. Produce was distributed mainly by sale (97.5%), with sale to middlemen being the most common channel (62.5%). Evaluating agricultural sustainability by indicators appropriate to the area: at the household level, only one household was found to be sustainable, while the other households were conditionally sustainable. At the village level, the indicator with the lowest sustainability was agricultural knowledge training (sustainability index = 31.67%), followed by awareness of soil information (sustainability index = 35.0%), and then household labor in agriculture and net return over cash cost (sustainability index = 55.0%). The overall performance percentage was 48.81%. It was therefore concluded that this area does not have agricultural sustainability.

Keywords: Sustainability of agricultural, sustainability indicators, sustainability index.

432 Analysis and Remediation of Fecal Coliform Bacteria Pollution in Selected Surface Water Bodies of Enugu State of Nigeria

Authors: Chime Charles C., Ikechukwu Alexander Okorie, Ekanem E.J., Kagbu J. A.

Abstract:

An assessment of surface waters in the Enugu metropolis for fecal coliform bacteria was undertaken. The Enugu urban area was divided into three areas (A1, A2 and A3), and fecal coliform bacteria were analysed in the surface waters of these areas for four years (2005-2008) using the plate count method. The data generated were subjected to statistical tests involving a normality test, a homogeneity of variance test, a correlation test and a tolerance limit test. The influence of seasonality and the pollution trends were investigated using time series plots. Results from the tolerance limit test at 95% coverage with 95% confidence, compared with the EU maximum permissible concentration, show that the three areas suffer from fecal coliform pollution. To this end, a remediation procedure involving the use of sawdust extracts from three woods, namely Chlorophora-Excelsa (C-Excelsa), Khayan-Senegalensis (C-Senegalensis) and Erythrophylum-Ivorensis (E-Ivorensis), in controlling the coliforms was studied. The results show that the mixture of acetone extracts of the woods gave the most effective antibacterial inhibitory activity (26.00 mm zone of inhibition) against E. coli. The methanol extract mixture of the three woods gave the best inhibitory activity (26.00 mm zone of inhibition) against S. aureus and a 25.00 mm zone of inhibition against E. aerogenes. The aqueous extract mixture gave acceptable zones of inhibition against all three bacterial organisms.

Keywords: Coliform bacteria, Pollution, Remediation, Saw-dust

431 Effect of Anion and Amino Functional Group on Resin for Lipase Immobilization with Adsorption-Cross Linking Method

Authors: Heri Hermansyah, Annisa Kurnia, A. Vania Anisya, Adi Surjosatyo, Yopi Sunarya, Rita Arbianti, Tania Surya Utami

Abstract:

Lipase is one of the biocatalysts applied commercially in industrial processes, for example in the bioenergy, food and pharmaceutical industries. Biocatalysts are nowadays preferred in industry because they work under mild conditions, have high specificity and reduce energy consumption (avoiding high pressure and temperature). However, the use of lipase at industrial scale is limited for economic reasons, owing to the high price of lipase and the difficulty of the separation system. Immobilization of lipase is one solution that maintains lipase activity and simplifies separation in the process. We therefore conducted a study of lipase immobilization with the adsorption-cross linking method using glutaraldehyde, because this method produces high enzyme loading and stability. Lipase was immobilized on different kinds of resin with various functional groups. The highest enzyme loading (76.69%) was achieved by lipase immobilized on anion macroporous resin, which carries an anion functional group (OH-). However, the highest activity (24.69 U/g support), measured by the olive oil emulsion method, was achieved by lipase immobilized on anion macroporous-chitosan, which carries both amino (NH2) and anion (OH-) functional groups. This preparation also successfully produced biodiesel, reaching a yield of 50.6% through an interesterification reaction and remaining at 63.9% of the initial yield after 4 cycles. Aspergillus niger lipase immobilized on anion macroporous-chitosan showed an activity of 22.84 U/g resin, gave a higher biodiesel yield than commercial lipase (69.1%) and remained at 70.6% of the initial yield after 4 cycles. This shows that the optimal support functional groups for immobilization by adsorption-cross linking are the amino (NH2) and anion (OH-) groups, because they react with glutaraldehyde and bind the enzyme, and binding the lipase to a functional group on the support prevents its desorption.

Keywords: Adsorption-Cross linking, lipase, resin, immobilization.

430 Economic Effects and Energy Use Efficiency of Incorporating Alfalfa and Fertilizer into Grass-Based Pasture Systems

Authors: M. Khakbazan, S. L. Scott, H. C. Block, C. D. Robins, W. P. McCaughey

Abstract:

A ten-year grazing study was conducted at the Agriculture and Agri-Food Canada Brandon Research Centre in Manitoba to study the effect of alfalfa inclusion and fertilizer (N, P, K, and S) addition on economics and efficiency of non-renewable energy use in meadow brome grass-based pasture systems for beef production. Fertilizing grass-only or alfalfa-grass pastures to full soil test recommendations improved pasture productivity, but did not improve profitability compared to unfertilized pastures. Fertilizing grass-only pastures resulted in the highest net loss of any pasture management strategy in this study. Adding alfalfa at the time of seeding, with no added fertilizer, was economically the best pasture improvement strategy in this study. Because of moisture limitations, adding commercial fertilizer to full soil test recommendations is probably not economically justifiable in most years, especially with the rising cost of fertilizer. Improving grass-only pastures by adding fertilizer and/or alfalfa required additional non-renewable energy inputs; however, the additional energy required for unfertilized alfalfa-grass pastures was minimal compared to the fertilized pastures. Of the four pasture management strategies, adding alfalfa to grass pastures without adding fertilizer had the highest efficiency of energy use. Based on energy use and economic performance, the unfertilized alfalfa-grass pasture was the most efficient and sustainable pasture system.

Keywords: Alfalfa, grass, fertilizer, pasture systems, economics, energy.

429 Optimized Brain Computer Interface System for Unspoken Speech Recognition: Role of Wernicke Area

Authors: Nassib Abdallah, Pierre Chauvet, Abd El Salam Hajjar, Bassam Daya

Abstract:

In this paper, we propose an optimized brain-computer interface (BCI) system for unspoken speech recognition, based on the fact that the construction of unspoken words relies strongly on the Wernicke area, situated in the temporal lobe. Our BCI system has four modules: (i) the EEG acquisition module, based on a non-invasive headset with 14 electrodes; (ii) the preprocessing module, which removes noise and artifacts using the Common Average Reference method; (iii) the feature extraction module, using the Wavelet Packet Transform (WPT); (iv) the classification module, based on a one-hidden-layer artificial neural network. The present study compares the recognition accuracy for 5 Arabic words when using all the headset electrodes or only the 4 electrodes situated near the Wernicke area, as well as the effect of selecting among the subbands produced by the WPT module. After applying the artificial neural network to the produced database, we obtain, on the test dataset, an accuracy of 83.4% with all the electrodes and all the subbands of the 8-level WPT decomposition. However, by using only the 4 electrodes near the Wernicke area and the 6 middle subbands of the WPT, we obtain a large reduction of the dataset size, to approximately 19% of the total dataset, with an accuracy rate of 67.5%. This reduction appears particularly important for the design of a low-cost and simple-to-use BCI trained for several words.
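
The feature-extraction and classification stages can be prototyped with PyWavelets and scikit-learn. The sketch below is an illustration only, with randomly generated stand-in "EEG" trials, an assumed 3-level packet decomposition and an arbitrary choice of which subbands count as the middle ones; none of these are the paper's actual settings (the paper uses an 8-level decomposition on real recordings).

import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def wpt_features(signal, wavelet="db4", level=3, keep=range(1, 7)):
    # Energy of selected wavelet-packet subbands of one EEG channel;
    # 'keep' mimics retaining only middle subbands and is purely illustrative.
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="freq")
    energies = np.array([np.sum(np.square(n.data)) for n in nodes])
    return energies[list(keep)]

# Hypothetical dataset: 40 trials x 4 Wernicke-area channels x 256 samples, 5 word labels
rng = np.random.default_rng(0)
trials = rng.standard_normal((40, 4, 256))
labels = rng.integers(0, 5, size=40)
X = np.array([np.concatenate([wpt_features(ch) for ch in tr]) for tr in trials])

# One-hidden-layer neural network classifier, as in the abstract's fourth module
clf = MLPClassifier(hidden_layer_sizes=(50,), max_iter=1000, random_state=0).fit(X, labels)
print(clf.score(X, labels))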

Keywords: Brain-computer interface, speech recognition, electroencephalography EEG, Wernicke area, artificial neural network.
