Search results for: EB thermal processing
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 6922

1492 Hindi Speech Synthesis by Concatenation of Recognized Hand Written Devnagri Script Using Support Vector Machines Classifier

Authors: Saurabh Farkya, Govinda Surampudi

Abstract:

Optical character recognition (OCR) is one of the current major research areas. This paper focuses on the recognition of Devanagari script and the generation of its sound. The work consists of two parts: first, optical character recognition of handwritten Devanagari script; second, speech synthesis of the recognized text. The paper presents an implementation of support vector machines for Devanagari script recognition. The support vector machine was trained with multi-domain features, namely transform-domain and spatial (structural) domain features. The transform domain includes the wavelet features of the character, while the structural domain consists of a distance-profile feature and a gradient feature. Segmentation of the text document was carried out at three levels: line segmentation, word segmentation, and character segmentation. Pre-processing of the characters was performed with the help of Otsu's algorithm and various morphological operations, namely erosion, dilation, filtering, and thinning. The algorithm was tested on a self-prepared database, a collection of samples from various writers. Unicode was then used to convert the recognized Devanagari text into a machine-readable document. The document so obtained is an array of codes, which was used to generate digitized text and to synthesize Hindi speech. Phonemes from the self-prepared database were used to generate speech for the scanned document using a concatenation technique.
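
To illustrate the recognition step described above, the following is a minimal, hypothetical sketch (not the authors' implementation) of training an SVM on combined transform-domain and structural-domain features; it assumes scikit-learn's SVC, which wraps LIBSVM, and the feature extractors shown are simplified stand-ins.

```python
# Hypothetical sketch: SVM training on multi-domain features for character recognition.
# The feature extractors below are simplified stand-ins for the paper's features.
import numpy as np
import pywt                                  # wavelet features (transform domain)
from sklearn.svm import SVC                  # scikit-learn's SVC wraps LIBSVM
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def wavelet_features(img):
    """Transform domain: coarse approximation coefficients of a 2-D wavelet transform."""
    cA, _ = pywt.dwt2(img.astype(float), "haar")
    return cA.ravel()

def structural_features(img):
    """Structural domain: a simple distance-profile plus gradient statistics."""
    gy, gx = np.gradient(img.astype(float))
    left_profile = (img > 0).argmax(axis=1)  # first ink pixel in each row
    return np.concatenate([left_profile, [gx.mean(), gy.mean(), gx.std(), gy.std()]])

def features(img):
    return np.concatenate([wavelet_features(img), structural_features(img)])

def train_ocr_svm(images, labels):
    """images: segmented, pre-processed character images of equal size; labels: class ids."""
    X = np.array([features(im) for im in images])
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, test_size=0.2, random_state=0)
    clf = SVC(kernel="rbf", C=10, gamma="scale").fit(X_tr, y_tr)
    print("hold-out accuracy:", accuracy_score(y_te, clf.predict(X_te)))
    return clf
```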

Keywords: Optical Character Recognition (OCR), Text to Speech (TTS), Support Vector Machines (SVM), Library of Support Vector Machines (LIBSVM)

Procedia PDF Downloads 488
1491 Characterization of Ethanol-Air Combustion in a Constant Volume Combustion Bomb Under Cellularity Conditions

Authors: M. Reyes, R. Sastre, P. Gabana, F. V. Tinaut

Abstract:

In this work, an optical characterization of laminar ethanol-air combustion is presented in order to investigate the origin of the instabilities developed during combustion, the onset of the cellular structure, and the laminar burning velocity. Experimental ethanol-air tests were carried out in an optical cylindrical constant volume combustion bomb equipped with a Schlieren set-up to record the flame development and the wrinkling of the flame front surface. With this procedure, it is possible to obtain the flame radius and to characterize the time at which instabilities become visible through the appearance of cells and the development of the cellular structure. Ethanol is an aliphatic alcohol with interesting characteristics for use as a fuel in internal combustion engines, and it can be biologically synthesized from biomass. The laminar burning velocity is an important parameter used in simulations to obtain the turbulent flame speed, whereas the flame front structure and the instabilities developed during combustion are important for understanding the transition to turbulent combustion and for characterizing the increase in flame propagation speed in premixed flames. The cellular structure is spontaneously generated by volume forces and by diffusional-thermal and hydrodynamic instabilities. Many authors have studied the combustion of ethanol-air and of ethanol blended with other fuels; however, there is a lack of work investigating the instabilities and the development of a cellular structure in ethanol flames, and only a few works have characterized ethanol-air combustion instabilities in spherical flames. In the present work, a parametric study is performed by varying the fuel/air equivalence ratio (0.8-1.4), the initial pressure (0.15-0.3 MPa) and the initial temperature (343-373 K), using an I-optimal design of experiments. In rich mixtures, it is possible to distinguish the cellular structure formed by the hydrodynamic effect from that formed by the thermo-diffusive effect. Results show that ethanol-air flames tend to stabilize as the equivalence ratio decreases in lean mixtures and develop a cellular structure as the initial pressure and temperature increase.

Keywords: ethanol, instabilities, premixed combustion, schlieren technique, cellularity

Procedia PDF Downloads 62
1490 Applications of Drones in Infrastructures: Challenges and Opportunities

Authors: Jin Fan, M. Ala Saadeghvaziri

Abstract:

Unmanned aerial vehicles (UAVs), also referred to as drones, equipped with various kinds of advanced detecting or surveying systems, provide effective and low-cost data acquisition, delivery, and sharing, which can benefit the building of infrastructure. This paper gives an overview of the applications of drones in the planning, design, construction, and maintenance of infrastructure. The drone platform, detecting and surveying systems, and post-processing systems are introduced, followed by cases illustrating the applications in detail. Challenges from different aspects are addressed. Opportunities for drones in infrastructure include, but are not limited to, the following. Firstly, UAVs equipped with high-definition cameras or other detecting equipment are capable of inspecting hard-to-reach infrastructure assets. Secondly, UAVs can be used as effective tools to survey and map the landscape and collect the necessary information before infrastructure construction. Furthermore, a single UAV or multiple UAVs are useful in construction management. UAVs can also be used to collect road and building information by taking high-resolution photos for future infrastructure planning. In addition, UAVs can provide reliable and dynamic traffic information, which is potentially helpful in building smart cities. The main challenges are limited flight time, signal robustness, post-flight data analysis, multi-drone collaboration, weather conditions, and the distraction to traffic caused by drones. This paper aims to help owners, designers, engineers, and architects improve the building process of infrastructure for higher efficiency and better performance.

Keywords: bridge, construction, drones, infrastructure, information

Procedia PDF Downloads 120
1489 Aircraft Components, Manufacturing and Design: Opportunities, Bottlenecks, and Challenges

Authors: Ionel Botef

Abstract:

Aerospace products operate in very aggressive environments characterized by high temperature, high pressure, large stresses on individual components, the presence of oxidizing and corroding atmospheres, and internally created or externally ingested particulate materials that induce erosion and impact damage. Consequently, during operation, the materials of individual components degrade. In addition, the impact of maintenance costs for both civil and military aircraft has been estimated to be at least two to three times greater than the initial purchase value, and this trend is expected to increase. As a result, for viable product realisation and maintenance, a spectrum of issues regarding novel processing technologies, innovation of new materials, performance, costs, and environmental impact must constantly be addressed. One of these technologies, the cold-gas dynamic-spray process, has enabled a broad range of coatings and applications, including many that were not previously possible or commercially practical, hence its potential for new aerospace applications. Therefore, the purpose of this paper is to summarise the state of the art of this technology alongside its theoretical and experimental studies, and to explore how the cold-gas dynamic-spray process could be integrated within a framework that could finally lead to more efficient aircraft maintenance. Based on the paper's qualitative findings, supported by authorities, evidence, and logic, it is argued that the cold-gas dynamic-spray manufacturing process should not be viewed in isolation, but rather as a component of a broad framework that finally leads to more efficient aerospace operations.

Keywords: aerospace, aging aircraft, cold spray, materials

Procedia PDF Downloads 114
1488 Importance of Developing a Decision Support System for Diagnosis of Glaucoma

Authors: Murat Durucu

Abstract:

Glaucoma is a condition that leads to irreversible blindness; early diagnosis and appropriate intervention allow patients to retain their vision for a longer time. This study addresses the importance of developing a decision support system for glaucoma diagnosis. Glaucoma occurs when elevated pressure within the eye damages the optic nerve and causes deterioration of vision. The disease progresses through different levels, up to blindness. Diagnosis at an early stage allows a chance for therapies that slow the progression of the disease. In recent years, imaging technologies such as Heidelberg Retinal Tomography (HRT), Stereoscopic Disc Photography (SDP) and Optical Coherence Tomography (OCT) have been used for the diagnosis of glaucoma. Owing to its better accuracy and faster imaging, OCT has become the method most commonly used by experts. Despite the precision and speed of OCT and HRT imaging, difficulties and mistakes still occur in the diagnosis of glaucoma, especially at early stages, and it is difficult to obtain objective results because diagnosis and staging depend on the doctor's judgment. It therefore seems very important to develop an objective decision support system for diagnosing and grading glaucoma. By using OCT images and pattern recognition methods, it is possible to develop a support system to help doctors make their decisions on glaucoma. Thus, in this study, we develop an evaluation and support system for use by doctors. Pattern-recognition-based computer software would help doctors make an objective evaluation of their patients. It is intended that, after the development and evaluation of the software, the system will be deployed for use by doctors in different hospitals.

Keywords: decision support system, glaucoma, image processing, pattern recognition

Procedia PDF Downloads 295
1487 Experimental Study of the Efficacy and Emission Properties of a Compression Ignition Engine Running on Fuel Additives with Varying Engine Loads

Authors: Faisal Mahroogi, Mahmoud Bady, Yaser H. Alahmadi, Ahmed Alsisi, Sunny Narayan, Muhammad Usman Kaisan

Abstract:

The Kingdom of Saudi Arabia established Saudi Vision 2030, a government initiative aimed at promoting greater socioeconomic and cultural diversity. The Kingdom, which is committed to sustainable development and clean energy, uses cutting-edge approaches to address energy-related issues, including the circular carbon economy (CCE) and a more varied energy mix. Sustainability is essential for Saudi Arabia to achieve the Vision 2030 goal of a net-zero future by 2060. By addressing the energy and climate issues of the modern world with responsibility and innovation, Vision 2030 is becoming a global role model for the transition to a sustainable future. As stated in the ambitions of the National Environment Strategy of the Saudi Ministry of Environment, Water and Agriculture (MEWA), raising environmental compliance across all sectors and reducing pollution and adverse environmental impacts are critical focus areas. In this context, the current study presents an experimental analysis of the performance and exhaust emissions of a diesel engine running on a blend containing waste cooking oil (WCO). The engine used was a naturally aspirated, constant-speed, single-cylinder, direct-injection diesel engine. The engine performance and emission parameters were investigated when the engine was fueled with a blend of 70% diesel, 10% butanol, 10% WCO, and 10% diethyl ether (D70B10W10DD10). The study's findings demonstrated that engine emissions of nitrogen oxides (NOx) and carbon monoxide (CO) varied significantly depending on the applied load. The brake thermal efficiency, cylinder pressure, and brake power of the engine were all affected by the load change.

Keywords: ICE, waste cooking oil, fuel additives, butanol, combustion, emission characteristics

Procedia PDF Downloads 54
1486 The Effect of Deformation Activation Volume, Strain Rate Sensitivity and Processing Temperature of Grain Size Variants

Authors: P. B. Sob, A. A. Alugongo, T. B. Tengen

Abstract:

The activation volume of 6082T6 aluminum with different grain size variants is investigated at different temperatures. The deformation activation volume was computed on the basis of the relationship between the Boltzmann constant k, the testing temperature, the material strain rate sensitivity, and the material yield stress of the grain size variants. The material strain rate sensitivity is computed as a function of the yield stress and strain rate of the grain size variants. The effects of the material strain rate sensitivity and the deformation activation volume of 6082T6 aluminum at different temperatures on the 3-D grain are discussed. It is shown that the strain rate sensitivities and activation volumes are negative for the grain size variants during the deformation of nanostructured materials. It is also observed that the activation volume varies in different ways with the equivalent radius, semi-minor axis radius, semi-major axis radius, and major axis radius. The obtained results show that the activation volume both increased and decreased with the testing temperature, and that an increase in strain rate sensitivity led to a decrease in activation volume, whereas an increase in activation volume led to a decrease in strain rate sensitivity.
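
For reference, the relation commonly used in the literature on nanostructured metals to obtain the apparent activation volume from the strain rate sensitivity is given below; the exact expression used by the authors is not stated in the abstract, so this form is an assumption.

```latex
% Commonly used definitions (assumed, not quoted from the paper):
m = \frac{\partial \ln \sigma_y}{\partial \ln \dot{\varepsilon}}, \qquad
V^{*} = \sqrt{3}\, k T \left( \frac{\partial \ln \dot{\varepsilon}}{\partial \sigma_y} \right)
      = \frac{\sqrt{3}\, k T}{m\, \sigma_y}
```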

Keywords: nanostructured materials, grain size variants, temperature, yield stress, strain rate sensitivity, activation volume

Procedia PDF Downloads 246
1485 Feature Analysis of Predictive Maintenance Models

Authors: Zhaoan Wang

Abstract:

Research in predictive maintenance modeling has improved in recent years, making it possible to predict failures and needed maintenance with high accuracy, saving cost and improving manufacturing efficiency. However, classic prediction models provide little valuable insight into which features contribute most to the failure. By analyzing and quantifying feature importance in predictive maintenance models, cost savings can be optimized based on business goals. First, multiple classifiers are evaluated with cross-validation to predict multi-class failures. Second, the predictive performance obtained with features provided by different feature selection algorithms is further analyzed. Third, the features selected by the different algorithms are ranked and combined based on their predictive power. Finally, the linear explainer SHAP (SHapley Additive exPlanations) is applied to interpret classifier behavior and provide further insight into the specific roles of features in both local predictions and global model behavior. The results of the experiments suggest that certain features play dominant roles in the predictive models while others have significantly less impact on the overall performance. Moreover, for multi-class prediction of machine failures, the most important features vary with the type of machine failure. The results may lead to improved productivity and cost savings by prioritizing sensor deployment, data collection, and data processing of more important features over less important ones.
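
As an illustration of the workflow described above, the following is a minimal, hypothetical sketch (not the authors' pipeline): several classifiers are cross-validated, features are ranked by one selection criterion, and a linear model is interpreted with the shap package. The file name, column names, and model choices are assumptions.

```python
# Hypothetical sketch of the described workflow; data set and column names are made up.
import pandas as pd
import shap
from sklearn.model_selection import cross_val_score
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.feature_selection import mutual_info_classif

df = pd.read_csv("machine_sensor_log.csv")            # placeholder file name
X, y = df.drop(columns=["failure_type"]), df["failure_type"]

# 1) Evaluate multiple classifiers with cross-validation on multi-class failure labels.
for clf in (LogisticRegression(max_iter=1000),
            RandomForestClassifier(n_estimators=200),
            GradientBoostingClassifier()):
    print(type(clf).__name__, cross_val_score(clf, X, y, cv=5).mean())

# 2) Rank features by one selection criterion (mutual information used here).
ranking = pd.Series(mutual_info_classif(X, y), index=X.columns).sort_values(ascending=False)
print(ranking.head(10))

# 3) Interpret a linear model with SHAP values (global and local feature contributions).
linear_model = LogisticRegression(max_iter=1000).fit(X, y)
explainer = shap.LinearExplainer(linear_model, X)
shap.summary_plot(explainer.shap_values(X), X)
```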

Keywords: automated supply chain, intelligent manufacturing, predictive maintenance, machine learning, feature engineering, model interpretation

Procedia PDF Downloads 125
1484 Study of Error Analysis and Sources of Uncertainty in the Measurement of Residual Stresses by the X-Ray Diffraction

Authors: E. T. Carvalho Filho, J. T. N. Medeiros, L. G. Martinez

Abstract:

Residual stresses are self-equilibrating stresses that act on the microstructure of a rigid body without the application of an external load. They are elastic stresses and can be induced by mechanical, thermal, and chemical processes, causing a deformation gradient in the crystal lattice and favoring premature failure in mechanical components. The search for measurements with good reliability has been of great importance for the manufacturing industries. Several methods are able to quantify these stresses according to physical principles and the mechanical response of the material. The X-ray diffraction technique is one of the most sensitive to small variations of the crystalline lattice, since the X-ray beam interacts with the interplanar spacing. Being very sensitive, the technique is also susceptible to variations in the measurements, which requires a study of the factors that influence the final result. Instrumental and operational factors, form deviations of the samples, and the analysis geometry are some of the variables that need to be considered and analyzed in order to obtain the true measurement. The aim of this work is to analyze the sources of error inherent to the residual stress measurement process by the X-ray diffraction technique, making an interlaboratory comparison to verify the reproducibility of the measurements. In this work, two specimens were machined, differing from each other by their surface finishing: grinding and polishing. Additionally, iron powder with a particle size of less than 45 µm was selected to serve as a reference (as recommended by the ASTM E915 standard) for the tests. To verify the deviations caused by the equipment, the specimens were kept in position and, under the same analysis conditions, seven measurements were carried out at 11 ψ tilts. To verify sample positioning errors, seven measurements were performed, repositioning the sample before each measurement. To check geometry errors, the measurements were repeated in both the Bragg-Brentano and parallel-beam geometries. In order to verify the reproducibility of the method, the measurements were performed in two different laboratories on different equipment. The results were statistically analyzed and the errors quantified.
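
For context, the measurement principle behind the ψ-tilt series is the sin²ψ method; a commonly quoted textbook form of the relation for a biaxial surface stress state (not taken from the paper itself) is:

```latex
\varepsilon_{\phi\psi} \;=\; \frac{d_{\phi\psi}-d_0}{d_0}
 \;=\; \frac{1+\nu}{E}\,\sigma_{\phi}\sin^{2}\psi \;-\; \frac{\nu}{E}\left(\sigma_{11}+\sigma_{22}\right),
\qquad
\sigma_{\phi} \;=\; \frac{E}{1+\nu}\,\frac{\partial \varepsilon_{\phi\psi}}{\partial \sin^{2}\psi}
```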

Keywords: residual stress, x-ray diffraction, repeatability, reproducibility, error analysis

Procedia PDF Downloads 175
1483 Dispersions of Carbon Black in Microemulsions

Authors: Mohamed Youssry, Dominique Guyomard, Bernard Lestriez

Abstract:

In order to enhance the energy and power densities of electrodes for energy storage systems, the formulation and processing of electrode slurries has proved to be a critical issue in determining electrode performance. In this study, we introduce a novel approach to formulating carbon black slurries based on a microemulsion and a lyotropic liquid crystalline phase (namely, a lamellar phase) composed of a non-ionic surfactant (Triton X-100), decanol, and water. Simultaneous measurements of the electrical properties of the slurries under shear flow (rheology) were conducted to elucidate the evolution of the microstructure with the surfactant concentration and the decanol/water ratio at rest, as well as the structural transition under steady shear, which was confirmed by rheo-microscopy. Interestingly, the carbon black slurries at low decanol/water ratio are weak gels (flowable) with higher electrical conductivity than those at higher ratio, which show a strong-gel viscoelastic response. In addition, the slurries show recoverable electrical behaviour under shear flow, in tandem with the viscosity trend. It is likely that the oil-in-water microemulsion enhances the stability of the slurries without affecting the percolating network of carbon black. On the other hand, the oil-in-water analogue and the bilayer structure of the lamellar phase make the slurries less conductive as a consequence of the loss of network percolation. These findings are encouraging for formulating microemulsion-based electrodes for energy storage systems (lithium-ion batteries).

Keywords: electrode slurries, microemulsion, microstructure transition, rheo-electrical properties

Procedia PDF Downloads 261
1482 Comparative Analysis of Yield before and after Access to Extension Services among Crop Farmers in Bauchi Local Government Area of Bauchi State, Nigeria

Authors: U. S. Babuga, A. H. Danwanka, A. Garba

Abstract:

The research was carried out to compare the yield of respondents before and after access to extension services on crop production technologies in the study area. Data were collected from the study area through questionnaires administered to seventy-five randomly selected respondents and analyzed using descriptive statistics, t-tests, and regression models. The results showed that the majority (97%) of the respondents had attended one form of school or another, and the majority (78.67%) had farm sizes ranging between 1 and 3 hectares. Most respondents adopted improved crop varieties, plant spacing, herbicides, fertilizer application, land preparation, crop protection, crop processing, and storage of farm produce. The result of the t-test comparing the yield of respondents before and after access to extension services shows that there was a significant (p<0.001) difference in yield before and after access to extension. It also indicated that farm size was significant (p<0.001), while household size, years of farming experience, and extension contact were significant at p<0.005. The major constraints to the adoption of crop production technologies were a shortage of extension agents, the high cost of technology, and a lack of access to credit facilities. The major prerequisites for the improvement of extension services are the employment of more extension agents or workers and adequate training. Adequate agricultural credit to farmers at low interest rates will enhance their adoption of crop production technologies.

Keywords: comparative, analysis, yield, access, extension

Procedia PDF Downloads 357
1481 Rheological and Computational Analysis of Crude Oil Transportation

Authors: Praveen Kumar, Satish Kumar, Jashanpreet Singh

Abstract:

Transporting unrefined crude oil from the production unit to a refinery or large storage area by pipeline is difficult because of the different properties of the crude in various areas. Thus, the design of a crude oil pipeline is a very complex and time-consuming process when all the various parameters are considered. Three very important parameters play a significant role in the design of transportation and processing pipelines: the viscosity profile, the temperature profile, and the velocity profile of the waxy crude oil through the pipeline. Knowledge of rheological computational techniques is required to better understand the flow behavior and predict the flow profile in a crude oil pipeline. From these profile parameters, the material and the emulsion best suited for crude oil transportation can be predicted. The rheological computational fluid dynamics technique is a fast method for designing the flow profile in a crude oil pipeline with the help of computational fluid dynamics and rheological modeling. With this technique, the effect of fluid properties, including the shear rate range with temperature variation, the degree of viscosity, the elastic modulus, and the viscous modulus, was evaluated under different conditions in a transport pipeline. In this work, two crude oil samples were used, as well as emulsions prepared with natural and synthetic additives at concentrations ranging from 1,000 ppm to 3,000 ppm. The rheological properties were then evaluated over a temperature range of 25 to 60 °C, and the additive best suited for the transportation of crude oil was determined. Commercial computational fluid dynamics (CFD) software was used to generate the flow, velocity, and viscosity profiles of the emulsions for flow behavior analysis in a crude oil transportation pipeline. This rheological CFD design approach can be further applied to future pipeline designs.

Keywords: surfactant, natural, crude oil, rheology, CFD, viscosity

Procedia PDF Downloads 440
1480 The Design of Smart Tactile Textiles for Therapeutic Applications

Authors: Karen Hong

Abstract:

Smart tactile textiles are a series of textile-based products that incorporate embedded smart technology and are intended as tactile therapeutic applications for two main groups of target users. The first group is children with sensory processing disorder who suffer from tactile sensory dysfunction. Children with tactile sensory issues may have difficulty tolerating the sensations generated by the touch of certain fabric textures. A series of smart tactile textiles, collectively known as 'Tactile Toys', was developed as tactile therapy play objects, exposing children to different types of touch sensations within textiles and enabling them to enjoy tactile experiences together with interactive play, which helps them overcome their fear of certain touch sensations. The second group of users is the elderly or geriatric patients who suffer from a deteriorating sense of touch. Common consequences of aging are a deteriorating sense of touch and a decline in motor function. With a focus on stimulating the sense of touch for this group of end users, another series of smart tactile textiles, collectively known as 'Tactile Aids', was developed, also as tactile therapy. This range of products can help maintain touch sensitivity while allowing the elderly to enjoy interactive play that exercises their hand-eye coordination and enhances their motor skills. These smart tactile textile products have been designed and tested by the end users and have proved their efficacy as tactile therapy, enabling users to enjoy a better quality of life.

Keywords: smart textiles, embedded technology, tactile therapy, tactile aids, tactile toys

Procedia PDF Downloads 172
1479 Lignin Phenol Formaldehyde Resole Resin: Synthesis and Characteristics

Authors: Masoumeh Ghorbani, Falk Liebner, Hendrikus W.G. van Herwijnen, Johannes Konnerth

Abstract:

Phenol formaldehyde (PF) resins are widely used as wood adhesives for a variety of industrial products such as plywood, laminated veneer lumber, and others. Lignin, as a main constituent of wood, has become well known as a potential substitute for phenol in PF adhesives because of their structural similarity. During the last decades, numerous research approaches have been carried out to substitute phenol with pulping-derived lignin, whereby the lower reactivity of resins synthesized with shares of lignin seems to be one of the major challenges. This work reports a systematic screening of different types of lignin (by plant origin and pulping process) for their suitability to replace phenol in phenolic resins. Lignins from different plant sources (softwood, hardwood, and grass) were used, as these should differ significantly in the reactivity of their phenolic core units towards formaldehyde. Additionally, a possible influence of the pulping process was addressed by using lignins from the soda, kraft, and organosolv processes and various lignosulfonates (sodium, ammonium, calcium, magnesium). To determine the influence of lignin on the adhesive performance, the rate of viscosity development, the bond strength development over varying hot-pressing times, and other thermal properties were investigated, among others. To evaluate the performance of the cured end product, a few selected properties were studied on solid wood-adhesive bond joints, compact panels, and plywood. As the main results, it was found that lignin significantly accelerates the viscosity development during adhesive synthesis. Bond strength development during curing decelerated for all lignin types, although this trend was least pronounced for pine kraft lignin and spruce sodium lignosulfonate. However, the overall performance of the products prepared with the latter adhesives fulfilled the main standard requirements, even after exposing the products to harsh environmental conditions. Thus, a potential application can be considered for processes where reactivity is less critical but adhesive cost and product performance are essential.

Keywords: phenol formaldehyde resin, lignin phenol formaldehyde resin, ABES, DSC

Procedia PDF Downloads 233
1478 Effect of Injection Moulding Process Parameters on Tensile Strength Using the Taguchi Method

Authors: Gurjeet Singh, M. K. Pradhan, Ajay Verma

Abstract:

The plastics industry plays a very important role in the economy of any country, generally accounting for a leading share of it. Since metals and their alloys are limited resources, it is beneficial to produce plastic products and components, which find application in many industrial as well as household consumer products. About 50% of plastic products are manufactured by the injection moulding process. To produce better quality products, the quality characteristics and performance of the product have to be controlled. The process parameters play a significant role in plastics production; hence, control of the process parameters is essential. This paper describes the effect of parameter selection on the injection moulding process, with the aim of defining suitable parameters for producing a plastic product. Selecting the process parameters by trial and error is neither desirable nor acceptable, as it often tends to increase cost and time. Hence, optimization of the processing parameters of the injection moulding process is essential. The experiments were designed with Taguchi's orthogonal array to achieve the result with the least number of experiments, and the plastic material studied is polypropylene. Tensile strength tests of the injection-moulded specimens were carried out on a universal testing machine. Using the Taguchi technique with the help of MiniTab-14 software, the best values of injection pressure, melt temperature, packing pressure, and packing time were obtained. We found that the process parameter packing pressure contributes the most to the production of a plastic product with good tensile strength.
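
To make the analysis step concrete, the sketch below (hypothetical, not the authors' MiniTab work) computes the larger-is-better signal-to-noise ratio that Taguchi analysis typically uses to rank factor levels; the L9 orthogonal array layout is standard, but the tensile strength values are invented for illustration.

```python
# Hypothetical sketch: larger-is-better S/N ratios for an L9(3^4) Taguchi experiment on
# tensile strength. The strength values below are illustrative only.
import numpy as np
import pandas as pd

# Standard L9 orthogonal array for four factors at three levels each.
L9 = np.array([[1,1,1,1],[1,2,2,2],[1,3,3,3],
               [2,1,2,3],[2,2,3,1],[2,3,1,2],
               [3,1,3,2],[3,2,1,3],[3,3,2,1]])
strength = np.array([31.2, 32.8, 30.5, 33.1, 34.0, 31.9, 32.2, 33.6, 30.9])  # MPa, made up

# Larger-is-better S/N ratio: -10*log10(mean(1/y^2)) per experimental run.
sn = -10 * np.log10(np.mean(1.0 / strength[:, None] ** 2, axis=1))

df = pd.DataFrame(L9, columns=["inj_pressure", "melt_temp", "pack_pressure", "pack_time"])
df["sn"] = sn
# The level with the highest mean S/N is preferred; the factor with the largest range
# contributes most to tensile strength (packing pressure, according to the abstract).
for factor in df.columns[:-1]:
    means = df.groupby(factor)["sn"].mean()
    print(factor, means.round(3).to_dict(), "range =", round(means.max() - means.min(), 3))
```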

Keywords: injection moulding, tensile strength, poly-propylene, Taguchi

Procedia PDF Downloads 279
1477 Comparison of Bone Mineral Density of Lumbar Spines between High Level Cyclists and Sedentary

Authors: Mohammad Shabani

Abstract:

Physical activities have sometimes produced different results on bone depending on the nature of the mechanical stresses they induce. The purpose of this study was to compare the bone mineral density (BMD) of the lumbar spine between high-level cyclists and sedentary subjects. Materials and Methods: In the present study, 73 senior cyclists (age: 25.81 ± 4.35 years; height: 179.66 ± 6.31 cm; weight: 71.55 ± 6.31 kg) and 32 sedentary subjects (age: 28.28 ± 4.52 years; height: 176.56 ± 6.2 cm; weight: 74.47 ± 8.35 kg) participated voluntarily. All cyclists belonged to different teams of the International Cycling Union and had trained competitively for 10 years. The BMD of the lumbar spine of the subjects was measured using DXA (Lunar). Descriptive statistics were computed using data processing software (StatView 5, SAS Institute Inc., USA). The comparison of the two independent distributions (BMD of high-level cyclists and sedentary subjects) was made with the standard Student's t-test, and a probability of 0.05 (p ≤ 0.05) was adopted as the significance level. Results: The results of this study showed that the lumbar spine BMD values of the sedentary subjects were significantly higher for all measured segments. Conclusion and Discussion: Cycling is, on the one hand, a popular sport and, on the other, an endurance sport. It is now accepted that weight-bearing exercises have an osteogenic effect compared to non-weight-bearing exercises. Thus, endurance sports such as cycling, compared to activities imposing intense forces over a short time, do not seem to be particularly osteogenic. Therefore, it can be concluded that cycling provides a low osteogenic stimulus because of the specific biomechanical forces of the sport and its lack of impact.

Keywords: BMD, lumbar spine, high level cyclist, cycling

Procedia PDF Downloads 266
1476 4D Modelling of Low Visibility Underwater Archaeological Excavations Using Multi-Source Photogrammetry in the Bulgarian Black Sea

Authors: Rodrigo Pacheco-Ruiz, Jonathan Adams, Felix Pedrotti

Abstract:

This paper introduces the applicability of underwater photogrammetric survey under challenging conditions as the main tool to enhance and enrich the process of documenting archaeological excavation through the creation of 4D models. Photogrammetry has been attempted on underwater archaeological sites at least since the 1970s, and today the production of traditional 3D models is becoming common practice within the discipline. Underwater photogrammetry is more often implemented to record exposed underwater archaeological remains and less so as a dynamic interpretative tool. It therefore tends to be applied in bright environments and when underwater visibility is greater than 1 m, which limits its use on most submerged archaeological sites in more turbid conditions. Recent years have seen significant development of better digital photographic sensors and improvements in optical technology, ideal for darker environments. Such developments, in tandem with powerful computing systems, have allowed underwater photogrammetry to be used by this research as a standard recording and interpretative tool. Using multi-source photogrammetry (five GoPro Hero5 Black cameras), this paper presents the accumulation of daily (4D) underwater surveys carried out at the Early Bronze Age (3300 BC) to Late Ottoman (17th century AD) archaeological site of Ropotamo in the Bulgarian Black Sea under challenging conditions (< 0.5 m visibility). It shows that underwater photogrammetry can and should be used as one of the main recording methods, even in low light and poor underwater conditions, as a way to better understand the complexity of the underwater archaeological record.

Keywords: 4D modelling, Black Sea Maritime Archaeology Project, multi-source photogrammetry, low visibility underwater survey

Procedia PDF Downloads 234
1475 Developing Laser Spot Position Determination and PRF Code Detection with Quadrant Detector

Authors: Mohamed Fathy Heweage, Xiao Wen, Ayman Mokhtar, Ahmed Eldamarawy

Abstract:

In this paper, we are interested in the modeling, simulation, and measurement of the laser spot position with a quadrant detector, enhancing the detection and tracking of a microcontroller-based semi-active laser weapon decoding system. The system receives the reflected pulse through the quadrant detector and processes the laser pulses through a processing circuit, with a microcontroller decoding the laser pulse reflected by the target. The seeker accuracy is enhanced by the decoding system, the laser detection time based on the number of received pulses is reduced, and a gate is used to limit the laser pulse width. The model is implemented with the pulse repetition frequency (PRF) technique using two microcontroller units (MCUs): MCU1 generates laser pulses with different codes, and MCU2 decodes the laser code and locks the system onto the specific code. The codes are selected by means of two selector switches. The system was implemented and tested in Proteus ISIS software, and the full position-determination circuit with the detector was produced. A general system for spot position determination was realized with the laser PRF for the incident radiation and a mechanical system for adjusting the set-up at different angles. The system test results show that the system can detect the laser code with only three received pulses, based on the narrow gate signal, and good agreement between the simulated and measured system performance is obtained.
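
As background for the position-determination part, the laser spot offset on a four-quadrant detector is conventionally estimated from normalized sums and differences of the quadrant signals; the short sketch below illustrates that relation. It is not the authors' microcontroller code, and the quadrant labelling and calibration factor are assumptions.

```python
# Illustrative sketch: spot offset from the four quadrant photodiode signals.
# Quadrant labelling assumed: A top-left, B top-right, C bottom-right, D bottom-left.
def spot_position(A, B, C, D, k=1.0):
    """Return (x, y) offsets normalized by total intensity; k is a detector-specific
    calibration factor (placeholder, to be fitted experimentally)."""
    total = A + B + C + D
    if total == 0:
        raise ValueError("no signal on the detector")
    x = k * ((B + C) - (A + D)) / total   # positive x toward the right-hand quadrants
    y = k * ((A + B) - (C + D)) / total   # positive y toward the upper quadrants
    return x, y

print(spot_position(1.0, 1.2, 0.9, 0.8))  # example readings in arbitrary units
```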

Keywords: four quadrant detector, pulse code detection, laser guided weapons, pulse repetition frequency (PRF), Atmega 32 microcontrollers

Procedia PDF Downloads 381
1474 Copolymers of Epsilon-Caprolactam Received via Anionic Polymerization in the Presence of Polypropylene Glycol Based Polymeric Activators

Authors: Krasimira N. Zhilkova, Mariya K. Kyulavska, Roza P. Mateva

Abstract:

The anionic polymerization of ε-caprolactam (CL) with bifunctional activators has been extensively studied as an effective and beneficial method of improving the chemical resistance, impact resistance, elasticity, and other mechanical properties of polyamide 6 (PA6). In the presence of activators or macroactivators (MAs), also called polymeric activators (PACs), the anionic polymerization of lactams proceeds rapidly in the temperature range of 130-180 °C, well below the melting point of PA6 (220 °C), thus permitting the direct manufacturing of the copolymer product together with the desired modifications of the polyamide properties. Copolymers of PA6 with an elastic polypropylene glycol (PPG) middle block in the main chain were successfully synthesized via activated anionic ring-opening polymerization (ROP) of CL. Using novel PACs based on PPG polyols (of different molecular weights), the anionic ROP of CL was carried out and investigated in the presence of a basic initiator, the sodium salt of CL (NaCL). The PACs were synthesized as N-carbamoyllactam derivatives of hydroxyl-terminated PPG functionalized with isophorone diisocyanate [IPh, 5-isocyanato-1-(isocyanatomethyl)-1,3,3-trimethylcyclohexane] and then blocked (end-capped) with CL units via an addition reaction. The block copolymers were analyzed and confirmed by 1H-NMR and FT-IR spectroscopy. The influence of the CL/PAC ratio in the feed, the length of the PPG segments, and the polymerization conditions on the kinetics of the anionic ROP, on the average molecular weight, and on the structure of the obtained block copolymers was investigated. The structure and phase behaviour of the copolymers were explored with differential scanning calorimetry, wide-angle X-ray diffraction, thermogravimetric analysis, and dynamic mechanical thermal analysis. The dependence of the crystallinity on the PPG content incorporated into the copolymer main backbone was estimated. Additionally, the mechanical properties of the obtained copolymers were studied by notched impact tests. From the investigation performed in this study, it can be concluded that using PPG-based PACs under the chosen ROP conditions leads to well-defined PA6-b-PPG-b-PA6 copolymers with improved impact resistance.

Keywords: anionic ring opening polymerization, caprolactam, polyamide copolymers, polypropylene glycol

Procedia PDF Downloads 407
1473 Potential of Sunflower (Helianthus annuus L.) for Phytoremediation of Soils Contaminated with Heavy Metals

Authors: Violina R. Angelova, Mariana N. Perifanova-Nemska, Galina P. Uzunova, Krasimir I. Ivanov, Huu Q. Lee

Abstract:

A field study was conducted to evaluate the efficacy of sunflower (Helianthus annuus L.) for the phytoremediation of contaminated soils. The experiment was performed on an agricultural field contaminated by the Non-Ferrous-Metal Works near Plovdiv, Bulgaria. Field experiments with a randomized complete block design with five treatments (control, compost amendments added at 20 and 40 t/daa, and vermicompost amendments added at 20 and 40 t/daa) were carried out. The accumulation of heavy metals in the sunflower plant and the quality of the sunflower oil (heavy metals and fatty acid composition) were determined. The tested organic amendments significantly influenced the uptake of Pb, Zn, and Cd by the sunflower plant. The incorporation of 40 t/daa of compost and 20 t/daa of vermicompost into the soil increased the ability of the sunflower to take up and accumulate Cd, Pb, and Zn. Sunflower can therefore be regarded as an accumulator of Pb, Zn, and Cd and can be successfully used for the phytoremediation of soils contaminated with heavy metals. The 40 t/daa compost treatment decreased the heavy metal content in the sunflower oil to below the regulated limits. The oil content and fatty acid composition were affected by the compost and vermicompost treatments: adding compost and vermicompost increased the oil content in the seeds, increased the content of stearic, palmitoleic, and oleic acids, and reduced the content of palmitic and gadoleic acids in the sunflower oil. The possibility of further industrial processing of the seeds into oil, and the use of the obtained oil, makes sunflower an economically interesting crop for farmers applying phytoremediation technology.

Keywords: heavy metals, phytoremediation, polluted soils, sunflower

Procedia PDF Downloads 228
1472 Life Cycle Assessment of Almond Processing: Off-ground Harvesting Scenarios

Authors: Jessica Bain, Greg Thoma, Marty Matlock, Jeyam Subbiah, Ebenezer Kwofie

Abstract:

The environmental impact and particulate matter (PM) emissions associated with the production and packaging of 1 kg of almonds were evaluated using life cycle assessment (LCA). The assessment began at the point of readiness for harvest, with a cradle-to-gate system boundary ending at almond packaging in California, and included three off-ground harvesting scenarios. The three general off-ground harvesting scenarios, with variations, are: harvested almonds solar-dried on a paper tarp in the orchard, harvested almonds solar-dried on the floor of a separate lot, and harvested almonds dried mechanically. The life cycle inventory (LCI) data for almond production were based on previously published literature and data provided by the Almond Board of California (ABC). The ReCiPe 2016 method was used to calculate the midpoint impacts. Using a consequential LCA model, the global warming potentials (GWP) for the three harvesting scenarios are 2.90, 2.86, and 3.09 kg CO2 eq/kg of packaged almonds for scenarios 1, 2a, and 3a, respectively, compared with 2.89 kg CO2 eq/kg of packaged almonds for the conventional harvesting method. The particulate matter emissions per hectare are 77.14, 9.56, 66.86, and 8.75 for conventional harvesting and scenarios 1, 2, and 3, respectively. The most significant contributions to the overall emissions came from almond production: farm-gate almond production had a global warming potential of 2.12 kg CO2 eq/kg of packaged almonds, approximately 73% of the overall emissions. Based on comparisons between the GWP and PM emissions, scenario 2a offered the best trade-off between GHG and PM production.

Keywords: life cycle assessment, low moisture foods, sustainability, LCA

Procedia PDF Downloads 81
1471 Assessment of Environmental Quality of an Urban Setting

Authors: Namrata Khatri

Abstract:

The rapid growth of cities is transforming the urban environment and posing significant challenges for environmental quality. This study examines the urban environment of Belagavi in Karnataka, India, using geostatistical methods to assess the spatial pattern and land use distribution of the city and to evaluate the quality of the urban environment. The study is driven by the necessity to assess the environmental impact of urbanisation. Satellite data were utilised to derive information on land use and land cover. The investigation revealed that land use had changed significantly over time, with a drop in plant cover and an increase in built-up areas. High-resolution satellite data were also utilised to map the city's open areas and gardens. GIS-based analysis was used to assess public green space accessibility and to identify regions with inadequate waste management practices. The findings revealed that garbage collection and disposal techniques in specific areas of the city needed to be improved. Moreover, the study evaluated the city's thermal environment using Landsat 8 land surface temperature (LST) data. The investigation found that built-up regions had higher LST values than green areas, pointing to the city's urban heat island (UHI) effect. The study's conclusions have far-reaching ramifications for urban planners and politicians in Belagavi and other similar cities. The findings may be utilised to create sustainable urban planning strategies that address the environmental effect of urbanisation while also improving the quality of life for city dwellers. Satellite data and high-resolution satellite images were gathered for the study, and remote sensing and GIS tools were utilised to process and analyse the data. Ground-truthing surveys were also carried out to confirm the accuracy of the remote sensing and GIS-based data. Overall, this study provides a complete assessment of Belagavi's environmental quality and emphasises the potential of remote sensing and geographic information systems (GIS) approaches in environmental assessment and management.

Keywords: environmental quality, UEQ, remote sensing, GIS

Procedia PDF Downloads 78
1470 Satellite Statistical Data Approach for Upwelling Identification and Prediction in South of East Java and Bali Sea

Authors: Hary Aprianto Wijaya Siahaan, Bayu Edo Pratama

Abstract:

Sea fisheries have the potential to become one of the nation's assets contributing greatly to Indonesia's economy. This fishery potential cannot be separated from the availability of chlorophyll in the territorial waters of Indonesia. The research was conducted using three methods: statistical, comparative, and analytical. The data used include MODIS sea surface temperature imagery from the Aqua satellite at 4 km resolution for 2002-2015, MODIS chlorophyll-a imagery from the Aqua satellite at 4 km resolution for 2002-2015, and ASCAT imagery from the MetOp and NOAA satellites at 27 km resolution for 2002-2015. The results of the data processing show that upwelling in the sea south of East Java begins in June, identified by a below-normal sea surface temperature anomaly, air masses moving from east to west, and high chlorophyll-a concentrations. In July, the upwelling region expands further towards the west, reaching its peak in August. Chlorophyll-a prediction using multiple linear regression equations gives excellent results for 2002 to 2015, with a correlation between predicted and observed chlorophyll-a concentrations of 0.8 and an RMSE value of 0.3. The chlorophyll-a prediction for 2016 also shows good results despite a decline in the correlation, which falls to 0.6, while the RMSE value improves to 0.2.
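
The prediction step described above can be sketched as an ordinary multiple linear regression; the minimal example below is hypothetical (not the authors' code), with an assumed file name and column names, and reports the correlation and RMSE metrics quoted in the abstract.

```python
# Hypothetical sketch: multiple linear regression of chlorophyll-a on SST anomaly and
# wind-stress predictors, evaluated with correlation and RMSE. Column names are assumed.
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

df = pd.read_csv("monthly_upwelling_2002_2016.csv")      # placeholder file name
X = df[["sst_anomaly", "wind_stress_x", "wind_stress_y"]]
y = df["chlorophyll_a"]

train = df["year"] <= 2015                               # fit on 2002-2015, test on 2016
model = LinearRegression().fit(X[train], y[train])
pred = model.predict(X[~train])

corr = np.corrcoef(pred, y[~train])[0, 1]
rmse = float(np.sqrt(np.mean((pred - y[~train]) ** 2)))
print(f"correlation = {corr:.2f}, RMSE = {rmse:.2f}")
```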

Keywords: satellite, sea surface temperature, upwelling, wind stress

Procedia PDF Downloads 154
1469 Change Detection Analysis on Support Vector Machine Classifier of Land Use and Land Cover Changes: Case Study on Yangon

Authors: Khin Mar Yee, Mu Mu Than, Kyi Lint, Aye Aye Oo, Chan Mya Hmway, Khin Zar Chi Winn

Abstract:

The dynamic changes in land use and land cover (LULC) in Yangon have generally resulted in improved human welfare and economic development over the last twenty years. Mapping LULC is crucially important for the sustainable development of the environment. However, it is difficult to determine exactly how environmental factors influence the LULC situation at various scales, because the natural environment is composed of non-homogeneous surface features, so the features in the satellite data also contain mixed pixels. The main objective of this study is to calculate the accuracy of LULC change detection based on support vector machines (SVMs). The main data for this work were satellite images from 1996, 2006, and 2015. Change detection statistics were computed to compile a detailed tabulation of changes between two classification images, and the SVM process was applied with a soft approach at the allocation as well as the testing stage to achieve higher accuracy. The results of this paper showed that vegetation and cultivated areas decreased (by an average total of 29% from 1996 to 2015) because of conversion to built-up area, which more than doubled (an average total increase of 30% from 1996 to 2015). The error matrix and confidence limits led to the validation of the results of the LULC mapping.
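
To make the change detection statistics step concrete, the sketch below (hypothetical, not the authors' workflow) cross-tabulates two classified rasters, such as the SVM outputs for 1996 and 2015, into a from-to change matrix; the class codes and raster sizes are assumptions.

```python
# Hypothetical sketch: from-to change matrix between two classified LULC rasters.
import numpy as np
import pandas as pd

CLASSES = {0: "water", 1: "vegetation", 2: "cultivated", 3: "built-up"}  # assumed codes

def change_matrix(map_t1, map_t2):
    """Cross-tabulate per-pixel class transitions between two equally shaped rasters."""
    assert map_t1.shape == map_t2.shape
    n = len(CLASSES)
    counts = np.zeros((n, n), dtype=np.int64)
    np.add.at(counts, (map_t1.ravel(), map_t2.ravel()), 1)
    return pd.DataFrame(counts,
                        index=[f"1996 {name}" for name in CLASSES.values()],
                        columns=[f"2015 {name}" for name in CLASSES.values()])

# Example with small random maps standing in for the classified images.
rng = np.random.default_rng(0)
print(change_matrix(rng.integers(0, 4, (100, 100)), rng.integers(0, 4, (100, 100))))
```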

Keywords: land use and land cover change, change detection, image processing, support vector machines

Procedia PDF Downloads 126
1468 Task Validity in Neuroimaging Studies: Perspectives from Applied Linguistics

Authors: L. Freeborn

Abstract:

Recent years have seen an increasing number of neuroimaging studies related to language learning as imaging techniques such as fMRI and EEG have become more widely accessible to researchers. By using a variety of structural and functional neuroimaging techniques, these studies have already made considerable progress in terms of our understanding of neural networks and processing related to first and second language acquisition. However, the methodological designs employed in neuroimaging studies to test language learning have been questioned by applied linguists working within the field of second language acquisition (SLA). One of the major criticisms is that tasks designed to measure language learning gains rarely have a communicative function, and seldom assess learners’ ability to use the language in authentic situations. This brings the validity of many neuroimaging tasks into question. The fundamental reason why people learn a language is to communicate, and it is well-known that both first and second language proficiency are developed through meaningful social interaction. With this in mind, the SLA field is in agreement that second language acquisition and proficiency should be measured through learners’ ability to communicate in authentic real-life situations. Whilst authenticity is not always possible to achieve in a classroom environment, the importance of task authenticity should be reflected in the design of language assessments, teaching materials, and curricula. Tasks that bear little relation to how language is used in real-life situations can be considered to lack construct validity. This paper first describes the typical tasks used in neuroimaging studies to measure language gains and proficiency, then analyses to what extent these tasks can validly assess these constructs.

Keywords: neuroimaging studies, research design, second language acquisition, task validity

Procedia PDF Downloads 132
1467 Performance and Processing Evaluation of Solid Oxide Cells by Co-Sintering of GDC Buffer Layer and LSCF Air Electrode

Authors: Hyun-Jong Choi, Minjun Kwak, Doo-Won Seo, Sang-Kuk Woo, Sun-Dong Kim

Abstract:

Solid oxide cell (SOC) systems can contribute to the transition to a hydrogen society when utilized as power and hydrogen generators through electrochemical reactions with high efficiency at high operating temperatures (>750 °C). La1-xSrxCo1-yFeyO3 (LSCF), used as the air electrode, suffers stability degradation due to reaction with, and delamination from, the yttria-stabilized zirconia (YSZ) electrolyte in water electrolysis mode. To mitigate this phenomenon, SOCs need a gadolinium-doped ceria (GDC) buffer layer between the electrolyte and the air electrode. However, the GDC buffer layer requires a high sintering temperature, which causes a reaction with the YSZ electrolyte. This study carried out low-temperature sintering of the GDC layer by applying copper oxide as a sintering aid. The effect of the copper additive as a sintering aid in lowering the sintering temperature for the construction of solid oxide fuel cells (SOFCs) was investigated. GDC buffer layers with 0.25-10 mol% CuO sintering aid were prepared by reacting GDC powder with a copper nitrate solution followed by heating at 600 °C. The sintering of the CuO-added GDC powder was optimized by investigating the linear shrinkage, microstructure, grain size, ionic conductivity, and activation energy of CuO-GDC electrolytes at temperatures ranging from 1100 to 1400 °C. The sintering temperature of the CuO-GDC electrolyte decreases from 1400 °C to 1100 °C on adding the CuO sintering aid. The ionic conductivity of the CuO-GDC electrolyte shows a maximum value at 0.5 mol% CuO; however, the addition of CuO has no significant effect on the activation energy of the GDC electrolyte. GDC-LSCF layers were co-sintered at 1050 and 1100 °C, and button cell tests were carried out at 750 °C.

Keywords: co-sintering, GDC-LSCF, sintering aid, solid oxide cells

Procedia PDF Downloads 240
1466 Tea (Camellia sinensis (L.) O. Kuntze) Typology in Kenya: A Review

Authors: Joseph Kimutai Langat

Abstract:

Tea typology is the science of classifying tea. This study, carried out between November 2023 and July 2024, had the main objective of investigating the typological classification nomenclature of processed tea in the world, narrowing down to Kenya. At present, centres of origin, historical background, tea-growing region, scientific naming system, market, fermentation level, processing/oxidation level, and cultural reasons are used to classify tea. Of these, the most common typology is by oxidation and, more specifically, by the production methods within the oxidation categories. While the Asian tea-producing countries categorise tea products based on decreasing oxidation levels during the manufacturing process (black tea, green tea, oolong tea, and instant tea), Kenya's tea typology system is based on the degree of fermentation, i.e. black tea, purple tea, green tea, and white tea. Tea is also classified into five categories: black tea, green tea, white tea, oolong tea, and dark tea. Black tea is the main tea processed and exported in Kenya. It is manufactured mainly by withering, rolling or cutting-tearing-curling (CTC), a method that ensures efficient conversion of leaf herbage to made tea, followed by oxidation and drying before being sorted into different grades. From these varied typological methods, this review concludes that different regions of the world use different classification nomenclatures. Therefore, since tea typology is not standardized, it is recommended that a global tea regulator dealing with tea classification be created to standardize tea typology, with domestic regulatory bodies in tea-growing countries accredited to implement the global typological agreements and resolutions.

Keywords: classification, fermentation, oxidation, tea, typology

Procedia PDF Downloads 32
1465 Design of SAE J2716 Single Edge Nibble Transmission Digital Sensor Interface for Automotive Applications

Authors: Jongbae Lee, Seongsoo Lee

Abstract:

Modern sensors often embed a small digital controller for sensor control, value calibration, and signal processing. These sensors require digital data communication with host microprocessors, but conventional digital communication protocols are too heavy for price reduction. The SAE J2716 SENT (single edge nibble transmission) protocol transmits simple digital waveforms directly instead of complicated analog modulated signals. In this paper, a SENT interface is designed in Verilog HDL (hardware description language) and implemented on an FPGA (field-programmable gate array) evaluation board. The designed SENT interface consists of a frame encoder/decoder, a configuration register, a tick period generator, a CRC (cyclic redundancy check) generator/checker, and TX/RX (transmission/reception) buffers. The frame encoder/decoder is implemented as a finite state machine and controls the whole SENT interface. The configuration register contains various parameters such as operation mode, tick length, CRC option, pause pulse option, and number of data nibbles. The tick period generator derives tick signals from the input clock. The CRC generator/checker generates or checks the CRC in the SENT data frame. The TX/RX buffers store transmitted/received data. The designed SENT interface can send or receive digital data at 25-65 kbps with a 3 µs tick. Synthesized in a 0.18 µm fabrication technology, the implementation requires about 2,500 gates.
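
To give a feel for the waveform the interface produces, the sketch below (in software, not the paper's Verilog) maps one SENT frame to pulse lengths in ticks and computes the 4-bit CRC; the 56-tick sync pulse, the 12 + value tick encoding per nibble, the CRC seed 0b0101, and the polynomial x^4 + x^3 + x^2 + 1 are assumptions to be checked against the SAE J2716 standard, which also distinguishes a legacy and a recommended CRC variant.

```python
# Illustrative software sketch of SENT framing, not the authors' Verilog design.
POLY = 0b11101  # assumed CRC-4 polynomial x^4 + x^3 + x^2 + 1 (x^4 term included)

def _mod_poly(value):
    """Remainder of an up-to-8-bit value divided by the CRC-4 polynomial (GF(2))."""
    for shift in range(7, 3, -1):
        if value & (1 << shift):
            value ^= POLY << (shift - 4)
    return value & 0xF

def sent_crc4(data_nibbles, seed=0b0101, append_zero=True):
    """CRC-4 over the data nibbles (status nibble assumed excluded); append_zero=True
    mimics the 'recommended' variant that processes one extra zero nibble."""
    crc = seed
    for nib in list(data_nibbles) + ([0] if append_zero else []):
        crc = _mod_poly((crc << 4) | (nib & 0xF))
    return crc

def sent_frame_ticks(status, data_nibbles):
    """Pulse lengths (in ticks) of one frame: sync, status, data nibbles, CRC."""
    crc = sent_crc4(data_nibbles)
    nibbles = [status & 0xF] + [n & 0xF for n in data_nibbles] + [crc]
    return [56] + [12 + n for n in nibbles]   # each nibble pulse is 12..27 ticks

# Example: two 12-bit sensor values packed into six data nibbles.
print(sent_frame_ticks(status=0x0, data_nibbles=[0xA, 0x5, 0xC, 0x3, 0xF, 0x1]))
```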

Keywords: digital sensor interface, SAE J2716, SENT, verilog HDL

Procedia PDF Downloads 292
1464 Using Wearable Device with Neural Network to Classify Severity of Sleep Disorder

Authors: Ru-Yin Yang, Chi Wu, Cheng-Yu Tsai, Yin-Tzu Lin, Wen-Te Liu

Abstract:

Background: Sleep-disordered breathing (SDB) is a condition characterized by recurrent episodes of airway obstruction leading to intermittent hypoxia and sleep fragmentation. However, the procedures for examining SDB severity remain complicated and costly. Objective: The objective of this study is to establish a simplified examination method for SDB using a respiratory impedance pattern sensor combined with signal processing and a machine learning model. Methodology: We recorded heart rate variability from the electrocardiogram and the respiratory pattern from impedance. After polysomnography (PSG) was performed and SDB was diagnosed from the apnea-hypopnea index (AHI), we calculated the episodes with absence of flow and the arousal index (AI) from the device record. The subjects were divided into training and testing groups. A neural network was used to establish a prediction model to classify the severity of SDB from the AI, the episodes, and body profiles. The performance was evaluated by the classification in the testing group compared with PSG. Results: In this study, we enrolled 66 subjects (male/female: 37/29; age: 49.9 ± 13.2) with a diagnosis of SDB at a sleep center in Taipei City, Taiwan, from 2015 to 2016. The accuracy obtained from the confusion matrix on the test group with the neural network is 71.94%. Conclusion: Based on these models, we established a prediction model for SDB by means of the wearable sensor. With more cases and further training, this system may be used to rapidly and automatically screen for the risk of SDB in the future.
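
A minimal sketch of such a classifier is shown below; it is not the authors' model, and the feature columns, severity labels, and network size are assumptions for illustration only.

```python
# Hypothetical sketch: classify SDB severity from device-derived features (arousal index,
# flow-absence episodes, body profile) with a small neural network. Columns are assumed.
import pandas as pd
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import accuracy_score, confusion_matrix

df = pd.read_csv("sdb_device_features.csv")            # placeholder file name
X = df[["arousal_index", "flow_absence_episodes", "age", "bmi", "neck_circumference"]]
y = df["severity"]                                     # e.g. normal/mild/moderate/severe from PSG

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, stratify=y, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPClassifier(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0))
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
print("accuracy:", accuracy_score(y_te, pred))         # the abstract reports 71.94% on its data
print(confusion_matrix(y_te, pred))
```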

Keywords: sleep-disordered breathing, apnea and hypopnea index, body parameters, neural network

Procedia PDF Downloads 145
1463 Recovery of Draw Solution in Forward Osmosis by Direct Contact Membrane Distillation

Authors: Su-Thing Ho, Shiao-Shing Chen, Hung-Te Hsu, Saikat Sinha Ray

Abstract:

Forward osmosis (FO) is an emerging technology for direct and indirect potable water reuse applications. However, the successful implementation of FO is still hindered by the lack of a highly efficient method for draw solution recovery. Membrane distillation (MD) is a thermal separation process that uses a hydrophobic microporous membrane sandwiched between a warm feed stream and a cold permeate stream. The driving force of MD is typically the temperature difference, which gives rise to a partial vapor pressure difference across the membrane. In this study, a direct contact membrane distillation (DCMD) system was used to recover the diluted draw solution of FO. Na3PO4 at pH 9 and EDTA-2Na at pH 8 were used as the MD feed solutions, since they produce high water flux and minimal salt leakage in the FO process; at high pH, trivalent and tetravalent ions remain much more readily on the draw solution side in FO. The results demonstrated that PTFE with a pore size of 1 μm achieved the highest water flux (12.02 L/m2h), followed by PTFE 0.45 μm (10.05 L/m2h), PTFE 0.1 μm (7.38 L/m2h), and then PP (7.17 L/m2h) when using 0.1 M Na3PO4 as the draw solute. The phosphate concentration and conductivity in the PTFE (0.45 μm) permeate were as low as 1.05 mg/L and 2.89 μS/cm, respectively. Although PTFE with a pore size of 1 μm obtained the highest water flux, the phosphate concentration in its permeate was higher than for the other MD membranes. This study indicated that all four MD membranes performed well, and PTFE with a pore size of 0.45 μm was the best among the tested membranes, achieving high water flux and high phosphate rejection (99.99%) in the recovery of the diluted draw solution. High water flux and high phosphate rejection were obtained when operating at a cross-flow velocity of 0.103 m/s with a feed temperature of 60 °C and a distillate temperature of 20 °C. In addition, the results show that Na3PO4 is more suitable for recovery than EDTA-2Na and that, while recovering the diluted Na3PO4, permeate water of high purity can be obtained. The overall performance indicates that DCMD is a promising technology for recovering the diluted draw solution in the FO process.

Keywords: membrane distillation, forward osmosis, draw solution, recovery

Procedia PDF Downloads 181