Search results for: Analytic Hierarchy Process (AHP)
1198 Rheological and Computational Analysis of Crude Oil Transportation
Authors: Praveen Kumar, Satish Kumar, Jashanpreet Singh
Abstract:
Transportation of unrefined crude oil from the production unit to a refinery or large storage area by pipeline is difficult due to the different properties of crude in various areas. Thus, the design of a crude oil pipeline is a very complex and time-consuming process when all the various parameters are considered. Three parameters play a particularly significant role in the design of transportation and processing pipelines: the viscosity, temperature and velocity profiles of waxy crude oil through the pipeline. Knowledge of rheological computational techniques is required to better understand the flow behavior and predict the flow profile in a crude oil pipeline. From these profile parameters, the material and the emulsion best suited for crude oil transportation can be predicted. The rheological computational fluid dynamics technique is a fast method for designing the flow profile in a crude oil pipeline with the help of computational fluid dynamics and rheological modeling. With this technique, the effect of fluid properties, including shear rate range with temperature variation, degree of viscosity, elastic modulus and viscous modulus, was evaluated under different conditions in a transport pipeline. In this paper, two crude oil samples were used, as well as a prepared emulsion with natural and synthetic additives at concentrations ranging from 1,000 ppm to 3,000 ppm. The rheological properties were then evaluated over a temperature range of 25 to 60 °C, and the additive best suited for transportation of crude oil was determined. Commercial computational fluid dynamics (CFD) software was used to generate the flow, velocity and viscosity profiles of the emulsions for flow behavior analysis in a crude oil transportation pipeline. This rheological CFD design can be further applied in developing pipeline designs in the future.
Keywords: surfactant, natural, crude oil, rheology, CFD, viscosity
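The shear-thinning behavior the abstract describes is often captured with the Ostwald-de Waele power-law model, eta(gamma_dot) = K * gamma_dot**(n - 1). The sketch below illustrates that model only; the consistency index K and flow behaviour index n are invented values, not the paper's measurements.

```python
# Hypothetical sketch of a power-law (Ostwald-de Waele) apparent
# viscosity for a shear-thinning crude oil emulsion:
#   eta(gamma_dot) = K * gamma_dot**(n - 1)
# K and n below are illustrative assumptions, not data from the paper.

def apparent_viscosity(shear_rate, K=0.5, n=0.7):
    """Return apparent viscosity (Pa.s) of a power-law fluid."""
    return K * shear_rate ** (n - 1)

if __name__ == "__main__":
    # viscosity falls as shear rate rises for n < 1 (shear thinning)
    for gamma in (1.0, 10.0, 100.0):
        print(gamma, apparent_viscosity(gamma))
```

With n < 1 the computed viscosity decreases with shear rate, which is the qualitative trend such emulsions show in pipeline flow.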
Procedia PDF Downloads 452
1197 The Design of Smart Tactile Textiles for Therapeutic Applications
Authors: Karen Hong
Abstract:
Smart tactile textiles are a series of textile-based products that incorporate embedded smart technology for use in tactile therapeutic applications for two main groups of target users. The first group is children with sensory processing disorder who suffer from tactile sensory dysfunction. Children with tactile sensory issues may have difficulty tolerating the sensations generated by the touch of certain fabric textures. A series of smart tactile textiles, collectively known as ‘Tactile Toys’, has been developed as tactile therapy play objects, exposing children to different types of touch sensations within textiles and enabling them to enjoy tactile experiences through interactive play, which helps them overcome fear of certain touch sensations. The second group is the elderly or geriatric patients suffering from a deteriorating sense of touch. Common consequences of aging are a deteriorating sense of touch and a decline in motor function. Focusing on stimulating the sense of touch for this group of end users, another series of smart tactile textiles, collectively known as ‘Tactile Aids’, has been developed, also as tactile therapy. This range of products can help maintain touch sensitivity while allowing the elderly to enjoy interactive play that exercises their hand-eye coordination and enhances their motor skills. These smart tactile textile products have been designed and tested by the end users and have proved their efficacy as tactile therapy, enabling the users to lead a better quality of life.
Keywords: smart textiles, embedded technology, tactile therapy, tactile aids, tactile toys
Procedia PDF Downloads 175
1196 Effect of Injection Moulding Process Parameters on Tensile Strength Using Taguchi Method
Authors: Gurjeet Singh, M. K. Pradhan, Ajay Verma
Abstract:
The plastic industry plays a very important role in the economy of any country, generally accounting for a leading share of it. Since metals and their alloys are only rarely available in the earth, it is beneficial to produce plastic products and components, which find application in many industrial as well as household consumer products. About 50% of plastic products are manufactured by the injection moulding process. To produce a better quality product, the quality characteristics and performance of the product have to be controlled. The process parameters play a significant role in the production of plastic, hence the control of process parameters is essential. This paper describes the effect of parameter selection on the injection moulding process, the aim being to define suitable parameters for producing a plastic product. Selecting process parameters by trial and error is neither desirable nor acceptable, as it tends to increase cost and time. Hence, optimization of the processing parameters of the injection moulding process is essential. The experiments were designed with Taguchi’s orthogonal array to achieve the result with the least number of experiments. The plastic material polypropylene was studied. Tensile strength tests of the material produced by the injection moulding machine were performed on a universal testing machine. Using the Taguchi technique with the help of MiniTab-14 software, the best values of injection pressure, melt temperature, packing pressure and packing time were obtained. We found that the process parameter packing pressure contributes most to the production of a plastic product with good tensile strength.
Keywords: injection moulding, tensile strength, poly-propylene, Taguchi
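In a Taguchi analysis of tensile strength, parameter levels are typically ranked with the "larger-the-better" signal-to-noise ratio, S/N = -10 * log10(mean(1/y^2)). The sketch below shows that calculation only; the replicate tensile strengths are made-up numbers, not the paper's data.

```python
import math

# Hedged sketch of the Taguchi "larger-the-better" S/N ratio used to
# rank process-parameter levels by tensile strength. The trial values
# below are invented for illustration.

def sn_larger_is_better(values):
    """S/N = -10 * log10(mean(1 / y^2)); higher is better."""
    return -10.0 * math.log10(sum(1.0 / y ** 2 for y in values) / len(values))

# Tensile strengths (MPa) from replicate runs at two packing pressures:
low_packing = [28.1, 27.5, 28.4]
high_packing = [31.0, 30.6, 31.3]

print(sn_larger_is_better(low_packing))
print(sn_larger_is_better(high_packing))  # higher S/N -> preferred level
```

Averaging such S/N ratios per factor level across an orthogonal array is what lets software like MiniTab identify the dominant parameter.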
Procedia PDF Downloads 285
1195 Comparison of Bone Mineral Density of Lumbar Spines between High Level Cyclists and Sedentary
Authors: Mohammad Shabani
Abstract:
Physical activities, depending on the nature of the mechanical stresses they induce on bone, have sometimes brought about different results. The purpose of this study was to compare the bone mineral density (BMD) of the lumbar spine between high-level cyclists and sedentary subjects. Materials and Methods: In the present study, 73 senior cyclists (age: 25.81 ± 4.35 years; height: 179.66 ± 6.31 cm; weight: 71.55 ± 6.31 kg) and 32 sedentary subjects (age: 28.28 ± 4.52 years; height: 176.56 ± 6.2 cm; weight: 74.47 ± 8.35 kg) participated voluntarily. All cyclists belonged to different teams from the International Cycling Union and had trained competitively for 10 years. The BMD of the lumbar spine of the subjects was measured using DXA X-ray (Lunar). Descriptive statistical calculations were performed using data processing software (Statview 5, SAS Institute Inc., USA). The comparison of the two independent distributions (BMD of high-level cyclists and sedentary subjects) was made by the standard Student's t-test. A probability of 0.05 (p ≤ 0.05) was adopted as the significance level. Results: The results of this study showed that the BMD values of the lumbar spine of the sedentary subjects were significantly higher for all measured segments. Conclusion and Discussion: Cycling is firstly a common sport and, on the other hand, an endurance sport. It is now accepted that weight-bearing exercises have an osteogenic effect compared to non-weight-bearing exercises. Thus, endurance sports such as cycling, compared to activities imposing intense force over a short time, do not seem to be truly osteogenic. It can therefore be concluded that cycling provides a low osteogenic stimulus because of the specific biomechanical forces of the sport and its lack of impact.
Keywords: BMD, lumbar spine, high level cyclist, cycling
Procedia PDF Downloads 267
1194 4D Modelling of Low Visibility Underwater Archaeological Excavations Using Multi-Source Photogrammetry in the Bulgarian Black Sea
Authors: Rodrigo Pacheco-Ruiz, Jonathan Adams, Felix Pedrotti
Abstract:
This paper introduces the applicability of underwater photogrammetric survey within challenging conditions as the main tool to enhance and enrich the process of documenting archaeological excavation through the creation of 4D models. Photogrammetry was attempted on underwater archaeological sites at least as early as the 1970s, and today the production of traditional 3D models is becoming common practice within the discipline. Underwater photogrammetry is more often implemented to record exposed underwater archaeological remains and less so as a dynamic interpretative tool. Therefore, it tends to be applied in bright environments when underwater visibility is > 1 m, which limits its implementation on most submerged archaeological sites in more turbid conditions. Recent years have seen significant development of better digital photographic sensors and improvements in optical technology, ideal for darker environments. Such developments, in tandem with powerful computing systems for processing, have allowed this research to use underwater photogrammetry as a standard recording and interpretative tool. Using multi-source photogrammetry (five GoPro Hero5 Black cameras), this paper presents the accumulation of daily (4D) underwater surveys carried out at the Early Bronze Age (3,300 BC) to Late Ottoman (17th century AD) archaeological site of Ropotamo in the Bulgarian Black Sea under challenging conditions (< 0.5 m visibility). It proves that underwater photogrammetry can and should be used as one of the main recording methods, even in low light and poor underwater conditions, as a way to better understand the complexity of the underwater archaeological record.
Keywords: 4D modelling, Black Sea Maritime Archaeology Project, multi-source photogrammetry, low visibility underwater survey
Procedia PDF Downloads 235
1193 Developing Laser Spot Position Determination and PRF Code Detection with Quadrant Detector
Authors: Mohamed Fathy Heweage, Xiao Wen, Ayman Mokhtar, Ahmed Eldamarawy
Abstract:
In this paper, we are interested in the modeling, simulation, and measurement of the laser spot position with a quadrant detector. We enhance the detection and tracking of a microcontroller-based semi-active laser weapon decoding system. The system receives the reflected pulse through a quadrant detector and processes the laser pulses through a processing circuit, with a microcontroller decoding the laser pulse reflected by the target. The seeker accuracy is enhanced by the decoding system, the laser detection time based on the number of received pulses is reduced, and a gate is used to limit the laser pulse width. The model is implemented based on the Pulse Repetition Frequency (PRF) technique with two microcontroller units (MCUs). MCU1 generates laser pulses with different codes, while MCU2 decodes the laser code and locks the system onto the specific code. The codes are selected with the two selector switches. The system is implemented and tested in Proteus ISIS software. The implementation of the full position determination circuit with the detector is presented. The general system for spot position determination was operated with the laser PRF for the incident radiation and a mechanical system for adjusting the setup at different angles. The system test results show that the system can detect the laser code with only three received pulses thanks to the narrow gate signal, and good agreement between simulated and measured system performance is obtained.
Keywords: four quadrant detector, pulse code detection, laser guided weapons, pulse repetition frequency (PRF), Atmega 32 microcontrollers
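The "lock with only three received pulses" idea can be sketched as interval matching: each inter-pulse interval is compared against the expected PRF period within a narrow gate window, and the decoder locks once two consecutive intervals (i.e. three pulses) match. This is an illustrative model, not the authors' firmware; the function name, timings, and gate width are assumptions.

```python
# Illustrative sketch (not the authors' MCU code) of PRF-code locking:
# lock once three consecutive pulses fit the expected period within a
# gate window. Times are in microseconds; all values are invented.

def lock_on_prf(pulse_times_us, expected_period_us, gate_us):
    """Return the index of the pulse at which lock occurs, or None."""
    matches = 0
    for i in range(1, len(pulse_times_us)):
        interval = pulse_times_us[i] - pulse_times_us[i - 1]
        if abs(interval - expected_period_us) <= gate_us:
            matches += 1
            if matches == 2:      # 2 valid intervals == 3 valid pulses
                return i
        else:
            matches = 0           # noise pulse: restart the count
    return None
```

A narrow gate (small `gate_us`) is what rejects pulses from other codes, mirroring the gate signal described in the abstract.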
Procedia PDF Downloads 386
1192 Potential of Sunflower (Helianthus annuus L.) for Phytoremediation of Soils Contaminated with Heavy Metals
Authors: Violina R. Angelova, Mariana N. Perifanova-Nemska, Galina P. Uzunova, Krasimir I. Ivanov, Huu Q. Lee
Abstract:
A field study was conducted to evaluate the efficacy of sunflower (Helianthus annuus L.) for the phytoremediation of contaminated soils. The experiment was performed on an agricultural field contaminated by the Non-Ferrous-Metal Works near Plovdiv, Bulgaria. Field experiments with a randomized complete block design with five treatments (control, compost amendments added at 20 and 40 t/daa, and vermicompost amendments added at 20 and 40 t/daa) were carried out. The accumulation of heavy metals in the sunflower plant and the quality of the sunflower oil (heavy metals and fatty acid composition) were determined. The tested organic amendments significantly influenced the uptake of Pb, Zn and Cd by the sunflower plant. The incorporation of 40 t/daa of compost and 20 t/daa of vermicompost into the soil led to an increase in the ability of the sunflower to take up and accumulate Cd, Pb and Zn. Sunflower can be classed among the accumulators of Pb, Zn and Cd and can be successfully used for the phytoremediation of soils contaminated with heavy metals. The 40 t/daa compost treatment led to a decrease in the heavy metal content of the sunflower oil to below the regulated limits. Oil content and fatty acid composition were affected by the compost and vermicompost treatments. Adding compost and vermicompost increased the oil content of the seeds. Adding organic amendments increased the content of stearic, palmitoleic and oleic acids and reduced the content of palmitic and gadoleic acids in the sunflower oil. The possibility of further industrial processing of the seeds to oil, and the use of the obtained oil, will make sunflower an economically interesting crop for farmers applying phytoremediation technology.
Keywords: heavy metals, phytoremediation, polluted soils, sunflower
Procedia PDF Downloads 231
1191 Life Cycle Assessment of Almond Processing: Off-ground Harvesting Scenarios
Authors: Jessica Bain, Greg Thoma, Marty Matlock, Jeyam Subbiah, Ebenezer Kwofie
Abstract:
The environmental impact and particulate matter (PM) emissions associated with the production and packaging of 1 kg of almonds were evaluated using life cycle assessment (LCA). The assessment began at the point of being ready to harvest, with a cradle-to-gate system boundary ending at almond packaging in California. The assessment included three scenarios of off-ground harvesting of almonds, with variations: harvested almonds solar dried on a paper tarp in the orchard, harvested almonds solar dried on the floor of a separate lot, and harvested almonds dried mechanically. The life cycle inventory (LCI) data for almond production were based on previously published literature and data provided by the Almond Board of California (ABC). The ReCiPe 2016 method was used to calculate the midpoint impacts. Using a consequential LCA model, the global warming potential (GWP) for the three harvesting scenarios is 2.90, 2.86, and 3.09 kg CO2 eq/kg of packaged almonds for scenarios 1, 2a, and 3a, respectively. The global warming potential for the conventional harvesting method was 2.89 kg CO2 eq/kg of packaged almonds. The particulate matter emissions per hectare are 77.14, 9.56, 66.86, and 8.75 for conventional harvesting and scenarios 1, 2, and 3, respectively. The most significant contributions to the overall emissions were from almond production. Farm-gate almond production had a global warming potential of 2.12 kg CO2 eq/kg of packaged almonds, approximately 73% of the overall emissions. Based on comparisons between the GWP and PM emissions, scenario 2a offered the best tradeoff between GHG and PM production.
Keywords: life cycle assessment, low moisture foods, sustainability, LCA
Procedia PDF Downloads 81
1190 Satellite Statistical Data Approach for Upwelling Identification and Prediction in South of East Java and Bali Sea
Authors: Hary Aprianto Wijaya Siahaan, Bayu Edo Pratama
Abstract:
Sea fisheries have the potential to become one of the nation's assets, contributing greatly to Indonesia's economy. This fishery potential is inseparable from the availability of chlorophyll in the territorial waters of Indonesia. The research was conducted using three methods, namely statistical, comparative and analytical. The data used include MODIS sea surface temperature imagery from the Aqua satellite at 4 km resolution for 2002-2015, MODIS chlorophyll-a imagery from the Aqua satellite at 4 km resolution for 2002-2015, and ASCAT imagery from the MetOp and NOAA satellites at 27 km resolution for 2002-2015. The results of the data processing show that upwelling events in the sea south of East Java begin in June, identified by below-normal sea surface temperature anomalies, air masses moving from east to west, and high chlorophyll-a concentrations. In July the upwelling region expands increasingly towards the west, reaching its peak in August. Chlorophyll-a prediction using multiple linear regression equations demonstrates excellent results for the chlorophyll-a concentrations from 2002 to 2015, with a correlation of 0.8 and an RMSE of 0.3 for the predicted chlorophyll-a concentration. The chlorophyll-a prediction for 2016 also shows good results despite a decline in correlation: the correlation of the predicted chlorophyll-a concentration in 2016 is 0.6, but the RMSE improves to 0.2.
Keywords: satellite, sea surface temperature, upwelling, wind stress
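The prediction-and-scoring step can be sketched as ordinary least squares followed by correlation/RMSE checks. The paper uses multiple predictors; for brevity this hedged sketch fits a single predictor (say, an SST anomaly), and the helper names and data are invented.

```python
import math

# Minimal sketch (one predictor, invented data) of the study's
# regression-then-score workflow: fit y = slope*x + intercept by least
# squares, then evaluate predictions with RMSE.

def fit_line(x, y):
    """Return (slope, intercept) of the least-squares line."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

def rmse(pred, obs):
    """Root-mean-square error between predictions and observations."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

sst_anomaly = [0.0, 1.0, 2.0, 3.0]     # illustrative predictor
chlorophyll = [1.0, 3.0, 5.0, 7.0]     # illustrative response
slope, intercept = fit_line(sst_anomaly, chlorophyll)
predicted = [slope * x + intercept for x in sst_anomaly]
print(slope, intercept, rmse(predicted, chlorophyll))
```

In the study, low RMSE (0.2-0.3) alongside correlations of 0.6-0.8 is what supports calling the regression prediction "good".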
Procedia PDF Downloads 156
1189 Change Detection Analysis on Support Vector Machine Classifier of Land Use and Land Cover Changes: Case Study on Yangon
Authors: Khin Mar Yee, Mu Mu Than, Kyi Lint, Aye Aye Oo, Chan Mya Hmway, Khin Zar Chi Winn
Abstract:
The dynamic Land Use and Land Cover (LULC) changes in Yangon have generally resulted in improved human welfare and economic development over the last twenty years. Mapping LULC is crucially important for the sustainable development of the environment. However, it is difficult to obtain exact data on how environmental factors influence the LULC situation at various scales, because the natural environment is composed of non-homogeneous surface features, so the features in satellite data also contain mixed pixels. The main objective of this study is the calculation of accuracy based on change detection of LULC changes using Support Vector Machines (SVMs). For this research work, the main data were satellite images from 1996, 2006 and 2015. Change detection statistics were used to compile a detailed tabulation of changes between two classification images, and the Support Vector Machine (SVM) process was applied with a soft approach at the allocation as well as the testing stage to achieve higher accuracy. The results of this paper showed that vegetation and cultivated area decreased (by an average total of 29% from 1996 to 2015) because of conversion to built-up area, which more than doubled (an average total of 30% from 1996 to 2015). The error matrix and confidence limits led to the validation of the result for LULC mapping.
Keywords: land use and land cover change, change detection, image processing, support vector machines
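The accuracy-assessment step the abstract mentions (error matrix plus overall accuracy) can be sketched as below. The class labels are illustrative, not the study's ground truth, and a real assessment would also report producer's/user's accuracies per class.

```python
from collections import Counter

# Sketch of LULC accuracy assessment: build an error (confusion) matrix
# from reference vs. classified labels and compute overall accuracy.
# Labels below are invented examples.

def confusion_matrix(reference, classified):
    """Map (reference_class, classified_class) -> pixel count."""
    return Counter(zip(reference, classified))

def overall_accuracy(reference, classified):
    """Fraction of pixels whose classified label matches the reference."""
    correct = sum(r == c for r, c in zip(reference, classified))
    return correct / len(reference)

reference  = ["veg", "built", "veg", "water"]
classified = ["veg", "built", "built", "water"]
print(confusion_matrix(reference, classified))
print(overall_accuracy(reference, classified))
```

The off-diagonal entries of the matrix (e.g. vegetation classified as built-up) are exactly the conversions the change-detection tabulation quantifies between dates.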
Procedia PDF Downloads 135
1188 Task Validity in Neuroimaging Studies: Perspectives from Applied Linguistics
Authors: L. Freeborn
Abstract:
Recent years have seen an increasing number of neuroimaging studies related to language learning as imaging techniques such as fMRI and EEG have become more widely accessible to researchers. By using a variety of structural and functional neuroimaging techniques, these studies have already made considerable progress in terms of our understanding of neural networks and processing related to first and second language acquisition. However, the methodological designs employed in neuroimaging studies to test language learning have been questioned by applied linguists working within the field of second language acquisition (SLA). One of the major criticisms is that tasks designed to measure language learning gains rarely have a communicative function, and seldom assess learners’ ability to use the language in authentic situations. This brings the validity of many neuroimaging tasks into question. The fundamental reason why people learn a language is to communicate, and it is well-known that both first and second language proficiency are developed through meaningful social interaction. With this in mind, the SLA field is in agreement that second language acquisition and proficiency should be measured through learners’ ability to communicate in authentic real-life situations. Whilst authenticity is not always possible to achieve in a classroom environment, the importance of task authenticity should be reflected in the design of language assessments, teaching materials, and curricula. Tasks that bear little relation to how language is used in real-life situations can be considered to lack construct validity. This paper first describes the typical tasks used in neuroimaging studies to measure language gains and proficiency, then analyses to what extent these tasks can validly assess these constructs.
Keywords: neuroimaging studies, research design, second language acquisition, task validity
Procedia PDF Downloads 137
1187 Performance and Processing Evaluation of Solid Oxide Cells by Co-Sintering of GDC Buffer Layer and LSCF Air Electrode
Authors: Hyun-Jong Choi, Minjun Kwak, Doo-Won Seo, Sang-Kuk Woo, Sun-Dong Kim
Abstract:
Solid oxide cell (SOC) systems can contribute to the transition to a hydrogen society when utilized as power and hydrogen generators, exploiting the high efficiency of the electrochemical reaction at high operating temperature (> 750 ℃). La1-xSrxCo1-yFeyO3, used as an air electrode, suffers stability degradation due to reaction with, and delamination from, the yttria-stabilized zirconia (YSZ) electrolyte in water electrolysis mode. To mitigate this phenomenon, SOCs need a gadolinium-doped ceria (GDC) buffer layer between the electrolyte and the air electrode. However, the GDC buffer layer requires a high sintering temperature, which causes a reaction with the YSZ electrolyte. This study carried out low-temperature sintering of the GDC layer by applying Cu oxide as a sintering aid. The effect of a copper additive as a sintering aid to lower the sintering temperature for the construction of solid oxide fuel cells (SOFCs) was investigated. GDC buffer layers with 0.25-10 mol% CuO sintering aid were prepared by reacting GDC powder with copper nitrate solution followed by heating at 600 ℃. The sintering of the CuO-added GDC powder was optimized by investigating the linear shrinkage, microstructure, grain size, ionic conductivity, and activation energy of CuO-GDC electrolytes at temperatures ranging from 1100 to 1400 ℃. The sintering temperature of the CuO-GDC electrolyte decreases from 1400 ℃ to 1100 ℃ on adding the CuO sintering aid. The ionic conductivity of the CuO-GDC electrolyte shows a maximum value at 0.5 mol% CuO. However, the addition of CuO has no significant effect on the activation energy of the GDC electrolyte. GDC-LSCF layers were co-sintered at 1050 and 1100 ℃, and button cell tests were carried out at 750 ℃.
Keywords: co-sintering, GDC-LSCF, sintering aid, solid oxide cells
Procedia PDF Downloads 243
1186 Tea (Camellia sinensis (L.) O. Kuntze) Typology in Kenya: A Review
Authors: Joseph Kimutai Langat
Abstract:
Tea typology is the science of classifying tea. This study, carried out between November 2023 and July 2024, had as its main objective to investigate the typological classification nomenclature of processed tea in the world, narrowing down to Kenya. Centre of origin, historical background, tea-growing region, scientific naming system, market, fermentation level, processing/oxidation level and cultural reasons are presently used to classify tea. Of these, the most common typology is by oxidation and, more specifically, by the production methods within the oxidation categories. While the Asian tea-producing countries categorize tea products based on decreasing oxidation levels during the manufacturing process (black tea, green tea, oolong tea and instant tea), Kenya’s tea typology system is based on the degree of fermentation, i.e. black tea, purple tea, green tea and white tea. Tea is also classified into five categories: black tea, green tea, white tea, oolong tea, and dark tea. Black tea is the main tea processed and exported in Kenya, manufactured mainly by withering, rolling or the cutting-tearing-curling (CTC) method, which ensures efficient conversion of leaf herbage to made tea, followed by oxidizing and drying before sorting into different grades. It is from these varied typological methods that this review paper concludes that different regions of the world use different classification nomenclature. Therefore, since tea typology is not standardized, it is recommended that a global tea regulator dealing in tea classification be created to standardize tea typology, with domestic in-country regulatory bodies in tea-growing countries accredited to implement the global typological agreements and resolutions.
Keywords: classification, fermentation, oxidation, tea, typology
Procedia PDF Downloads 40
1185 Design of SAE J2716 Single Edge Nibble Transmission Digital Sensor Interface for Automotive Applications
Authors: Jongbae Lee, Seongsoo Lee
Abstract:
Modern sensors often embed a small digital controller for sensor control, value calibration, and signal processing. These sensors require digital data communication with host microprocessors, but conventional digital communication protocols are too heavy when price reduction matters. The SAE J2716 SENT (single edge nibble transmission) protocol transmits direct digital waveforms instead of complicated analog modulated signals. In this paper, a SENT interface is designed in Verilog HDL (hardware description language) and implemented on an FPGA (field-programmable gate array) evaluation board. The designed SENT interface consists of a frame encoder/decoder, a configuration register, a tick period generator, a CRC (cyclic redundancy code) generator/checker, and TX/RX (transmission/reception) buffers. The frame encoder/decoder is implemented as a finite state machine and controls the whole SENT interface. The configuration register contains various parameters such as operation mode, tick length, CRC option, pause pulse option, and number of data nibbles. The tick period generator derives tick signals from the input clock. The CRC generator/checker generates or checks the CRC in the SENT data frame. The TX/RX buffers store transmitted/received data. The designed SENT interface can send or receive digital data at 25-65 kbps with a 3 µs tick. Synthesized in a 0.18 µm fabrication technology, it occupies about 2,500 gates.
Keywords: digital sensor interface, SAE J2716, SENT, Verilog HDL
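The CRC generator/checker in such an interface protects the data nibbles with a 4-bit CRC. The sketch below models a nibble-wise CRC-4 with the polynomial x^4 + x^3 + x^2 + 1 and seed 0b0101, as commonly described for SAE J2716; it is a software reference model of the hardware block, and the standard itself should be consulted for the authoritative algorithm and test vectors.

```python
# Hedged software model of a SENT-style CRC-4 over data nibbles.
# CRC4_TABLE[v] is v advanced by four zero bits modulo the polynomial
# x^4 + x^3 + x^2 + 1; seed 0b0101. Assumed from common descriptions
# of SAE J2716, not verified against the standard's test vectors.

CRC4_TABLE = [0, 13, 7, 10, 14, 3, 9, 4, 1, 12, 6, 11, 15, 2, 8, 5]

def sent_crc4(data_nibbles):
    """CRC-4 over a list of 4-bit data nibbles (values 0..15)."""
    crc = 0b0101                        # protocol seed
    for nibble in data_nibbles:
        crc = CRC4_TABLE[crc] ^ nibble  # shift in 4 bits, fold in data
    return CRC4_TABLE[crc]              # augment with one zero nibble
```

In the Verilog design this would correspond to a small lookup ROM plus XOR in the CRC generator/checker datapath.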
Procedia PDF Downloads 298
1184 Using Wearable Device with Neural Network to Classify Severity of Sleep Disorder
Authors: Ru-Yin Yang, Chi Wu, Cheng-Yu Tsai, Yin-Tzu Lin, Wen-Te Liu
Abstract:
Background: Sleep breathing disorder (SDB) is a condition characterized by recurrent episodes of airway obstruction leading to intermittent hypoxia and sleep fragmentation. However, the procedures for examining SDB severity remain complicated and costly. Objective: The objective of this study is to establish a simplified examination method for SDB using a respiratory impedance pattern sensor combined with signal processing and a machine learning model. Methodologies: We recorded heart rate variability by electrocardiogram and the respiratory pattern by impedance. After polysomnography (PSG) was performed, with SDB diagnosed by the apnea-hypopnea index (AHI), we calculated the episodes with absence of flow and the arousal index (AI) from the device record. Subjects were divided into training and testing groups. A neural network was used to establish a prediction model classifying the severity of SDB from the AI, episodes, and body profiles. The performance was evaluated by classification in the testing group compared with PSG. Results: In this study, we enrolled 66 subjects (male/female: 37/29; age: 49.9 ± 13.2) with a diagnosis of SDB at a sleep center in Taipei City, Taiwan, from 2015 to 2016. The accuracy from the confusion matrix on the test group by the neural network is 71.94%. Conclusion: Based on the models, we established a prediction model for SDB by means of the wearable sensor. With more cases coming in for training, this system may be used to rapidly and automatically screen the risk of SDB in the future.
Keywords: sleep breathing disorder, apnea and hypopnea index, body parameters, neural network
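The severity classes the network predicts are conventionally derived from the AHI. The abstract does not list its cut-offs, so the sketch below uses the commonly cited clinical thresholds (mild 5-15, moderate 15-30, severe >= 30 events/hour) as an assumption.

```python
# Hedged sketch of AHI-based severity labelling. The thresholds are
# the widely used clinical cut-offs, assumed here since the paper does
# not state its exact boundaries.

def sdb_severity(ahi):
    """Map an apnea-hypopnea index (events/hour) to a severity label."""
    if ahi < 5:
        return "normal"
    if ahi < 15:
        return "mild"
    if ahi < 30:
        return "moderate"
    return "severe"

if __name__ == "__main__":
    for ahi in (3, 10, 20, 45):
        print(ahi, sdb_severity(ahi))
```

These labels would serve as the PSG-derived targets against which the wearable-sensor model's confusion matrix (71.94% accuracy here) is computed.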
Procedia PDF Downloads 148
1183 Surfactant-Assisted Aqueous Extraction of Residual Oil from Palm-Pressed Mesocarp Fibre
Authors: Rabitah Zakaria, Chan M. Luan, Nor Hakimah Ramly
Abstract:
The extraction of vegetable oil using an aqueous extraction process assisted by ionic extended surfactants has been investigated as an alternative to hexane extraction. However, ionic extended surfactants have not been commercialised, and their safety with respect to food processing is uncertain. Hence, food-grade non-ionic surfactants (Tween 20, Span 20, and Span 80) were proposed for the extraction of residual oil from palm-pressed mesocarp fibre. Palm-pressed mesocarp fibre contains a significant amount of residual oil (5-10 wt%), and its recovery is beneficial, as the oil contains a much higher content of vitamin E, carotenoids, and sterols than crude palm oil. In this study, the formulation of food-grade surfactants using a combination of high hydrophilic-lipophilic balance (HLB) surfactants and low HLB surfactants to produce a micro-emulsion with very low interfacial tension (IFT) was investigated. The suitable surfactant formulation was used in the oil extraction process, and the efficiency of the extraction was correlated with the IFT, droplet size and viscosity. It was found that a ternary surfactant mixture with an HLB value of 15 (82% Tween 20, 12% Span 20 and 6% Span 80) was able to produce a micro-emulsion with very low IFT compared to other HLB combinations. Results suggested that the IFT and droplet size strongly affect the oil recovery efficiency. Finally, optimization of the operating parameters showed that the highest extraction efficiency of 78% was achieved at a 1:31 solid-to-liquid ratio, 2 wt% surfactant solution, a temperature of 50 °C, and 50 minutes of contact time.
Keywords: food-grade surfactants, aqueous extraction of residual oil, palm-pressed mesocarp fibre, interfacial tension
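The HLB of 15 reported for the 82/12/6 blend can be checked with the standard mass-weighted-average rule, HLB_mix = sum(w_i * HLB_i). The individual HLB values used below (Tween 20 = 16.7, Span 20 = 8.6, Span 80 = 4.3) are widely quoted literature values, not figures taken from the paper.

```python
# Worked check of the ternary blend's HLB via the mass-weighted
# average rule. Component HLB values are common literature figures
# (an assumption, not the paper's data).

def blend_hlb(fractions_and_hlbs):
    """HLB of a mixture: sum of (mass fraction * component HLB)."""
    return sum(w * h for w, h in fractions_and_hlbs)

mix = [(0.82, 16.7),  # Tween 20
       (0.12, 8.6),   # Span 20
       (0.06, 4.3)]   # Span 80
print(blend_hlb(mix))  # close to the reported HLB of 15
```

The computed value (about 14.98) is consistent with the HLB of 15 quoted for the optimal formulation.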
Procedia PDF Downloads 389
1182 Reorientation of Anisotropic Particles in Free Liquid Microjets
Authors: Mathias Schlenk, Susanne Seibt, Sabine Rosenfeldt, Josef Breu, Stephan Foerster
Abstract:
Thin liquid jets on the micrometer scale play an important role in processing, such as in fiber fabrication and inkjet printing, but also for sample delivery in modern synchrotron X-ray devices. In all these cases the liquid jets contain solvents and dissolved materials such as polymers, nanoparticles, fibers, pigments or proteins. As liquid flow in jets differs significantly from flow in capillaries and microchannels, particle localization and orientation will also be different. This is of critical importance for applications that depend on a well-defined, homogeneous particle and fiber distribution and orientation in the liquid jet. Investigations of particle orientation in liquid microjets of dilute solutions have been rare, despite their importance. With the arrival of micro-focused X-ray beams it has become possible to scan across samples with micrometer resolution to locally analyse the structure and orientation of the samples. In the present work, we used this method to scan across liquid microjets to determine the local distribution and orientation of anisotropic particles. These comprise wormlike block copolymer micelles as an example of long flexible fibrous structures, hectorite materials as a model of extended nanosheet structures, and gold nanorods as an illustration of short stiff cylinders, covering all relevant anisotropic geometries. We find that, due to the different velocity profile in the liquid jet, which resembles plug flow, the orientation of the particles generated in the capillary is lost or changed into non-oriented or biaxial orientations, depending on the geometrical shape of the particle.
Keywords: anisotropic particles, liquid microjets, reorientation, SAXS
1181 Project Design Deliverables Sequence (PDD)
Authors: Nahed Al-Hajeri
Abstract:
There are several reasons which lead to a delay in project completion; one main reason is the delay in deliverable processing, i.e. the submission and review of documents. Most project cycles start with a list of deliverables but without a submission sequence for them, that is, without a direction to move, leading to overlapping of activities and more interdependencies. Hence, the Project Design Deliverables Sequence (PDD) was developed as a solution to organize transmittals (documents/drawings) received from contractors/consultants during the different phases of EPC (Engineering, Procurement, and Construction) projects. It gives proper direction to the stakeholders from the beginning in order to reduce inter-discipline dependency, avoid overlapping of activities, and provide a list of deliverables, a sequence of activities, etc. PDD attempts to provide a list and sequencing of the engineering documents/drawings required during the different phases of a project, which will benefit both client and contractor in performing planned activities through the timely submission and review of deliverables. This helps ensure improved quality and completion of the project on time. Successful implementation begins with a detailed understanding of the specific challenges and requirements of the project. PDD helps in learning about vendor document submissions, including the general workflow and sequence, and in monitoring the submission and review of deliverables from the early stages of a project. This provides an overview for the submission of deliverables by the parties concerned in the proper sequence. A further goal of PDD is to hold all stakeholders responsible and accountable throughout the complete project cycle.
We believe that successful implementation of PDD with a detailed list of documents and their sequence will help organizations achieve the project target. Keywords: EPC (Engineering, Procurement, and Construction), project design deliverables (PDD), econometrics sciences, management sciences
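The core of any deliverables sequence is ordering documents so that none is submitted before the documents it depends on. That ordering can be sketched as a topological sort over a dependency graph; the deliverable names and dependencies below are hypothetical illustrations, not taken from the paper:

```python
from collections import defaultdict, deque

def sequence_deliverables(dependencies):
    """Return a submission order in which every deliverable comes after
    all deliverables it depends on (Kahn's topological sort).

    dependencies: dict mapping deliverable -> list of prerequisites.
    """
    graph = defaultdict(list)    # prerequisite -> dependents
    indegree = defaultdict(int)  # number of unmet prerequisites
    nodes = set()
    for item, prereqs in dependencies.items():
        nodes.add(item)
        for p in prereqs:
            nodes.add(p)
            graph[p].append(item)
            indegree[item] += 1
    queue = deque(sorted(n for n in nodes if indegree[n] == 0))
    order = []
    while queue:
        n = queue.popleft()
        order.append(n)
        for m in graph[n]:
            indegree[m] -= 1
            if indegree[m] == 0:
                queue.append(m)
    if len(order) != len(nodes):
        raise ValueError("circular dependency among deliverables")
    return order

# Hypothetical EPC deliverables and their prerequisites
deps = {
    "P&ID": ["Process Flow Diagram"],
    "Piping Isometrics": ["P&ID", "Plot Plan"],
    "Plot Plan": ["Process Flow Diagram"],
}
print(sequence_deliverables(deps))
```

A cycle in the graph surfaces immediately as an error, which mirrors the overlapping-activity problem PDD is meant to prevent.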
1180 Elimination of Mixed-Culture Biofilms Using Biological Agents
Authors: Anita Vidacs, Csaba Vagvolgyi, Judit Krisch
Abstract:
The attachment of microorganisms to different surfaces and the development of biofilms can lead to outbreaks of food-borne diseases and to economic losses due to perished food. In food processing environments, bacterial communities are generally formed by mixed cultures of different species. Plants are sources of several antimicrobial substances that may be potential candidates for the development of new disinfectants. We aimed to investigate the effect of cinnamon (Cinnamomum zeylanicum), marjoram (Origanum majorana), and thyme (Thymus vulgaris) essential oils and their major components (cinnamaldehyde, terpinene-4-ol, and thymol) on four-species biofilms of E. coli, L. monocytogenes, P. putida, and S. aureus. The experiments had three parts: (i) determination of the minimum bactericidal concentration and the killing time with microdilution methods; (ii) elimination of the four-species 24- and 168-hour-old biofilms from stainless steel, polypropylene, tile and wood surfaces; and (iii) comparison of the disinfectant effect with an industrially used peracetic acid-based sanitizer (HC-DPE). In biofilm, E. coli and P. putida were more resistant to the investigated essential oils and their main components than L. monocytogenes and S. aureus. These Gram-negative bacteria were detected on the surfaces where the natural-based disinfectant did not achieve total biofilm elimination. The most promising solutions were the cinnamon essential oil and terpinene-4-ol, which could eradicate the biofilm from stainless steel, polypropylene and even from tile, too. They have a better disinfectant effect than HC-DPE. These natural agents can be used as alternative solutions in the battle against bacterial biofilms. Keywords: biofilm, essential oils, surfaces, terpinene-4-ol
1179 Identification of Potential Small Molecule Regulators of PERK Kinase
Authors: Ireneusz Majsterek, Dariusz Pytel, J. Alan Diehl
Abstract:
PKR-like ER kinase (PERK) is a serine/threonine endoplasmic reticulum (ER) transmembrane kinase activated during ER stress. PERK can activate signaling pathways known as the unfolded protein response (UPR). Attenuation of translation is mediated by PERK via phosphorylation of eukaryotic initiation factor 2α (eIF2α), which is necessary for translation initiation. PERK activation also directly contributes to the activation of Nrf2, which regulates the expression of anti-oxidant enzymes. Increased phosphorylation of eIF2α has been reported in the hippocampus of Alzheimer's disease (AD) patients, indicating that PERK is activated in this disease. Recent data have revealed activation of PERK signaling in non-Hodgkin's lymphomas. Results also revealed that loss of PERK limits mammary tumor cell growth in vitro and in vivo. Consistent with these observations, activation of the UPR in vitro increases levels of the amyloid precursor protein (APP), the protein from which the beta-amyloid (Aβ) fragments of plaques are derived. Finally, proteolytic processing of APP, including the cleavages that produce Aβ, largely occurs in the ER, in localization coincident with PERK activity. Thus, we expect that PERK-dependent signaling is critical for the progression of many types of diseases (human cancer, neurodegenerative disease and others). Therefore, modulation of PERK activity may be a useful therapeutic target in the treatment of different diseases that fail to respond to traditional chemotherapeutic strategies, including Alzheimer's disease. Our goal is to develop therapeutic modalities targeting PERK activity. Keywords: PERK kinase, small molecule inhibitor, neurodegenerative disease, Alzheimer's disease
1178 Structural Damage Detection via Incomplete Model Data Using Output Data Only
Authors: Ahmed Noor Al-qayyim, Barlas Özden Çağlayan
Abstract:
Structural failure is caused mainly by damage that often occurs in structures. Many researchers focus on developing efficient tools to detect damage in structures at an early stage. In past decades, damage detection based on variations in the dynamic characteristics or response of structures has received considerable attention in the literature. This study presents a new damage identification technique that detects the damage location in an incomplete structural system using output data only. The method indicates the damage from free vibration test data by using the "Two Points - Condensation (TPC) technique". This method creates a set of matrices by reducing the structural system to two-degrees-of-freedom systems. The current stiffness matrices are obtained by optimization of the equation of motion using the measured test data and are compared with the original (undamaged) stiffness matrices. High percentage changes in the matrices' coefficients point to the location of the damage. The TPC technique was applied to the experimental data of a simply supported steel beam model structure after inducing a thickness change in one element. Two cases were considered; the method detected the damage and determined its location accurately in both. In addition, the results illustrate that these changes in the stiffness matrix can be a useful tool for continuous monitoring of structural safety using ambient vibration data. Furthermore, its efficiency proves that this technique can also be used for large structures. Keywords: damage detection, optimization, signals processing, structural health monitoring, two points-condensation
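The comparison step at the heart of the method — flagging a location when the coefficients of its condensed 2x2 stiffness matrix change by a large percentage relative to the undamaged reference — can be sketched as follows. The matrices and the 10% threshold are hypothetical illustrations; the TPC optimization that produces the current matrices from measured data is not reproduced here:

```python
def stiffness_change(K_ref, K_cur):
    """Percentage change of each coefficient of a condensed 2x2
    stiffness matrix relative to the undamaged reference."""
    return [[100.0 * abs(K_cur[i][j] - K_ref[i][j]) / abs(K_ref[i][j])
             for j in range(2)] for i in range(2)]

def flag_damage(changes, threshold=10.0):
    """Flag a candidate location as damaged when any coefficient of its
    condensed matrix changes by more than the threshold (percent)."""
    return any(c > threshold for row in changes for c in row)

# Hypothetical condensed stiffness matrices for one candidate location (N/m)
K_undamaged = [[2.0e6, -1.0e6], [-1.0e6, 2.0e6]]
K_current   = [[1.7e6, -0.9e6], [-0.9e6, 1.8e6]]

changes = stiffness_change(K_undamaged, K_current)
print(flag_damage(changes))  # → True: a coefficient changed by 15%
```

Repeating this check over every candidate pair of degrees of freedom and reporting the pairs with the largest changes is what localizes the damage.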
1177 Recognition and Counting Algorithm for Sub-Regional Objects in a Handwritten Image through Image Sets
Authors: Kothuri Sriraman, Mattupalli Komal Teja
Abstract:
In this paper, a novel algorithm is proposed for the recognition of hulls in handwritten images that may be irregular or have digit or character shapes. Objects and internal objects are difficult to identify when the image contains a bulk of clusters. Estimation results are easily obtained by identifying the sub-regional objects with the SASK algorithm, which focuses on recognizing the number of internal objects in a given image in a shadow-free and error-free manner. Hard clustering and density clustering of the obtained image rough set are used to recognize the differentiated internal objects, if any. Finding the internal hull regions involves three steps: pre-processing, boundary extraction, and finally the hull detection system. Detecting sub-regional hulls can increase machine learning capability in the detection of characters, and the approach can be extended to hull recognition even in irregularly shaped objects, such as black holes and their intensities in space exploration. Layered hulls are those having structured layers inside; they are useful in military services and traffic monitoring to identify the number of vehicles or persons. The proposed SASK algorithm helps in identifying such regions and can be used in decision processes (e.g. clearing traffic, or identifying the number of persons on the opponent's side in a war). Keywords: chain code, hull regions, Hough transform, hull recognition, layered outline extraction, SASK algorithm
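The final hull-detection step can be illustrated with a standard convex hull routine (Andrew's monotone chain) applied to the boundary points produced by the extraction step. This is a generic sketch of hull computation, not the SASK algorithm itself, and the boundary pixels are hypothetical:

```python
def convex_hull(points):
    """Andrew's monotone chain: convex hull of 2D points,
    returned in counter-clockwise order starting from the
    lexicographically smallest point."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                       # build lower hull
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):             # build upper hull
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    # last point of each list is the first point of the other; drop it
    return lower[:-1] + upper[:-1]

# Boundary pixels of a hypothetical sub-region: interior points are dropped
boundary = [(0, 0), (2, 0), (2, 2), (0, 2), (1, 1), (1, 0)]
print(convex_hull(boundary))  # → [(0, 0), (2, 0), (2, 2), (0, 2)]
```

Counting nested (layered) hulls would then amount to repeating this on the boundaries of each internal object found by the clustering step.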
1176 Event Related Brain Potentials Evoked by Carmen in Musicians and Dancers
Authors: Hanna Poikonen, Petri Toiviainen, Mari Tervaniemi
Abstract:
Event-related potentials (ERPs) evoked by simple tones in the brain have been extensively studied. However, in reality the music surrounding us is spectrally and temporally complex and dynamic. Thus, research using natural sounds is crucial for understanding the operation of the brain in its natural environment. Music is an excellent example of natural stimulation which, in various forms, has always been an essential part of different cultures. In addition to sensory responses, music elicits vast cognitive and emotional processes in the brain. Compared to laymen, professional musicians have stronger ERP responses when processing individual musical features in simple tone sequences, such as changes in pitch, timbre and harmony. Here we show that the ERP responses evoked by rapid changes in individual musical features are more intense in musicians than in laymen also while listening to long excerpts of the composition Carmen. Interestingly, for professional dancers, the amplitudes of the cognitive P300 response are weaker than for musicians but still stronger than for laymen. Also, the cognitive P300 latencies of musicians are significantly shorter, whereas the latencies of laymen are significantly longer. In contrast, the sensory N100 does not differ in amplitude or latency between musicians and laymen. These results, acquired with a novel ERP methodology for natural music, suggest that we can take the leap of studying the brain with long pieces of natural music using the ERP method of electroencephalography (EEG), as has already been done with functional magnetic resonance imaging (fMRI), since these two brain imaging methods complement each other. Keywords: electroencephalography, expertise, musical features, real-life music
1175 INRAM-3DCNN: Multi-Scale Convolutional Neural Network Based on Residual and Attention Module Combined with Multilayer Perceptron for Hyperspectral Image Classification
Authors: Jianhong Xiang, Rui Sun, Linyu Wang
Abstract:
In recent years, owing to the continuous improvement of deep learning theory, the Convolutional Neural Network (CNN) has shown superior performance in Hyperspectral Image (HSI) classification research. Since HSI carries rich spatial-spectral information, using only a single-dimensional or single-size convolutional kernel limits the detailed feature information received by the CNN, which in turn limits the classification accuracy for HSI. In this paper, we design a multi-scale CNN with an MLP based on residual and attention modules (INRAM-3DCNN) for the HSI classification task. We propose to use multiple 3D convolutional kernels to extract grouped feature information and fully learn the spatial-spectral features of HSI, while designing residual 3D convolutional branches to avoid the decline in classification accuracy due to network degradation. Secondly, we design a 2D Inception module with a joint channel attention mechanism to quickly extract key spatial feature information at different scales of the HSI and to reduce the complexity of the 3D model. Owing to the high parallel processing capability and nonlinear global action of the Multilayer Perceptron (MLP), we use it in combination with the preceding CNN structure for the final classification step. Experimental results on two HSI datasets show that the proposed INRAM-3DCNN method has superior classification performance and performs the classification task excellently. Keywords: INRAM-3DCNN, residual, channel attention, hyperspectral image classification
1174 Effect of Saponin Enriched Soapwort Powder on Structural and Sensorial Properties of Turkish Delight
Authors: Ihsan Burak Cam, Ayhan Topuz
Abstract:
Turkish delight has been produced by bleaching the plain delight mix (refined sugar, water and starch) with soapwort extract and powdered sugar. Soapwort extract, which contains a high amount of saponin, is an additive used in Turkish delight and tahini halva production to improve consistency, chewiness and color, its bioactive saponin content acting as an emulsifier. In this study, soapwort powder was produced after determining the optimum process conditions for the soapwort extract by using the response-surface method. The extract was enriched in saponin by reverse osmosis (to 63% saponin on a dry basis). A Büchi mini spray dryer B-290 was used to produce spray-dried soapwort powder (aw = 0.254) from the enriched soapwort concentrate. The optimized processing steps and the saponin enrichment of the soapwort extract were tested in Turkish delight production. Delight samples produced with the soapwort powder and with a commercial extract (control) were compared in chewiness, springiness, stickiness, adhesiveness, hardness, color and sensorial characteristics. According to the results, all textural properties except hardness of the delights produced with the powder were found to be statistically different from the control samples. The chewiness, springiness, stickiness, adhesiveness and hardness values of the samples (delights produced with the powder / control delights) were determined to be 361.9/1406.7, 0.095/0.251, -120.3/-51.7, 781.9/1869.3, and 3427.3 g/3118.4 g, respectively. According to the quality analysis run on the end products, neither the soapwort extract nor the soapwort powder has a statistically negative effect on the color and appearance of Turkish delight. Keywords: saponin, delight, soapwort powder, spray drying
1173 Genetic Algorithm and Multi Criteria Decision Making Approach for Compressive Sensing Based Direction of Arrival Estimation
Authors: Ekin Nurbaş
Abstract:
One of the essential challenges in array signal processing, which has drawn enormous research interest over the past several decades, is estimating the direction of arrival (DoA) of plane waves impinging on an array of sensors. In recent years, Compressive Sensing (CS)-based DoA estimation methods have been proposed, and it has been found that CS-based algorithms achieve significant performance for DoA estimation even in scenarios with multiple coherent sources. On the other hand, the Genetic Algorithm, a solution strategy inspired by natural selection, has been applied to sparse representation problems in recent years and provides significant performance improvements. With all of this in consideration, this paper proposes a method that combines the Genetic Algorithm (GA) and Multi-Criteria Decision Making (MCDM) approaches for DoA estimation in the CS framework. In this method, we generate a multi-objective optimization problem by splitting the norm minimization and reconstruction loss minimization parts of the Compressive Sensing algorithm. With the help of the Genetic Algorithm, multiple non-dominated solutions are obtained for the defined multi-objective optimization problem. Among the Pareto-frontier solutions, the final solution is selected with multiple MCDM methods. Moreover, the performance of the proposed method is compared with the CS-based methods in the literature. Keywords: genetic algorithm, direction of arrival estimation, multi criteria decision making, compressive sensing
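The selection stage described above — take the GA's non-dominated solutions over the two objectives (sparsity norm and reconstruction loss), then pick one with an MCDM method — can be sketched as follows. The GA loop itself is omitted, the objective values are hypothetical, and a TOPSIS-style closeness score stands in for the paper's "multiple MCDM methods":

```python
import math

def pareto_front(solutions):
    """Keep the non-dominated solutions (both objectives minimized)."""
    front = []
    for s in solutions:
        dominated = any(
            o["norm"] <= s["norm"] and o["loss"] <= s["loss"]
            and (o["norm"] < s["norm"] or o["loss"] < s["loss"])
            for o in solutions)
        if not dominated:
            front.append(s)
    return front

def topsis(front, weights=(0.5, 0.5)):
    """Rank Pareto solutions by relative closeness to the ideal point."""
    keys = ("norm", "loss")
    best = {k: min(s[k] for s in front) for k in keys}
    worst = {k: max(s[k] for s in front) for k in keys}
    def score(s):
        d_best = math.hypot(*(w * (s[k] - best[k]) for w, k in zip(weights, keys)))
        d_worst = math.hypot(*(w * (s[k] - worst[k]) for w, k in zip(weights, keys)))
        return d_worst / (d_best + d_worst)
    return max(front, key=score)

# Hypothetical (l1-norm, reconstruction-loss) values from a GA population
population = [
    {"id": 0, "norm": 1.0, "loss": 0.9},
    {"id": 1, "norm": 0.4, "loss": 0.5},  # non-dominated
    {"id": 2, "norm": 0.6, "loss": 0.2},  # non-dominated
    {"id": 3, "norm": 0.7, "loss": 0.6},
]
front = pareto_front(population)
print([s["id"] for s in front], topsis(front)["id"])
```

Changing the MCDM weights shifts the compromise between sparsity and data fidelity, which is exactly the trade-off the multi-objective split exposes.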
1172 Plant Identification Using Convolution Neural Network and Vision Transformer-Based Models
Authors: Virender Singh, Mathew Rees, Simon Hampton, Sivaram Annadurai
Abstract:
Plant identification is a challenging task that aims to identify the family, genus, and species of a plant according to its morphological features. Automated deep learning-based computer vision algorithms are widely used for identifying plants and can help users narrow down the possibilities. However, numerous morphological similarities between and within species render correct classification difficult. In this paper, we tested custom convolution neural network (CNN) and vision transformer (ViT) based models using the PyTorch framework to classify plants. We used a large dataset of 88,000 images provided by the Royal Horticultural Society (RHS) and a smaller dataset of 16,000 images from the PlantClef 2015 dataset for classifying plants at the genus and species levels, respectively. Our results show that for classifying plants at the genus level, ViT models perform better than the CNN-based models ResNet50 and ResNet-RS-420 and other state-of-the-art CNN-based models suggested in previous studies on a similar dataset. The ViT model achieved a top accuracy of 83.3% for classifying plants at the genus level. For classifying plants at the species level, ViT models again perform better than ResNet50 and ResNet-RS-420, with a top accuracy of 92.5%. We show that the correct set of augmentation techniques plays an important role in classification success. In conclusion, these results could help end users, professionals and the general public alike in identifying plants quicker and with improved accuracy. Keywords: plant identification, CNN, image processing, vision transformer, classification
1171 The Functional Rehabilitation of Peri-Implant Tissue Defects: A Case Report
Authors: Özgür Öztürk, Cumhur Sipahi, Hande Yeşil
Abstract:
Implant-retained restorations commonly consist of a metal framework veneered with ceramic or composite facings. The increasing and expanding use of indirect resin composites in dentistry is a result of innovations in materials and processing techniques. Of special interest to the implant restorative field is that, compared to metal-ceramic suprastructures, composites transmit significantly lower peak vertical and transverse forces at the peri-implant level in implant-supported restorations. A 43-year-old male patient was referred to the department of prosthodontics for an implant-retained fixed prosthesis. Clinical and radiographic examination of the patient demonstrated the presence of an implant in the right mandibular first molar region. A considerable amount of marginal bone loss around the implant was detected in the radiographic examinations, combined with a remarkable peri-implant soft tissue deficiency. To minimize the chewing loads transmitted to the implant-bone interface, it was decided to fabricate an indirect composite resin veneered single metal crown over a screw-retained abutment. At the end of the treatment, the functional and aesthetic deficiencies were fully compensated. After a 6-month clinical and radiographic follow-up period, no additional pathologic invasion was detected at the implant-bone interface and the implant-retained restoration did not reveal any severe complication. Keywords: dental implant, fixed partial dentures, indirect composite resin, peri-implant defects
1170 Gas Phase Extraction: An Environmentally Sustainable and Effective Method for the Extraction and Recovery of Metal from Ores
Authors: Kolela J Nyembwe, Darlington C. Ashiegbu, Herman J. Potgieter
Abstract:
Over the past few decades, the demand for metals has increased significantly. This has led to a decline in high-grade ore over time and an increase in mineral complexity and matrix heterogeneity. In addition, there are rising concerns about greener processes and a sustainable environment. Due to these challenges, the mining and metal industry has been forced to develop new technologies that can economically process and recover metallic values from low-grade ores, from materials with a metal content locked up in industrially processed residues (tailings and slag), and from complex-matrix mineral deposits. Several methods have been developed to address these issues, among which are ionic liquids (ILs), heap leaching, and bioleaching. Recently, the gas phase extraction technique has been gaining interest because it eliminates many of the problems encountered in conventional mineral processing methods. The technique relies on the formation of volatile metal complexes, which can be removed from the residual solids by a carrier gas. The complexes can then be reduced using an appropriate method to obtain the metal and to regenerate and recover the organic extractant. Laboratory work on gas phase extraction has been conducted for the extraction and recovery of aluminium (Al), iron (Fe), copper (Cu), chromium (Cr), nickel (Ni), lead (Pb), and vanadium (V). In all cases the extraction was found to depend on temperature and mineral surface area. The process technology appears very promising, offers the feasibility of recirculation and organic reagent regeneration, and has the potential to deliver on all the promises of a "greener" process. Keywords: gas-phase extraction, hydrometallurgy, low-grade ore, sustainable environment
1169 Comparison of Central Light Reflex Width-to-Retinal Vessel Diameter Ratio between Glaucoma and Normal Eyes by Using Edge Detection Technique
Authors: P. Siriarchawatana, K. Leungchavaphongse, N. Covavisaruch, K. Rojananuangnit, P. Boondaeng, N. Panyayingyong
Abstract:
Glaucoma is a disease that causes visual loss in adults. Glaucoma causes damage to the optic nerve, and its overall pathophysiology is still not fully understood. Vasculopathy may be one of the possible causes of nerve damage. Photographic imaging of retinal vessels by fundus camera during eye examination may complement clinical management. This paper presents an innovation for measuring the central light reflex width-to-retinal vessel diameter ratio (CRR) from digital retinal photographs. Using our edge detection technique, CRRs from glaucoma and normal eyes were compared to examine differences and associations. CRRs were evaluated on fundus photographs of participants from Mettapracharak (Wat Raikhing) Hospital in Nakhon Pathom, Thailand. Fifty-five photographs from normal eyes and twenty-one photographs from glaucoma eyes were included; participants with hypertension were excluded. In each photograph, CRRs from four retinal vessels, including arteries and veins in the inferotemporal and superotemporal regions, were quantified using the edge detection technique. We found that the mean CRRs of all four retinal arteries and veins were significantly higher in persons with glaucoma than in those without (0.34 vs. 0.32, p < 0.05 for the inferotemporal vein; 0.33 vs. 0.30, p < 0.01 for the inferotemporal artery; 0.34 vs. 0.31, p < 0.01 for the superotemporal vein; and 0.33 vs. 0.30, p < 0.05 for the superotemporal artery). From these results, an increase in the CRRs of retinal vessels, as quantitatively measured from fundus photographs, could be associated with glaucoma. Keywords: glaucoma, retinal vessel, central light reflex, image processing, fundus photograph, edge detection
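The CRR measurement reduces, for each vessel crossing, to locating two widths on a 1D intensity profile taken perpendicular to the vessel: the full vessel (a dark run against the bright fundus background) and the central light reflex (a bright ridge inside that run). The threshold-based sketch below is a simplified stand-in for the paper's edge detection technique, with a synthetic profile and hypothetical intensity levels:

```python
def crr_from_profile(profile, background=200):
    """Estimate the central light reflex width-to-vessel diameter
    ratio (CRR) from a 1D intensity profile across a retinal vessel.

    The vessel is taken as the run of samples clearly darker than the
    background; the central reflex is the bright ridge inside it.
    Returns None when no vessel is found.
    """
    # Vessel edges: first and last sample well below background level
    dark = [i for i, v in enumerate(profile) if v < background * 0.8]
    if not dark:
        return None
    left, right = dark[0], dark[-1]
    vessel_width = right - left + 1
    inside = profile[left:right + 1]
    # Central reflex: samples notably brighter than the vessel's mean
    mean_inside = sum(inside) / len(inside)
    reflex_width = sum(1 for v in inside if v > mean_inside * 1.2)
    return reflex_width / vessel_width

# Synthetic profile: background ~200, vessel lumen ~80, reflex ridge ~150
profile = [200, 200, 90, 80, 80, 150, 150, 80, 80, 90, 200, 200]
print(round(crr_from_profile(profile), 2))  # → 0.25
```

In practice, the edge positions would come from gradient-based edge detection on the photograph rather than a fixed threshold, and the CRR for a vessel would be averaged over several profiles along its length.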