Search results for: sol-gel processing
301 Use of Cassava Waste and Its Energy Potential
Authors: I. Inuaeyen, L. Phil, O. Eni
Abstract:
Fossil fuels have been the main source of global energy for many decades, accounting for about 80% of global energy needs. This is beginning to change, however, with increasing concern about greenhouse gas emissions, which come mostly from fossil fuel combustion. Greenhouse gases such as carbon dioxide are responsible for stimulating climate change. As a result, there has been a shift towards cleaner, renewable sources of energy as a strategy for stemming greenhouse gas emissions into the atmosphere. The production of bio-products such as bio-fuel, bio-electricity, bio-chemicals, and bio-heat using biomass materials in accordance with the bio-refinery concept holds great potential for reducing the high dependence on fossil fuels and their resources. The bio-refinery concept promotes efficient utilisation of biomass material for the simultaneous production of a variety of products in order to minimize or eliminate waste materials. This will ultimately reduce greenhouse gas emissions into the environment. In Nigeria, cassava solid waste from cassava processing facilities has been identified as a vital feedstock for the bio-refinery process. Cassava is a staple food in Nigeria and one of the crops most widely cultivated by farmers across the country. As a result, there is an abundant supply of cassava waste in Nigeria. The aim of this study is to explore opportunities for converting cassava waste to a range of bio-products such as butanol, ethanol, electricity, heat, methanol, and furfural using a combination of biochemical, thermochemical and chemical conversion routes. The best process scenario will be identified through the evaluation of economic analysis, energy efficiency, life cycle analysis and social impact. The study will be carried out by developing a model representing different process options for cassava waste conversion to useful products. The model will be developed using Aspen Plus process simulation software.
Process economic analysis will be done using Aspen Icarus software. So far, a comprehensive survey of the literature has been conducted. This includes studies on the conversion of cassava solid waste to a variety of bio-products using different conversion techniques, cassava waste production in Nigeria, and modelling and simulation of waste conversion to useful products, among others. Also, the statistical distribution of cassava solid waste production in Nigeria has been established, and key publications with useful parameters for developing different cassava waste conversion processes have been identified. In future work, detailed modelling of the different process scenarios will be carried out and the models validated using data from the literature and demonstration plants. A techno-economic comparison of the various process scenarios will be carried out to identify the best scenario using process economics, life cycle analysis, energy efficiency and social impact as the performance indices.
Keywords: bio-refinery, cassava waste, energy, process modelling
Procedia PDF Downloads 373
300 Myanmar Consonants Recognition System Based on Lip Movements Using Active Contour Model
Authors: T. Thein, S. Kalyar Myo
Abstract:
Humans use visual information to understand speech content in noisy conditions or in situations where the audio signal is not available. The primary advantage of visual information is that it is not affected by acoustic noise or cross-talk among speakers. Using visual information from lip movements can improve the accuracy and robustness of automatic speech recognition. However, a major challenge for most automatic lip reading systems is finding a robust and efficient method for extracting the linguistically relevant speech information from a lip image sequence. This is a difficult task due to variation caused by different speakers, illumination, camera settings, and the inherently low luminance and chrominance contrast between the lip and non-lip regions. Several researchers have been developing methods to overcome these problems; one approach is lip reading. Moreover, it is well known that visual information about speech obtained through lip reading is very useful for human speech recognition. Lip reading is the technique of comprehensively understanding the underlying speech by processing the movement of the lips. A lip reading system is therefore one of several supportive technologies for hearing-impaired or elderly people, and it is an active research area. The need for lip reading systems is ever increasing for every language. This research aims to develop a visual teaching method system for hearing-impaired persons in Myanmar, showing how to pronounce words precisely by identifying the features of lip movement. The proposed research will develop a lip reading system for Myanmar consonants: the one-syllable consonants (င (Nga)၊ ည (Nya)၊ မ (Ma)၊ လ (La)၊ ၀ (Wa)၊ သ (Tha)၊ ဟ (Ha)၊ အ (Ah)) and the two-syllable consonants (က (Ka Gyi)၊ ခ (Kha Gway)၊ ဂ (Ga Nge)၊ ဃ (Ga Gyi)၊ စ (Sa Lone)၊ ဆ (Sa Lain)၊ ဇ (Za Gwe)၊ ဒ (Da Dway)၊ ဏ (Na Gyi)၊ န (Na Nge)၊ ပ (Pa Saug)၊ ဘ (Ba Gone)၊ ရ (Ya Gaug)၊ ဠ (La Gyi)).
In the proposed system, there are three subsystems. The first is the lip localization system, which localizes the lips in the digital input. The next is the feature extraction system, which extracts features of lip movement suitable for visual speech recognition. The final one is the classification system. In the proposed research, the Two-Dimensional Discrete Cosine Transform (2D-DCT) and Linear Discriminant Analysis (LDA) with the Active Contour Model (ACM) will be used for extracting lip movement features. A Support Vector Machine (SVM) classifier is used to find the class parameters and class numbers in the training and testing sets. Then, experiments will be carried out on the recognition accuracy of Myanmar consonants using only the visual information on lip movements, which is useful for visual speech of the Myanmar language. The results will show the effectiveness of lip movement recognition for Myanmar consonants. This system will help hearing-impaired persons use it as a language learning application. It can also be useful for normal-hearing persons in noisy environments or conditions, where they can find out what was said by other people without hearing their voices.
Keywords: feature extraction, lip reading, lip localization, Active Contour Model (ACM), Linear Discriminant Analysis (LDA), Support Vector Machine (SVM), Two Dimensional Discrete Cosine Transform (2D-DCT)
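The 2D-DCT feature extraction step named in this abstract can be sketched in a few lines. The following is a naive pure-Python reference implementation of the 2D DCT-II applied to a small lip-region block, not the authors' code; the block size and pixel values are illustrative. Low-frequency coefficients (the top-left corner of the output) are what lip reading systems typically keep as shape features.

```python
import math

def dct_2d(block):
    """Naive 2D DCT-II of a square image block (list of lists of floats).
    The top-left (low-frequency) coefficients are typically retained as
    lip-shape features for visual speech recognition."""
    n = len(block)

    def alpha(k):
        # Orthonormal scaling factors for DCT-II.
        return math.sqrt(1.0 / n) if k == 0 else math.sqrt(2.0 / n)

    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            s = 0.0
            for x in range(n):
                for y in range(n):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * n)))
            out[u][v] = alpha(u) * alpha(v) * s
    return out

# A flat (constant) block concentrates all energy in the DC coefficient.
coeffs = dct_2d([[10.0] * 4 for _ in range(4)])
print(round(coeffs[0][0], 3))  # DC term: (1/2)*(1/2)*sum = 40.0
```

In practice the transform is applied to the normalized lip region extracted by the active contour, and only the first few coefficients (e.g., a zigzag scan) are passed to LDA and the SVM.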
Procedia PDF Downloads 286
299 A Methodology Based on Image Processing and Deep Learning for Automatic Characterization of Graphene Oxide
Authors: Rafael do Amaral Teodoro, Leandro Augusto da Silva
Abstract:
Originating from graphite, graphene is a two-dimensional (2D) material that promises to revolutionize technology in many different areas, such as energy, telecommunications, civil construction, aviation, textiles, and medicine. This is possible because its structure, formed by carbon bonds, provides desirable optical, thermal, and mechanical characteristics that are of interest to multiple areas of the market. Thus, several research and development centers are studying different manufacturing methods and material applications of graphene, which are often compromised by the scarcity of agile and accurate methodologies to characterize the material, that is, to determine its composition, shape, size, and the number of layers and crystals. To engage in this search, this study proposes a computational methodology that applies deep learning to identify graphene oxide crystals in order to characterize samples by crystal size. To achieve this, a fully convolutional neural network called U-net has been trained to segment SEM graphene oxide images. The segmentation generated by the U-net is fine-tuned with a per-class standard deviation technique, which allows crystals to be distinguished with different labels through an object delimitation algorithm. As a next step, the position, area, perimeter, and lateral measures of each detected crystal are extracted from the images. This information generates a database with the dimensions of the crystals that compose the samples. Finally, graphs are automatically created showing the frequency distributions of crystal area and perimeter. This methodological process resulted in a high capacity for segmentation of graphene oxide crystals, with accuracy and F-score equal to 95% and 94%, respectively, over the test set.
Such performance demonstrates a high generalization capacity of the method in crystal segmentation, since its performance holds under significant changes in image extraction quality. The measurement of non-overlapping crystals presented an average error of 6% across the different measurement metrics, suggesting that the model provides high-accuracy measurement for non-overlapping segmentations. For overlapping crystals, however, a limitation of the model was identified. To overcome this limitation, it is important to ensure that the samples to be analyzed are properly prepared. This will minimize crystal overlap during SEM image acquisition and guarantee a lower error in the measurements without greater effort in data handling. All in all, the method developed is a substantial time saver with high measurement value, considering that it is capable of measuring hundreds of graphene oxide crystals in seconds, saving weeks of manual work.
Keywords: characterization, graphene oxide, nanomaterials, U-net, deep learning
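The post-segmentation measurement stage described in this abstract (delimiting each crystal in the segmented mask, then extracting its area) can be sketched with generic connected-component labeling. This is a pure-Python stand-in, not the authors' object delimitation algorithm, and the tiny binary mask is illustrative:

```python
def crystal_areas(mask):
    """Label 4-connected foreground regions in a binary mask (list of
    lists of 0/1) and return the pixel area of each region, sorted --
    a generic stand-in for the step that separates individual crystals
    and measures them."""
    rows, cols = len(mask), len(mask[0])
    seen = [[False] * cols for _ in range(rows)]
    areas = []
    for r in range(rows):
        for c in range(cols):
            if mask[r][c] and not seen[r][c]:
                # Flood-fill one region with an explicit stack.
                stack, area = [(r, c)], 0
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    area += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and mask[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                areas.append(area)
    return sorted(areas)

mask = [[1, 1, 0, 0],
        [1, 0, 0, 1],
        [0, 0, 1, 1]]
print(crystal_areas(mask))  # two crystals with areas [3, 3]
```

The resulting per-crystal areas (and, analogously, perimeters) are what feed the frequency-distribution graphs mentioned above.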
Procedia PDF Downloads 160
298 Structure Conduct and Performance of Rice Milling Industry in Sri Lanka
Authors: W. A. Nalaka Wijesooriya
Abstract:
The increasing paddy production, stabilization of domestic rice consumption, and the increasing dynamism of rice processing and domestic markets call for a rethinking of the general direction of the rice milling industry in Sri Lanka. The main purpose of the study was to explore levels of concentration in the rice milling industry in Polonnaruwa and Hambanthota, which are the country's major rice milling hubs. Concentration indices reveal that the rice milling industry in Polonnaruwa operates as a weak oligopsony and is highly competitive in Hambanthota. According to the actual quantity of paddy milled per day, 47% of mills process less than 8 Mt/day, 34% process 8-20 Mt/day, and the rest (19%) process more than 20 Mt/day. In Hambanthota, nearly 50% of the mills fall in the 8-20 Mt/day range. Lack of experience in the milling industry, poor knowledge of milling technology, lack of capital, and difficulty finding an output market are the major entry barriers to the industry. Major problems faced by all the rice millers are the lack of a uniform electricity supply and low-quality paddy. Many of the millers emphasized that the rice ceiling price is a constraint on producing quality rice. More than 80% of the millers in Polonnaruwa, which is the major parboiled rice producing area, have mechanical dryers. Nearly 22% of millers have modern machinery such as color sorters and water jet polishers. The major paddy purchasing method of large-scale millers in Polonnaruwa is through brokers; in Hambanthota, the major channel is millers purchasing directly from paddy farmers. Millers in both districts sell rice mainly in Colombo and its suburbs. Huge variation can be observed in the amount of pledge (paddy storage) loans. There is a strong relationship among storage ability, credit affordability, and the scale of operation of rice millers. The inter-annual price fluctuation ranged from 30% to 35%.
Analysis of market margins using a series of secondary data shows that the farmers' share of the rice consumer price is stable or slightly increasing in both districts. In Hambanthota, a greater share goes to the farmer. Only four mills have obtained Good Manufacturing Practices (GMP) certification from the Sri Lanka Standards Institution, and all of them are small-quantity rice exporters. Priority should be given to small and medium scale millers in the distribution of stored paddy from the Paddy Marketing Board (PMB) during the off season. The industry needs a proper rice grading system, and it is recommended to introduce a ceiling price based on rice graded according to the standards. Both husk and rice bran are underutilized. Encouraging investment in establishing a rice oil manufacturing plant in the Polonnaruwa area is highly recommended. The current taxation procedure needs to be restructured in order to ensure the sustainability of the industry.
Keywords: conduct, performance, structure (SCP), rice millers
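The concentration indices this study relies on can be illustrated with the two most common measures, the four-firm concentration ratio (CR4) and the Herfindahl-Hirschman Index (HHI). The market shares below are hypothetical, not the study's data; the functions are a generic sketch of how such indices are computed:

```python
def concentration(shares):
    """Market-concentration measures from firms' market shares (fractions
    summing to ~1): the four-firm concentration ratio CR4 and the
    Herfindahl-Hirschman Index (HHI, on the 0-10,000 scale)."""
    s = sorted(shares, reverse=True)
    cr4 = sum(s[:4])                        # share of the four largest firms
    hhi = sum((100 * x) ** 2 for x in s)    # sum of squared percent shares
    return cr4, hhi

# Hypothetical milling-capacity shares for ten mills in one district.
shares = [0.20, 0.15, 0.12, 0.10, 0.10, 0.09, 0.08, 0.06, 0.05, 0.05]
cr4, hhi = concentration(shares)
print(round(cr4, 2), round(hhi))  # 0.57 1200
```

An HHI around 1,200 with a modest CR4 would read as a weakly concentrated, broadly competitive market, the pattern the abstract reports for Hambanthota; a weak oligopsony would show higher values on the buying side.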
Procedia PDF Downloads 328
297 Lying in a Sender-Receiver Deception Game: Effects of Gender and Motivation to Deceive
Authors: Eitan Elaad, Yeela Gal-Gonen
Abstract:
Two studies examined gender differences in lying, one when the truth-telling bias prevailed and one when lying and distrust were encouraged. The first study used 156 participants from the community (78 pairs). First, participants completed the Narcissistic Personality Inventory, the Lie- and Truth-Ability Assessment Scale (LTAAS), and the Rational-Experiential Inventory. Then, they participated in a deception game in which they performed as senders and receivers of true and false communications. Their goal was to retain as many points as possible according to a payoff matrix that specified the reward they would gain for any possible outcome. Results indicated that males in the sender position lied more and were more successful tellers of lies and truths than females. On the other hand, males, as receivers, trusted less than females but were not better at detecting lies and truths. We explained the results by (a) males' high perceived lie-telling ability: we observed that confidence in telling lies guided participants to increase their use of lies, and males' lie-telling confidence corresponds to earlier accounts that showed a consistent association between high self-assessed lying ability, reports of frequent lying, and predictions of actual lying in experimental settings; (b) males' narcissistic features: earlier accounts described positive relations between narcissism and reported lying or unethical behavior in everyday life, predictions about the association between narcissism and frequent lying received support in the present study, and males scored higher than females on the narcissism scale; and (c) males' experiential thinking style: we observed that males scored higher than females on the experiential thinking style scale, we further hypothesized that the experiential thinking style predicts frequent lying in the deception game, and the results confirmed this hypothesis. The second study used one hundred volunteers (40 females) who underwent the same procedure.
However, the payoff matrix encouraged lying and distrust. Results showed that male participants lied more than females. We found no gender differences in trust, and males and females did not differ in their success at telling and detecting lies and truths. Participants also completed the LTAAS questionnaire. Males assessed their lie-telling ability higher than females did, but the ability assessment did not predict lying frequency. A final note: the present design is limited to low stakes. Participants knew that they were participating in a game and that they would not experience any consequences from their deception in the game. Therefore, we advise caution when applying the present results to lying under high stakes.
Keywords: gender, lying, detection of deception, information processing style, self-assessed lying ability
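The role of the payoff matrix in both studies can be made concrete with a small expected-value computation. The matrix values below are hypothetical, chosen only to show how rewarding believed lies shifts a sender's incentives; they are not the matrices used in either study:

```python
def sender_expected_payoff(p_lie, p_trust, payoff):
    """Expected payoff for the sender in a simple sender-receiver game.
    payoff[(message, judgment)] gives the sender's points, where the
    message is 'lie' or 'truth' and the judgment is 'believed' or
    'doubted'. Values are hypothetical, not the study's matrix."""
    ev = 0.0
    for msg, p_m in (("lie", p_lie), ("truth", 1 - p_lie)):
        for judge, p_j in (("believed", p_trust), ("doubted", 1 - p_trust)):
            ev += p_m * p_j * payoff[(msg, judge)]
    return ev

# A matrix that rewards believed lies most and punishes doubted lies.
payoff = {("lie", "believed"): 4, ("lie", "doubted"): -2,
          ("truth", "believed"): 2, ("truth", "doubted"): 0}
print(sender_expected_payoff(0.5, 0.5, payoff))  # 1.0
```

Raising the reward for a believed lie relative to a believed truth is the kind of manipulation that, as in the second study, encourages lying and distrust.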
Procedia PDF Downloads 148
296 Solid Particles Transport and Deposition Prediction in a Turbulent Impinging Jet Using the Lattice Boltzmann Method and a Probabilistic Model on GPU
Authors: Ali Abdul Kadhim, Fue Lien
Abstract:
Solid particle distribution on an impingement surface has been simulated utilizing a graphics processing unit (GPU). An in-house computational fluid dynamics (CFD) code has been developed to investigate a 3D turbulent impinging jet using the lattice Boltzmann method (LBM) in conjunction with large eddy simulation (LES) and the multiple relaxation time (MRT) model. This paper proposes an improvement to the LBM-cellular automata (LBM-CA) probabilistic method. In the current model, the fluid flow utilizes the D3Q19 lattice, while the particle model employs the D3Q27 lattice. The particle numbers are defined at the same regular LBM nodes, and the transport of particles from one node to its neighboring nodes is determined in accordance with the particle bulk density and velocity, considering all the external forces. Previous models distribute particles at each time step without considering the local velocity and the number of particles at each node. The present model overcomes the deficiencies of the previous LBM-CA models and can therefore better capture the dynamic interaction between particles and the surrounding turbulent flow field. Despite the increasing popularity of the LBM-MRT-CA model in simulating complex multiphase fluid flows, this approach is still expensive in terms of the memory size and computational time required to perform 3D simulations. To improve the throughput of each simulation, a single GeForce GTX TITAN X GPU is used in the present work. The CUDA parallel programming platform and the cuRAND library are utilized to form an efficient LBM-CA algorithm. The methodology was first validated against a benchmark test case involving particle deposition on a square cylinder confined in a duct. The flow was unsteady and laminar at Re=200 (Re is the Reynolds number), and simulations were conducted for different Stokes numbers. The present LBM solutions agree well with other results available in the open literature.
The GPU code was then used to simulate particle transport and deposition in a turbulent impinging jet at Re=10,000. The simulations were conducted for L/D = 2, 4, and 6, where L is the nozzle-to-surface distance and D is the jet diameter. The effect of changing the Stokes number on the particle deposition profile was studied at different L/D ratios. For comparative studies, another in-house serial CPU code was also developed, coupling LBM with the classical Lagrangian particle dispersion model. Agreement between the results obtained with the LBM-CA and LBM-Lagrangian models and the experimental data is generally good. The present GPU approach achieves a speedup ratio of about 350 against the serial code running on a single CPU.
Keywords: CUDA, GPU parallel programming, LES, lattice Boltzmann method, MRT, multi-phase flow, probabilistic model
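The Stokes number that parameterizes the deposition studies above is a simple ratio of the particle response time to the jet flow time scale. A one-line evaluation, with illustrative particle and jet properties that are not taken from the paper:

```python
def stokes_number(rho_p, d_p, u_jet, mu_f, d_jet):
    """Stokes number St = rho_p * d_p**2 * U / (18 * mu * D), the ratio
    of the particle relaxation time to the jet time scale D/U.
    St << 1: particles follow the flow; St >> 1: particles deviate from
    streamlines and deposit by inertia near the stagnation region."""
    return rho_p * d_p ** 2 * u_jet / (18.0 * mu_f * d_jet)

# Illustrative values (not from the paper): 10-micron glass beads in air,
# 15 m/s jet from a 20 mm nozzle.
st = stokes_number(rho_p=2500.0, d_p=10e-6, u_jet=15.0,
                   mu_f=1.8e-5, d_jet=0.02)
print(round(st, 3))  # 0.579
```

Sweeping d_p (or u_jet) through a range of Stokes numbers is how the deposition profiles at each L/D ratio would be compared.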
Procedia PDF Downloads 207
295 Spatial Distribution of Land Use in the North Canal of Beijing Subsidiary Center and Its Impact on the Water Quality
Authors: Alisa Salimova, Jiane Zuo, Christopher Homer
Abstract:
The objective of this study is to analyse land use in the North Canal riparian zone using remote sensing analysis in ArcGIS with 30 cloudless Landsat 8 open-source satellite images from May to August of 2013 and 2017. Land cover, urban construction, heat island effect, vegetation cover, and water system change were chosen as the main parameters and further analysed to evaluate their impact on North Canal water quality. The methodology involved the following steps. Firstly, 30 cloudless satellite images were collected from the Landsat TM open-source image database. The visual interpretation method was used to determine the different land types in the catchment area; after primary and secondary classification, 28 land cover types in total were classified. The visual interpretation method was used with the help of ArcGIS for grassland monitoring, and US Landsat TM remote sensing image processing with a resolution of 30 meters was used to analyse the vegetation cover. The water system was analysed using the visual interpretation method on the GIS software platform to decode the target area, water use, and coverage. Monthly measurements of water temperature, pH, BOD, COD, ammonia nitrogen, total nitrogen, and total phosphorus in 2013 and 2017 were taken from three locations on the North Canal in Tongzhou district. These parameters were used for water quality index calculation and compared to land-use changes. The results of this research were promising. The vegetation coverage of the North Canal riparian zone in 2017 was higher than in 2013. The surface brightness temperature value was positively correlated with the vegetation coverage density and the distance from the surface of the water bodies. This indicates that vegetation coverage and the water system have a great effect on temperature regulation and the urban heat island effect. Surface temperature in 2017 was higher than in 2013, indicating a warming effect.
The water volume in the river area has been partially reduced, indicating a potential water scarcity risk in the North Canal watershed. Between 2013 and 2017, urban residential, industrial, and mining storage land areas increased significantly compared to other land use types; however, water quality improved significantly in 2017 compared to 2013. This observation indicates that the Tongzhou Water Restoration Plan showed positive results and that the water management of Tongzhou district had improved.
Keywords: North Canal, land use, riparian vegetation, river ecology, remote sensing
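The water quality index computed from the monthly measurements above can be illustrated with the common weighted-arithmetic form, WQI = Σ(wᵢqᵢ)/Σwᵢ, where each parameter gets a 0-100 sub-index qᵢ and a weight wᵢ. The parameters, sub-index scores, and weights below are illustrative; the abstract does not specify which WQI scheme was used:

```python
def weighted_wqi(measurements):
    """Weighted arithmetic water quality index: sum(w_i * q_i) / sum(w_i),
    where q_i is a 0-100 sub-index for each parameter and w_i its weight.
    Parameters, sub-indices, and weights here are illustrative only."""
    num = sum(w * q for w, q in measurements.values())
    den = sum(w for w, _ in measurements.values())
    return num / den

# parameter: (weight, sub-index score derived from the raw measurement)
samples = {"pH": (0.2, 85.0), "BOD": (0.3, 60.0), "COD": (0.2, 70.0),
           "NH3-N": (0.15, 55.0), "TP": (0.15, 65.0)}
print(round(weighted_wqi(samples), 1))  # 67.0
```

Comparing such an index for the 2013 and 2017 monitoring campaigns at each of the three sampling locations is how the reported water quality improvement would be quantified.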
Procedia PDF Downloads 111
294 Syntheses of Anionic Poly(urethanes) with Imidazolium, Phosphonium, and Ammonium as Counter-cations and Their Evaluation for CO2 Separation
Authors: Franciele L. Bernard, Felipe Dalla Vecchia, Barbara B. Polesso, Jose A. Donato, Marcus Seferin, Rosane Ligabue, Jailton F. do Nascimento, Sandra Einloft
Abstract:
The increasing concentration of carbon dioxide in the atmosphere, related to fossil fuel processing and utilization, is contributing considerably to global warming. Carbon capture and storage (CCS) technologies appear as one of the key technologies to reduce CO2 emissions, mitigating the effects of climate change. Absorption using amine solutions as solvents has been extensively studied and used in industry for decades. However, solvent degradation and equipment corrosion are two of the main problems in this process. Poly(ionic liquid)s (PILs) are considered promising materials for CCS technology, potentially more environmentally friendly and less energy-demanding than traditional materials. PILs possess a unique combination of ionic liquid (IL) features, such as affinity for CO2, thermal and chemical stability, and adjustable properties, coupled with the intrinsic properties of the polymer. This study investigated new polyurethane-based PILs with different ionic liquid cations and their potential for CO2 capture. The PILs were synthesized by the addition of a diisocyanate to a difunctional polyol, followed by an exchange reaction with the ionic liquids 1-butyl-3-methylimidazolium chloride (BMIM Cl), tetrabutylammonium bromide (TBAB), and tetrabutylphosphonium bromide (TBPB). These materials were characterized by Fourier transform infrared spectroscopy (FTIR), proton nuclear magnetic resonance (1H-NMR), atomic force microscopy (AFM), tensile strength analysis, field emission scanning electron microscopy (FESEM), thermogravimetric analysis (TGA), and differential scanning calorimetry (DSC). The CO2 sorption capacity of the PILs was gravimetrically assessed in a magnetic suspension balance (MSB). It was found that the ionic liquid cation influences both the compounds' properties and the CO2 sorption. The best result for CO2 sorption (123 mg CO2/g at 30 bar) was obtained for the PIL PUPT-TBA.
The higher CO2 sorption in PUPT-TBA is probably linked to the fact that the tetraalkylammonium cation, having a higher positive charge density, can interact more strongly with CO2, while the imidazolium charge is delocalized. The comparison of the CO2 sorption values of PUPT-TBA with those of the different ionic liquids showed that this material has a greater capacity for capturing CO2 than the ILs, even at higher temperatures. This behavior highlights the importance of this study, as poly(urethane)-based PILs are cheap and versatile materials.
Keywords: capture, CO2, ionic liquids, ionic poly(urethane)
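The gravimetric capacity figure quoted above (mg CO2 per g sorbent) is a simple ratio of the mass gain recorded by the magnetic suspension balance to the sorbent mass, and is often also reported in mmol/g. A sketch of the conversion, with an assumed sample mass (the paper's sample masses are not given here):

```python
def sorption_capacity(mass_gain_mg, sorbent_mass_g):
    """CO2 uptake from a gravimetric (magnetic suspension balance) run:
    capacity in mg CO2 per g sorbent, and in mmol/g using the CO2 molar
    mass of 44.01 g/mol."""
    mg_per_g = mass_gain_mg / sorbent_mass_g
    mmol_per_g = mg_per_g / 44.01
    return mg_per_g, mmol_per_g

# Assumed 0.5 g sample gaining 61.5 mg, reproducing the reported best
# case of 123 mg CO2/g for PUPT-TBA at 30 bar.
mg_g, mmol_g = sorption_capacity(mass_gain_mg=61.5, sorbent_mass_g=0.5)
print(mg_g, round(mmol_g, 2))  # 123.0 2.79
```

In practice the buoyancy of the sample and sample holder must also be corrected for at pressure, which the MSB workflow handles with a blank measurement.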
Procedia PDF Downloads 234
293 Effects of Drying and Extraction Techniques on the Profile of Volatile Compounds in Banana Pseudostem
Authors: Pantea Salehizadeh, Martin P. Bucknall, Robert Driscoll, Jayashree Arcot, George Srzednicki
Abstract:
Banana is one of the most important crops, produced in large quantities in tropical and sub-tropical countries. Of the total plant material grown, approximately 40% is considered waste and left in the field to decay. This practice allows fungal diseases such as Sigatoka leaf spot to develop, limiting plant growth and spreading spores in the air that can cause respiratory problems in the surrounding population. The pseudostem is considered a waste residue of production (60 to 80 tonnes/ha/year), although it is a good source of dietary fiber and volatile organic compounds (VOCs). Strategies to process banana pseudostem into palatable, nutritious, and marketable food materials could provide significant social and economic benefits. Extraction of VOCs with desirable odors from dried and fresh pseudostem could improve the smell of products from the confectionery and bakery industries. Incorporation of banana pseudostem flour into bakery products could provide cost savings and improve nutritional value. The aim of this study was to determine the effects of drying methods and of different banana species on the profile of volatile aroma compounds in dried banana pseudostem. The banana species analyzed were Musa acuminata and Musa balbisiana. Fresh banana pseudostem samples were processed by either freeze-drying (FD) or heat pump drying (HPD). The extraction of VOCs was performed at ambient temperature using vacuum distillation, and the resulting, mostly aqueous, distillates were analyzed using headspace solid phase microextraction (SPME) gas chromatography-mass spectrometry (GC-MS). Optimal SPME adsorption conditions were 50 °C for 60 min using a Supelco 65 μm PDMS/DVB StableFlex fiber. Compounds were identified by comparison of their electron impact mass spectra with those from the Wiley 9 / NIST 2011 combined mass spectral library. The results showed that the two species have notably different VOC profiles.
Both species contained VOCs that are established in the literature as having pleasant, appetizing aromas. These included l-menthone, D-limonene, trans-linalool oxide, 1-nonanol, cis-6-nonen-1-ol, 2,6-nonadien-1-ol, 4-methylbenzenemethanol, 3-methyl-1-butanol, hexanal, 2-methyl-1-propanol, and 2-methyl-2-butanol. The results show that banana pseudostem VOCs are better preserved by FD than by HPD. This study is still in progress and should lead to the optimization of processing techniques that would promote the utilization of banana pseudostem in the food industry.
Keywords: heat pump drying, freeze drying, SPME, vacuum distillation, VOC analysis
Procedia PDF Downloads 334
292 A Tutorial on Model Predictive Control for Spacecraft Maneuvering Problem with Theory, Experimentation and Applications
Authors: O. B. Iskender, K. V. Ling, V. Dubanchet, L. Simonini
Abstract:
This paper discusses the recent advances and future prospects of spacecraft position and attitude control using Model Predictive Control (MPC). First, the challenges of space missions are summarized, in particular taking into account the errors, uncertainties, and constraints imposed by the mission, the spacecraft, and the onboard processing capabilities. The summary of space mission errors and uncertainties is provided in categories: initial condition errors, unmodeled disturbances, sensor errors, and actuator errors. The constraints are classified into two categories: physical and geometric constraints. Last, real-time implementation capability is discussed with regard to the required computation time and the impact of sensor and actuator errors, based on Hardware-In-The-Loop (HIL) experiments. The rationales behind the scenarios are also presented in the scope of space applications such as formation flying, attitude control, rendezvous and docking, rover steering, and precision landing. The objectives of these missions are explained, and the generic constrained MPC problem formulations are summarized. Three key design elements used in MPC design are discussed: the prediction model, the constraint formulation, and the objective cost function. The prediction models can be linear time-invariant or time-varying depending on the geometry of the orbit, whether it is circular or elliptic. The constraints can be given as linear inequalities for input or output constraints, which can be written in the same form. Moreover, recent convexification techniques for non-convex geometrical constraints (i.e., plume impingement, Field-of-View (FOV)) are presented in detail. Next, different objectives are provided in a mathematical framework and explained accordingly. Thirdly, because MPC implementation relies on finding, in real time, the solution to constrained optimization problems, computational aspects are also examined.
In particular, high-speed implementation capabilities and HIL challenges are presented with respect to representative space avionics. This covers an analysis of future space processors as well as the requirements of sensors and actuators on the HIL experiment outputs. The HIL tests are investigated for kinematic and dynamic tests, where robotic arms and floating robots are used, respectively. Eventually, the proposed algorithms and experimental setups are introduced and compared with the authors' previous work and future plans. The paper concludes with a conjecture that the MPC paradigm is a promising framework at the crossroads of space applications that could be further advanced based on the challenges mentioned throughout the paper and the unaddressed gaps.
Keywords: convex optimization, model predictive control, rendezvous and docking, spacecraft autonomy
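The receding-horizon idea at the core of MPC (predict over a finite horizon with a model, minimize a cost subject to input constraints, apply only the first input, then re-solve) can be shown on a toy scalar system. Real spacecraft MPC solves a constrained QP online; the brute-force enumeration over a discrete input set below is only a sketch of the same loop, and the system, costs, and candidate inputs are illustrative:

```python
from itertools import product

def mpc_step(x0, horizon, candidates, q=1.0, r=0.1):
    """One receding-horizon step for the scalar system x[k+1] = x[k] + u[k]:
    enumerate open-loop input sequences from a bounded candidate set
    (a crude input constraint), score them with the quadratic cost
    sum(q*x^2 + r*u^2), and return the first input of the best sequence."""
    best_u, best_cost = None, float("inf")
    for seq in product(candidates, repeat=horizon):
        x, cost = x0, 0.0
        for u in seq:
            x = x + u                      # prediction model
            cost += q * x * x + r * u * u  # stage cost
        if cost < best_cost:
            best_cost, best_u = cost, seq[0]
    return best_u

# Closed loop: apply the first input, "measure" the new state, re-solve.
x, traj = 3.0, []
for _ in range(5):
    u = mpc_step(x, horizon=3, candidates=(-1.0, -0.5, 0.0, 0.5, 1.0))
    x = x + u
    traj.append(round(x, 2))
print(traj)  # [2.0, 1.0, 0.0, 0.0, 0.0]
```

The same structure carries over to the spacecraft case: the scalar model becomes the (possibly time-varying) orbital or attitude dynamics, the candidate set becomes thruster/actuator constraints, and the enumeration becomes a convex solver.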
Procedia PDF Downloads 110
291 Breast Cancer Metastasis Detection and Localization through Transfer-Learning Convolutional Neural Network Classification Based on Convolutional Denoising Autoencoder Stack
Authors: Varun Agarwal
Abstract:
Introduction: With the advent of personalized medicine, histopathological review of whole slide images (WSIs) for cancer diagnosis presents an exceedingly time-consuming, complex task. Specifically, detecting metastatic regions in WSIs of sentinel lymph node biopsies necessitates a fully scanned, holistic evaluation of the image. Thus, digital pathology, low-level image manipulation algorithms, and machine learning provide significant advancements in improving the efficiency and accuracy of WSI analysis. Using Camelyon16 data, this paper proposes a deep learning pipeline to automate and improve breast cancer metastasis localization and WSI classification. Methodology: The model broadly follows five stages: region of interest detection, WSI partitioning into image tiles, convolutional neural network (CNN) image-segment classification, probabilistic mapping of tumor localizations, and further processing for whole-WSI classification. Transfer learning is applied to the task, with the implementation of Inception-ResNetV2, an effective CNN classifier that uses residual connections to enhance feature representation, adding convolved outputs in the inception unit to the proceeding input data. Moreover, in order to augment the performance of the transfer learning CNN, a stack of convolutional denoising autoencoders (CDAE) is applied to produce embeddings that enrich the image representation. Through a saliency-detection algorithm, visual training segments are generated, which are then processed through a denoising autoencoder, consisting primarily of convolutional, leaky rectified linear unit, and batch normalization layers, and subsequently a contrast-normalization function. A spatial pyramid pooling algorithm extracts the key features from the processed image, creating a viable feature map for the CNN that minimizes spatial resolution and noise.
Results and Conclusion: The simplified and effective architecture of the fine-tuned transfer-learning Inception-ResNetV2 network, enhanced with the CDAE stack, yields state-of-the-art performance in WSI classification and tumor localization, achieving AUC scores of 0.947 and 0.753, respectively. The convolutional feature retention and compilation with the residual connections to inception units, synergized with the input denoising algorithm, enable the pipeline to serve as an effective, efficient tool in the histopathological review of WSIs.
Keywords: breast cancer, convolutional neural networks, metastasis mapping, whole slide images
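The tile-level aggregation step described above can be sketched in a few lines. This is a minimal illustration only, not the authors' implementation: the tile probabilities, the grid width, and the max-pooling slide score are all assumptions made for the example.

```python
def tile_probability_map(tile_probs, n_cols):
    """Arrange per-tile tumor probabilities (output of the CNN
    classifier) into a 2D heatmap over the slide's tile grid."""
    return [tile_probs[i:i + n_cols] for i in range(0, len(tile_probs), n_cols)]

def slide_level_score(heatmap):
    """A simple slide-level classification score: the maximum tile
    probability (one common aggregation heuristic, assumed here)."""
    return max(max(row) for row in heatmap)

# Toy example: six tile predictions arranged as a 2x3 grid
probs = [0.05, 0.10, 0.92, 0.08, 0.30, 0.15]
heatmap = tile_probability_map(probs, 3)
score = slide_level_score(heatmap)  # 0.92
```

In practice the heatmap would be post-processed (thresholding, connected components) before extracting localization coordinates.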
Procedia PDF Downloads 130
290 Design of a Low-Cost, Portable, Sensor Device for Longitudinal, At-Home Analysis of Gait and Balance
Authors: Claudia Norambuena, Myissa Weiss, Maria Ruiz Maya, Matthew Straley, Elijah Hammond, Benjamin Chesebrough, David Grow
Abstract:
The purpose of this project is to develop a low-cost, portable sensor device that can be used at home for long-term analysis of gait and balance abnormalities. One area of particular concern involves the asymmetries in movement and balance that can accompany certain types of injuries and/or the associated devices used in the repair and rehabilitation process (e.g. the use of splints and casts), which can often increase the chances of falls and additional injuries. This device has the capacity to monitor a patient during the rehabilitation process after injury or operation, increasing the patient’s access to healthcare while decreasing the number of visits to the patient’s clinician. The sensor device may thereby improve the quality of the patient’s care, particularly in rural areas where access to the clinician may be limited, while simultaneously decreasing the overall cost associated with the patient’s care. The device consists of nine interconnected accelerometer/gyroscope/compass chips (9-DOF IMU, Adafruit, New York, NY). The sensors attach to and are used to determine the orientation and acceleration of the patient’s lower abdomen, C7 vertebra (lower neck), L1 vertebra (middle back), anterior side of each thigh and tibia, and dorsal side of each foot. In addition, pressure sensors are embedded in shoe inserts, with one sensor (ESS301, Tekscan, Boston, MA) beneath the heel and three sensors (Interlink 402, Interlink Electronics, Westlake Village, CA) beneath the metatarsal bones of each foot. These sensors measure the distribution of the weight applied to each foot as well as stride duration. A small microcontroller (Arduino Mega, Arduino, Ivrea, Italy) is used to collect data from these sensors into a CSV file. MATLAB is then used to analyze the data and output the hip, knee, ankle, and trunk angles projected on the sagittal plane. The open-source program Processing is then used to generate an animation of the patient’s gait. 
The accuracy of the sensors was validated through comparison to goniometric measurements (±2° error). The sensor device was also shown to have sufficient sensitivity to observe various gait abnormalities. Several patients used the sensor device, and the data collected from each faithfully represented that patient’s movements. Further, the sensors were able to detect gait abnormalities caused by the addition of a small amount of weight (4.5 - 9.1 kg) to one side of the patient. The user-friendly interface and portability of the sensor device will help to construct a bridge between patients and their clinicians with fewer necessary inpatient visits.
Keywords: biomedical sensing, gait analysis, outpatient, rehabilitation
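As a rough illustration of the post-processing described above (segment orientations to sagittal joint angles, insole forces to a load split between feet), a simplified sketch follows. The segment-pitch subtraction and the sensor readings are assumptions for illustration, not the authors' MATLAB code.

```python
def sagittal_joint_angle(proximal_pitch_deg, distal_pitch_deg):
    """Approximate joint flexion in the sagittal plane as the difference
    between the pitch angles of two adjacent body segments, each
    estimated from its IMU orientation (a simplification of full
    3D orientation fusion)."""
    return proximal_pitch_deg - distal_pitch_deg

def stance_load_split(left_insole_forces, right_insole_forces):
    """Fraction of total load borne by each foot, from the four
    pressure sensors per insole (one heel + three metatarsal)."""
    left = sum(left_insole_forces)
    right = sum(right_insole_forces)
    total = left + right
    return left / total, right / total

# Toy readings: thigh pitched 35 deg, tibia 20 deg -> 15 deg of flexion
knee = sagittal_joint_angle(35.0, 20.0)
left_frac, right_frac = stance_load_split([300, 80, 90, 110], [250, 70, 75, 95])
```

A marked, persistent imbalance between `left_frac` and `right_frac` would be one signal of the gait asymmetries the device is meant to flag.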
Procedia PDF Downloads 288
289 Methods Used to Achieve Airtightness of 0.07 Ach@50Pa for an Industrial Building
Authors: G. Wimmers
Abstract:
The University of Northern British Columbia needed a new laboratory building for the Master of Engineering in Integrated Wood Design Program and its new Civil Engineering Program. Since the University is committed to reducing its environmental footprint, and because the Master of Engineering Program is actively involved in research on energy-efficient buildings, the decision was made to request the energy efficiency of the Passive House Standard in the Request for Proposals. The building is located in Prince George in Northern British Columbia, a city at the northern edge of climate zone 6 with average lows between -8 and -10.5 °C in the winter months. The footprint of the building is 30 m x 30 m with a height of about 10 m. The building consists of a large open space for the shop and laboratory, with a small portion of the floorplan being two floors, allowing for a mezzanine level with a few offices as well as mechanical and storage rooms. The total net floor area is 1042 m² and the building’s gross volume is 9686 m³. One key requirement of the Passive House Standard is an airtight envelope with an airtightness of < 0.6 ach@50Pa. In the past, we have seen that this requirement can be challenging to reach for industrial buildings. When testing for airtightness, it is important to test in both directions, pressurization and depressurization, since in reality airflow through all leakages of the building happens simultaneously in both directions. A specific detail or situation, such as overlapping but unsealed membranes, might be airtight in one direction due to the valve effect, but open up when tested in the opposite direction. In this specific project, the advantage was the overall very compact envelope and the good volume-to-envelope-area ratio. 
The building had to be very airtight, and the details for the window and door installations, all transitions from walls to roof and floor, the connections of the prefabricated wall panels, and all penetrations had to be carefully developed to allow for maximum airtightness. The biggest challenges were the specific components of this industrial building: the large bay door for semi-trucks and the dust extraction system for the wood processing machinery. The testing was carried out in accordance with EN 13829 (method A) as specified in the International Passive House Standard, and the volume calculation also followed the Passive House guideline, resulting in a net volume of 7383 m³, excluding all walls, floors and suspended ceiling volumes. This paper will explore the details and strategies used to achieve an airtightness of 0.07 ach@50Pa, to the best of our knowledge the lowest value achieved in North America so far under the test protocol of the International Passive House Standard, and discuss the crucial steps throughout the project phases and the most challenging details.
Keywords: air changes, airtightness, envelope design, industrial building, passive house
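The headline figure can be sanity-checked from the reported net volume: the air change rate at 50 Pa is the measured leakage flow divided by the net internal volume, and the result is averaged over the pressurization and depressurization runs. A small sketch follows; only the 7383 m³ net volume is from the paper, the flow values are illustrative.

```python
def ach50(leakage_flow_m3h, net_volume_m3):
    """Air change rate at 50 Pa pressure difference: leakage airflow
    (m3/h) divided by the building's net internal volume (m3)."""
    return leakage_flow_m3h / net_volume_m3

def combined_ach50(flow_pressurization, flow_depressurization, net_volume_m3):
    """Mean of the pressurization and depressurization results,
    since both directions must be tested."""
    mean_flow = (flow_pressurization + flow_depressurization) / 2.0
    return ach50(mean_flow, net_volume_m3)

# With the building's net volume of 7383 m3, a result of 0.07 ach@50Pa
# implies a leakage flow of only about 0.07 * 7383 ~ 517 m3/h.
implied_flow = 0.07 * 7383
```

For comparison, the Passive House limit of 0.6 ach@50Pa on the same volume would allow nearly nine times that leakage flow.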
Procedia PDF Downloads 148
288 Requirement Engineering for Intrusion Detection Systems in Wireless Sensor Networks
Authors: Afnan Al-Romi, Iman Al-Momani
Abstract:
The urge to apply Software Engineering (SE) processes is both of vital importance and a key feature in critical, complex, large-scale systems, for example, safety systems, security service systems, and network systems. Inevitably, associated with this are risks, such as system vulnerabilities and security threats. The probability of these risks increases in unsecured environments, such as wireless networks in general and Wireless Sensor Networks (WSNs) in particular. A WSN is a self-organizing network of sensor nodes connected by wireless links. WSNs consist of hundreds to thousands of low-power, low-cost, multi-function sensor nodes that are small in size and communicate over short ranges. The distribution of sensor nodes in an open, possibly unattended environment, in addition to resource constraints in terms of processing, storage and power, places stringent limitations on such networks, for instance on lifetime (i.e. period of operation) and security. The importance of WSN applications, which are found in many military and civilian domains, has drawn the attention of many researchers to WSN security. To address this important issue and overcome one of the main challenges of WSNs, researchers have developed security solution systems, namely software-based network Intrusion Detection Systems (IDSs). However, it has been witnessed that these IDSs are neither secure enough nor accurate enough to detect all malicious attack behaviours. Thus, the problem is the lack of coverage of all malicious behaviours in proposed IDSs, leading to unpleasant results, such as delays in the detection process, low detection accuracy, or even worse, detection failure, as illustrated in previous studies. Another problem is the energy consumption that IDSs cause in WSNs. In other words, not all requirements are implemented and then traced. 
Moreover, not all requirements are identified or satisfied, as some requirements have been compromised. The drawbacks in current IDSs are due to researchers and developers not following structured software development processes when developing an IDS. Consequently, this results in inadequate requirement management, process, validation, and verification of requirements quality. Unfortunately, the WSN and SE research communities have been mostly impermeable to each other. Integrating SE and WSNs is a real subject that will expand as technology evolves and spreads into industrial applications. Therefore, this paper will study the importance of Requirement Engineering when developing IDSs. It will also study a set of existing IDSs and illustrate the absence of Requirement Engineering and its effect. Conclusions are then drawn regarding the application of requirement engineering so that systems deliver the required functionalities, with respect to operational constraints, within an acceptable level of performance, accuracy and reliability.
Keywords: software engineering, requirement engineering, Intrusion Detection System, IDS, Wireless Sensor Networks, WSN
Procedia PDF Downloads 322
287 An Observation Approach of Reading Order for Single Column and Two Column Layout Template
Authors: In-Tsang Lin, Chiching Wei
Abstract:
Reading order is an important task in many digitization scenarios involving the preservation of the logical structure of a document. A survey of the literature finds that state-of-the-art algorithms fail to determine an accurate reading order in portable document format (PDF) files with rich formats and diverse layout arrangements. In recent years, most studies on the analysis of reading order have targeted the specific problem of associating layout components with logical labels, while less attention has been paid to the problem of detecting the reading-order relationships between logical components, such as cross-references. Over 3 years of development, the company Foxit has refined its layout recognition (LR) engine, demonstrated in revision 20601, in pursuit of an accurate reading order. The bounding box of each paragraph can be obtained correctly by the Foxit LR engine, but the resulting reading order is not always correct for single-column and two-column layout formats, due to the table issue, the formula issue, and the multiple mini separated bounding box and footer issues. Thus, an algorithm was developed to improve the accuracy of the reading order based on the Foxit LR structure. In this paper, a creative observation method (here called the MESH method) is provided to open a new avenue in reading-order research. Two important parameters are introduced: the number of bounding boxes to the right of the present bounding box (NRight), and the number of bounding boxes under the present bounding box (Nunder). The normalized x-value (x/whole width), the normalized y-value (y/whole height), and the x- and y-positions of each bounding box were also taken into consideration. 
Initial experimental results for the single-column layout format demonstrate a 19.33% absolute improvement in reading-order accuracy over 7 PDF files (150 pages in total) using our proposed method based on the LR structure, compared with the baseline method using the LR structure in revision 20601, whose reading-order accuracy is 72%. For the two-column layout format, preliminary results demonstrate a 44.44% absolute improvement in reading-order accuracy over 2 PDF files (18 pages in total) compared with the same baseline, whose reading-order accuracy is 0%. So far, the footer issue and part of the multiple mini separated bounding box issue can be solved using the MESH method. However, three issues remain unsolved: the table issue, the formula issue, and randomly scattered mini separated bounding boxes. The detection of the table position and the recognition of the table structure are out of the scope of this paper and require separate research. In the future, the chosen tasks are how to detect the table position on the page and how to extract the content of the table.
Keywords: document processing, reading order, observation method, layout recognition
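The two MESH parameters can be computed directly from the paragraph bounding boxes. The sketch below is one interpretation of the description above, with assumed overlap rules ("to the right" requires some vertical overlap, "under" requires some horizontal overlap); it is not the Foxit implementation.

```python
def mesh_parameters(boxes):
    """For each box (x0, y0, x1, y1), with y increasing downwards,
    count NRight (boxes strictly to its right that overlap it
    vertically) and Nunder (boxes strictly below it that overlap it
    horizontally) -- the two parameters of the MESH observation
    method, under assumed overlap semantics."""
    params = []
    for i, (x0, y0, x1, y1) in enumerate(boxes):
        n_right = sum(1 for j, (a0, b0, a1, b1) in enumerate(boxes)
                      if j != i and a0 >= x1 and not (b1 <= y0 or b0 >= y1))
        n_under = sum(1 for j, (a0, b0, a1, b1) in enumerate(boxes)
                      if j != i and b0 >= y1 and not (a1 <= x0 or a0 >= x1))
        params.append((n_right, n_under))
    return params

# Three paragraphs: top-left, top-right, and one below the left column
boxes = [(0, 0, 40, 10), (60, 0, 100, 10), (0, 20, 40, 30)]
params = mesh_parameters(boxes)  # [(1, 1), (0, 0), (0, 0)]
```

A reading-order heuristic could then visit boxes in decreasing order of (NRight, Nunder), so the top-left box of a two-column page is read first.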
Procedia PDF Downloads 181
286 Exploring the Spatial Characteristics of Mortality Map: A Statistical Area Perspective
Authors: Jung-Hong Hong, Jing-Cen Yang, Cai-Yu Ou
Abstract:
The analysis of geographic inequality heavily relies on the use of location-enabled statistical data and quantitative measures to present the spatial patterns of the selected phenomena and analyze their differences. To protect the privacy of individual instances and link them to administrative units, point-based datasets are spatially aggregated into area-based statistical datasets, where only the overall status for the selected levels of spatial units is used for decision making. The partition of the spatial units thus has a dominant influence on the analyzed results, a phenomenon well known as the Modifiable Areal Unit Problem (MAUP). A new spatial reference framework, the Taiwan Geographical Statistical Classification (TGSC), was recently introduced in Taiwan based on spatial partition principles that seek homogeneity in the number of population and households. Compared to the traditional township units, TGSC provides additional levels of spatial units with finer granularity for presenting spatial phenomena and enables domain experts to select an appropriate dissemination level for publishing statistical data. This paper compares the results of respectively using TGSC and township units on mortality data and examines the spatial characteristics of their outcomes. For the mortality data of Taitung County between January 1st, 2008 and December 31st, 2010, the all-cause age-standardized death rate (ASDR) ranges from 571 to 1757 per 100,000 persons at the township level, whereas the 2nd dissemination areas (TGSC) show greater variation, ranging from 0 to 2222 per 100,000. The finer granularity of the TGSC spatial units clearly provides better outcomes for identifying and evaluating geographic inequality and can be further analyzed with statistical measures from other perspectives (e.g., population, area, environment). 
The management and analysis of the statistical data referring to the TGSC in this research is strongly supported by the use of Geographic Information System (GIS) technology. An integrated workflow was developed that consists of the processing of death certificates, the geocoding of street addresses, the quality assurance of geocoded results, the automatic calculation of statistical measures, the standardized encoding of measures, and the geo-visualization of statistical outcomes. This paper also introduces a set of auxiliary measures from a geographic distribution perspective to further examine the hidden spatial characteristics of mortality data and justify the analyzed results. With a common statistical area framework like TGSC, the preliminary results demonstrate promising potential for developing a web-based statistical service that can effectively access domain statistical data and present the analyzed outcomes in meaningful ways, helping to avoid ill-informed decision making.
Keywords: mortality map, spatial patterns, statistical area, variation
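For reference, the direct age standardization behind the ASDR values quoted above can be sketched as follows; the age groups and counts are toy numbers, not Taitung data.

```python
def age_standardized_death_rate(deaths, populations, std_populations):
    """Directly age-standardized death rate per 100,000: age-specific
    death rates weighted by a standard population's age structure."""
    weighted = sum((d / p) * s
                   for d, p, s in zip(deaths, populations, std_populations))
    return 100_000 * weighted / sum(std_populations)

# Two age groups: observed deaths, local populations, standard population
asdr = age_standardized_death_rate([10, 90], [1000, 1000], [500, 500])
# (0.01 * 500 + 0.09 * 500) / 1000 * 100,000 = 5000 per 100,000
```

Standardization removes the effect of differing age structures, which is what makes rates comparable across township and TGSC dissemination units.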
Procedia PDF Downloads 258
285 The Internet of Things in Luxury Hotels: Generating Customized Multisensory Guest Experiences
Authors: Jean-Eric Pelet, Erhard Lick, Basma Taieb
Abstract:
Purpose: This research bridges the gap between sensory marketing and the use of the Internet of Things (IoT) in luxury hotels. We investigated how stimulating guests’ senses through IoT devices influenced their emotions, affective experiences, eudaimonism (well-being), and, ultimately, guest behavior. We also examined potential moderating effects of gender. Design/methodology/approach: We adopted a mixed-method approach, combining qualitative research (semi-structured interviews) to explore hotel managers’ perspectives on the potential use of IoT in luxury hotels with quantitative research (a survey of hotel guests; n=357). Findings: The results showed that while the senses of smell, hearing, and sight had an impact on guests’ emotions, the senses of touch, hearing, and sight impacted guests’ affective experiences. The senses of smell and taste influenced guests’ eudaimonism. The sense of smell had a greater effect on eudaimonism and behavioral intentions among women than among men. Originality: IoT can be applied to create customized multi-sensory hotel experiences. For example, hotels may offer unique and diverse ambiences in their rooms and suites to improve guest experiences. Research limitations/implications: This study concentrated on luxury hotels located in Europe. Further research may explore the generalizability of the findings (e.g., to other cultures, or a comparison between high-end and low-end hotels). Practical implications: Context awareness and hyper-personalization, through intensive and continuous data collection (hyper-connectivity) and real-time processing, are key trends in the service industry. Therefore, big data plays a crucial role in the collection of information, since it allows hoteliers to retrieve, analyze, and visualize data to provide personalized services in real time. Together with their guests, hotels may co-create customized sensory experiences. 
For instance, if the hotel knows the guest’s music preferences, age, and gender from social media, and considers the temperature and size (standard, suite, etc.) of the guest room, this may determine the playlist of the concierge tablet made available in the guest room. Furthermore, one may record the guest’s voice and use it for voice command purposes once the guest arrives at the hotel. Based on our finding that the sense of smell has a greater impact on eudaimonism and behavioral intentions among women than men, hotels may deploy subtler scents with lower intensities, or even different scents, for female guests in comparison to male guests.
Keywords: affective experience, emotional value, eudaimonism, hospitality industry, Internet of Things, sensory marketing
Procedia PDF Downloads 57
284 Novel Framework for MIMO-Enhanced Robust Selection of Critical Control Factors in Auto Plastic Injection Moulding Quality Optimization
Authors: Seyed Esmail Seyedi Bariran, Khairul Salleh Mohamed Sahari
Abstract:
Apparent quality defects such as warpage, shrinkage, weld lines, etc. are an unavoidable phenomenon in the mass production of auto plastic appearance parts (APAP). These frequently occurring manufacturing defects must be addressed concurrently so as to achieve a final product with acceptable quality standards. Determining the significant control factors that simultaneously affect multiple quality characteristics can significantly improve the optimization results by eliminating the deviating effect of so-called ineffective outliers. Hence, a robust quantitative approach needs to be developed upon which major control factors and their levels can be effectively determined to help improve the reliability of the optimal processing parameter design. The primary objective of the current study was therefore to develop a systematic methodology for the selection of significant control factors (SCF) relevant to multiple quality optimization of auto plastic appearance parts. An auto bumper was used as the specimen, its quality and production characteristics being most representative of the APAP group. A preliminary failure modes and effects analysis (FMEA) was conducted to nominate a database of pseudo-significant control factors prior to the optimization phase. CAE simulation with Moldflow analysis was then implemented to manipulate four rampant plastic injection quality defects of concern to the APAP group: warpage deflection, volumetric shrinkage, sink marks and weld lines. Furthermore, a step-backward elimination searching method (SESME) was developed for systematic pre-optimization selection of SCF based on hierarchical orthogonal array design and priority-based one-way analysis of variance (ANOVA). The development of the robust parameter design in the second phase was based on the DOE module of Minitab v.16 statistical software. 
Based on the one-way ANOVA F-test (F(0.05; 2, 14)) results, it was concluded that for warpage deflection, material mixture percentage was the most significant control factor, yielding a 58.34% contribution, while for the other three quality defects, melt temperature was the most significant control factor, with contributions of 25.32%, 84.25%, and 34.57% for sink mark, shrinkage and weld line strength control, respectively. The results on the least significant control factors revealed injection fill time as the least significant factor for both warpage and sink mark, with respective contributions of 1.69% and 6.12%. On the other hand, for the shrinkage and weld line defects, the least significant control factors were holding pressure and mold temperature, with overall contributions of 0.23% and 4.05% accordingly.
Keywords: plastic injection moulding, quality optimization, FMEA, ANOVA, SESME, APAP
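The percent contributions quoted above are conventionally computed from the ANOVA sums of squares. A minimal sketch follows; the sums of squares in the example are invented for illustration, not taken from the study.

```python
def percent_contribution(ss_factors, ss_error=0.0):
    """Percent contribution of each control factor in an ANOVA table:
    its sum of squares as a share of the total sum of squares
    (factor terms plus the error term)."""
    total = sum(ss_factors) + ss_error
    return [100.0 * ss / total for ss in ss_factors]

# Hypothetical sums of squares for three factors plus error
contrib = percent_contribution([120.5, 52.3, 18.9], ss_error=14.9)
```

The factor with the largest share (here the first) would be ranked most significant, mirroring how material mixture percentage dominates warpage in the study.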
Procedia PDF Downloads 348
283 Computational Fluid Dynamics (CFD) Calculations of the Wind Turbine with an Adjustable Working Surface
Authors: Zdzislaw Kaminski, Zbigniew Czyz, Krzysztof Skiba
Abstract:
This paper discusses the CFD simulation of the flow around the rotor of a vertical axis wind turbine. Numerical simulation, unlike experiments, enables us to validate project assumptions at the design stage and avoid the costly preparation of a model or prototype for a bench test. CFD simulation enables us to compare the characteristics of the aerodynamic forces acting on the rotor working surfaces and to define operational parameters like the torque or power generated by a turbine assembly. This research focused on a rotor with blades capable of modifying their working surfaces, i.e. the surfaces absorbing wind kinetic energy. The operation of this rotor is based on adjusting the angular aperture α of the top and bottom parts of the blades mounted on an axis. If this angular aperture α increases, the working surface which absorbs wind kinetic energy also increases. The operation of such turbines is characterized by parameters like the angular aperture of the blades, power, torque, and speed for a given wind speed. These parameters affect the efficiency of the assemblies. The distribution of forces acting on the working surfaces in our turbine changes with the angular velocity of the rotor. Moreover, the resultant of the forces acting on the advancing and retreating blades should be as high as possible. This paper is part of research aimed at improving the efficiency of the rotor assembly. Therefore, using simulation, the courses of the above parameters were studied over three full rotations, individually for each of the blades, for three angular apertures of the blade working surfaces (30°, 60°, 90°), at three wind speeds (4 m/s, 6 m/s, 8 m/s) and rotor speeds ranging from 100 to 500 rpm. Characteristics of the torque coefficient and power as functions of time were then created for each blade separately and for the entire rotor, and, accordingly, the correlation between turbine rotor power and wind speed for varied values of rotor rotational speed. 
By processing these data, the correlation between the power of the turbine rotor and its rotational speed was specified for each angular aperture of the working surfaces. Finally, the optimal values, i.e. those giving the highest output power for given wind speeds, were read off. The research yields the basic characteristics of turbine rotor power as a function of wind speed for the three angular apertures of the blades. Given the nature of rotor operation, growth in turbine output can be expected as the angular aperture of the blades increases. The controlled adjustment of angle α enables a smooth adjustment of the power generated by the turbine rotor. If the wind speed is significant, this type of adjustment enables the output power to remain at the same level (by reducing angle α) with no risk of damaging the structure. This work has been financed by the Polish Ministry of Science and Higher Education.
Keywords: computational fluid dynamics, numerical analysis, renewable energy, wind turbine
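Two of the dimensionless quantities behind such rotor characteristics, the tip speed ratio and the power coefficient, can be sketched as follows. The rotor radius and operating point in the example are assumptions for illustration, not values from the study.

```python
import math

def tip_speed_ratio(rpm, rotor_radius_m, wind_speed_m_s):
    """Ratio of blade tip speed to the free-stream wind speed."""
    omega = rpm * 2 * math.pi / 60  # rad/s
    return omega * rotor_radius_m / wind_speed_m_s

def power_coefficient(power_w, wind_speed_m_s, swept_area_m2, air_density=1.225):
    """Fraction of the available wind power captured by the rotor:
    P / (0.5 * rho * A * v^3)."""
    available = 0.5 * air_density * swept_area_m2 * wind_speed_m_s ** 3
    return power_w / available

# Hypothetical operating point: 300 rpm, 0.5 m radius, 6 m/s wind
tsr = tip_speed_ratio(300, 0.5, 6.0)
cp = power_coefficient(100.0, 8.0, 1.0)  # 100 W at 8 m/s over 1 m2
```

Plotting cp against tsr for each angular aperture α is one standard way to present the optimization results the abstract describes.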
Procedia PDF Downloads 217
282 The Temporal Implications of Spatial Prospects
Authors: Zhuo Job Chen, Kevin Nute
Abstract:
The work reported examines potential linkages between spatial and temporal prospects, and more specifically, between variations in the spatial depth and foreground obstruction of window views and observers’ sense of connection to the future. It was found that external views from indoor spaces were strongly associated with a sense of the future, that partially obstructing such a view with foreground objects significantly reduced its association with the future, and that replacing it with a pictorial representation of the same scene (with no actual depth) removed most of its temporal association. A lesser change in the spatial depth of the view, however, had no apparent effect on association with the future. While the role of spatial depth has still to be confirmed, the results suggest that spatial prospects directly affect temporal ones. The word “prospect” typifies the overlapping of the spatial and the temporal in most human languages. It originated in classical times as a purely spatial term, but in the 16th century took on the additional temporal implication of an imagined view ahead, of the future. The psychological notion of prospection, then, has its distant origins in a spatial analogue. While it is not yet proven that space directly structures our processing of time at a physiological level, it is generally agreed that it commonly does so conceptually. The mental representation of possible futures has been a central part of human survival as a species (Boyer, 2008; Suddendorf & Corballis, 2007). A sense of the future seems critical not only practically but also psychologically. It has been suggested, for example, that lack of a positive image of the future may be an important contributing cause of depression (Beck, 1974; Seligman, 2016). Most people in the developed world now spend more than 90% of their lives indoors. 
So any direct link between external views and temporal prospects could have important implications for both human well-being and building design. We found that the ability to see what lies in front of us spatially was strongly associated with a sense of what lies ahead temporally. Partial obstruction of a view was found to significantly reduce that sense of connection to the future. Replacing a view with a flat pictorial representation of the same scene removed almost all of its connection with the future, but changing the spatial depth of a real view appeared to have no significant effect. While foreground obstructions were found to reduce subjects’ sense of connection to the future, they increased their sense of refuge and security. Consistent with Prospect and Refuge theory, an ideal environment, then, would seem to be one in which we can “see without being seen” (Lorenz, 1952), specifically one that conceals us frontally from others without restricting our own view. It is suggested that these optimal conditions might be translated architecturally as screens whose apertures are large enough for a building occupant to see through unobstructed from close by, but small enough to conceal them from the view of someone looking from a distance outside.
Keywords: foreground obstructions, prospection, spatial depth, window views
Procedia PDF Downloads 123
281 Life Cycle Datasets for the Ornamental Stone Sector
Authors: Isabella Bianco, Gian Andrea Blengini
Abstract:
The environmental impact of ornamental stones (such as marbles and granites) is widely debated. Starting from the industrial revolution, continuous improvements in machinery led to greater exploitation of this natural resource and to more international interaction between markets. As a consequence, the environmental impact of the extraction and processing of stone has increased. Nevertheless, compared with other building materials, ornamental stones are generally more durable, natural, and recyclable. From the scientific point of view, studies on stone life cycle sustainability have been carried out, but these are often partial or not very significant because of the high percentage of approximations and assumptions in the calculations. This is due to the lack, in life cycle databases (e.g. Ecoinvent, Thinkstep, and ELCD), of datasets on the specific technologies employed in the stone production chain. For example, the databases contain no information about diamond wires, chains or explosives, materials commonly used in quarries and transformation plants. The project presented in this paper aims to populate the life cycle databases with data on specific stone processes. To this end, the methodology follows the standardized approach of Life Cycle Assessment (LCA), according to the requirements of UNI EN ISO 14040-14044 and the International Reference Life Cycle Data System (ILCD) Handbook guidelines of the European Commission. The study analyses the processes of the entire production chain (from-cradle-to-gate system boundaries), including the extraction of benches, the cutting of blocks into slabs/tiles and the surface finishing. Primary data have been collected in Italian quarries and transformation plants which use technologies representative of the current state of the art. Since the technologies vary according to the hardness of the stone, the case studies include both soft stones (marbles) and hard stones (gneiss). 
In particular, data on energy, materials and emissions were collected in the marble basins of Carrara and in the Beola and Serizzo basins located in the province of Verbano Cusio Ossola. The data were then elaborated with appropriate software to build a life cycle model. The model was built with free parameters that allow easy adaptation to specific productions. Through this model, the study aims to boost the direct participation of stone companies and encourage the use of the LCA tool to assess and improve the environmental sustainability of the stone sector. At the same time, the production of accurate Life Cycle Inventory data aims at making ILCD-compliant datasets of the most significant processes and technologies of the ornamental stone sector available to researchers and stone experts.
Keywords: life cycle assessment, LCA datasets, ornamental stone, stone environmental impact
Procedia PDF Downloads 233
280 In-Situ Formation of Particle Reinforced Aluminium Matrix Composites by Laser Powder Bed Fusion of Fe₂O₃/AlSi12 Powder Mixture Using Consecutive Laser Melting+Remelting Strategy
Authors: Qimin Shi, Yi Sun, Constantinus Politis, Shoufeng Yang
Abstract:
In-situ preparation of particle-reinforced aluminium matrix composites (PRAMCs) by laser powder bed fusion (LPBF) additive manufacturing is a promising strategy for strengthening traditional Al-based alloys. The laser-driven thermite reaction can be a practical mechanism for in-situ synthesis of PRAMCs. However, introducing oxygen through the addition of Fe₂O₃ makes the powder mixture highly prone to forming porosity and Al₂O₃ films during LPBF, bringing challenges to producing dense Al-based materials. Therefore, this work develops a processing strategy combining consecutive high-energy laser melting scanning and low-energy laser remelting scanning to prepare PRAMCs from a Fe₂O₃/AlSi12 powder mixture. The powder mixture consists of 5 wt% Fe₂O₃ and the remainder AlSi12 powder; the addition of 5 wt% Fe₂O₃ aims to achieve balanced strength and ductility. A high relative density (98.2 ± 0.55%) was successfully obtained by optimizing the laser melting surface energy density (Emelting) and the laser remelting surface energy density (Eremelting) to Emelting = 35 J/mm² and Eremelting = 5 J/mm². Results further reveal the necessity of increasing Emelting to improve the liquid metal’s spreading/wetting by breaking up the Al₂O₃ films surrounding the molten pools; however, the high-energy laser melting produced considerable porosity, including hydrogen-, oxygen- and keyhole-induced pores. The subsequent low-energy laser remelting could close the resulting internal pores, backfill open gaps and smoothen solidified surfaces. As a result, the material was densified by repeating laser melting and laser remelting layer by layer. Despite the two laser scans per layer, the microstructure still shows fine cellular Si networks with Al grains inside (grain size of about 370 nm) and in-situ nano-precipitates (Al₂O₃, Si, and Al-Fe(-Si) intermetallics). 
Finally, the fine microstructure, nano-structured dispersion strengthening, and high level of densification strengthened the in-situ PRAMCs, which reached a yield strength of 426 ± 4 MPa and a tensile strength of 473 ± 6 MPa. Furthermore, the results are expected to provide valuable information for processing other powder mixtures with severe porosity/oxide-film formation potential, given the demonstrated contribution of the laser melting/remelting strategy to densifying the material and obtaining good mechanical properties during LPBF.
Keywords: densification, laser powder bed fusion, metal matrix composites, microstructures, mechanical properties
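The surface energy densities quoted above follow the common LPBF relation E = P / (v · h) (laser power over scan speed times hatch spacing). The sketch below uses hypothetical power, speed and hatch values chosen only to reproduce the two quoted energy densities; the actual process parameters of the study are not stated in the abstract.

```python
# Surface energy density E = P / (v * h), a common LPBF process metric.
# Parameter values below are hypothetical, picked to illustrate the
# high-energy melting scan vs. the low-energy remelting scan quoted above.

def surface_energy_density(power_w, speed_mm_s, hatch_mm):
    """Return laser surface energy density in J/mm^2."""
    return power_w / (speed_mm_s * hatch_mm)

e_melt = surface_energy_density(power_w=350, speed_mm_s=100, hatch_mm=0.1)    # 35 J/mm^2
e_remelt = surface_energy_density(power_w=100, speed_mm_s=200, hatch_mm=0.1)  # 5 J/mm^2
```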
Procedia PDF Downloads 155
279 Features of Composites Application in Shipbuilding
Authors: Valerii Levshakov, Olga Fedorova
Abstract:
Specific features of ship structures made from composites, i.e. simultaneous shaping of material and structure, large sizes, complicated outlines and tapered thickness, have given the leading role to technology integrating test results from materials science, design and structural analysis. The main composite shipbuilding procedures are contact molding, vacuum molding and winding. Currently, the most in-demand composite shipbuilding technology is the manufacture of structures from fiberglass and multilayer hybrid composites by vacuum molding. This technology enables the manufacture of products with improved strength properties (in comparison with contact molding), reduces production time and weight, and secures better environmental conditions in the production area. Mechanized winding is applied to the manufacture of parts shaped as bodies of revolution, i.e. sections of ship, oil and other pipelines, hulls of deep-submergence vehicles, bottles, reservoirs and other structures. This procedure involves the processing of reinforcing fiberglass, carbon and polyaramide fibers. Polyaramide fibers have a tensile strength of 5000 MPa and an elastic modulus of about 130 GPa; their rigidity is comparable with that of fiberglass, but their weight is 30% less. This makes it possible to manufacture a variety of structures, including ones using both fiberglass and organic composites. Organic composites are widely used for the manufacture of parts with size and weight limitations, although the high price of polyaramide fiber restricts their use. A promising area of winding technology development is the manufacture of carbon fiber shafts and couplings for ships. JSC ‘Shipbuilding & Shiprepair Technology Center’ (JSC SSTC) has developed a technology of dielectric uncouplers for cryogenic lines cooled by gaseous or liquid cryogenic agents (helium, nitrogen, etc.) 
for the temperature range 4.2-300 K and pressures up to 30 MPa; these are used for separating components of electrophysical equipment held at different electrical potentials. The dielectric uncouplers were developed, manufactured and tested in accordance with the International Thermonuclear Experimental Reactor (ITER) technical specification. Spiral uncouplers withstand an operating voltage of 30 kV, the direct-flow uncoupler 4 kV. Using a spiral channel instead of a rectilinear one increases the breakdown potential and reduces the size of the uncouplers. 95 uncouplers were successfully manufactured and tested. At present, Russian manufacturers of ship composite structures have begun adopting the technology of automated prepreg laminating; this technology enables the manufacture of structures with improved operational characteristics.
Keywords: fiberglass, infusion, polymeric composites, winding
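The weight advantage of polyaramide fibre claimed above translates into specific strength (strength per unit density). A quick, hedged check: the densities and the fiberglass strength below are assumed round numbers, not figures from the abstract; only the 5000 MPa aramid strength and the "30% less weight" claim come from the text.

```python
# Specific strength comparison motivated by the abstract's figures.
# Densities and the fiberglass strength are assumptions for illustration.

glass_density = 2.5                    # g/cm^3, assumed for fiberglass
aramid_density = glass_density * 0.7   # "30% less weight" per the abstract

glass_strength = 3400.0                # MPa, assumed typical glass fibre
aramid_strength = 5000.0               # MPa, from the abstract

glass_specific = glass_strength / glass_density
aramid_specific = aramid_strength / aramid_density
ratio = aramid_specific / glass_specific  # aramid gives roughly 2.1x specific strength
```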
Procedia PDF Downloads 238
278 Comparison of Microstructure, Mechanical Properties and Residual Stresses in Laser and Electron Beam Welded Ti–5Al–2.5Sn Titanium Alloy
Authors: M. N. Baig, F. N. Khan, M. Junaid
Abstract:
Titanium alloys are widely employed in aerospace, medical, chemical, and marine applications. These alloys offer many advantages, such as low specific weight, high strength-to-weight ratio, excellent corrosion resistance, high melting point and good fatigue behavior. These attractive properties make titanium alloys unique, and therefore they require special attention in all areas of processing, especially welding. In this work, 1.6 mm thick sheets of Ti-5Al-2.5Sn, an alpha titanium (α-Ti) alloy, were welded using electron beam welding (EBW) and laser beam welding (LBW) processes to achieve a full-penetration bead-on-plate (BoP) configuration. The weldments were studied using polarized optical microscopy, SEM, EDS and XRD. The microhardness distribution across the weld zone and the smooth and notch tensile strengths of the weldments were also recorded. Residual stresses, obtained with the hole-drilling strain measurement (HDSM) method, and deformation patterns of the weldments were measured for the purpose of comparing the two welding processes. The fusion zone widths of the EBW and LBW weldments were found to be approximately equal, owing to the fairly similar high power densities of the two processes. Relatively lower oxide content and consequently higher joint quality were achieved in the EBW weldment compared with LBW, due to the vacuum environment and the absence of any shielding gas. However, an increase in heat-affected zone width and a partial α′-martensitic transformation in the fusion zone of the EBW weldment were observed because of the lower cooling rates associated with EBW compared with LBW. The microstructure in the fusion zone of the EBW weldment comprised both acicular α and α′ martensite within the prior β grains, whereas a complete α′-martensitic transformation was observed within the fusion zone of the LBW weldment. The hardness of the fusion zone in the EBW weldment was found to be lower than that of the LBW weldment due to the observed microstructural differences. 
The notch tensile specimen of the LBW weldment exhibited higher load capacity, ductility, and absorbed energy compared with the EBW specimen due to the presence of the high-strength α′ martensitic phase. It was observed that the sheet deformation and deformation angle in the EBW weldment were greater than in the LBW weldment due to relatively more heat retention in EBW, which led to greater thermal strains and hence larger deformations and deformation angles. The lowest residual stresses were found in the LBW weldments and were tensile in nature, owing to the high power density and higher cooling rates associated with the LBW process. The EBW weldment exhibited the highest compressive residual stresses, due to which its service life is expected to improve.
Keywords: laser and electron beam welding, microstructure and mechanical properties, residual stress and distortions, titanium alloys
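The HDSM measurement mentioned above recovers residual stresses from strains relieved by drilling a small hole at the centre of a three-gauge rosette. A minimal sketch of the classic uniform-stress relation follows; the strain readings and the calibration constants A and B are hypothetical (and taken positive here for simplicity), whereas in practice their values and signs come from ASTM E837 tables for the rosette geometry and material.

```python
import math

# Principal residual stresses from relieved strains at 0, 45 and 90 degrees
# (classic hole-drilling rosette relation; constants A, B are calibration
# coefficients, hypothetical here).

def hole_drilling_stresses(eps1, eps2, eps3, A, B):
    """Return (sigma_max, sigma_min) in the units implied by A and B."""
    p = (eps1 + eps3) / (4.0 * A)                                   # hydrostatic part
    q = math.sqrt((eps3 - eps1) ** 2
                  + (eps1 + eps3 - 2.0 * eps2) ** 2) / (4.0 * B)    # shear part
    return p + q, p - q

# Illustrative numbers only (microstrain readings and constants are made up);
# negative results indicate compressive residual stress:
s_max, s_min = hole_drilling_stresses(-120e-6, -80e-6, -40e-6,
                                      A=0.10e-6, B=0.20e-6)
```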
Procedia PDF Downloads 226
277 Viability Analysis of a Centralized Hydrogen Generation Plant for Use in Oil Refining Industry
Authors: C. Fúnez Guerra, B. Nieto Calderón, M. Jaén Caparrós, L. Reyes-Bozo, A. Godoy-Faúndez, E. Vyhmeister
Abstract:
The global energy system is experiencing a change of scenery. Unstable energy markets and an increasing focus on climate change and sustainable development are forcing businesses to pursue new solutions in order to ensure future economic growth. This has led to interest in using hydrogen as an energy carrier in transportation and industrial applications. As an energy carrier, hydrogen is accessible and holds a high gravimetric energy density. Abundant in hydrocarbons, hydrogen can play an important role in the shift towards low-emission fossil value chains. By combining hydrogen production by natural gas reforming with carbon capture and storage, the overall CO2 emissions are significantly reduced. In addition, the flexibility of hydrogen as an energy store makes it applicable as a stabilizer in the renewable energy mix. The recent development of hydrogen fuel cells is also raising expectations for a hydrogen-powered transportation sector. Hydrogen value chains exist to a large extent in industry today. Global hydrogen consumption was approximately 50 million tonnes (7.2 EJ) in 2013, with refineries, ammonia and methanol production, and metal processing being the main consumers. Natural gas reforming produced 48% of this hydrogen, but without carbon capture and storage (CCS). The total emissions from this production reached 500 million tonnes of CO2; hence, alternative production methods with lower emissions will be necessary in future value chains. Hydrogen from electrolysis has been used for a wide range of industrial chemical reactions for many years. Possibly the earliest use was for the production of ammonia-based fertilisers by Norsk Hydro, with a test reactor set up in Notodden, Norway, in 1927. This application also claims one of the world’s largest electrolyser installations, at Sable Chemicals in Zimbabwe, whose array of 28 electrolysers draws 80 MW and produces around 21,000 Nm³/h of hydrogen. 
These electrolysers can compete when cheap electricity is available and natural gas for steam reforming is relatively expensive. Because electrolysis of water produces oxygen as a by-product, a system of autothermal reforming (ATR) utilizing this oxygen has been analyzed. Replacing the air separation unit with electrolysers supplies the required amount of oxygen to the ATR as well as additional hydrogen. The aim of this paper is to evaluate the technical and economic potential of large-scale production of hydrogen for the oil refining industry. A sensitivity analysis of parameters such as investment costs, plant operating hours, electricity price and the sale prices of hydrogen and oxygen is performed.
Keywords: autothermal reforming, electrolyser, hydrogen, natural gas, steam methane reforming
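A back-of-envelope check of the electrolyser figures quoted above (80 MW for about 21,000 Nm³/h of hydrogen) also illustrates the kind of one-parameter sensitivity analysis the paper describes. The hydrogen density and the electricity price range below are assumptions, not values from the study.

```python
# Specific energy consumption implied by the quoted electrolyser figures,
# plus a simple sensitivity of electricity-only hydrogen cost to price.

H2_DENSITY = 0.0899  # kg per Nm^3 at normal conditions (assumed)

power_mw = 80.0
h2_nm3_per_h = 21_000.0

spec_kwh_per_nm3 = power_mw * 1000.0 / h2_nm3_per_h   # ~3.8 kWh/Nm^3
spec_kwh_per_kg = spec_kwh_per_nm3 / H2_DENSITY       # ~42 kWh/kg

# Electricity-only production cost over an assumed price range (EUR/kWh):
for price in (0.02, 0.05, 0.08):
    print(f"{price:.2f} EUR/kWh -> {spec_kwh_per_kg * price:.1f} EUR/kg H2")
```

At cheap electricity the electricity-only cost is around 1 EUR/kg, which is why such plants compete only where power is inexpensive.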
Procedia PDF Downloads 211
276 Obtaining Composite Cotton Fabric by Cyclodextrin Grafting
Authors: U. K. Sahin, N. Erdumlu, C. Saricam, I. Gocek, M. H. Arslan, H. Acikgoz-Tufan, B. Kalav
Abstract:
Finishing is an important part of fabric processing, through which a wide range of features is imparted to greige or colored fabrics for various end-uses. In particular, by imparting nano-scaled particles to the fabric structure, composite fabrics, a kind of composite material, can be obtained. Composite materials, often shortened to composites, are engineered or naturally occurring materials made from two or more component materials with significantly different physical, mechanical or chemical characteristics that remain separate and distinct at the macroscopic or microscopic scale within the end product. Finishing, as one of the fundamental methods for obtaining composite fabrics with many functionalities, was therefore used in the current study. However, regardless of the finishing materials applied, the effective life of the finished product is short, since the durability of finishes on the material is limited. Any increase in the durability of these finishes would extend the useful life of the textiles and result in more satisfied users. Therefore, in this study, to achieve higher durability of the finishing materials fixed on the fabrics, nano-scaled hollow-structured cyclodextrins were chemically imparted by grafting to the structure of conventional cotton fabrics so as to be fixed permanently. In this way, a processed and functionalized base fabric was obtained that has the potential to be treated in subsequent processes with many different finishing agents and nanomaterials. Henceforth, this fabric can be used as a multi-functional fabric owing to the ability of cyclodextrins to capture molecules/particles via physical/chemical means. 
In this study, scoured, rinsed and bleached plain-weave woven 100% cotton fabrics were used, because cotton textiles are among the most demanded products in the textile market. Cotton fabric samples were immersed in treating baths containing β-cyclodextrin and 1,2,3,4-butanetetracarboxylic acid; to reduce the curing temperature, the catalyst sodium hypophosphite monohydrate was used. All impregnated fabric samples were pre-dried, and the grafting reaction was performed in the dry state. The treated and cured fabric samples were rinsed with warm distilled water and dried. The samples were dried for 4 h and weighed before and after finishing and rinsing. The stability and durability of β-cyclodextrins on the fabric surface against external factors such as washing, as well as the strength of the functionalized fabric in terms of tensile and tear strength, were tested. The presence and homogeneity of distribution of β-cyclodextrins on the fabric surface were characterized.
Keywords: cotton fabric, cyclodextrin, improved durability, multifunctional composite textile
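The weigh-before/weigh-after step described above is typically used to compute the add-on (grafting yield) of the finish. A minimal sketch follows; the sample weights are hypothetical examples, not measurements from the study.

```python
# Add-on percentage from fabric weights before and after finishing/rinsing.
# Sample weights below are made-up illustrations.

def add_on_percent(weight_before_g, weight_after_g):
    """Percent weight gain of the fabric due to the grafted finish."""
    return (weight_after_g - weight_before_g) / weight_before_g * 100.0

# e.g. a 10.00 g swatch weighing 10.65 g after curing and rinsing -> 6.5% add-on
add_on = add_on_percent(10.00, 10.65)
```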
Procedia PDF Downloads 295
275 Sensitivity and Uncertainty Analysis of Hydrocarbon-In-Place in Sandstone Reservoir Modeling: A Case Study
Authors: Nejoud Alostad, Anup Bora, Prashant Dhote
Abstract:
Kuwait Oil Company (KOC) has been producing from its major reservoirs, which are well defined, highly productive and of superior reservoir quality. These reservoirs are maturing, and priority is shifting towards difficult reservoirs to meet future production requirements. This paper discusses the results of a detailed integrated study of one of the satellite complex fields discovered in the early 1960s. Following the acquisition of new 3D seismic data in 1998 and re-processing work in 2006, an integrated G&G study was undertaken to review the Lower Cretaceous prospectivity of this reservoir. Nine wells have been drilled in the area to date, with only three wells showing hydrocarbons in two formations. The average oil gravity is around 30°API (American Petroleum Institute), and the average porosity and water saturation of the reservoir are about 23% and 26%, respectively. The area is dissected by a number of NW-SE trending faults. Structurally, the area consists of horsts and grabens bounded by these faults and is hence compartmentalized. The Wara/Burgan formation consists of discrete, dirty sands with clean channel sand complexes. There is a dramatic change in the Upper Wara distributary channel facies, and the reservoir quality of the Wara and Burgan sections varies with the change of facies over the area. Predicting reservoir facies and quality from sparse well data is therefore a major challenge in delineating the prospective area. To characterize the reservoir of the Wara/Burgan formation, an integrated workflow involving seismic, well, petrophysical, reservoir and production engineering data has been used. Porosity and water saturation models were prepared and analyzed to predict the reservoir quality of the Wara and Burgan 3rd sand upper reservoirs. Subsequently, boundary conditions were defined for reservoir and non-reservoir facies by integrating facies, porosity and water saturation. 
Based on the detailed analysis of volumetric parameters, potential volumes of stock-tank oil initially in place (STOIIP) and gas initially in place (GIIP) were documented after running several probabilistic sensitivity analyses using the Monte Carlo simulation method. Sensitivity analysis on probabilistic models of reservoir horizons, petrophysical properties and oil-water contacts, and of their effect on reserves, clearly shows some alteration in the reservoir geometry. All these parameters have a significant effect on the oil in place. This study has helped to identify the uncertainties and risks of this particular prospect, and the company is planning to develop the area by drilling new wells.
Keywords: original oil-in-place, sensitivity, uncertainty, sandstone, reservoir modeling, Monte Carlo simulation
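The Monte Carlo STOIIP workflow described above can be sketched with the standard volumetric formula STOIIP = GRV · NTG · φ · (1 − Sw) / Bo, sampling uncertain inputs from distributions. All distribution choices and parameter values below are hypothetical placeholders (only the ~23% porosity and ~26% water saturation echo the abstract), so the numbers illustrate the method rather than the field.

```python
import numpy as np

# Monte Carlo sampling of the deterministic volumetric STOIIP formula.
# Distributions and parameters are illustrative assumptions.

rng = np.random.default_rng(42)
n = 100_000

grv = rng.triangular(0.8e9, 1.0e9, 1.3e9, n)  # gross rock volume, m^3 (assumed)
ntg = rng.uniform(0.4, 0.7, n)                # net-to-gross (assumed)
phi = rng.normal(0.23, 0.02, n)               # porosity, ~23% per the abstract
sw  = rng.normal(0.26, 0.03, n)               # water saturation, ~26% per the abstract
bo  = 1.2                                     # formation volume factor, rm3/sm3 (assumed)

stoiip_m3 = grv * ntg * phi * (1.0 - sw) / bo
stoiip_bbl = stoiip_m3 * 6.2898               # barrels per m^3

# Industry convention: P90 is the value exceeded with 90% probability,
# i.e. the 10th percentile of the sampled distribution.
p90, p50, p10 = np.percentile(stoiip_bbl, [10, 50, 90])
```

Rerunning with one input distribution widened at a time gives the sensitivity (tornado) view of which parameter drives the in-place uncertainty.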
Procedia PDF Downloads 197
274 Reflective Portfolio to Bridge the Gap in Clinical Training
Authors: Keenoo Bibi Sumera, Alsheikh Mona, Mubarak Jan Beebee Zeba Mahetaab
Abstract:
Background: Due to the busy schedules of practicing clinicians at the hospitals, students may not always be attended to, which is to their detriment. Clinicians at the hospitals are also not always acquainted with teaching and/or supervising students on their placements. Additionally, there is a high student-patient ratio. Since these students are prospective clinical doctors in training, they need to reach competence in clinical decision-making skills to be able to serve the healthcare system of the country and to be safe doctors. Aims and Objectives: A reflective portfolio was used to provide a means for students to learn by reflecting on their experiences and obtaining continuous feedback. This practice is an attempt to compensate for the scarcity of resources, that is, clinical placement supervisors and patients. It is also anticipated to provide learners with a tool for continuous monitoring and learning-gap analysis of their clinical skills. Methodology: A hard-copy reflective portfolio was designed and validated. The portfolio incorporated a mini clinical evaluation exercise (mini-CEX), direct observation of procedural skills, and reflection sections. Workshops were organized separately for the stakeholders, that is, the management, faculty and students. The rationale for reflection was emphasized, and students were given samples of reflective writing. The portfolio was then implemented among the undergraduate medical students of years four, five and six during clinical clerkship. After 16 weeks of implementation, a survey questionnaire was administered to explore how undergraduate students perceive the educational value of the reflective portfolio and its impact on their deep information processing. Results: The majority of the respondents were in MD Year 5. Of the 52 respondents, 57.7% were in the internal medicine clinical placement rotation and 42.3% in the otorhinolaryngology clinical placement rotation. 
The respondents believe that the implementation of a reflective portfolio helped them identify their weaknesses, supported their professional development by helping them identify areas where their knowledge is good, increased the learning value when used as a formative assessment, helped them relate material across different courses, and improved their professional skills. However, respondents did not agree that the portfolio improved their self-esteem or helped develop their critical thinking; the portfolio takes time to complete, and supervisors were not always helpful, so respondents had to chase them for feedback. 53.8% of the respondents followed the Gibbs reflective model to write their reflections, whilst the others did not follow any guidelines. 48.1% said that the feedback was helpful, 17.3% preferred written feedback, whilst 11.5% preferred oral feedback; most suggested more frequent feedback. 59.6% of respondents found the current portfolio user-friendly, 28.8% thought it was too bulky, and 27.5% asked for a mobile application. Conclusion: The reflective portfolio, through reflection on their work and regular feedback from supervisors, has an overall positive impact on the learning process of undergraduate medical students during their clinical clerkship.
Keywords: portfolio, reflection, feedback, clinical placement, undergraduate medical education
Procedia PDF Downloads 85
273 Numerical Investigation of Flow Boiling within Micro-Channels in the Slug-Plug Flow Regime
Authors: Anastasios Georgoulas, Manolia Andredaki, Marco Marengo
Abstract:
The present paper investigates the hydrodynamic and heat transfer characteristics of slug-plug flows under saturated flow boiling conditions within circular micro-channels. Numerical simulations are carried out using an enhanced version of the open-source CFD solver ‘interFoam’ of the OpenFOAM CFD toolbox. The proposed user-defined solver is based on the Volume of Fluid (VOF) method for interface advection, and the mentioned enhancements include the implementation of a smoothing process for spurious current reduction, coupling with heat transfer and phase change, and the incorporation of conjugate heat transfer to account for transient solid conduction. In all of the cases considered in the present paper, a single-phase simulation is initially conducted until a quasi-steady state is reached with respect to the hydrodynamic and thermal boundary layer development. Then, a predefined, constant frequency of successive vapour bubbles is patched upstream at a certain distance from the channel inlet. The proposed numerical set-up can capture the main hydrodynamic and heat transfer characteristics of slug-plug flow regimes within circular micro-channels. In more detail, the present investigation focuses on the interaction between subsequent vapour slugs with respect to their generation frequency, and on the hydrodynamic characteristics of the liquid film between the generated vapour slugs and the channel wall, as well as of the liquid plug between two subsequent vapour slugs. The investigation is carried out for three different working fluids and three different values of applied heat flux in the heated part of the considered micro-channel. The post-processing and analysis of the results indicate that the dynamics of the evolving bubbles in each case are influenced by both the upstream and downstream bubbles in the generated sequence. In each case, a slip velocity between the vapour bubbles and the liquid slugs is evident. 
In most cases, interfacial waves appear close to the bubble tail that significantly reduce the liquid film thickness. Finally, in accordance with previous investigations, vortices identified in the liquid slugs between two subsequent vapour bubbles can significantly enhance convection heat transfer between the liquid regions and the heated channel walls. The overall results of the present investigation can be used to enhance present understanding by providing better insight into the complex underlying heat transfer mechanisms of saturated boiling within micro-channels in the slug-plug flow regime.
Keywords: slug-plug flow regime, micro-channels, VOF method, OpenFOAM
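The smoothing process for spurious current reduction mentioned above is commonly implemented by averaging the sharp volume fraction field with its neighbours before computing interface curvature. The 1-D sketch below is a generic illustration of that idea, not the actual interFoam implementation; field size, sweep count and weight are arbitrary.

```python
import numpy as np

# Generic 1-D illustration of volume-fraction smoothing: the sharp VOF field
# is repeatedly blended with its neighbour average, spreading the interface
# over a few cells so curvature (and hence surface tension) is less noisy.

def smooth_alpha(alpha, n_sweeps=2, weight=0.5):
    """Laplacian-style smoothing of a 1-D volume fraction field (periodic)."""
    a = alpha.astype(float).copy()
    for _ in range(n_sweeps):
        neighbours = 0.5 * (np.roll(a, 1) + np.roll(a, -1))
        a = (1.0 - weight) * a + weight * neighbours
    return a

alpha = np.zeros(20)
alpha[8:12] = 1.0            # a sharp vapour slug in a liquid channel
alpha_s = smooth_alpha(alpha)
# alpha_s keeps the total vapour content but spreads the 0/1 jump over cells.
```

Note that smoothing is applied only to the field used for curvature; the advected volume fraction itself stays sharp so mass is conserved.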
Procedia PDF Downloads 267
272 Safety Assessment of Traditional Ready-to-Eat Meat Products Vended at Retail Outlets in Kebbi and Sokoto States, Nigeria
Authors: M. I. Ribah, M. Jibir, Y. A. Bashar, S. S. Manga
Abstract:
Food safety is a significant and growing public health problem in the world, and in Nigeria as a developing country, since food-borne diseases are important contributors to the huge burden of sickness and death among humans. In Nigeria, traditional ready-to-eat meat products (RTE-MPs) such as balangu, tsire and guru, and dried meat products such as kilishi, dambun nama and banda, are reported to be highly appreciated for their eating qualities. The consumption of these products is considered safe due to the treatments usually involved in their production. However, during processing and handling, the products can become contaminated by pathogens that cause food poisoning. Therefore, a hazard identification for pathogenic bacteria on some traditional RTE-MPs was conducted in Kebbi and Sokoto States, Nigeria. A total of 116 RTE-MP samples (balangu-38, kilishi-39 and tsire-39) were obtained from retail outlets and analyzed using standard cultural microbiological procedures in general and selective enrichment media to isolate the target pathogens. A six-fold serial dilution was prepared, and colonies were counted using the pour-plating method. Serial dilutions were selected based on the prepared pre-labeled Petri dishes for each sample. A volume of 10-12 ml of molten nutrient agar cooled to 42-45°C was poured into each Petri dish, and 1 ml of each of the 10², 10⁴ and 10⁶ dilutions of every sample was poured on a pre-labeled Petri plate, after which colonies were counted. The isolated pathogens were identified and confirmed after a series of biochemical tests. Frequencies and percentages were used to describe the presence of pathogens. The General Linear Model was used to analyze data on pathogen presence according to RTE-MPs, and means were separated using the Tukey test at the 0.05 confidence level. Of the 116 RTE-MP samples collected, 35 (30.17%) were found to be contaminated with the tested pathogens. 
Prevalence results showed that Escherichia coli, Salmonella and Staphylococcus aureus were present in the samples. The mean total bacterial count was 23.82×10⁶ cfu/g. The frequencies of the individual pathogens isolated were: Staphylococcus aureus 18 (15.51%), Escherichia coli 12 (10.34%) and Salmonella 5 (4.31%). Also, among the RTE-MPs tested, the total bacterial counts were found to differ significantly (P < 0.05), with 1.81, 2.41 and 2.9×10⁴ cfu/g for tsire, kilishi, and balangu, respectively. The study concluded that the presence of pathogenic bacteria in balangu could pose grave health risks to consumers and hence recommended good manufacturing practices in the production of balangu to improve the products’ safety.
Keywords: ready-to-eat meat products, retail outlets, public health, safety assessment
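The cfu/g figures above come from standard plate-count arithmetic: colony counts from the pour-plated serial dilutions are scaled back by the dilution factor and the plated volume. A minimal sketch follows; the colony counts used are made-up examples, not data from the study.

```python
# Plate-count back-calculation: cfu/g = colonies * dilution factor / ml plated.
# Colony counts below are hypothetical illustrations.

def cfu_per_gram(colonies, dilution_factor, volume_plated_ml=1.0):
    """Colony-forming units per gram of the original sample."""
    return colonies * dilution_factor / volume_plated_ml

# 1 ml of the 10^4 dilution yielding 238 colonies gives 2.38 x 10^6 cfu/g:
count = cfu_per_gram(238, 1e4)
```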
Procedia PDF Downloads 133