Search results for: large building
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 10414

2374 The Vulnerability of Farmers in Valencia Negros Oriental to Climate Change: El Niño Phenomenon and Malnutrition

Authors: J. K. Pis-An

Abstract:

Objective: The purpose of the study was to examine the vulnerability of farmers to the effects of climate change, specifically the El Niño phenomenon felt in the Philippines in 2009-2010. Methods: A KAP survey was used to determine behavioral responses to vulnerability to the effects of El Niño, together with Body Mass Index measurement and dietary assessment using 24-hour food recall. Results: 75% of the respondents claimed that crop yields significantly decreased during the drought. Farmers' households are large: 51.6% are composed of 6-10 family members, and 68% have annual incomes below Php 100,000. Anthropometric assessment showed that the prevalence of Chronic Energy Deficiency Grade 1 among females was 17%, with 28.57% classified as low normal; among males, 10% showed Chronic Energy Deficiency Grade 1, 18.33% low normal, and 31.67% Obese Grade 1. Dietary assessment showed that macronutrient intakes of carbohydrates, protein, and fat were below recommended amounts for 31.6% of respondents, with micronutrient deficiencies in calcium, iron, vitamin A, thiamine, riboflavin, niacin, and vitamin C. Conclusion: The majority of the rural population is engaged in farming, which forms the backbone of the local economy. Placing the current nutritional status of the farmers in the context of food security, there is reason to believe that this status will worsen if extreme climatic conditions once again prevail in the region. Farmers rely primarily on home-grown crops for their food supply, so a reduction in farm production during drought is expected to adversely affect dietary intake. The local government should therefore institute programs to increase food resiliency and prioritize the health of the population as the moving force for productivity and development.

Keywords: world health organization, united nation framework convention on climate change, anthropometric, macronutrient, micronutrient

Procedia PDF Downloads 433
2373 Computational Fluid Dynamics Modeling of Physical Mass Transfer of CO₂ by N₂O Analogy Using One Fluid Formulation in OpenFOAM

Authors: Phanindra Prasad Thummala, Umran Tezcan Un, Ahmet Ozan Celik

Abstract:

Removal of CO₂ by MEA (monoethanolamine) in structured packing columns depends strongly on the gas-liquid interfacial area and film thickness (liquid load). CFD (computational fluid dynamics) can be used to determine the interfacial area, the film thickness, and their impact on mass transfer in gas-liquid flow in any column geometry. In general, modeling approaches used in CFD derive mass transfer parameters from standard correlations based on penetration or surface renewal theories. In order to avoid the effect of the assumptions involved in deriving these correlations and to model mass transfer based solely on fluid properties, state-of-the-art approaches such as the one-fluid formulation are useful. In this work, the one-fluid formulation was implemented and evaluated for modeling the physical mass transfer of CO₂ by N₂O analogy in the OpenFOAM CFD software. The N₂O analogy avoids the effect of chemical reactions on absorption and allows studying the amount of physical CO₂ mass transfer possible in a given geometry. The computational domain in the current study was a flat plate with gas and liquid flowing in countercurrent directions. The effect of operating parameters such as flow rate, MEA concentration, and angle of inclination on the physical mass transfer is studied in detail. Liquid-side mass transfer coefficients obtained from the simulations were compared to correlations available in the literature, and it was found that the one-fluid formulation effectively captured the effects of interface surface instabilities on the mass transfer coefficient with higher accuracy. The high mesh refinement required near the interface region was found to be a limiting factor for applying this approach to large-scale simulations. Overall, the one-fluid formulation is found to be promising for CFD studies involving CO₂ mass transfer.
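
As a point of reference for the comparison described above, the liquid-side mass transfer coefficient from Higbie's penetration theory can be computed directly from fluid properties. The sketch below is a minimal illustration under assumed values for diffusivity, plate length, and surface velocity; it is not one of the correlations actually evaluated in the paper.

```python
import math

def higbie_kL(D, L, u_surface):
    """Liquid-side mass transfer coefficient from Higbie's penetration theory,
    k_L = 2*sqrt(D/(pi*t_e)), with exposure time t_e = L/u_surface (falling film)."""
    t_e = L / u_surface                        # contact (exposure) time of a liquid element
    return 2.0 * math.sqrt(D / (math.pi * t_e))

# Illustrative (assumed) values: CO2 diffusivity in water, plate length, surface velocity
D = 1.9e-9    # m^2/s
L = 0.10      # m
u = 0.3       # m/s
print(f"k_L ~ {higbie_kL(D, L, u):.2e} m/s")   # on the order of 1e-4 m/s
```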

Keywords: one fluid formulation, CO₂ absorption, liquid mass transfer coefficient, OpenFOAM, N₂O analogy

Procedia PDF Downloads 213
2372 Anti-Parasite Targeting with Amino Acid-Capped Nanoparticles Modulates Multiple Cellular Processes in Host

Authors: Oluyomi Stephen Adeyemi, Kentaro Kato

Abstract:

Toxoplasma gondii is the etiological agent of toxoplasmosis, a common parasitic disease capable of infecting a range of hosts, including nearly one-third of the human population. Current treatment options for toxoplasmosis patients are limited. In consequence, toxoplasmosis represents a large global burden that is further compounded by the shortcomings of the current therapeutic options. These factors underscore the need for better anti-T. gondii agents and/or new treatment approaches. In the present study, we sought to find out whether preparing and capping nanoparticles (NPs) with amino acids would enhance specificity toward the parasite versus the host cell. The selection of amino acids was premised on the fact that T. gondii is auxotrophic for some amino acids. The amino acid-capped nanoparticles (amino-NPs) were synthesized, purified, and characterized following established protocols. Next, we evaluated the anti-T. gondii activity of the amino-NPs using an in vitro experimental model of infection. Overall, our data provide evidence of enhanced and highly selective action by the amino-NPs against the parasite versus the host cells. The findings are promising and provide additional support for exploring the prospects of NPs as alternative anti-parasite agents. In addition, the anti-parasite action of the amino-NPs indicates that the nutritional requirements of the parasite may represent a viable target in the development of better alternative anti-parasite agents. Furthermore, the data suggest that the anti-parasite mechanism of the amino-NPs involves multiple cellular processes, including the production of reactive oxygen species (ROS), modulation of hypoxia-inducible factor-1 alpha (HIF-1α), and activation of the kynurenine pathway. Taken together, these findings further highlight the prospects of NPs as an alternative source of anti-parasite agents.

Keywords: drug discovery, infectious diseases, mode of action, nanomedicine

Procedia PDF Downloads 100
2371 Minimizing the Drilling-Induced Damage in Fiber Reinforced Polymeric Composites

Authors: S. D. El Wakil, M. Pladsen

Abstract:

Fiber reinforced polymeric (FRP) composites are finding widespread industrial applications because of their exceptionally high specific strength and specific modulus of elasticity. Nevertheless, ready-to-use components or products made of FRP composites are seldom obtained directly; secondary processing by machining, particularly drilling, is almost always required to make holes for fastening components together to produce assemblies. That creates problems, since FRP composites are neither homogeneous nor isotropic. The problems encountered include damage in the region around the drilled hole and drilling-induced delamination of the plies, which occurs at both the entrance and exit planes of the workpiece. Evidently, the functionality of the workpiece would be detrimentally affected. The current work was carried out with the aim of eliminating, or at least minimizing, the workpiece damage associated with drilling of FRP composites. Each test specimen was a woven graphite-fiber-reinforced/epoxy composite with a thickness of 12.5 mm (0.5 inch). A large number of test specimens were subjected to drilling operations with different combinations of feed rates and cutting speeds. The drilling-induced damage was taken as the absolute value of the difference between the drilled hole diameter and the nominal diameter, expressed as a percentage of the nominal diameter. This damage measure was determined for each combination of feed rate and cutting speed, and a matrix of those values was established, where columns indicate varying feed rates and rows indicate varying cutting speeds. Next, the analysis of variance (ANOVA) approach was employed using Minitab software in order to obtain the combination that minimizes the drilling-induced damage. Experimental results show that low feed rates coupled with low cutting speeds yielded the best results.
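
A minimal sketch of the kind of two-factor ANOVA described above, using Python's statsmodels in place of Minitab; the feed-rate and cutting-speed levels and the damage values are placeholders, not the paper's experimental data.

```python
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

# Placeholder feed-rate/cutting-speed matrix of damage percentages (not the paper's data)
data = pd.DataFrame({
    "feed":   [0.05, 0.05, 0.10, 0.10, 0.20, 0.20] * 2,                          # mm/rev
    "speed":  [500, 1000, 500, 1000, 500, 1000] * 2,                             # rpm
    "damage": [1.1, 1.4, 1.6, 2.0, 2.3, 2.9, 1.0, 1.5, 1.7, 2.1, 2.4, 3.0],      # % of nominal dia.
})

# Two-factor ANOVA: do feed rate and cutting speed significantly affect hole damage?
model = ols("damage ~ C(feed) + C(speed)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))
```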

Keywords: drilling of composites, dimensional accuracy of holes drilled in composites, delamination and charring, graphite-epoxy composites

Procedia PDF Downloads 381
2370 Use Cloud-Based Watson Deep Learning Platform to Train Models Faster and More Accurate

Authors: Susan Diamond

Abstract:

Machine learning workloads have traditionally been run in high-performance computing (HPC) environments, where users log in to dedicated machines and utilize the attached GPUs to run training jobs on huge datasets. Training of large neural network models is very resource intensive, and even after exploiting parallelism and accelerators such as GPUs, a single training job can still take days. Consequently, the cost of hardware is a barrier to entry. Even when upfront cost is not a concern, the lead time to set up such an HPC environment takes months, from acquiring the hardware to configuring it with the right firmware and software. Furthermore, scalability is hard to achieve in a rigid traditional lab environment, which is therefore slow to react to dynamic change in the artificial intelligence industry. Watson Deep Learning as a Service is a cloud-based deep learning platform that mitigates the long lead time and high upfront investment in hardware. It enables robust and scalable sharing of resources among the teams in an organization and is designed for on-demand cloud environments. Providing a similar user experience in a multi-tenant cloud environment comes with its own unique challenges regarding fault tolerance, performance, and security. Watson Deep Learning as a Service tackles these challenges and presents a deep learning stack for cloud environments in a secure, scalable, and fault-tolerant manner. It supports a wide range of deep learning frameworks such as TensorFlow, PyTorch, Caffe, Torch, Theano, and MXNet. These frameworks reduce the effort and skillset required to design, train, and use deep learning models. Deep Learning as a Service is used at IBM by AI researchers in areas including machine translation, computer vision, and healthcare.

Keywords: deep learning, machine learning, cognitive computing, model training

Procedia PDF Downloads 198
2369 Hyperspectral Imaging and Nonlinear Fukunaga-Koontz Transform Based Food Inspection

Authors: Hamidullah Binol, Abdullah Bal

Abstract:

Nowadays, food safety is a great public concern; therefore, robust and effective techniques are required for assessing the safety of goods. Hyperspectral imaging (HSI) is an attractive tool for researchers inspecting food quality and safety, with applications such as meat quality assessment, automated poultry carcass inspection, quality evaluation of fish, bruise detection in apples, quality analysis and grading of citrus fruits, bruise detection in strawberries, visualization of sugar distribution in melons, measuring the ripening of tomatoes, defect detection in pickling cucumbers, and classification of wheat kernels. HSI can be used to concurrently collect large amounts of spatial and spectral data on the objects being observed. This technique yields exceptional detection capabilities, which otherwise cannot be achieved with either imaging or spectroscopy alone. This paper presents a nonlinear technique based on the kernel Fukunaga-Koontz transform (KFKT) for detection of fat content in ground meat using HSI. The KFKT, which is the nonlinear version of the FKT, is one of the most effective techniques for solving problems involving a two-pattern nature. The conventional FKT method has been improved with kernel machines to increase nonlinear discrimination ability and capture higher-order statistics of the data. The proposed approach aims to segment the fat content of the ground meat by regarding the fat as the target class, which is to be separated from the remaining classes (treated as clutter). We applied the KFKT to visible and near-infrared (VNIR) hyperspectral images of ground meat to determine fat percentage. The experimental studies indicate that the proposed technique produces high detection performance for fat ratio in ground meat.
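
For readers unfamiliar with the Fukunaga-Koontz transform, the sketch below illustrates the linear version of the two-pattern idea on which the KFKT builds; the kernel variant used in the paper replaces these scatter matrices with kernel Gram matrices, and the inputs here are assumptions.

```python
import numpy as np

def fkt_projection(X_target, X_clutter, n_features=3):
    """Linear Fukunaga-Koontz transform sketch: X_target and X_clutter are
    (n_samples, n_bands) spectra for the target (fat) and clutter (remaining) classes."""
    S1 = X_target.T @ X_target / len(X_target)      # target scatter matrix
    S2 = X_clutter.T @ X_clutter / len(X_clutter)   # clutter scatter matrix
    # Whiten S1 + S2 so that the transformed scatter matrices sum to the identity
    evals, evecs = np.linalg.eigh(S1 + S2)
    keep = evals > 1e-10
    P = evecs[:, keep] / np.sqrt(evals[keep])
    S1_t = P.T @ S1 @ P
    # Eigenvectors with the largest eigenvalues of S1_t are the most target-like;
    # the same vectors carry the smallest eigenvalues for the clutter class.
    w, V = np.linalg.eigh(S1_t)
    order = np.argsort(w)[::-1]
    return P @ V[:, order[:n_features]]             # project new pixels with X @ W
```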

Keywords: food (ground meat) inspection, Fukunaga-Koontz transform, hyperspectral imaging, kernel methods

Procedia PDF Downloads 419
2368 Object Detection in Digital Images under Non-Standardized Conditions Using Illumination and Shadow Filtering

Authors: Waqqas-ur-Rehman Butt, Martin Servin, Marion Pause

Abstract:

In recent years, object detection has gained much attention and become a very encouraging research area in the field of computer vision. Robust detection of object boundaries in an image is required in numerous applications of human-computer interaction and automated surveillance systems. Many methods and approaches have been developed for automatic object detection in various fields, such as automotive, quality control management, and environmental services. Unfortunately, to the best of our knowledge, object detection under varying illumination with shadow consideration has not been well solved yet. Furthermore, this problem is one of the major hurdles keeping object detection methods from practical application. This paper presents an approach to automatic object detection in images under non-standardized environmental conditions. A key challenge is how to detect the object, particularly under uneven illumination conditions. Because image-capture conditions vary, the algorithms need to consider a variety of possible environmental factors, as colour information, lighting, and shadows vary from image to image. Existing methods mostly fail to produce appropriate results due to variation in colour information, lighting effects, threshold specifications, histogram dependencies, and colour ranges. To overcome these limitations, we propose an object detection algorithm with pre-processing methods to reduce the interference caused by shadow and illumination effects without fixed parameters. We use the YCrCb colour model without any specific colour ranges or predefined threshold values. The segmented object regions are further processed using morphological operations (erosion and dilation) and contours. The proposed approach was applied to a large image data set acquired under various environmental conditions for wood stack detection. Experiments show the promising results of the proposed approach in comparison with existing methods.
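
A minimal OpenCV-style sketch of the general pipeline described above (colour-space conversion, illumination equalization, morphology, contours); the Otsu threshold and the 5x5 kernel are stand-in assumptions, not the paper's parameter-free segmentation step.

```python
import cv2
import numpy as np

def detect_objects(bgr_image, min_area=500):
    """Illumination-robust segmentation sketch: equalize luminance in YCrCb space,
    threshold, clean up with erosion/dilation, and return the external contours."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    y, cr, cb = cv2.split(ycrcb)
    y_eq = cv2.equalizeHist(y)                           # reduce uneven illumination
    # Otsu thresholding stands in here for the paper's parameter-free segmentation
    _, mask = cv2.threshold(y_eq, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = np.ones((5, 5), np.uint8)
    mask = cv2.dilate(cv2.erode(mask, kernel), kernel)   # morphological opening
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return [c for c in contours if cv2.contourArea(c) > min_area]
```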

Keywords: image processing, illumination equalization, shadow filtering, object detection

Procedia PDF Downloads 205
2367 Surface Deformation Studies in South of Johor Using the Integration of InSAR and Resistivity Methods

Authors: Sirajo Abubakar, Ismail Ahmad Abir, Muhammad Sabiu Bala, Muhammad Mustapha Adejo, Aravind Shanmugaveloo

Abstract:

Over the years, land subsidence has been a serious threat, mostly to urban areas. Land subsidence is the sudden sinking or gradual downward settling of the ground's surface with little or no horizontal motion. In most areas, land subsidence is a slow process that covers a large area and is therefore sometimes left unnoticed. The south of Johor is the area of interest for this project because it is undergoing rapid urbanization. The objective of this research is to evaluate and identify potential deformations in the south of Johor using integrated remote sensing and 2D resistivity methods. Synthetic aperture radar interferometry (InSAR), a remote sensing technique, has the potential to map coherent displacements at centimeter to millimeter resolution. The persistent scatterer interferometry (PSI) stacking technique was applied to Sentinel-1 data to detect earth deformation in the study area. Dipole-dipole resistivity profiling was conducted in three areas to determine the subsurface features. The interpreted subsurface features were then correlated with the remote sensing results to predict the possible causes of subsidence and uplift in the south of Johor. Based on the results obtained, West Johor Bahru (0.63 mm/year) and Ulu Tiram (1.61 mm/year) are undergoing uplift, attributed to possible geological uplift. On the other hand, East Johor Bahru (-0.26 mm/year) and Senai (-1.16 mm/year) are undergoing subsidence, attributed to possible fracturing and loading by granitic boulders. Land subsidence must be taken seriously, as it can cause serious damage to infrastructure and human life. Monitoring land subsidence and taking preventive actions is necessary to prevent disasters.

Keywords: interferometric synthetic aperture radar, persistent scatter, minimum spanning tree, resistivity, subsidence

Procedia PDF Downloads 135
2366 Recovery of Selenium from Scrubber Sludge in Copper Process

Authors: Lakshmikanth Reddy, Bhavin Desai, Chandrakala Kari, Sanjay Sarkar, Pradeep Binu

Abstract:

The sulphur dioxide gases generated as a by-product of smelting and converting operations of copper concentrate contain selenium, apart from zinc, lead, copper, cadmium, bismuth, antimony, and arsenic. The gaseous stream is treated in a waste heat boiler, an electrostatic precipitator, and scrubbers to remove coarse particulate matter in order to produce commercial-grade sulfuric acid. The gas cleaning section of the acid plant uses water to scrub the smelting gases. The sludge that settles at the bottom of the scrubber after scrubbing was analyzed in the present investigation and found to contain 30 to 40 wt% copper and up to 40 wt% selenium. The sludge collected during blow-down is directly recycled to the smelter for copper recovery. However, the selenium is expected to vaporize again due to the high oxidation potential during smelting and converting, causing accumulation of selenium in the sludge. In the present investigation, a roasting process has been developed to recover the selenium before copper recovery from the sludge at the smelter. Selenium is associated with copper in the sludge as copper selenide, as determined by X-ray diffraction and electron microscopy. The thermodynamic and thermogravimetric study revealed that the copper selenide phase present in the sludge was amenable to oxidation at 600°C, forming oxides of copper and selenium (Cu-Se-O). However, the dissociation of selenium from the copper oxide was made possible by sulfatation using sulfur dioxide between 450 and 600°C, resulting in the formation of CuSO₄ (s) and SeO₂ (g). Lab-scale trials were carried out in a vertical tubular furnace to determine the optimum roasting conditions with respect to roasting time, temperature, and O₂:SO₂ molar ratio. Using these optimum conditions, up to 90 wt% of the selenium, in the form of SeO₂ vapors, could be recovered from the sludge in a large-scale commercial roaster. The roasted sludge, free from selenium and containing oxides and sulfates of copper, could then be recycled in the smelter for copper recovery.

Keywords: copper, selenium, copper selenide, sludge, roasting, SeO₂

Procedia PDF Downloads 193
2365 Community Perceptions on Honey Quality in Tobacco Growing Areas in Kigoma Region, Tanzania

Authors: Pilly Kagosi, Cherestino Balama

Abstract:

Beekeeping plays a major role in improving biodiversity, increasing household income, and improving crop production through pollination. Tobacco farming is also a main source of household income for smallholder farmers. In Kigoma, production of tobacco has increased and is perceived to threaten honey quality. The study explored the community's perception of honey quality in tobacco-growing and non-tobacco-growing areas. The study was conducted in Kigoma Region, Tanzania. Districts and villages were purposively sampled based on the large numbers of people engaged in beekeeping activities and tobacco farming. Socioeconomic data were collected and analysed using the Statistical Package for Social Sciences and content analysis. Stakeholders' perceptions of honey quality were analysed using a Likert scale. The majority of respondents agreed that tobacco farming greatly affects honey quality, because honey from beehives near tobacco farms tastes bitter and is sometimes irritating, which they associated with nicotine content and agrochemicals applied to tobacco crops, although they could not differentiate bitterness caused by agrochemicals from that caused by bee fodder. Furthermore, it was revealed that chemicals applied to tobacco and vegetables were believed to have a negative effect on bees and honey quality, and respondents believe that setting beehives near tobacco farms might contaminate honey and therefore affect its quality. Beekeepers are not aware of the nicotine content of other bee fodders, such as miombo, which has no effect on human beings. In practice, tobacco farming does not affect honey quality when farmers properly manage tobacco flowers and handle honey properly; the main challenges in tobacco farming are the chemicals applied to the crops and the harvesting of bee fodder for curing tobacco. The study recommends training for the community on proper management of tobacco and proper handling of bee products.

Keywords: community, honey, perceptions, tobacco

Procedia PDF Downloads 134
2364 Different Data-Driven Bivariate Statistical Approaches to Landslide Susceptibility Mapping (Uzundere, Erzurum, Turkey)

Authors: Azimollah Aleshzadeh, Enver Vural Yavuz

Abstract:

The main goal of this study is to produce landslide susceptibility maps using different data-driven bivariate statistical approaches, namely the entropy weight method (EWM), evidence belief function (EBF), and information content model (ICM), for Uzundere county, Erzurum province, in the north-eastern part of Turkey. Past landslide occurrences were identified and mapped from interpretation of high-resolution satellite images and earlier reports, as well as from field surveys. In total, 42 landslide incidence polygons were mapped using ArcGIS 10.4.1 software and randomly split into a construction dataset of 70% (30 landslide incidences) for building the EWM, EBF, and ICM models, while the remaining 30% (12 landslide incidences) were used for verification purposes. Twelve layers of landslide-predisposing parameters were prepared, including total surface radiation, maximum relief, soil groups, standard curvature, distance to stream/river sites, distance to the road network, surface roughness, land use pattern, engineering geological rock group, topographical elevation, orientation of slope, and terrain slope gradient. The relationships between the landslide-predisposing parameters and the landslide inventory map were determined using the different statistical models (EWM, EBF, and ICM). The model results were validated with the landslide incidences that were not used during model construction. In addition, receiver operating characteristic curves were applied, and the area under the curve (AUC) was determined for the different susceptibility maps using the success (construction data) and prediction (verification data) rate curves. The results revealed that the AUCs for the success rates are 0.7055, 0.7221, and 0.7368, while the prediction rates are 0.6811, 0.6997, and 0.7105 for the EWM, EBF, and ICM models, respectively. Consequently, the landslide susceptibility maps were classified into five susceptibility classes: very low, low, moderate, high, and very high. Additionally, the proportion of construction and verification landslide incidences falling into the high and very high susceptibility classes of each map was determined. The results showed that the EWM, EBF, and ICM models produced satisfactory accuracy. The obtained landslide susceptibility maps may be useful for future natural hazard mitigation studies and planning purposes for environmental protection.
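
A minimal sketch of the entropy weight method and the AUC validation step referred to above; the parameter matrix, cell labels, and weighted-sum susceptibility index are generic illustrations, not the study's actual data pipeline.

```python
import numpy as np

def entropy_weights(X):
    """Entropy weight method (EWM): X is an (n_cells, n_parameters) matrix of
    non-negative ratings of the landslide-predisposing parameters; returns one
    weight per parameter, larger for parameters with more spatial contrast."""
    P = X / X.sum(axis=0)                              # share of each cell within a parameter
    logP = np.log(np.clip(P, 1e-12, None))             # guard against log(0)
    entropy = -(P * logP).sum(axis=0) / np.log(len(X))
    diversification = 1.0 - entropy
    return diversification / diversification.sum()

# Susceptibility index = weighted sum of parameter ratings; validation then uses the
# ROC curve on held-out landslide (1) vs. non-landslide (0) cells, e.g.:
#   from sklearn.metrics import roc_auc_score
#   w = entropy_weights(X_construction)
#   print("prediction-rate AUC:", roc_auc_score(y_verification, X_verification @ w))
```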

Keywords: entropy weight method, evidence belief function, information content model, landslide susceptibility mapping

Procedia PDF Downloads 125
2363 Legal Problems with the Thai Political Party Establishment

Authors: Paiboon Chuwatthanakij

Abstract:

Countries around the world are managed in different ways, and many depend on their people to administer them. Thailand, for example, vests sovereignty in the Thai people under its constitution; however, the Thai voting system cannot respond quickly enough under the current political management system. The Thai people exercise this sovereignty through representatives chosen in elections, in order to set new policy directions for the country in the House and the Cabinet. This is particularly important for a democracy developing under the current political institutions. The Organic Act on Political Parties 2007 is the establishment framework in force today, and it is causing confrontations within the political system: many political parties will soon be abolished, and many have already been subsidized. This research study analyzes the legal problems with political party establishment under the Organic Act on Political Parties 2007, focusing on the freedom of each political establishment compared with effective political operation. Textbooks and academic papers from studies at home and abroad are referenced. The study revealed that the Organic Act on Political Parties 2007 has strict provisions on party structure, governing the number of members and the number of branches required within the political party system, and such requirements must be completed within one year; under the existing laws, small parties are not able to participate alongside the bigger parties. The cities are capable of fulfilling small political party requirements but fail to coalesce because the current laws do not allow them to unite as one. It is important to allow all independent political parties to join the current political structure. Board members cannot help smaller parties become large organizations under the existing Thai laws. Creating a new establishment framework that functions efficiently throughout all branches would be one solution to the legal problems between political parties. With such a framework, individual political parties could participate with the bigger parties during elections. Until the current political institutions change their system to accommodate public opinion, the current Thai laws will continue to be a problem for all political parties in Thailand.

Keywords: coalesced, political party, sovereignty, elections

Procedia PDF Downloads 302
2362 Chikungunya Virus Detection Utilizing an Origami Based Electrochemical Paper Analytical Device

Authors: Pradakshina Sharma, Jagriti Narang

Abstract:

Due to their critical significance in the early identification of infectious diseases, electrochemical sensors have garnered considerable interest. Here, we develop a detection platform for the chikungunya virus (CHIKV) by rationally exploiting the extremely high charge-transfer efficiency of a ternary nanocomposite of graphene oxide, silver, and gold (G/Ag/Au). Because paper is an inexpensive substrate and can be produced in large quantities, the use of an origami electrochemical paper analytical device (ePAD) further enhances the sensor's appealing qualities. Paper-based testing provides a cost-effective platform for point-of-care diagnostics. These sensors are referred to as eco-designed analytical tools due to their efficient production, use of an eco-friendly substrate, and potential to simplify waste management after measurement by incinerating the sensor. In this research, the paper's foldability has been used to develop 3D multifaceted biosensors that can specifically detect CHIKV. X-ray diffraction, scanning electron microscopy, UV-vis spectroscopy, and transmission electron microscopy (TEM) were used to characterize the produced nanoparticles. Aptamers are used in this work since they are considered a unique and sensitive tool for rapid diagnostic methods. Cyclic voltammetry (CV) and linear sweep voltammetry (LSV), both performed with a potentiostat, were used to measure the analytical response of the biosensor. The target CHIKV antigen was hybridized using the aptamer-modified electrode as a signal modulation platform, and its presence was determined by a decline in the current produced by its interaction with an anionic mediator, methylene blue (MB). A detection limit of 1 ng/mL and a broad linear range of 1 ng/mL-10 µg/mL for the CHIKV antigen were reported.

Keywords: biosensors, ePAD, arboviral infections, point of care

Procedia PDF Downloads 81
2361 Analysis of an IncResU-Net Model for R-Peak Detection in ECG Signals

Authors: Beatriz Lafuente Alcázar, Yash Wani, Amit J. Nimunkar

Abstract:

Cardiovascular diseases (CVDs) are the leading cause of death globally, and around 80% of sudden cardiac deaths are due to arrhythmias, or irregular heartbeats. The majority of these pathologies are revealed by either short-term or long-term alterations in the electrocardiogram (ECG) morphology. The ECG is the main diagnostic tool in cardiology. It is a non-invasive, pain-free procedure that measures the heart's electrical activity and allows the detection of abnormal rhythms and underlying conditions. A cardiologist can diagnose a wide range of pathologies based on alterations in the ECG's form, but human interpretation is subjective and prone to error. Moreover, ECG records can be quite prolonged in time, which can further complicate visual diagnosis and significantly delay disease detection. In this context, deep learning methods have risen as a promising strategy to extract relevant features and eliminate individual subjectivity in ECG analysis. They facilitate the computation of large sets of data and can provide early and precise diagnoses. Therefore, cardiology is one of the fields that can benefit most from the implementation of deep learning algorithms. In the present study, a deep learning algorithm is trained following a novel approach, using a combination of different databases as the training set. The goal of the algorithm is to detect R-peaks in ECG signals. Its performance is further evaluated on ECG signals with different origins and features to test the model's ability to generalize its outcomes. Performance of the model for detection of R-peaks in clean and noisy ECGs is presented. The model is able to detect R-peaks in the presence of various types of noise and when presented with data it has not been trained on. It is expected that this approach will increase the effectiveness and capacity of cardiologists to detect divergences in the normal cardiac activity of their patients.
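
For context on the task itself, the sketch below shows a classical R-peak detector (band-pass filtering of the QRS band plus peak picking) of the kind deep models such as the IncResU-Net are benchmarked against; the sampling rate, filter band, and thresholds are assumptions, and this is not the paper's model.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

def detect_r_peaks(ecg, fs=360):
    """Classical R-peak detection baseline: band-pass the QRS energy band,
    square the filtered signal, and pick prominent, refractory-spaced peaks."""
    b, a = butter(3, [5 / (fs / 2), 15 / (fs / 2)], btype="band")   # ~5-15 Hz QRS band
    energy = filtfilt(b, a, ecg) ** 2
    # peaks at least 200 ms apart (refractory period) and above an adaptive height
    peaks, _ = find_peaks(energy, distance=int(0.2 * fs),
                          height=0.3 * np.max(energy))
    return peaks
```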

Keywords: arrhythmia, deep learning, electrocardiogram, machine learning, R-peaks

Procedia PDF Downloads 162
2360 Improving Graduate Student Writing Skills: Best Practices and Outcomes

Authors: Jamie Sundvall, Lisa Jennings

Abstract:

A decline in the writing skills and abilities of students entering graduate school has become a focus for university systems within the United States. This decline has become a national trend that requires reflection on the intervention strategies used to address the deficit and on the unintended consequences as outcomes in the profession. Social work faculty are challenged to increase written scholarship within the academic setting. However, when a large number of students in each course have writing deficits, there is a shift in focus away from content, the ability to demonstrate competency, and the application of core social work concepts. This qualitative study focuses on the experiences of online faculty who support increasing scholarship through writing and who follow best practices for preparing students academically to improve the written presentation of their classroom work. This study outlines best practices to improve written academic presentation, especially in an online setting. The research also highlights how a student's ability to show competency and application of concepts may be overlooked in the online setting. This can lead to new social workers who are prepared academically but may be unable to effectively advocate and document their thinking in their writing. The intended progression of writing across all levels of higher education moves from summary, to application, and into abstract problem solving. Initial findings indicate that it is important to reflect on the practices used to address writing deficits in terms of academic writing, competency, and application. It is equally important to reflect on how these methods of intervention affect a student post-graduation. Specifically, for faculty, it is valuable to assess a social worker's ability to engage in continuity of documentation and advocacy at the micro, mezzo, macro, and international levels of practice.

Keywords: intervention, professional impact, scholarship, writing

Procedia PDF Downloads 127
2359 Information Visualization Methods Applied to Nanostructured Biosensors

Authors: Osvaldo N. Oliveira Jr.

Abstract:

The control of molecular architecture inherent in some experimental methods for producing nanostructured films has had great impact on devices of various types, including sensors and biosensors. The self-assembled monolayer (SAM) and electrostatic layer-by-layer (LbL) techniques, for example, are now routinely used to produce tailored architectures for biosensing in which biomolecules are immobilized with long-lasting preserved activity. Enzymes, antigens, antibodies, peptides, and many other molecules serve as the molecular recognition elements for detecting an equally wide variety of analytes. The principles of detection are also varied, including electrochemical methods, fluorescence spectroscopy, and impedance spectroscopy. In this presentation, an overview will be provided of biosensors made with nanostructured films to detect antibodies associated with tropical diseases and HIV, in addition to the detection of analytes of medical interest such as cholesterol and triglycerides. Because large amounts of data are generated in biosensing experiments, computational and statistical methods have been used to optimize performance. Multidimensional projection techniques such as Sammon's mapping have been shown to be more efficient than traditional multivariate statistical analysis in identifying small concentrations of anti-HIV antibodies and in distinguishing between blood serum samples of animals infected with two tropical diseases, namely Chagas disease and leishmaniasis. Optimization of biosensing may include combining another information visualization method, the parallel coordinates technique, with artificial intelligence methods in order to identify the most suitable frequencies for reaching higher sensitivity with impedance spectroscopy. Also discussed will be the possible convergence of technologies, through which machine learning and other computational methods may be used to treat data from biosensors within an expert system for clinical diagnosis.
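
As an illustration of the multidimensional projection idea mentioned above, the sketch below minimizes Sammon's stress with a generic optimizer to project high-dimensional sensor responses to two dimensions; the data shape and the optimizer choice are assumptions, not the presenter's implementation.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.distance import pdist

def sammon_mapping(X, n_dims=2, seed=0):
    """Sammon's mapping sketch: project X (n_samples x n_features, e.g. impedance
    spectra) to n_dims while preserving pairwise distances, by minimizing Sammon's
    stress sum((delta - d)^2 / delta) / sum(delta) over the low-dimensional points."""
    delta = pdist(X)                                 # original pairwise distances
    delta = np.where(delta == 0, 1e-12, delta)
    rng = np.random.default_rng(seed)
    y0 = rng.normal(size=(len(X), n_dims))

    def stress(y_flat):
        d = pdist(y_flat.reshape(len(X), n_dims))
        return np.sum((delta - d) ** 2 / delta) / np.sum(delta)

    res = minimize(stress, y0.ravel(), method="L-BFGS-B")
    return res.x.reshape(len(X), n_dims)
```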

Keywords: clinical diagnosis, information visualization, nanostructured films, layer-by-layer technique

Procedia PDF Downloads 321
2358 Modified 'Perturb and Observe' with 'Incremental Conductance' Algorithm for Maximum Power Point Tracking

Authors: H. Fuad Usman, M. Rafay Khan Sial, Shahzaib Hamid

Abstract:

The trend toward renewable energy resources has been amplified by global warming and other environment-related complications in the 21st century. Recent research has strongly emphasized the generation of electrical power through renewable resources like solar, wind, hydro, geothermal, etc. The use of photovoltaic cells has become very common, as they are useful for domestic and commercial purposes all over the world. Although a single cell gives a low voltage output, connecting a number of cells in series forms a complete photovoltaic module, and as its use becomes popular it is becoming a worthwhile financial investment. This growth has also reduced the price of photovoltaic cells, which gives customers confidence in using this source for their electricity needs. A photovoltaic cell delivers its maximum power at a single specific operating point for a given temperature and level of solar intensity received at a given surface, whereas this point moves over a large range depending on manufacturing factors, temperature conditions, insolation intensity, instantaneous shading conditions, and the aging of the photovoltaic cells. Two improved algorithms are proposed in this article for MPPT. The most widely used algorithms are the 'Incremental Conductance' and 'Perturb and Observe' algorithms. To extract the maximum power from the source to the load, the duty cycle of the converter is effectively controlled. After assessing the previous techniques, this paper presents an improved and reformed approach to harvesting the maximum power point from photovoltaic cells. A thorough review of previous ideas was carried out before constructing the improvement on the traditional MPPT techniques. Each technique has its own importance and limitations under various weather conditions. An improved technique combining the use of both 'Perturb and Observe' and 'Incremental Conductance' is introduced.
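
To make the baseline concrete, here is a minimal sketch of one iteration of the conventional Perturb and Observe loop that the proposed hybrid builds on; the step size and the use of a voltage reference (rather than a direct duty-cycle perturbation) are assumptions, and this is not the authors' modified algorithm.

```python
def perturb_and_observe(v, i, v_prev, p_prev, v_ref, step=0.1):
    """One iteration of basic Perturb and Observe MPPT. v, i: sampled PV voltage and
    current; v_ref: operating-voltage reference tracked by the converter's duty-cycle
    controller; returns the updated reference plus values stored for the next sample."""
    p = v * i
    dp, dv = p - p_prev, v - v_prev
    if dp != 0:
        if (dp > 0) == (dv > 0):
            v_ref += step      # power rose in the same direction: keep perturbing that way
        else:
            v_ref -= step      # power fell: reverse the perturbation
    return v_ref, v, p
```

At the maximum power point the Incremental Conductance condition dI/dV = -I/V holds exactly, which is the test a combined scheme can use to stop perturbing and so avoid the steady-state oscillation inherent in pure Perturb and Observe.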

Keywords: duty cycle, MPPT (Maximum Power Point Tracking), perturb and observe (P&O), photovoltaic module

Procedia PDF Downloads 164
2357 A Flexible Real-Time Eco-Drive Strategy for Electric Minibus

Authors: Felice De Luca, Vincenzo Galdi, Piera Stella, Vito Calderaro, Adriano Campagna, Antonio Piccolo

Abstract:

Sustainable mobility has become one of the major issues of recent years. The challenge of reducing polluting emissions as much as possible has led to the production and diffusion of vehicles with less polluting internal combustion engines and to the adoption of green energy vectors, such as vehicles powered by natural gas or LPG and, more recently, hybrid and electric ones. While the spread of electric vehicles for private use is becoming a reality, albeit rather slowly, the same is not happening for vehicles used for public transport, especially those that operate in congested city areas. Even if the first electric buses are increasingly being offered on the market, the problem of autonomy remains central for battery-fed vehicles with long daily routes and little time available for recharging. In fact, at present, solid-state batteries are still too large, too heavy, and unable to guarantee the required autonomy. Therefore, in order to maximize energy management on the vehicle, the optimization of driving profiles offers a faster and cheaper contribution to improving vehicle autonomy. In this paper, following the authors' previous works on electric vehicles in public transport and energy management strategies in the electric mobility area, an eco-driving strategy for an electric bus is presented and validated. In particular, the characteristics of the prototype bus are described, and a general-purpose eco-drive methodology is briefly presented. The model is first simulated in MATLAB™ and then implemented on a mobile device installed on board a prototype bus developed by the authors in a previous research project. The implemented solution provides the bus driver with suggestions on the driving style to adopt. The results of a test in a real case will be shown to highlight the effectiveness of the proposed solution in terms of energy saving.

Keywords: eco-drive, electric bus, energy management, prototype

Procedia PDF Downloads 126
2356 Transportation and Urban Land-Use System for the Sustainability of Cities, a Case Study of Muscat

Authors: Bader Eddin Al Asali, N. Srinivasa Reddy

Abstract:

Cities are dynamic in nature and are characterized by concentrations of people, infrastructure, services, and markets, which offer opportunities for production and consumption. Growth and development in urban areas is often not systematic and is directed by a number of factors such as natural growth, land prices, housing availability, job locations (the central business district, CBD), transportation routes, distribution of resources, geographical boundaries, and administrative policies. One-sided spatial and geographical development in cities leads to an unequal spatial distribution of population and jobs, resulting in high transportation activity. City development can be measured by parameters such as urban size, urban form, urban shape, and urban structure. Urban size is defined by the population of the city, and urban form is the location and size of economic activity (the CBD) over geographical space. Urban shape is the geometrical shape of the city over which the population and economic activity are distributed, and urban structure is the transport network within which the population and activity centers are connected by a hierarchy of roads. Among urban land-use systems, transportation plays a significant role and is one of the largest energy-consuming sectors. Transportation interaction among land uses is measured in passenger-km and mean trip length and is often used as a proxy for energy consumption in the transportation sector. Among the trips generated in cities, work trips constitute more than 70 percent; work trips originate at the place of residence and are destined for the place of employment. To understand the role of urban parameters in transportation interaction, theoretical cities of different sizes and urban specifications are generated through a building-block exercise using a specially developed interactive C++ programme, and land-use transportation modeling is carried out. The land-use transportation modeling exercise helps in understanding the role of urban parameters and in classifying cities by their urban form, structure, and shape. Muscat, the capital city of Oman, which underwent rapid urbanization over the last four decades, is taken as a case study for classification. A pilot survey was also carried out to capture urban travel characteristics. Analysis of the land-use transportation modeling together with field data classified Muscat as a linear city with a polycentric CBD. Conclusions are drawn and suggestions are given for policy-making for the sustainability of Muscat City.
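
A minimal sketch of the kind of land-use transportation calculation described above, using a simple singly-constrained gravity model to distribute work trips and compute passenger-km and mean trip length; the model form and all inputs are generic assumptions, not the authors' C++ programme.

```python
import numpy as np

def gravity_trip_distribution(origins, destinations, cost, beta=0.1):
    """Singly-constrained gravity model sketch of land-use/transport interaction.
    origins: work trips produced per zone; destinations: jobs per zone;
    cost: zone-to-zone travel distance matrix in km."""
    deterrence = np.exp(-beta * cost)                        # impedance function f(c_ij)
    weights = destinations * deterrence                      # attraction weighted by impedance
    T = origins[:, None] * weights / weights.sum(axis=1, keepdims=True)
    passenger_km = float(np.sum(T * cost))                   # total transport interaction
    mean_trip_length = passenger_km / T.sum()                # proxy for transport energy use
    return T, passenger_km, mean_trip_length
```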

Keywords: land-use transportation, transportation modeling urban form, urban structure, urban rule parameters

Procedia PDF Downloads 258
2355 Rejuvenation of Aged Kraft-Cellulose Insulating Paper Used in Transformers

Authors: Y. Jeon, A. Bissessur, J. Lin, P. Ndungu

Abstract:

Most transformers use cellulose paper that has been chemically modified through the Kraft process, which acts as an effective insulator. Cellulose ageing and oil degradation are directly linked to fouling of the transformer and to the accumulation of large quantities of waste insulating paper. In addition to technical difficulties, this proves costly for power utilities to deal with. Currently, no cost-effective method for the rejuvenation of cellulose paper has been documented or proposed, since replacement of used insulating paper is implemented as the best option. This study proposes and contrasts different methods for rejuvenating accelerated-aged cellulose insulating paper by chemical and bio-bleaching processes. Of the three bleaching methods investigated, two are chemical: conventional chlorine-based sodium hypochlorite (m/v) and chlorine-free hydrogen peroxide (v/v); the third is a bio-bleaching technique that uses a bacterial isolate, Acinetobacter strain V2. For chemical bleaching, reagent strengths of 0.3%, 0.6%, 0.9%, 1.2%, 1.5%, and 1.8% were analyzed over 4 hrs. Bio-bleaching used the bacterial isolate, Acinetobacter strain V2, to bleach the aged Kraft paper over 4 hrs. Determinations of alpha cellulose content, degree of polymerization, and viscosity were carried out on the Kraft-cellulose insulating paper before and after bleaching. Overall, the investigated chemical and bio-bleaching techniques were successful and effective in treating degraded, accelerated-aged Kraft-cellulose insulating paper, although to varying extents. Optimum conditions for chemical bleaching were attained at bleaching strengths of 1.2% (m/v) NaOCl and 1.5% (v/v) H2O2, yielding alpha cellulose contents of 82.4% and 80.7% and degrees of polymerization of 613 and 616, respectively. Bio-bleaching with Acinetobacter strain V2 proved to be the superior technique, with an alpha cellulose level of 89.0% and a degree of polymerization of 620. Chemical bleaching techniques require careful and controlled clean-up treatments, as they are chlorine- and hydrogen peroxide-based, while bio-bleaching is an extremely eco-friendly technique.

Keywords: alpha cellulose, bio-bleaching, degree of polymerization, Kraft-cellulose insulating paper, transformer, viscosity

Procedia PDF Downloads 261
2354 Analyzing Middle Actors' Influence on Land Use Policy: A Case Study in Central Kalimantan, Indonesia

Authors: Kevin Soubly, Kaysara Khatun

Abstract:

This study applies the existing Middle-Out Perspective (MOP) as a complementary analytical alternative to the customary dichotomous options of top-down vs. bottom-up strategies of international development and commons governance. It expands the framework by applying it to a new context of land management and environmental change, enabling fresh understandings of decision making around land use. Using a case study approach in Central Kalimantan, Indonesia, among a village of indigenous Dayak, this study explores influences from both internal and external middle actors, utilizing qualitative empirical evidence and incorporating responses across 25 village households and 11 key stakeholders. Applying the factors of 'agency' and 'capacity' specific to the MOP, this study demonstrates middle actors' unique capabilities and their criticality to change due to their influence across various levels of decision-making. Study results indicate that middle actors play a large role, both passively and actively, directly and indirectly, across various levels of decision-making, perception-shaping, and commons governance. In addition, the prominence of novel 'passive' middle actors, such as the internet, can provide communities themselves with a level of agency beyond that provided by other middle actors such as NGOs and palm oil industry entities, which often operate at the behest of the 'top' or out of self-interest. Further, the study posits that existing development and decision-making frameworks may misidentify the 'bottom' as the 'middle', raising questions about traditional development and livelihood discourse, strategies, and support, from agricultural production to forest management. In conclusion, this study provides recommendations, including that current policy preconceptions be reevaluated so as to engage middle actors in locally adapted, integrative ways in order to improve governance and rural development efforts more broadly.

Keywords: environmental management, governance, Indonesia, land use, middle actors, middle-out perspective

Procedia PDF Downloads 104
2353 Synthesis of (S)-Naproxen Based Amide Bond Forming Chiral Reagent and Application for Liquid Chromatographic Resolution of (RS)-Salbutamol

Authors: Poonam Malik, Ravi Bhushan

Abstract:

This work describes a very efficient approach for the synthesis of an activated ester of (S)-naproxen, which was characterized by UV, IR, ¹H NMR, elemental analysis, and polarimetric studies. It was used as a C-N bond-forming chiral derivatizing reagent for the synthesis of diastereomeric amides of (RS)-salbutamol (a β₂ agonist of the β-adrenolytic group, marketed as the racemate) under microwave irradiation. The diastereomeric pair was separated by achiral-phase HPLC using a gradient mobile phase containing methanol and aqueous triethylamine phosphate (TEAP); the separation conditions were optimized with respect to pH, flow rate, and buffer concentration, and the separation method was validated as per International Council for Harmonisation (ICH) guidelines. The reagent proved very effective for sensitive on-line detection of the diastereomers, with very low limit of detection (LOD) values of 0.69 and 0.57 ng mL⁻¹ for the diastereomeric derivatives of (S)- and (R)-salbutamol, respectively. The retention times were greatly reduced (2.7 min), with less consumption of organic solvents and a large separation factor (α) compared with literature reports. In addition, the diastereomeric derivatives were separated and isolated by preparative HPLC; these were characterized and used as standard reference samples for recording ¹H NMR and IR spectra to determine absolute configuration and elution order. This confirmed the success of the diastereomeric synthesis, established the reliability of the enantioseparation, and eliminated the requirement for a pure enantiomer of the analyte, which is generally not available. The newly developed reagent can suitably be applied to several other amino-group-containing compounds, whether from organic syntheses or the pharmaceutical industry, because the presence of (S)-Npx as a strong chromophore allows sensitive detection. This work is significant not only in the area of enantioseparation and determination of the absolute configuration of diastereomeric derivatives but also in the area of developing new chiral derivatizing reagents (CDRs).

Keywords: chiral derivatizing reagent, naproxen, salbutamol, synthesis

Procedia PDF Downloads 143
2352 The Requirements of Developing a Framework for Successful Adoption of Quality Management Systems in the Construction Industry

Authors: Mohammed Ali Ahmed, Vaughan Coffey, Bo Xia

Abstract:

Quality management systems (QMSs) in the construction industry are often implemented to ensure that sufficient effort is made by companies to achieve the required levels of quality for clients. Attainment of these quality levels can result in greater customer satisfaction, which is fundamental to ensuring long-term competitiveness for construction companies. However, the construction sector is still lagging behind other industries in terms of its successful adoption of QMSs, due to the relative lack of acceptance of the benefits of these systems among industry stakeholders, as well as other barriers related to implementing them. Thus, there is a critical need to undertake a detailed and comprehensive exploration of the adoption of QMSs in the construction sector. This paper comprehensively investigates, in the construction sector setting, the impacts of all the salient factors surrounding successful implementation of QMSs in building organizations, especially external factors. This study is part of an ongoing PhD project, which aims to develop a new framework that integrates both internal and external factors affecting QMS implementation. To achieve the paper's aim and objectives, interviews will be conducted to define the external factors influencing the adoption of QMSs and to obtain holistic critical success factors (CSFs) for implementing these systems. In the next stage of data collection, a questionnaire survey will be developed to investigate the prime barriers facing the adoption of QMSs, the CSFs for their implementation, and the external factors affecting the adoption of these systems. Following the survey, case studies will be undertaken to validate and explain in greater detail the real effects of these factors on QMS adoption. Specifically, this paper evaluates the effects of the external factors in terms of their impact on implementation success within the selected case studies. Using findings drawn from analyzing the data obtained from these various approaches, specific recommendations for the successful implementation of QMSs will be presented, and an operational framework will be developed. Finally, through a focus group, the findings of the study and the newly developed framework will be validated. Ultimately, this framework will be made available to the construction industry to facilitate the greater adoption and implementation of QMSs. In addition, deployment of the applicable recommendations suggested by the study will be shared with the construction industry to more effectively help construction companies implement QMSs and overcome the barriers experienced by businesses, thus promoting the achievement of higher levels of quality and customer satisfaction.

Keywords: barriers, critical success factors, external factors, internal factors, quality management systems

Procedia PDF Downloads 171
2351 Enhancing Environmental Impact Assessment for Natural Gas Pipeline Systems: Lessons in Water and Wastewater Management

Authors: Kittipon Chittanukul, Chayut Bureethan, Chutimon Piromyaporn

Abstract:

In Thailand, the natural gas pipeline system requires the preparation of an Environmental Impact Assessment (EIA) report, approved by the relevant agency, the Office of Natural Resources and Environmental Policy and Planning (ONEP), in the pre-construction stage. As of December 2022, PTT operates an extensive gas pipeline system spanning the country. Our experience has shown that the EIA is a significant part of the project plan. In 2011, there was a catastrophic flood in multiple areas of Thailand that destroyed lives and property; this event is still fresh in Thai people's minds. Furthermore, rainfall has been increasing for three consecutive years (2020-2022). Moreover, many municipalities are situated in lowland river basins within a tropical rainfall zone, so many areas still suffer from flooding. In 2022 in particular, there was a 60% increase in water demand compared to the previous year. Therefore, all activities must take into account the quality of the receiving water. The above points emphasize that water and wastewater management are significant elements of an EIA report. PTT has accumulated a large number of lessons learned in water and wastewater management. Our pipeline system execution is composed of the EIA stage, the construction stage, and the operation and maintenance phase. We provide practical information on water and wastewater management to enhance the EIA process for the pipeline system. Examples of lessons learned in water and wastewater management include techniques to address water and wastewater impacts throughout the overall pipeline system, mitigation measures, and the monitoring results of these measures. This practical information will alleviate the concerns of the ONEP committee when approving the EIA report and will build trust among stakeholders in the vicinity of the gas pipeline system area.

Keywords: environmental impact assessment, gas pipeline system, low land basin, high risk flooding area, mitigation measure

Procedia PDF Downloads 52
2350 The Mechanisms of Peer-Effects in Education: A Frame-Factor Analysis of Instruction

Authors: Pontus Backstrom

Abstract:

In the educational literature on peer effects, attention has been drawn to the fact that the mechanisms creating peer effects remain to a large extent hidden in obscurity. The hypothesis of this study is that frame factor theory can be used to explain these mechanisms. At the heart of the theory is the concept of the 'time needed' for students to learn a certain curriculum unit. The relation between class-aggregated time needed and the actual time available steers and constrains the actions possible for the teacher. Further, the theory predicts that the timing and pacing of the teacher's instruction are governed by a 'criterion steering group' (CSG), namely the pupils in the 10th-25th percentile of the aptitude distribution in the class. The class composition thereby sets the possibilities and limitations for instruction, creating peer effects on individual outcomes. To test whether the theory can be applied to the issue of peer effects, the study employs multilevel structural equation modelling (M-SEM) on Swedish TIMSS 2015 data (Trends in International Mathematics and Science Study; students N=4090, teachers N=200). Using confirmatory factor analysis (CFA) in the SEM framework in MPLUS, latent variables are specified according to the theory, such as 'limitations of instruction' constructed from TIMSS survey items. The results indicate a good fit of the measurement model to the data. Research is still in progress, but preliminary results from initial M-SEM models verify a strong relation between the mean level of the CSG and the latent variable of limitations on instruction, a variable which in turn has a great impact on individual students' test results. Further analysis is required, but so far the analysis indicates confirmation of the predictions derived from frame factor theory and reveals that one of the important mechanisms creating peer effects in student outcomes is the effect that class composition has upon the teacher's instruction in class.

Keywords: compositional effects, frame factor theory, peer effects, structural equation modelling

Procedia PDF Downloads 124
2349 Analysis and Design of Inductive Power Transfer Systems for Automotive Battery Charging Applications

Authors: Wahab Ali Shah, Junjia He

Abstract:

Transferring electrical power without any wiring has been a dream since the late 19th century. Early advances in the area were mainly concerned with microwave systems, but the subject has recently become very attractive thanks to practical systems. There are low-power applications, such as charging the batteries of contactless toothbrushes or implanted devices, and higher-power applications, such as charging the batteries of electric automobiles or buses. In the first group of applications the operating frequencies are in the microwave range, while the frequency is lower in high-power applications. In the latter, the concept is also called inductive power transfer. The aim of the paper is to give an overview of inductive power transfer for electric vehicles, with a special focus on coil design and power converter simulation for static charging. Coil design is one of the most critical tasks, since it determines whether power transfer is efficient and safe. Power converters are used on both sides of the system: the converter on the primary side generates a high-frequency voltage to excite the primary coil, while the converter on the secondary side rectifies the voltage transferred from the primary to charge the battery. In this paper, an inductive power transfer system is studied. Inductive power transfer is a promising technology with several possible applications. Operation principles of these systems are explained, and the components of the system are described. Finally, a single-phase 2 kW system was simulated and the results are presented. The work presented in this paper is an introduction to the concept. A reformed compensation network based on the traditional inductor-capacitor-inductor (LCL) topology is proposed to achieve a robust response to the large coupling variations that are common in dynamic wireless charging applications. In future work, this type of compensation should be studied further, and different compensation topologies should be compared at the same power level.
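
As a minimal illustration of the compensation sizing behind such designs, the sketch below computes the capacitance that resonates with a given coil inductance at the chosen operating frequency, using the standard resonance condition 2·pi·f0 = 1/sqrt(L·C). The inductance and frequency values are assumptions for illustration, not the paper's 2 kW design parameters:

```python
# Illustrative sketch: size a compensation capacitor so the primary coil
# resonates at the operating frequency f0, via 2*pi*f0 = 1/sqrt(L*C).
import math

def compensation_capacitance(L_coil, f0):
    """Capacitance (F) that resonates with inductance L_coil (H) at f0 (Hz)."""
    omega0 = 2 * math.pi * f0
    return 1.0 / (omega0 ** 2 * L_coil)

L_primary = 120e-6     # hypothetical primary coil inductance, 120 uH
f_operating = 85e3     # a typical static-charging operating frequency, 85 kHz
C = compensation_capacitance(L_primary, f_operating)
print(f"Required compensation capacitance: {C * 1e9:.1f} nF")
```

In an LCL network the same resonance condition is applied per branch, which is one reason the topology tolerates coupling variation better than a single series capacitor.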

Keywords: coil design, contactless charging, electrical automobiles, inductive power transfer, operating frequency

Procedia PDF Downloads 238
2348 An Experimental Investigation of the Surface Pressure on Flat Plates in Turbulent Boundary Layers

Authors: Azadeh Jafari, Farzin Ghanadi, Matthew J. Emes, Maziar Arjomandi, Benjamin S. Cazzolato

Abstract:

The turbulence within the atmospheric boundary layer induces highly unsteady aerodynamic loads on structures. These loads, if not accounted for in the design process, can lead to structural failure and must therefore be considered in structural design. For an accurate prediction of wind loads, understanding the correlation between atmospheric turbulence and the aerodynamic loads is necessary. The aim of this study is to investigate the effect of turbulence within the atmospheric boundary layer on the surface pressure on a flat plate over a wide range of turbulence intensities and integral length scales. The flat plate is chosen as a fundamental geometry representative of structures such as solar panels and billboards. Experiments were conducted at the University of Adelaide large-scale wind tunnel. Two wind tunnel boundary layers with different intensities and length scales of turbulence were generated using two sets of spires with different dimensions and a fetch of roughness elements. Average longitudinal turbulence intensities of 13% and 26% were achieved in the two boundary layers, and the longitudinal integral length scale within these boundary layers was between 0.4 m and 1.22 m. The pressure distributions on a square flat plate at elevation angles between 30° and 90° were measured within the two boundary layers. It was found that the peak pressure coefficient on the flat plate increased with increasing turbulence intensity and integral length scale. For example, the peak pressure coefficient on a flat plate elevated at 90° increased from 1.2 to 3 as the turbulence intensity increased from 13% to 26%. Furthermore, both the mean and the peak pressure distributions on the flat plates varied with turbulence intensity and length scale. The results of this study can be used to provide a more accurate estimation of the unsteady wind loads on structures such as buildings and solar panels.
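
For readers reproducing this kind of analysis, the sketch below shows how mean and peak pressure coefficients might be obtained from a surface-pressure record, using Cp = (p − p_ref)/(0.5·rho·U²). The pressure signal here is synthetic and already referenced to freestream static pressure; the velocity and signal statistics are placeholders, not the wind-tunnel data of this study:

```python
# Illustrative sketch: mean and peak pressure coefficients from a pressure-tap
# time series, Cp = (p - p_ref) / (0.5 * rho * U_mean**2).
import numpy as np

rho = 1.225                    # air density, kg/m^3
U_mean = 12.0                  # hypothetical mean freestream velocity, m/s
q = 0.5 * rho * U_mean ** 2    # dynamic pressure, Pa

rng = np.random.default_rng(1)
# Synthetic tap signal in Pa, gauge relative to freestream static pressure
p_fluctuating = 80.0 + 30.0 * rng.standard_normal(10_000)

cp = p_fluctuating / q
print(f"Mean Cp: {cp.mean():.2f}")
print(f"Peak Cp (99.9th percentile): {np.percentile(cp, 99.9):.2f}")
```

Using a high percentile rather than the raw maximum is a common way to obtain a peak coefficient that is less sensitive to single outliers in the record.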

Keywords: atmospheric boundary layer, flat plate, pressure coefficient, turbulence

Procedia PDF Downloads 128
2347 A Simple and Empirical Refraction Correction Method for UAV-Based Shallow-Water Photogrammetry

Authors: I GD Yudha Partama, A. Kanno, Y. Akamatsu, R. Inui, M. Goto, M. Sekine

Abstract:

The aerial photogrammetry of shallow-water bottoms has the potential to be an efficient high-resolution survey technique for shallow-water topography, thanks to the advent of convenient UAVs and automatic image processing techniques (Structure-from-Motion (SfM) and Multi-View Stereo (MVS)). However, it suffers from a systematic overestimation of the bottom elevation due to light refraction at the air-water interface. In this study, we present an empirical method to correct for the effect of refraction after the usual SfM-MVS processing, using common software. The presented method uses the empirical relation between the measured true depth and the estimated apparent depth to generate an empirical correction factor, which is then used to convert the apparent water depth into a refraction-corrected (real-scale) water depth. To examine its effectiveness, we applied the method to two river sites and compared the RMS errors in the corrected bottom elevations with those obtained by three existing methods. The results show that the presented method is more effective than two of the existing methods: the method that applies no correction factor and the method that uses the refractive index of water (1.34) as the correction factor. In comparison with the remaining existing method, which adds an offset term after calculating the correction factor, the presented method performs well at Site 2 and worse at Site 1. However, we found this linear regression method to be unstable when the training data used for calibration are limited. It also suffers from a large negative bias in the correction factor when the estimated apparent water depth is affected by noise, according to our numerical experiment. Overall, the accuracy of a refraction correction method depends on various factors such as the location, image acquisition, and GPS measurement conditions. The most effective method can be selected by statistical model selection (e.g., leave-one-out cross-validation).
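
The core of such an empirical correction can be sketched in a few lines: fit a single factor relating apparent (SfM-MVS) depth to measured true depth by least squares, then apply it to new apparent depths. The depth arrays below are placeholder values for illustration, not the data of the two river sites:

```python
# Illustrative sketch: fit an empirical refraction-correction factor from paired
# true and apparent depths (least squares through the origin), then apply it.
import numpy as np

true_depth = np.array([0.30, 0.55, 0.80, 1.10, 1.40])       # field-measured depths, m
apparent_depth = np.array([0.22, 0.40, 0.59, 0.82, 1.05])   # SfM-MVS apparent depths, m

# Correction factor k such that true ~= k * apparent
# (cf. the fixed refractive-index factor of ~1.34)
k = np.sum(true_depth * apparent_depth) / np.sum(apparent_depth ** 2)

new_apparent = np.array([0.35, 0.70, 0.95])
corrected = k * new_apparent
print(f"Empirical correction factor: {k:.2f}")
print("Refraction-corrected depths (m):", np.round(corrected, 2))
```

This also makes the reported failure mode visible: with few calibration pairs, or with noisy apparent depths, the fitted factor k can drift well away from its physical value.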

Keywords: bottom elevation, MVS, river, SfM

Procedia PDF Downloads 293
2346 Deorbiting Performance of Electrodynamic Tethers to Mitigate Space Debris

Authors: Giulia Sarego, Lorenzo Olivieri, Andrea Valmorbida, Carlo Bettanini, Giacomo Colombatti, Marco Pertile, Enrico C. Lorenzini

Abstract:

International guidelines recommend removing any artificial body in Low Earth Orbit (LEO) within 25 years of mission completion. Among disposal strategies, electrodynamic tethers appear to be a promising option for LEO, thanks to their limited storage mass and the minimal interface requirements on the host spacecraft. In particular, recent technological advances make it feasible to deorbit large objects with tether lengths of a few kilometers or less. To further investigate such an innovative passive system, the European Union is currently funding the project E.T.PACK – Electrodynamic Tether Technology for Passive Consumable-less Deorbit Kit in the framework of the H2020 Future Emerging Technologies (FET) Open program. The project focuses on the design of an end-of-life disposal kit for LEO satellites. This kit is intended to deploy a taped tether that can be activated at the spacecraft's end of life to perform an autonomous deorbit within the international guidelines. In this paper, the orbital performance of the E.T.PACK deorbiting kit is compared to other disposal methods. In addition, the orbital decay prediction is parametrized as a function of spacecraft mass and tether system performance. Different values of tether length, width, and thickness will be evaluated for various scenarios (i.e., different initial orbital parameters), and the results will be compared to other end-of-life disposal methods with similar allocated resources. The performance of the more innovative configuration, in which the tape is coated with a low work-function (LWT) thermionic material so that no active cathode component is required, will also be briefly discussed. The results show that the electrodynamic tether option can be a competitive and high-performing solution for satellite disposal compared to other deorbit technologies.
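
To give a feel for the scaling behind such parametric decay predictions, the rough sketch below estimates the Lorentz drag force on a current-carrying tether, F = B·I·L, and a crude deorbit timescale from the orbital-energy change divided by the dissipated power. All numerical values (spacecraft mass, tether length, current, field strength, altitudes) are assumptions for illustration, not E.T.PACK design figures:

```python
# Rough order-of-magnitude sketch (not E.T.PACK design data): electrodynamic
# drag force on a current-carrying tether and a crude constant-force estimate
# of the time needed to lower a circular LEO orbit.
import math

MU_EARTH = 3.986e14      # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6371e3         # mean Earth radius, m

m_sat = 500.0            # assumed spacecraft mass, kg
L_tether = 2000.0        # assumed tether length, m
I_avg = 1.0              # assumed average tether current, A
B = 3e-5                 # representative LEO geomagnetic field strength, T

a0 = R_EARTH + 800e3     # assumed initial circular orbit altitude, 800 km
a1 = R_EARTH + 200e3     # assumed re-entry interface altitude, 200 km

F_drag = B * I_avg * L_tether                          # Lorentz drag force, N
v_orb = math.sqrt(MU_EARTH / a0)                       # circular orbital speed, m/s
dE = 0.5 * m_sat * MU_EARTH * (1.0 / a1 - 1.0 / a0)    # orbital energy to remove, J
t_deorbit = dE / (F_drag * v_orb)                      # crude timescale, s

print(f"Drag force: {F_drag:.2f} N")
print(f"Rough deorbit time: {t_deorbit / 86400:.0f} days")
```

Even this crude estimate shows why tether length, current collection, and spacecraft mass are the natural parameters for the decay prediction: the timescale scales directly with mass and inversely with the force the tape can generate.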

Keywords: deorbiting performance, H2020, spacecraft disposal, space electrodynamic tethers

Procedia PDF Downloads 157
2345 Seminal Attributes, Cooling Procedure and Post Thaw Quality of Semen of Indigenous Khari Bucks (Capra hircus) of Nepal

Authors: Pankaj Kumar Jha, Saroj Sapkota, Dil Bahadur Gurung, Raju Kadel, Neena Amatya Gorkhali, Bhola Shankar Shrestha

Abstract:

The study was conducted to evaluate the seminal attributes, the effectiveness of the cooling process, and the post-thaw semen quality of Nepalese indigenous Khari bucks. Thirty-two ejaculates, 16 from each of two bucks, were studied for the seminal attributes of fresh semen: volume, colour, mass activity, motility, viability, sperm concentration, and morphology. The pooled mean values for the seminal attributes were: volume 0.7±0.3 ml; colour 3.1±0.3 (milky white); mass activity 3.8±0.4 (rapid wave motion with eddy formation at the end of the waves, to very rapid wave motion with distinct eddy formation); sperm motility 80.9±5.6%; sperm viability 94.6±2.0%; sperm concentration 2597.0±406.8×10⁶/ml; abnormal acrosome, mid-piece and tail 10.7±1.8%; and abnormal head 5±1.7%. For semen freezing, a further six ejaculates from each buck were studied with a Tris-based egg yolk citrate extender. The pooled mean values of motility and viability of the diluted semen after 90 and 120 minutes of cooling and glycerol equilibration were 73.8±4.8%, 88.1±2.6% and 69.2±6.0%, 85.0±1.7%, respectively. The pooled mean values of post-thaw motility and viability with advancing preservation time were: 0 hour 49.0±4.6%, 81.2±1.9%; 2nd day 41±2.2%, 79±1%; 5th day 41±2.2%, 78.6±0.9%; and 10th day 41±2.2%, 78.6±0.9%. We conclude from this study that the seminal attributes and the post-thaw semen quality were satisfactory and in accordance with work reported from other countries, which indicates the feasibility of cryopreserving Khari buck semen. For further validation, research with a larger number of bucks, different types of diluents, and freezing trials with seminal plasma removal, followed by assessment of pregnancy rates, is recommended.

Keywords: cryopreservation, Nepalese indigenous Khari (Hill goat) buck, post-thaw semen quality, seminal attributes

Procedia PDF Downloads 386