Search results for: small data sets
27779 Remote Sensing of Aerated Flows at Large Dams: Proof of Concept
Authors: Ahmed El Naggar, Homyan Saleh
Abstract:
Dams are crucial for flood control, water supply, and the generation of hydroelectric power. Every dam has a water conveyance system, such as a spillway, providing the safe discharge of catastrophic floods when necessary. Spillway design has historically been investigated in laboratory research owing to the absence of suitable full-scale flow monitoring equipment and to safety concerns. Prototype measurements of aerated flows are urgently needed to quantify projected scale effects and provide missing validation data for design guidelines and numerical simulations. In this work, an image-based investigation of free-surface flows on a tiered spillway was undertaken at the laboratory scale (fixed camera installation) and at the prototype scale (drone footage). The drone videos were obtained through citizen science. Analyses permitted the measurement of the free-surface aeration inception point, air-water surface velocities, fluctuations, and residual energy at the chute's downstream end from a remote site. The prototype observations offered full-scale proof of concept, while laboratory results were efficiently confirmed against invasive phase-detection probe data. This paper stresses the efficacy of image-based analyses at prototype spillways. It highlights how citizen science data may help academics better understand real-world air-water flow dynamics and offers a framework for gathering a small collection of long-missing prototype data. Keywords: remote sensing, aerated flows, large dams, proof of concept, dam spillways, air-water flows, prototype operation, inception point, optical flow, turbulence, residual energy
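As an illustration of the image-based velocimetry described above (not code from the paper), the following minimal Python sketch estimates air-water surface velocities and their fluctuations from consecutive video frames using dense optical flow. The file name, frame rate, and pixel-to-metre scale are assumed placeholders that would normally come from georeferencing the footage.

```python
# Minimal optical-flow surface-velocimetry sketch (illustrative only).
# Assumptions: a grayscale-convertible clip of the aerated chute flow, a known
# frame rate, and a pixel-to-metre scale from ground control points.
import cv2
import numpy as np

VIDEO_PATH = "spillway_drone_clip.mp4"   # hypothetical file name
FPS = 30.0                               # assumed frame rate
METRES_PER_PIXEL = 0.01                  # assumed georeferenced scale

cap = cv2.VideoCapture(VIDEO_PATH)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

speeds = []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Dense Farneback optical flow: surface-texture displacement in pixels/frame
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    magnitude = np.linalg.norm(flow, axis=2)            # pixels per frame
    surface_speed = magnitude * METRES_PER_PIXEL * FPS  # metres per second
    speeds.append(surface_speed.mean())
    prev_gray = gray
cap.release()

print(f"mean surface velocity: {np.mean(speeds):.2f} m/s, "
      f"fluctuation (std): {np.std(speeds):.2f} m/s")
```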
Procedia PDF Downloads 90
27778 Classification of Generative Adversarial Network Generated Multivariate Time Series Data Featuring Transformer-Based Deep Learning Architecture
Authors: Thrivikraman Aswathi, S. Advaith
Abstract:
In cases where the use of real data is limited, such as when it is hard to access a large volume of real data, synthetic data generation becomes necessary. It can produce high-quality synthetic data while maintaining the statistical properties of a specific dataset. In the present work, a generative adversarial network (GAN) is trained to produce multivariate time series (MTS) data, since MTS data are now being gathered more often in various real-world systems. Furthermore, the GAN-generated MTS data are fed into a transformer-based deep learning architecture that carries out the data categorization into predefined classes. Further, the model is evaluated across various distinct domains by generating corresponding MTS data. Keywords: GAN, transformer, classification, multivariate time series
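A minimal PyTorch sketch of a transformer encoder that classifies multivariate time series into predefined classes is shown below. The layer sizes, class count, and mean-pooling readout are assumptions for illustration, not the authors' architecture; the toy batch stands in for GAN-generated MTS data.

```python
# Minimal transformer-based classifier for multivariate time series (MTS).
# Illustrative sketch: layer sizes, pooling and class count are assumptions.
import torch
import torch.nn as nn

class MTSTransformerClassifier(nn.Module):
    def __init__(self, n_features=8, d_model=64, n_heads=4,
                 n_layers=2, n_classes=5):
        super().__init__()
        self.input_proj = nn.Linear(n_features, d_model)   # embed each time step
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)           # class logits

    def forward(self, x):                # x: (batch, seq_len, n_features)
        h = self.encoder(self.input_proj(x))
        return self.head(h.mean(dim=1))  # mean-pool over time, then classify

# Toy usage with a batch standing in for GAN-generated MTS samples
model = MTSTransformerClassifier()
x = torch.randn(16, 100, 8)              # 16 series, 100 steps, 8 variables
labels = torch.randint(0, 5, (16,))
loss = nn.CrossEntropyLoss()(model(x), labels)
loss.backward()
print("toy batch loss:", float(loss))
```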
Procedia PDF Downloads 128
27777 Comparison of Existing Predictor and Development of Computational Method for S-Palmitoylation Site Identification in Arabidopsis thaliana
Authors: Ayesha Sanjana Kawser Parsha
Abstract:
S-acylation is an irreversible bond in which cysteine residues are linked to fatty acids palmitate (74%) or stearate (22%), either at the COOH or NH2 terminal, via a thioester linkage. There are several experimental methods that can be used to identify the S-palmitoylation site; however, since they require a lot of time, computational methods are becoming increasingly necessary. There aren't many predictors, however, that can locate S- palmitoylation sites in Arabidopsis Thaliana with sufficient accuracy. This research is based on the importance of building a better prediction tool. To identify the type of machine learning algorithm that predicts this site more accurately for the experimental dataset, several prediction tools were examined in this research, including the GPS PALM 6.0, pCysMod, GPS LIPID 1.0, CSS PALM 4.0, and NBA PALM. These analyses were conducted by constructing the receiver operating characteristics plot and the area under the curve score. An AI-driven deep learning-based prediction tool has been developed utilizing the analysis and three sequence-based input data, such as the amino acid composition, binary encoding profile, and autocorrelation features. The model was developed using five layers, two activation functions, associated parameters, and hyperparameters. The model was built using various combinations of features, and after training and validation, it performed better when all the features were present while using the experimental dataset for 8 and 10-fold cross-validations. While testing the model with unseen and new data, such as the GPS PALM 6.0 plant and pCysMod mouse, the model performed better, and the area under the curve score was near 1. It can be demonstrated that this model outperforms the prior tools in predicting the S- palmitoylation site in the experimental data set by comparing the area under curve score of 10-fold cross-validation of the new model with the established tools' area under curve score with their respective training sets. The objective of this study is to develop a prediction tool for Arabidopsis Thaliana that is more accurate than current tools, as measured by the area under the curve score. Plant food production and immunological treatment targets can both be managed by utilizing this method to forecast S- palmitoylation sites.Keywords: S- palmitoylation, ROC PLOT, area under the curve, cross- validation score
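The comparison above rests on ROC curves and area-under-the-curve (AUC) scores under 10-fold cross-validation. The sketch below illustrates that evaluation loop only; a generic scikit-learn classifier and random features stand in for the paper's deep model and sequence encodings.

```python
# Sketch of the ROC/AUC evaluation used to compare predictors.
# A generic classifier and random features stand in for the paper's deep model.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))          # placeholder for encoded sequence windows
y = rng.integers(0, 2, size=500)        # 1 = palmitoylated site, 0 = not

aucs = []
for train_idx, test_idx in StratifiedKFold(n_splits=10, shuffle=True,
                                           random_state=0).split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    scores = clf.predict_proba(X[test_idx])[:, 1]
    aucs.append(roc_auc_score(y[test_idx], scores))

print(f"10-fold AUC: {np.mean(aucs):.3f} +/- {np.std(aucs):.3f}")
```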
Procedia PDF Downloads 72
27776 Development of a Multi-Locus DNA Metabarcoding Method for Endangered Animal Species Identification
Authors: Meimei Shi
Abstract:
Objectives: The identification of endangered species, especially simultaneous detection of multiple species in complex samples, plays a critical role in alleged wildlife crime incidents and prevents illegal trade. This study was to develop a multi-locus DNA metabarcoding method for endangered animal species identification. Methods: Several pairs of universal primers were designed according to the mitochondria conserved gene regions. Experimental mixtures were artificially prepared by mixing well-defined species, including endangered species, e.g., forest musk, bear, tiger, pangolin, and sika deer. The artificial samples were prepared with 1-16 well-characterized species at 1% to 100% DNA concentrations. After multiplex-PCR amplification and parameter modification, the amplified products were analyzed by capillary electrophoresis and used for NGS library preparation. The DNA metabarcoding was carried out based on Illumina MiSeq amplicon sequencing. The data was processed with quality trimming, reads filtering, and OTU clustering; representative sequences were blasted using BLASTn. Results: According to the parameter modification and multiplex-PCR amplification results, five primer sets targeting COI, Cytb, 12S, and 16S, respectively, were selected as the NGS library amplification primer panel. High-throughput sequencing data analysis showed that the established multi-locus DNA metabarcoding method was sensitive and could accurately identify all species in artificial mixtures, including endangered animal species Moschus berezovskii, Ursus thibetanus, Panthera tigris, Manis pentadactyla, Cervus nippon at 1% (DNA concentration). In conclusion, the established species identification method provides technical support for customs and forensic scientists to prevent the illegal trade of endangered animals and their products.Keywords: DNA metabarcoding, endangered animal species, mitochondria nucleic acid, multi-locus
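The bioinformatic steps named above (quality trimming, read filtering, OTU clustering, BLAST identification) are normally run with dedicated tools; the toy Python sketch below only illustrates the logic of filtering reads by quality and greedily grouping them by similarity, with thresholds and reads chosen arbitrarily.

```python
# Toy illustration of the read-filtering and OTU-style clustering logic.
# Real pipelines use dedicated tools; thresholds and reads here are arbitrary.
from difflib import SequenceMatcher

reads = [
    ("ACGTACGTGGCCTTAA", [38, 36, 35, 40, 39, 37, 36, 35, 34, 38, 39, 40, 36, 35, 34, 33]),
    ("ACGTACGTGGCCTTAC", [30, 31, 29, 33, 35, 36, 32, 31, 30, 29, 28, 30, 31, 33, 32, 30]),
    ("TTTTGGGGCCCCAAAA", [12, 11, 10, 13, 12, 11, 10, 12, 13, 11, 10, 12, 11, 10, 13, 12]),
]

MIN_MEAN_QUALITY = 25      # assumed Phred-score cutoff
CLUSTER_IDENTITY = 0.97    # common OTU identity threshold

filtered = [seq for seq, quals in reads
            if sum(quals) / len(quals) >= MIN_MEAN_QUALITY]

clusters = []              # each cluster is (representative, members)
for seq in filtered:
    for cluster in clusters:
        identity = SequenceMatcher(None, seq, cluster[0]).ratio()
        if identity >= CLUSTER_IDENTITY:
            cluster[1].append(seq)
            break
    else:
        clusters.append((seq, [seq]))

for rep, members in clusters:
    print(f"representative {rep}: {len(members)} read(s)")
# Representatives would then be searched against a reference database (e.g. with BLASTn).
```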
Procedia PDF Downloads 136
27775 The Effect of Main Factors on Forces during FSJ Processing of AA2024 Aluminum
Authors: Dunwen Zuo, Yongfang Deng, Bo Song
Abstract:
An attempt is made here to measure the forces in three directions, under conditions of different feed speeds, different tool tilt angles, and with or without a pin on the tool, by using an octagonal ring dynamometer in the AA2024 aluminum FSJ (friction stir joining) process, and to investigate how four main factors influence the forces in the FSJ process. It is found that a high feed speed leads to a small feed force and a small lateral force, but to a large feed force in the stable joining stage of the process. As the rotational speed increases, the time required for the axial force to drop from the maximum to the minimum in the push-up process increases. In the stable joining stage, the rotational speed has little effect on the feed force, while a large rotational speed leads to a small lateral force and axial force. The maximum axial force increases with the tool tilt angle during the downward movement stage. At the moment feeding starts, the amplitude of the axial force increase becomes larger as the tool tilt angle increases. In the stable joining stage, as the tool tilt angle increases, the axial force increases, the lateral force decreases, and the feed force remains almost unchanged. The tool with a pin decreases the axial force in the downward movement stage. Compared with the tool without a pin, the tool with a pin increases the feed force and lateral force but reduces the axial force in the stable joining stage. Keywords: FSJ, force factor, AA2024 aluminum, friction stir joining
Procedia PDF Downloads 487
27774 Formation of the Investment Portfolio of Intangible Assets with a Wide Pairwise Comparison Matrix Application
Authors: Gulnara Galeeva
Abstract:
The Analytic Hierarchy Process is widely used in economic and financial studies, including the formation of investment portfolios. In this study, a generalized method of obtaining a vector of priorities is examined for the case in which separate pairwise comparisons of expert opinion are presented as a set of several equal evaluations on a ratio scale. The author claims that this method solves an important and current problem in decision-making theory: excluding the vagueness and ambiguity of expert opinion. The study describes the original wide pairwise comparison matrix. Its application to the formation of an efficient investment portfolio of intangible assets for a small business enterprise with limited funding is considered. The proposed method has been successfully tested on the practical example of a functioning dental clinic. The result of the study confirms that the wide pairwise comparison matrix can be used as a simple and reliable method for forming an enterprise's investment policy. Moreover, a comparison between the method based on the wide pairwise comparison matrix and the classical analytic hierarchy process was conducted; the results of the comparative analysis confirm the correctness of the method based on the wide matrix. The application of a wide pairwise comparison matrix also makes it possible to use standard statistical methods of experimental data processing to obtain the vector of priorities. The new method is accessible to non-specialist users, and its application gives about the same accuracy as the classical analytic hierarchy process. Financial directors of small and medium-sized enterprises thus gain an opportunity to solve their companies' investment problems without resorting to the services of analytical agencies specializing in such studies. Keywords: analytic hierarchy process, decision processes, investment portfolio, intangible assets
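For reference, the sketch below shows the classical AHP computation that the wide-matrix method is compared against: deriving the priority vector from a standard reciprocal pairwise comparison matrix via its principal eigenvector, plus the consistency check. The matrix values are illustrative, not the study's data.

```python
# Baseline AHP computation: priority vector and consistency ratio from a
# standard pairwise comparison matrix (illustrative values, not the study's data).
import numpy as np

# Comparisons of four intangible assets (reciprocal matrix, 1-9 Saaty scale)
A = np.array([
    [1.0, 3.0, 5.0, 1.0],
    [1/3, 1.0, 2.0, 1/2],
    [1/5, 1/2, 1.0, 1/4],
    [1.0, 2.0, 4.0, 1.0],
])

eigenvalues, eigenvectors = np.linalg.eig(A)
k = np.argmax(eigenvalues.real)                    # principal eigenvalue
weights = np.abs(eigenvectors[:, k].real)
weights /= weights.sum()                           # normalized priority vector

n = A.shape[0]
lambda_max = eigenvalues[k].real
ci = (lambda_max - n) / (n - 1)                    # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]                # Saaty's random index
cr = ci / ri                                       # consistency ratio (< 0.10 is acceptable)

print("priorities:", np.round(weights, 3))
print("consistency ratio:", round(cr, 3))
```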
Procedia PDF Downloads 264
27773 Assessing Denitrification-Decomposition Model’s Efficacy in Simulating Greenhouse Gas Emissions, Crop Growth, Yield, and Soil Biochemical Processes in Moroccan Context
Authors: Mohamed Boullouz, Mohamed Louay Metougui
Abstract:
Accurate modeling of greenhouse gas (GHG) emissions, crop growth, soil productivity, and biochemical processes is crucial considering escalating global concerns about climate change and the urgent need to improve agricultural sustainability. The application of the denitrification-decomposition (DNDC) model in the context of Morocco's unique agro-climate is thoroughly investigated in this study. Our main research hypothesis is that the DNDC model offers an effective and powerful tool for precisely simulating a wide range of significant parameters, including greenhouse gas emissions, crop growth, yield potential, and complex soil biogeochemical processes, all consistent with the intricate features of Morocco's agricultural environment. To verify this hypothesis, a large amount of field data covering Morocco's various agricultural regions and encompassing a range of soil types, climatic factors, and crop varieties has been gathered. These experimental data sets will serve as the foundation for careful model calibration and subsequent validation, ensuring the accuracy of the simulation results. In conclusion, the prospective research findings add to the global conversation on climate-resilient agricultural practices while encouraging the promotion of sustainable agricultural models in Morocco. Recognition of the DNDC model as a potent simulation tool tailored to Moroccan conditions may strengthen the ability of policymakers and agricultural actors to make informed decisions that advance not only food security but also environmental stability. Keywords: greenhouse gas emissions, DNDC model, sustainable agriculture, Moroccan cropping systems
Procedia PDF Downloads 63
27772 Quality Analysis of Lake Malawi's Diplotaxodon Fish Species Processed in Solar Tent Dryer versus Open Sun Drying
Authors: James Banda, Jupiter Simbeye, Essau Chisale, Geoffrey Kanyerere, Kings Kamtambe
Abstract:
Improved solar tent dryers for processing small fish species were designed to reduce post-harvest fish losses and improve the supply of quality fish products in the southern part of Lake Malawi under the CultiAF project. A comparative analysis of the quality of Diplotaxodon (Ndunduma) from Lake Malawi processed in a solar tent dryer and by open sun drying was conducted using proximate analysis, microbial analysis, and sensory evaluation. Proximate compositions of solar tent dried and open sun dried fish in terms of protein, fat, moisture, and ash were 63.3±0.15% and 63.3±0.34%, 19.6±0.09% and 19.9±0.25%, 8.3±0.12% and 17.0±0.01%, and 15.6±0.61% and 21.9±0.91%, respectively. Crude protein and crude fat showed non-significant differences (p = 0.05), while moisture and ash content were significantly different (p = 001). Open sun dried fish had significantly higher viable bacterial counts (5.2×10⁶ CFU) than solar tent dried fish (3.9×10² CFU). Counts of bacteria isolated from solar tent dried and open sun dried fish were, respectively, 1.0×10¹ and 7.2×10³ for total coliforms, 0 and 4.5×10³ for Escherichia coli, 0 and 7.5×10³ for Salmonella, 0 and 5.7×10² for Shigella, 4.0×10¹ and 6.1×10³ for Staphylococcus, and 1.0×10¹ and 7.0×10² for Vibrio. Sensory evaluation showed higher acceptability for solar tent dried fish (3.8) than for open sun dried fish (1.7). It is concluded that promoting solar tent drying for processing small fish species in Malawi would help small-scale fish processors produce quality fish in terms of nutritive value, reduced microbial contamination, sensory acceptability, and reduced moisture content. Keywords: diplotaxodon, Malawi, open sun drying, solar tent drying
Procedia PDF Downloads 334
27771 A Privacy Protection Scheme Supporting Fuzzy Search for NDN Routing Cache Data Name
Authors: Feng Tao, Ma Jing, Guo Xian, Wang Jing
Abstract:
Named Data Networking (NDN) replaces the IP addresses of traditional networks with data names and adopts a dynamic caching mechanism. In the existing mechanism, however, only one-to-one search can be achieved because every data item has a unique corresponding name. Since there is a mapping relationship between data content and data name, if the data name is intercepted by an adversary, the privacy of the data content and the user’s interests can hardly be guaranteed. To solve this problem, this paper proposes a one-to-many fuzzy search scheme based on order-preserving encryption that reduces the query overhead by optimizing the caching strategy. In this scheme, hash values are used to keep the user’s query safe from each node during the search process, as well as to protect the privacy of the requested data content. Keywords: NDN, order-preserving encryption, fuzzy search, privacy
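The order-preserving encryption itself is not reproduced here; the sketch below only illustrates the one-to-many idea behind the scheme: hashing each name component so a cache can match a request against several data names by hashed prefix without ever seeing the plaintext names. Names and payloads are made up.

```python
# Illustration of one-to-many (prefix/fuzzy) matching over hashed NDN name
# components. The paper's order-preserving encryption is not reproduced;
# SHA-256 of each component is used only to show the matching logic.
import hashlib

def hashed_components(name: str):
    """Split an NDN-style name and hash each component."""
    parts = [p for p in name.split("/") if p]
    return tuple(hashlib.sha256(p.encode()).hexdigest() for p in parts)

# Content store of a router: hashed name -> cached payload
cache = {
    hashed_components("/video/movies/clip1"): b"...data1...",
    hashed_components("/video/movies/clip2"): b"...data2...",
    hashed_components("/video/news/clip3"): b"...data3...",
}

def fuzzy_lookup(interest_prefix: str):
    """Return every cached item whose hashed name starts with the hashed prefix."""
    prefix = hashed_components(interest_prefix)
    return [name for name in cache if name[:len(prefix)] == prefix]

matches = fuzzy_lookup("/video/movies")
print(f"{len(matches)} cached item(s) match the request prefix")  # -> 2
```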
Procedia PDF Downloads 483
27770 A Study of Learning Achievement for Heat Transfer by Using Experimental Sets of Convection with the Predict-Observe-Explain Teaching Technique
Authors: Wanlapa Boonsod, Nisachon Yangprasong, Udomsak Kitthawee
Abstract:
Thermal physics education is a complicated and challenging topic to discuss in any classroom. As a result, most students tend to be uninterested in learning this topic. In the current study, a convection experimental set was devised to show how heat can be transferred by a convection system to a thermoelectric plate until an LED flashes. This research aimed to 1) create a natural convection experimental set, 2) study learning achievement with the convection experimental set using the predict-observe-explain (POE) technique, and 3) study satisfaction with the convection experimental set using the POE technique. The samples were chosen by purposive sampling and comprised 28 students in grade 11 at Patumkongka School in Bangkok, Thailand. The primary research instrument was the lesson plan for the POE technique on heat transfer using a convection experimental set. The instruments used to collect data included a heat transfer achievement test on convection, a satisfaction questionnaire administered after the learning activity, and the POE technique for heat transfer using the convection experimental set. The research design was a one-group pretest-posttest design. The data were analyzed with the GeoGebra program. The statistics used in the research were the mean, standard deviation, and t-test for dependent samples. The resulting convection experimental set was composed of a thermoelectric module with its top side attached to a heat sink and its other side attached to a stainless steel plate. The electrical current generated was indicated by the flashing of a 5 V LED. The thermoelectric assembly was mounted on top of the box and heated by an alcohol burner. Learning achievement measured with the POE technique and the natural convection experimental set was statistically higher after learning than before learning at the 0.01 level. Satisfaction with the POE technique for learning heat transfer physics using the convection experimental set was at a high level (4.83 out of 5.00). Keywords: convection, heat transfer, physics education, POE
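The dependent-samples (paired) t-test mentioned above compares each student's pretest and posttest scores. A minimal sketch of that comparison follows; the scores are invented for illustration and are not the study's data.

```python
# Dependent-samples (paired) t-test on pretest vs. posttest achievement scores.
# The scores below are made up for illustration; the study's data are not reproduced.
import numpy as np
from scipy import stats

pretest = np.array([12, 10, 14, 9, 11, 13, 8, 10, 12, 11])
posttest = np.array([18, 16, 19, 15, 17, 20, 14, 16, 18, 17])

t_stat, p_value = stats.ttest_rel(posttest, pretest)
gain = posttest - pretest

print(f"mean gain = {gain.mean():.2f} (SD {gain.std(ddof=1):.2f})")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")   # p < 0.01 -> significant improvement
```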
Procedia PDF Downloads 216
27769 Design and Implementation of a Wireless System by Using Microcontrollers: Application for a Drive Acquisition System with Multiple Sensors
Authors: H. Fekhar
Abstract:
The design and implementation of an acquisition system using a radio frequency (RF) ASK module and PIC microcontrollers is proposed in this work. The paper covers both the hardware and software design. The design is divided into two units, namely the sender MCU and the receiver. The system was designed to measure the temperatures of two furnaces and the pressure of a pneumatic process. The wireless transmitter unit uses the 433.95 MHz band and is directly interfaced to a PIC18F4620 microcontroller. The sender unit consists of temperature and pressure sensors, conditioning circuits, a keypad, a graphic liquid crystal display (GLCD), and an RF module. The signal conditioner converts the output of the sensors into an electrical quantity suitable for the display and recording system. The measurement circuits are connected directly to a 10-bit multiplexed A/D converter. The receiver RF module, connected to a second microcontroller, receives the signal via the RF receiver, decodes the address/data, and reproduces the original data. The strategy adopted for establishing communication between the sender MCU and the receiver uses the specific protocol “Header, Address and Data”. The communication protocol dealing with transmission and reception has been successfully implemented. Some experimental results are provided to demonstrate the effectiveness of the proposed wireless system. The embedded system tracks the temperature and pressure signals reasonably well, with a small error. Keywords: microcontrollers, sensors, graphic liquid crystal display, protocol, temperature, pressure
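A host-side Python illustration of a "Header, Address, Data" frame with a simple checksum is shown below. The byte layout (header byte, address, length, payload, checksum) is an assumption for demonstration, not the authors' exact firmware format.

```python
# Host-side illustration of a "Header, Address, Data" frame with a checksum.
# The byte layout below is an assumption for demonstration, not the authors' format.
HEADER = 0xAA

def encode_frame(address: int, data: bytes) -> bytes:
    """Build a frame: header | address | length | payload | checksum."""
    body = bytes([address, len(data)]) + data
    checksum = (HEADER + sum(body)) & 0xFF
    return bytes([HEADER]) + body + bytes([checksum])

def decode_frame(frame: bytes):
    """Validate and split a received frame; return (address, payload) or None."""
    if len(frame) < 4 or frame[0] != HEADER:
        return None
    if (sum(frame[:-1]) & 0xFF) != frame[-1]:
        return None                      # corrupted frame, discard
    address, length = frame[1], frame[2]
    payload = frame[3:3 + length]
    return (address, payload) if len(payload) == length else None

# Example: temperature (x10, e.g. 25.3 C) and pressure readings for node 0x01
reading = (253).to_bytes(2, "big") + (1013).to_bytes(2, "big")
frame = encode_frame(0x01, reading)
print(frame.hex())
print(decode_frame(frame))
```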
Procedia PDF Downloads 457
27768 Resilience of the American Agriculture Sector
Authors: Dipak Subedi, Anil Giri, Christine Whitt, Tia McDonald
Abstract:
This study aims to understand the impact of the pandemic on the overall economic well-being of the agricultural sector of the United States. The two key metrics used to examine economic well-being are the bankruptcy rate of U.S. farm operations and the operating profit margin. One of the primary reasons for farm operations in the U.S. to file for bankruptcy is continuous negative profit or a significant decrease in profit. The pandemic caused significant supply and demand shocks in the domestic market. Furthermore, the ongoing trade disruptions, especially with China, also impacted the prices of agricultural commodities. The significantly reduced demand for ethanol and the closure of meat processing plants affected both livestock and crop producers. This study uses data from courts to examine the bankruptcy rate of U.S. farm operations over time. Preliminary results suggest there was not an increase in farm operations filing for bankruptcy in 2020. This was most likely because of record-high government payments to producers in 2020: the Federal Government made direct payments of more than $45 billion in 2020. One commonly used economic metric to measure farm profitability is the operating profit margin (OPM). The operating profit margin measures profitability as a share of the total value of production and government payments. The Economic Research Service of the United States Department of Agriculture defines a farm operation to be in a) a high-risk zone if the OPM is less than 10 percent and b) a low-risk zone if the OPM is higher than 25 percent. For this study, OPM was calculated for small, medium, and large-scale farm operations using data from the Agricultural Resource Management Survey (ARMS). Results show that, except for small family farms, the share of farms in the high-risk zone decreased in 2020 compared to the most recent non-pandemic year, 2019. This was most likely due to higher commodity prices at the end of 2020 and record-high government payments. Further investigation suggests that smaller farm operations received lower average government payments, resulting in a large share (over 70 percent) being in the critical zone. This study should be of interest to multiple stakeholders, including policymakers across the globe, as it shows the resilience of the U.S. agricultural system as well as (some of) the impact of government payments. Keywords: U.S. farm sector, COVID-19, operating profit margin, farm bankruptcy, ag finance, government payments to the farm sector
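The operating profit margin calculation and the risk-zone thresholds cited above are simple enough to state directly; the sketch below spells them out, with invented farm figures (and a "medium-risk" label assumed for values between the two thresholds).

```python
# Operating profit margin (OPM) and the risk-zone classification described above.
# The example farm figures are invented for illustration.
def operating_profit_margin(operating_profit: float,
                            value_of_production: float,
                            government_payments: float) -> float:
    """OPM = operating profit / (total value of production + government payments)."""
    return operating_profit / (value_of_production + government_payments)

def risk_zone(opm: float) -> str:
    """High risk below 10%, low risk above 25%; in between labelled medium here."""
    if opm < 0.10:
        return "high-risk"
    if opm > 0.25:
        return "low-risk"
    return "medium-risk"

farms = {
    "small family farm": (8_000, 95_000, 12_000),
    "midsize farm":      (60_000, 320_000, 25_000),
    "large farm":        (400_000, 1_200_000, 60_000),
}
for name, (profit, production, payments) in farms.items():
    opm = operating_profit_margin(profit, production, payments)
    print(f"{name}: OPM = {opm:.1%} -> {risk_zone(opm)} zone")
```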
Procedia PDF Downloads 87
27767 The Influence of Ecologically-Valid High- and Low-Volume Resistance Training on Muscle Strength and Size in Trained Men
Authors: Jason Dellatolla, Scott Thomas
Abstract:
Much of the current literature pertaining to resistance training (RT) volume prescription lacks ecological validity, and very few studies investigate true high-volume ranges. Purpose: The present study sought to investigate the effects of ecologically-valid high- vs low-volume RT on muscular size and strength in trained men. Methods: This study systematically randomized trained, college-aged men into two groups: low-volume (LV; n = 4) and high-volume (HV; n = 5). The sample size was affected by COVID-19 limitations. Subjects followed an ecologically-valid 6-week RT program targeting both muscle size and strength. RT occurred 3x/week on non-consecutive days. Over the course of six weeks, LVR and HVR gradually progressed from 15 to 23 sets/week and 30 to 46 sets/week of lower-body RT, respectively. Muscle strength was assessed via 3RM tests in the squat, stiff-leg deadlift (SL DL), and leg press. Muscle hypertrophy was evaluated through a combination of DXA, BodPod, and ultrasound (US) measurements. Results: Two-way repeated-measures ANOVAs indicated that strength in all 3 compound lifts increased significantly among both groups (p < 0.01); between-group differences only occurred in the squat (p = 0.02) and SL DL (p = 0.03), both of which favored HVR. Significant pre-to-post-study increases in indicators of hypertrophy were discovered for lean body mass in the legs via DXA, overall fat-free mass via BodPod, and US measures of muscle thickness (MT) for the rectus femoris, vastus intermedius, vastus medialis, vastus lateralis, long-head of the biceps femoris, and total MT. Between-group differences were only found for MT of the vastus medialis – favoring HVR. Moreover, each additional weekly set of lower-body RT was associated with an average increase in MT of 0.39% in the thigh muscles. Conclusion: We conclude that ecologically-valid RT regimens significantly improve muscular strength and indicators of hypertrophy. When HVR is compared to LVR, HVR provides significantly greater gains in muscular strength but has no greater effect on hypertrophy over the course of 6 weeks in trained, college-aged men.Keywords: ecological validity, hypertrophy, resistance training, strength
Procedia PDF Downloads 113
27766 Performance Analysis of Heterogeneous Cellular Networks with Multiple Connectivity
Authors: Sungkyung Kim, Jee-Hyeon Na, Dong-Seung Kwon
Abstract:
Future mobile networks following 5th generation will be characterized by one thousand times higher gains in capacity; connections for at least one hundred billion devices; user experience capable of extremely low latency and response times. To be close to the capacity requirements and higher reliability, advanced technologies have been studied, such as multiple connectivity, small cell enhancement, heterogeneous networking, and advanced interference and mobility management. This paper is focused on the multiple connectivity in heterogeneous cellular networks. We investigate the performance of coverage and user throughput in several deployment scenarios. Using the stochastic geometry approach, the SINR distributions and the coverage probabilities are derived in case of dual connection. Also, to compare the user throughput enhancement among the deployment scenarios, we calculate the spectral efficiency and discuss our results.Keywords: heterogeneous networks, multiple connectivity, small cell enhancement, stochastic geometry
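In the spirit of the stochastic geometry approach mentioned above, the sketch below runs a Monte Carlo estimate of downlink coverage probability with base stations drawn from a Poisson point process, strongest-cell association, and Rayleigh fading. The parameters are illustrative, and the paper's dual-connectivity and small-cell scenarios are not modeled.

```python
# Monte Carlo sketch of downlink coverage probability with a Poisson point
# process of base stations, strongest-cell association and Rayleigh fading.
# Parameters are illustrative; dual connectivity is not modeled here.
import numpy as np

rng = np.random.default_rng(1)
LAMBDA = 5e-6          # base stations per m^2
RADIUS = 5_000.0       # simulation disc radius (m)
ALPHA = 4.0            # path-loss exponent
NOISE = 1e-13          # noise power (arbitrary units)
SINR_THRESHOLD = 1.0   # 0 dB
TRIALS = 2000

covered = 0
area = np.pi * RADIUS**2
for _ in range(TRIALS):
    n_bs = rng.poisson(LAMBDA * area)
    if n_bs == 0:
        continue
    # Uniformly distribute BSs in the disc; the typical user sits at the origin.
    r = RADIUS * np.sqrt(rng.random(n_bs))
    fading = rng.exponential(1.0, n_bs)            # Rayleigh fading -> exponential power
    rx_power = fading * r**(-ALPHA)
    serving = np.argmax(rx_power)                  # strongest-cell association
    interference = rx_power.sum() - rx_power[serving]
    sinr = rx_power[serving] / (interference + NOISE)
    covered += sinr > SINR_THRESHOLD

print(f"coverage probability P(SINR > 0 dB) ~= {covered / TRIALS:.3f}")
```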
Procedia PDF Downloads 330
27765 Microfinance and Women Empowerment in Bangladesh: Impact in Economic Dimension
Authors: Abm Mostafa, Rumbidzai Mukono, Peijie Wang
Abstract:
Using 285 respondents from two microfinance institutions, this research aims to assess the impact of microfinance on women’s economic empowerment in Bangladesh. Empirical measures of economic empowerment used in this paper are underpinned by a bargaining theory of household. Questionnaire is used for data collection following purposive sampling. Descriptive statistics, chi-square test, Kruskal-Wallis test, binary, and ordinal logistic regressions are deployed for data analysis. The findings of this study show that around three quarters of respondents have increased household income. They have increased their savings overwhelmingly; nonetheless, many of them are found to have a very small amount of savings. Still, more than half of the respondents are reported to have increased their savings when it is checked against at least 500 BDT per month. On the contrary, the percentage of women is moderate in terms of increasing control over finances. Empirical findings demonstrate the evidence of a relationship between the amount of loan and women’s household income, their savings, and control over finances. Nonetheless, no relationship is found in women’s areas. This study infers that women’s access to financial resources is fundamental to empower them in economic dimension.Keywords: microfinance, women, economic, empowerment, Bangladesh
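A minimal sketch of the binary logistic regression described above follows, relating loan amount to a binary empowerment outcome such as increased savings. The data are synthetic stand-ins for the survey responses; the variable names and coefficients are assumptions.

```python
# Binary logistic regression of an empowerment outcome (e.g., increased savings)
# on loan amount, with synthetic data standing in for the survey responses.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 285
loan_amount = rng.uniform(5_000, 50_000, n)        # BDT, synthetic
# Synthetic outcome: probability of increased savings rises with loan amount.
logit = -2.0 + 0.00008 * loan_amount
increased_savings = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(loan_amount)                   # intercept + loan amount
model = sm.Logit(increased_savings, X).fit(disp=False)
print(model.summary())
odds_ratio = np.exp(model.params[1])
print(f"odds ratio per additional taka of loan: {odds_ratio:.6f}")
```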
Procedia PDF Downloads 131
27764 A Study of Growth Factors on Sustainable Manufacturing in Small and Medium-Sized Enterprises: Case Study of Japan Manufacturing
Authors: Tadayuki Kyoutani, Shigeyuki Haruyama, Ken Kaminishi, Zefry Darmawan
Abstract:
Japan’s semiconductor industries have developed greatly in recent years. Many started as small and medium-sized enterprises (SMEs) that found themselves in favorable circumstances and have now become prosperous industries in the world. Sustainable growth factors that support the creation of spirit value inside Japanese companies were strongly embedded in their performance, yet those factors were not clearly defined within each company. A series of literature studies was conducted, using quantitative text mining, to explore the definition of sustainable growth factors. Sustainability criteria were developed from previous research to verify the definition of the factors. A typical framework was proposed as a systematic approach to developing sustainable growth factors in a specific company. A review of the approach over a certain period shows that the factors influencing sustainable growth were important for the company to achieve its goals. Keywords: SME, manufacture, sustainable, growth factor
Procedia PDF Downloads 248
27763 Ecosystem Modeling along the Western Bay of Bengal
Authors: A. D. Rao, Sachiko Mohanty, R. Gayathri, V. Ranga Rao
Abstract:
Modeling on coupled physical and biogeochemical processes of coastal waters is vital to identify the primary production status under different natural and anthropogenic conditions. About 7, 500 km length of Indian coastline is occupied with number of semi enclosed coastal bodies such as estuaries, inlets, bays, lagoons, and other near shore, offshore shelf waters, etc. This coastline is also rich in wide varieties of ecosystem flora and fauna. Directly/indirectly extensive domestic and industrial sewage enter into these coastal water bodies affecting the ecosystem character and create environment problems such as water quality degradation, hypoxia, anoxia, harmful algal blooms, etc. lead to decline in fishery and other related biological production. The present study is focused on the southeast coast of India, starting from Pulicat to Gulf of Mannar, which is rich in marine diversity such as lagoon, mangrove and coral ecosystem. Three dimensional Massachusetts Institute of Technology general circulation model (MITgcm) along with Darwin biogeochemical module is configured for the western Bay of Bengal (BoB) to study the biogeochemistry over this region. The biogeochemical module resolves the cycling of carbon, phosphorous, nitrogen, silica, iron and oxygen through inorganic, living, dissolved and particulate organic phases. The model domain extends from 4°N-16.5°N and 77°E-86°E with a horizontal resolution of 1 km. The bathymetry is derived from General Bathymetric Chart of the Oceans (GEBCO), which has a resolution of 30 sec. The model is initialized by using the temperature, salinity filed from the World Ocean Atlas (WOA2013) of National Oceanographic Data Centre with a resolution of 0.25°. The model is forced by the surface wind stress from ASCAT and the photosynthetically active radiation from the MODIS-Aqua satellite. Seasonal climatology of nutrients (phosphate, nitrate and silicate) for the southwest BoB region are prepared using available National Institute of Oceanography (NIO) in-situ data sets and compared with the WOA2013 seasonal climatology data. The model simulations with the two different initial conditions viz., WOA2013 and the generated NIO climatology, showed evident changes in the concentration and the evolution of the nutrients in the study region. It is observed that the availability of nutrients is more in NIO data compared to WOA in the model domain. The model simulated primary productivity is compared with the spatially distributed satellite derived chlorophyll data and at various locations with the in-situ data. The seasonal variability of the model simulated primary productivity is also studied.Keywords: Bay of Bengal, Massachusetts Institute of Technology general circulation model, MITgcm, biogeochemistry, primary productivity
Procedia PDF Downloads 140
27762 Data Disorders in Healthcare Organizations: Symptoms, Diagnoses, and Treatments
Authors: Zakieh Piri, Shahla Damanabi, Peyman Rezaii Hachesoo
Abstract:
Introduction: Healthcare organizations, like other organizations, suffer from a number of disorders such as Business Sponsor Disorder, Business Acceptance Disorder, Cultural/Political Disorder, Data Disorder, etc. As quality in healthcare mostly depends on the quality of data, we aimed to identify data disorders and their symptoms in two teaching hospitals. Methods: Using a self-constructed questionnaire, we asked 20 questions related to the quality and usability of patient data stored in patient records. The research population consisted of 150 managers, physicians, nurses, and medical record staff who were working at the time of the study. We also asked their views about the symptoms of and treatments for any data disorders they mentioned in the questionnaire. The answers were analyzed using qualitative methods. Results: After classifying the answers, we found six main data disorders: incomplete data, missed data, late data, blurred data, manipulated data, and illegible data. The majority of participants believed they had important roles in the treatment of data disorders, while others attributed the disorders to health system problems. Discussion: As clinicians have important roles in producing data, they can easily identify the symptoms and disorders of patient data. Health information managers can also play important roles in the early detection of data disorders through proactive monitoring and periodic check-ups of data. Keywords: data disorders, quality, healthcare, treatment
Procedia PDF Downloads 431
27761 Analysis of Eco-Efficiency and the Determinants of Family Agriculture in Southeast Spain
Authors: Emilio Galdeano-Gómez, Ángeles Godoy-Durán, Juan C. Pérez-Mesa, Laura Piedra-Muñoz
Abstract:
Eco-efficiency is receiving ever-increasing interest as an indicator of sustainability, as it links environmental and economic performances in productive activities. In agriculture, these indicators and their determinants prove relevant due to the close relationships in this activity between the use of natural resources, which is generally limited, and the provision of basic goods to society. In this context, various analyses have focused on eco-efficiency by considering individual family farms as the basic production unit. However, not only must the measure of efficiency be taken into account, but also the existence of a series of factors which constitute socio-economic, political-institutional, and environmental determinants. Said factors have been studied to a lesser extent in the literature. The present work analyzes eco-efficiency at a micro level, focusing on small-scale family farms as the main decision-making units in horticulture in southeast Spain, a sector which represents about 30% of the fresh vegetables produced in the country and about 20% of those consumed in Europe. The objectives of this study are a) to obtain a series of eco-efficiency indicators by estimating several pressure ratios and economic value added in farming, b) to analyze the influence of specific social, economic and environmental variables on the aforementioned eco-efficiency indicators. The present work applies the method of Data Envelopment Analysis (DEA), which calculates different combinations of environmental pressures (water usage, phytosanitary contamination, waste management, etc.) and aggregate economic value. In a second stage, an analysis is conducted on the influence of the socio-economic and environmental characteristics of family farms on the eco-efficiency indicators, as endogeneous variables, through the use of truncated regression and bootstrapping techniques, following Simar-Wilson methodology. The results reveal considerable inefficiency in aspects such as waste management, while there is relatively little inefficiency in water usage and nitrogen balance. On the other hand, characteristics, such as product specialization, the adoption of quality certifications and belonging to a cooperative do have a positive impact on eco-efficiency. These results are deemed to be of interest to agri-food systems structured on small-scale producers, and they may prove useful to policy-makers as regards managing public environmental programs in agriculture.Keywords: data envelopment analysis, eco-efficiency, family farms, horticulture, socioeconomic features
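The Data Envelopment Analysis step mentioned above can be written as a small linear program per farm. The sketch below solves an input-oriented CCR model with toy inputs (environmental pressures) and one output (economic value added); the second-stage Simar-Wilson truncated regression and bootstrap are not reproduced here.

```python
# Input-oriented CCR DEA efficiency for each farm, solved as a linear program.
# Toy inputs (water use, nitrogen surplus) and output (value added); the
# second-stage Simar-Wilson truncated regression is not reproduced here.
import numpy as np
from scipy.optimize import linprog

# rows = inputs/outputs, columns = farms (toy figures)
X = np.array([[30.0, 45.0, 25.0, 60.0],      # water use
              [12.0, 20.0, 10.0, 25.0]])     # nitrogen surplus
Y = np.array([[100.0, 120.0, 90.0, 140.0]])  # economic value added

n_inputs, n_farms = X.shape
n_outputs = Y.shape[0]

def ccr_efficiency(o: int) -> float:
    """Efficiency score of farm o (1.0 = on the efficient frontier)."""
    c = np.zeros(1 + n_farms)
    c[0] = 1.0                                   # minimize theta
    # inputs: sum_j lambda_j * x_ij - theta * x_io <= 0
    a_in = np.hstack([-X[:, [o]], X])
    # outputs: -sum_j lambda_j * y_rj <= -y_ro
    a_out = np.hstack([np.zeros((n_outputs, 1)), -Y])
    A_ub = np.vstack([a_in, a_out])
    b_ub = np.concatenate([np.zeros(n_inputs), -Y[:, o]])
    bounds = [(None, None)] + [(0, None)] * n_farms
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

for o in range(n_farms):
    print(f"farm {o}: eco-efficiency score = {ccr_efficiency(o):.3f}")
```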
Procedia PDF Downloads 193
27760 Household Climate-Resilience Index Development for the Health Sector in Tanzania: Use of Demographic and Health Surveys Data Linked with Remote Sensing
Authors: Heribert R. Kaijage, Samuel N. A. Codjoe, Simon H. D. Mamuya, Mangi J. Ezekiel
Abstract:
There is strong evidence that climate has changed significantly affecting various sectors including public health. The recommended feasible solution is adopting development trajectories which combine both mitigation and adaptation measures for improving resilience pathways. This approach demands a consideration for complex interactions between climate and social-ecological systems. While other sectors such as agriculture and water have developed climate resilience indices, the public health sector in Tanzania is still lagging behind. The aim of this study was to find out how can we use Demographic and Health Surveys (DHS) linked with Remote Sensing (RS) technology and metrological information as tools to inform climate change resilient development and evaluation for the health sector. Methodological review was conducted whereby a number of studies were content analyzed to find appropriate indicators and indices for climate resilience household and their integration approach. These indicators were critically reviewed, listed, filtered and their sources determined. Preliminary identification and ranking of indicators were conducted using participatory approach of pairwise weighting by selected national stakeholders from meeting/conferences on human health and climate change sciences in Tanzania. DHS datasets were retrieved from Measure Evaluation project, processed and critically analyzed for possible climate change indicators. Other sources for indicators of climate change exposure were also identified. For the purpose of preliminary reporting, operationalization of selected indicators was discussed to produce methodological approach to be used in resilience comparative analysis study. It was found that household climate resilient index depends on the combination of three indices namely Household Adaptive and Mitigation Capacity (HC), Household Health Sensitivity (HHS) and Household Exposure Status (HES). It was also found that, DHS alone cannot complement resilient evaluation unless integrated with other data sources notably flooding data as a measure of vulnerability, remote sensing image of Normalized Vegetation Index (NDVI) and Metrological data (deviation from rainfall pattern). It can be concluded that if these indices retrieved from DHS data sets are computed and scientifically integrated can produce single climate resilience index and resilience maps could be generated at different spatial and time scales to enhance targeted interventions for climate resilient development and evaluations. However, further studies are need to test for the sensitivity of index in resilience comparative analysis among selected regions.Keywords: climate change, resilience, remote sensing, demographic and health surveys
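A small sketch of combining the three sub-indices named above (HC, HHS, HES) into a single household resilience index is given below. The min-max normalization, equal weights, and additive aggregation are assumptions for illustration; the study leaves the exact integration approach to further work.

```python
# Combining the three sub-indices named above (HC, HHS, HES) into one household
# climate-resilience index. Normalization, weights and aggregation are assumptions.
import numpy as np

def min_max(values: np.ndarray) -> np.ndarray:
    """Rescale a sub-index to the 0-1 range across households."""
    return (values - values.min()) / (values.max() - values.min())

# Toy household scores (one value per household)
hc = np.array([0.4, 0.7, 0.2, 0.9])    # adaptive and mitigation capacity
hhs = np.array([0.6, 0.3, 0.8, 0.2])   # health sensitivity (higher = more sensitive)
hes = np.array([0.5, 0.4, 0.9, 0.1])   # exposure status (higher = more exposed)

weights = {"capacity": 1/3, "sensitivity": 1/3, "exposure": 1/3}  # assumed equal

# Higher capacity raises resilience; higher sensitivity and exposure lower it.
resilience = (weights["capacity"] * min_max(hc)
              + weights["sensitivity"] * (1 - min_max(hhs))
              + weights["exposure"] * (1 - min_max(hes)))

for i, score in enumerate(resilience):
    print(f"household {i}: resilience index = {score:.2f}")
```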
Procedia PDF Downloads 164
27759 Big Data and Analytics in Higher Education: An Assessment of Its Status, Relevance and Future in the Republic of the Philippines
Authors: Byron Joseph A. Hallar, Annjeannette Alain D. Galang, Maria Visitacion N. Gumabay
Abstract:
One of the unique challenges provided by the twenty-first century to Philippine higher education is the utilization of Big Data. The higher education system in the Philippines is generating burgeoning amounts of data that contains relevant data that can be used to generate the information and knowledge needed for accurate data-driven decision making. This study examines the status, relevance and future of Big Data and Analytics in Philippine higher education. The insights gained from the study may be relevant to other developing nations similarly situated as the Philippines.Keywords: big data, data analytics, higher education, republic of the philippines, assessment
Procedia PDF Downloads 347
27758 DYVELOP Method Implementation for the Research Development in Small and Middle Enterprises
Authors: Jiří F. Urbánek, David Král
Abstract:
Small and Middle Enterprises (SME) have a specific mission, characteristics, and behavior in global business competitive environments. They must respect policy, rules, requirements and standards in all their inherent and outer processes of supply - customer chains and networks. Paper aims and purposes are to introduce computational assistance, which enables us the using of prevailing operation system MS Office (SmartArt...) for mathematical models, using DYVELOP (Dynamic Vector Logistics of Processes) method. It is providing for SMS´s global environment the capability and profit to achieve its commitment regarding the effectiveness of the quality management system in customer requirements meeting and also the continual improvement of the organization’s and SME´s processes overall performance and efficiency, as well as its societal security via continual planning improvement. DYVELOP model´s maps - the Blazons are able mathematically - graphically express the relationships among entities, actors, and processes, including the discovering and modeling of the cycling cases and their phases. The blazons need live PowerPoint presentation for better comprehension of this paper mission – added value analysis. The crisis management of SMEs is obliged to use the cycles for successful coping of crisis situations. Several times cycling of these cases is a necessary condition for the encompassment of the both the emergency event and the mitigation of organization´s damages. Uninterrupted and continuous cycling process is a good indicator and controlling actor of SME continuity and its sustainable development advanced possibilities.Keywords: blazons, computational assistance, DYVELOP method, small and middle enterprises
Procedia PDF Downloads 340
27757 Climate Change, Agriculture and Food Security in Sub-Saharan Africa: What Effects and What Answers?
Authors: Abdoulahad Allamine
Abstract:
The objective of this study is to assess the impact of climate variability on agriculture and food security in 43 countries of sub-Saharan Africa. We use for this purpose the data from BADC bases, UNCTAD, and WDI FAOSTAT to estimate a VAR model on panel data. The sample is divided into three (03) agro-climatic zones, more explicitly the equatorial zone, the Sahel region and the semi-arid zone. This allows to highlight the differential impacts sustained by countries and appropriate responses to each group of countries. The results show that the sharp fluctuations in the volume of rainfall negatively affect agriculture and food security of countries in the equatorial zone, with heavy rainfall and high temperatures in the Sahel region. However, countries with low temperatures and low rainfall are the least affected. The hedging policies against the risks of climate variability must be more active in the first two groups of countries. On this basis and in general, we recommend integration of agricultural policies between countries is done to reduce the effects of climate variability on agriculture and food security. It would be logical to encourage regional and international closer collaboration on the development and dissemination of improved varieties, ecological intensification, and management of biotic and abiotic stresses facing these climate variability to sustainably increase food production. Small farmers also need training in agricultural risk hedging techniques related to climate variations; this requires an increase in state budgets allocated to agriculture.Keywords: agro-climatic zones, climate variability, food security, Sub-Saharan Africa, VAR on panel data
Procedia PDF Downloads 386
27756 Data Management and Analytics for Intelligent Grid
Authors: G. Julius P. Roy, Prateek Saxena, Sanjeev Singh
Abstract:
Power distribution utilities two decades ago would collect data from its customers not later than a period of at least one month. The origin of SmartGrid and AMI has subsequently increased the sampling frequency leading to 1000 to 10000 fold increase in data quantity. This increase is notable and this steered to coin the tern Big Data in utilities. Power distribution industry is one of the largest to handle huge and complex data for keeping history and also to turn the data in to significance. Majority of the utilities around the globe are adopting SmartGrid technologies as a mass implementation and are primarily focusing on strategic interdependence and synergies of the big data coming from new information sources like AMI and intelligent SCADA, there is a rising need for new models of data management and resurrected focus on analytics to dissect data into descriptive, predictive and dictatorial subsets. The goal of this paper is to is to bring load disaggregation into smart energy toolkit for commercial usage.Keywords: data management, analytics, energy data analytics, smart grid, smart utilities
Procedia PDF Downloads 778
27755 An Investigation on Opportunities and Obstacles on Implementation of Building Information Modelling for Pre-fabrication in Small and Medium Sized Construction Companies in Germany: A Practical Approach
Authors: Nijanthan Mohan, Rolf Gross, Fabian Theis
Abstract:
The conventional method used in the construction industries often resulted in significant rework since most of the decisions were taken onsite under the pressure of project deadlines and also due to the improper information flow, which results in ineffective coordination. However, today’s architecture, engineering, and construction (AEC) stakeholders demand faster and accurate deliverables, efficient buildings, and smart processes, which turns out to be a tall order. Hence, the building information modelling (BIM) concept was developed as a solution to fulfill the above-mentioned necessities. Even though BIM is successfully implemented in most of the world, it is still in the early stages in Germany, since the stakeholders are sceptical of its reliability and efficiency. Due to the huge capital requirement, the small and medium-sized construction companies are still reluctant to implement BIM workflow in their projects. The purpose of this paper is to analyse the opportunities and obstacles to implementing BIM for prefabrication. Among all other advantages of BIM, pre-fabrication is chosen for this paper because it plays a vital role in creating an impact on time as well as cost factors of a construction project. The positive impact of prefabrication can be explicitly observed by the project stakeholders and participants, which enables the breakthrough of the skepticism factor among the small scale construction companies. The analysis consists of the development of a process workflow for implementing prefabrication in building construction, followed by a practical approach, which was executed with two case studies. The first case study represents on-site prefabrication, and the second was done for off-site prefabrication. It was planned in such a way that the first case study gives a first-hand experience for the workers at the site on the BIM model so that they can make much use of the created BIM model, which is a better representation compared to the traditional 2D plan. The main aim of the first case study is to create a belief in the implementation of BIM models, which was succeeded by the execution of offshore prefabrication in the second case study. Based on the case studies, the cost and time analysis was made, and it is inferred that the implementation of BIM for prefabrication can reduce construction time, ensures minimal or no wastes, better accuracy, less problem-solving at the construction site. It is also observed that this process requires more planning time, better communication, and coordination between different disciplines such as mechanical, electrical, plumbing, architecture, etc., which was the major obstacle for successful implementation. This paper was carried out in the perspective of small and medium-sized mechanical contracting companies for the private building sector in Germany.Keywords: building information modelling, construction wastes, pre-fabrication, small and medium sized company
Procedia PDF Downloads 111
27754 Religious Government Interaction in Urban Settings
Authors: Rebecca Sager, Gary Adler, Damon Mayrl, Jonathan Cooley
Abstract:
The United States’ unique constitutional structure and religious roots have fostered the flourishing of local communities through the close interaction of church and state. Today, these local relationships play out in changed circumstances, including increased religious diversity and jurisprudence that is more accommodating of church-state interaction. This project seeks to understand the meanings of church-state interaction among diverse religious leaders in a variety of local settings. Using data from interviews with over 200 religious leaders in six states in the US, we examine how religious groups interact with various non-elected and elected government officials. We have interviewed local religious actors in eight communities characterized by differences in location and religious homogeneity. These include a small city within a major metropolitan area, several religiously diverse cities in various areas across the country, a small college town with religious diversity set in a religiously homogeneous rural area, and a small farming community with minimal religious diversity. We identified three types of religious actors in each of our geographic areas: congregations, religious non-profit organizations, and clergy coalitions. Given the well-known difficulties in identifying religious organizations, we used the following to construct a local population list from which to sample: the Association of Religion Data Archives, ProPublica’s Nonprofit Explorer, Guidestar, and the Internal Revenue Service Exempt Business Master File. Our sample of interviewees was stratified by three criteria: religious tradition (Christian vs. non-Christian), sectarian orientation (Mainline/Catholic vs. Evangelical Protestant), and organizational form (congregation vs. other). Each interview included the elicitation of local church-state interactions experienced by the organization and its members, the enumeration of information sources for navigating church-state interactions, and the personal and community background of interviewees. We coded interviews to identify the cognitive schema of “church” and “state,” the models of legitimate relations between the two, and discretion rules for managing interaction and avoiding conflict. We also enumerate the arenas in which, and issues for which, local state officials are engaged. In this paper, we focus on Korean religious groups and examine how their interactions differ from those of other congregations, including other immigrant congregations. These churches were particularly common in one large metropolitan area. We find that Korean churches are much more likely to be concerned about any governmental interactions and have fewer connections than non-Korean churches, leading to greater disconnection from their communities. We argue that, due to their status as new immigrant churches in a large city, with many members lacking community ties, Korean churches were particularly concerned about too much interaction with any type of government officials, even ones that could potentially be helpful. While other immigrant churches, such as Latino-based Catholic groups, were somewhat willing to work with government bodies, Korean churches were the least likely to want to create these connections. Understanding these churches, and how immigrant church identity varies and creates different types of interaction, is crucial to understanding how church-state interaction can be more meaningful over space and place. Keywords: religion, congregations, government, politics
Procedia PDF Downloads 87
27753 Image Segmentation with Deep Learning of Prostate Cancer Bone Metastases on Computed Tomography
Authors: Joseph M. Rich, Vinay A. Duddalwar, Assad A. Oberai
Abstract:
Prostate adenocarcinoma is the most common cancer in males, with osseous metastases as the commonest site of metastatic prostate carcinoma (mPC). Treatment monitoring is based on the evaluation and characterization of lesions on multiple imaging studies, including Computed Tomography (CT). Monitoring of the osseous disease burden, including follow-up of lesions and identification and characterization of new lesions, is a laborious task for radiologists. Deep learning algorithms are increasingly used to perform tasks such as identification and segmentation for osseous metastatic disease and provide accurate information regarding metastatic burden. Here, nnUNet was used to produce a model which can segment CT scan images of prostate adenocarcinoma vertebral bone metastatic lesions. nnUNet is an open-source Python package that adds optimizations to deep learning-based UNet architecture but has not been extensively combined with transfer learning techniques due to the absence of a readily available functionality of this method. The IRB-approved study data set includes imaging studies from patients with mPC who were enrolled in clinical trials at the University of Southern California (USC) Health Science Campus and Los Angeles County (LAC)/USC medical center. Manual segmentation of metastatic lesions was completed by an expert radiologist Dr. Vinay Duddalwar (20+ years in radiology and oncologic imaging), to serve as ground truths for the automated segmentation. Despite nnUNet’s success on some medical segmentation tasks, it only produced an average Dice Similarity Coefficient (DSC) of 0.31 on the USC dataset. DSC results fell in a bimodal distribution, with most scores falling either over 0.66 (reasonably accurate) or at 0 (no lesion detected). Applying more aggressive data augmentation techniques dropped the DSC to 0.15, and reducing the number of epochs reduced the DSC to below 0.1. Datasets have been identified for transfer learning, which involve balancing between size and similarity of the dataset. Identified datasets include the Pancreas data from the Medical Segmentation Decathlon, Pelvic Reference Data, and CT volumes with multiple organ segmentations (CT-ORG). Some of the challenges of producing an accurate model from the USC dataset include small dataset size (115 images), 2D data (as nnUNet generally performs better on 3D data), and the limited amount of public data capturing annotated CT images of bone lesions. Optimizations and improvements will be made by applying transfer learning and generative methods, including incorporating generative adversarial networks and diffusion models in order to augment the dataset. Performance with different libraries, including MONAI and custom architectures with Pytorch, will be compared. In the future, molecular correlations will be tracked with radiologic features for the purpose of multimodal composite biomarker identification. Once validated, these models will be incorporated into evaluation workflows to optimize radiologist evaluation. Our work demonstrates the challenges of applying automated image segmentation to small medical datasets and lays a foundation for techniques to improve performance. As machine learning models become increasingly incorporated into the workflow of radiologists, these findings will help improve the speed and accuracy of vertebral metastatic lesions detection.Keywords: deep learning, image segmentation, medicine, nnUNet, prostate carcinoma, radiomics
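The Dice Similarity Coefficient used above to score the segmentations is straightforward to compute; the sketch below defines it and applies it to a toy pair of binary masks standing in for an nnUNet prediction and the radiologist's ground truth. It mirrors the metric only, not the nnUNet pipeline.

```python
# Dice Similarity Coefficient (DSC) used above to score lesion segmentations.
# Toy binary masks stand in for nnUNet predictions and radiologist ground truth.
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """DSC = 2|P intersect T| / (|P| + |T|); returns 1.0 when both masks are empty."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    if not pred.any() and not truth.any():
        return 1.0
    intersection = np.logical_and(pred, truth).sum()
    return 2.0 * intersection / (pred.sum() + truth.sum() + eps)

truth = np.zeros((64, 64), dtype=bool)
truth[20:40, 20:40] = True              # ground-truth lesion
pred = np.zeros((64, 64), dtype=bool)
pred[25:45, 22:42] = True               # model prediction, partially overlapping

print(f"DSC = {dice_coefficient(pred, truth):.3f}")
```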
Procedia PDF Downloads 95
27751 Nanoparticle Induced Neurotoxicity Mediated by Mitochondria
Authors: Nandini Nalika, Suhel Parvez
Abstract:
Nanotechnology has emerged to play a vital role in developing all through the industrial world with an immense production of nanomaterials including nanoparticles (NPs). Many toxicological studies have confirmed that due to unique small size and physico-chemical properties of NPs (1-100nm), they can be potentially hazardous. Metallic NPs of small size have been shown to induce higher levels of cellular oxidative stress and can easily pass through the Blood Brain Barrier (BBB) and significantly accumulate in brain. With the wide applications of titanium dioxide nanoparticles (TNPs) in day-to-day life in form of cosmetics, paints, sterilisation and so on, there is growing concern regarding the deleterious effects of TNPs on central nervous system and mitochondria appear to be important cellular organelles targeted to the pro-oxidative effects of NPs and an important source that contribute significantly for the production of reactive oxygen species after some toxicity or an injury. The aim of our study was to elucidate the effect of TNPs in anatase form with different concentrations (5-50 µg/ml) following with various oxidative stress markers in isolated brain mitochondria as an in vitro model. Oxidative stress was determined by measuring the different oxidative stress markers like lipid peroxidation as well as the protein carbonyl content which was found to be significantly increased. Reduced glutathione content and major glutathione metabolizing enzymes were also modulated signifying the role of glutathione redox cycle in the pathophysiology of TNPs. The study also includes the mitochondrial enzymes (Complex 1, Complex II, complex IV, Complex V ) and the enzymes showed toxicity in a relatively short time due to the effect of TNPs. The study provide a range of concentration that were toxic to the neuronal cells and data pointing to a general toxicity in brain mitochondria by TNPs, therefore, it is in need to consider the proper utilization of NPs in the environment.Keywords: mitochondria, nanoparticles, brain, in vitro
Procedia PDF Downloads 397
27751 Privacy Preserving Data Publishing Based on Sensitivity in Context of Big Data Using Hive
Authors: P. Srinivasa Rao, K. Venkatesh Sharma, G. Sadhya Devi, V. Nagesh
Abstract:
Privacy Preserving Data Publication is the main concern in present days because the data being published through the internet has been increasing day by day. This huge amount of data was named as Big Data by its size. This project deals the privacy preservation in the context of Big Data using a data warehousing solution called hive. We implemented Nearest Similarity Based Clustering (NSB) with Bottom-up generalization to achieve (v,l)-anonymity. (v,l)-Anonymity deals with the sensitivity vulnerabilities and ensures the individual privacy. We also calculate the sensitivity levels by simple comparison method using the index values, by classifying the different levels of sensitivity. The experiments were carried out on the hive environment to verify the efficiency of algorithms with Big Data. This framework also supports the execution of existing algorithms without any changes. The model in the paper outperforms than existing models.Keywords: sensitivity, sensitive level, clustering, Privacy Preserving Data Publication (PPDP), bottom-up generalization, Big Data
Procedia PDF Downloads 293
27750 The Impact of Artificial Intelligence on Spare Parts Technology
Authors: Amir Andria Gad Shehata
Abstract:
Minimizing inventory cost, optimizing inventory quantities, and increasing system operational availability are the main motivations for enhancing the forecasting of spare parts demand in a major power utility company in Medina. This paper reports on an effort made to optimize the order quantities of spare parts by improving the method of forecasting demand. The study focuses on equipment that has frequent spare parts purchase orders with uncertain demand. The demand follows a lumpy pattern, which makes conventional forecasting methods less effective. A comparison was made by benchmarking various forecasting methods based on experts’ criteria to select the most suitable method for the case study. Three actual data sets were used to make the forecasts in this case study. Two neural network (NN) approaches were utilized and compared, namely long short-term memory (LSTM) and multilayer perceptron (MLP). The results, as expected, showed that the NN models gave better results than the traditional forecasting method (judgmental method). In addition, the LSTM model had higher predictive accuracy than the MLP model. Keywords: spare part, spare part inventory, inventory model, optimization, maintenance, neural network, LSTM, MLP, forecasting demand, inventory management
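A minimal Keras sketch contrasting an LSTM and an MLP forecaster on a sliding-window demand series follows. The window length, layer sizes, and the synthetic lumpy series are assumptions for illustration; the company's data and the authors' exact configurations are not reproduced.

```python
# Minimal sketch contrasting an LSTM and an MLP forecaster on a lumpy demand series.
# Window length, layer sizes and the synthetic series are assumptions for illustration.
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
# Synthetic lumpy demand: mostly zeros with occasional spare-part orders.
demand = rng.poisson(0.3, 400) * rng.integers(1, 6, 400)

WINDOW = 12
X = np.array([demand[i:i + WINDOW] for i in range(len(demand) - WINDOW)], dtype="float32")
y = demand[WINDOW:].astype("float32")
split = int(0.8 * len(X))

lstm = tf.keras.Sequential([
    tf.keras.layers.LSTM(16, input_shape=(WINDOW, 1)),
    tf.keras.layers.Dense(1),
])
mlp = tf.keras.Sequential([
    tf.keras.layers.Dense(32, activation="relu", input_shape=(WINDOW,)),
    tf.keras.layers.Dense(1),
])

for name, model, inputs in [("LSTM", lstm, X[..., None]), ("MLP", mlp, X)]:
    model.compile(optimizer="adam", loss="mse")
    model.fit(inputs[:split], y[:split], epochs=20, verbose=0)
    mse = model.evaluate(inputs[split:], y[split:], verbose=0)
    print(f"{name} test MSE: {mse:.3f}")
```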
Procedia PDF Downloads 62