Search results for: input constraints
2719 Enhancement of Natural Convection Heat Transfer within Closed Enclosure Using Parallel Fins
Authors: F. A. Gdhaidh, K. Hussain, H. S. Qi
Abstract:
A numerical study of natural convection heat transfer in a water-filled cavity was carried out in 3D for a single-phase liquid cooling system using an array of parallel plate fins mounted on one wall of the cavity. The heat source, with dimensions of 37.5×37.5 mm and mounted on a substrate, represents a computer CPU. A cold plate serving as a heat sink is installed on the opposite vertical wall of the enclosure. The air flow inside the computer case is created by an exhaust fan; a turbulent air flow is assumed and the k-ε model is applied. The fins are installed on the substrate to enhance the heat transfer. The applied heat input ranges between 15 and 40 W. In order to determine the thermal behaviour of the cooling system, the effects of the heat input and the number of parallel plate fins are investigated. The results illustrate that as the number of fins increases, the maximum heat source temperature decreases. However, beyond a critical fin number the temperature starts to increase again, because the fins become too closely spaced and obstruct the water flow. The introduction of parallel plate fins reduces the maximum heat source temperature by 10% compared to the case without fins. The cooling system maintains the maximum chip temperature at 64.68 ℃ at a heat input of 40 W, well below the recommended chip limit temperature of 85 ℃, and hence the performance of the CPU is enhanced.
Keywords: chips limit temperature, closed enclosure, natural convection, parallel plate, single phase liquid
Procedia PDF Downloads 265
2718 Energy Use and Econometric Models of Soybean Production in Mazandaran Province of Iran
Authors: Majid AghaAlikhani, Mostafa Hojati, Saeid Satari-Yuzbashkandi
Abstract:
This paper studies energy use patterns and the relationship between energy input and yield for soybean (Glycine max (L.) Merrill) in the Mazandaran province of Iran. In this study, data were collected by administering a questionnaire in face-to-face interviews. Results revealed that the highest share of energy consumption belongs to chemical fertilizers (29.29%), followed by diesel (23.42%) and electricity (22.80%). Our investigations showed that a total energy input of 23404.1 MJ.ha-1 was consumed for soybean production. The energy productivity, specific energy, and net energy values were estimated as 0.12 kg MJ-1, 8.03 MJ kg-1, and 49412.71 MJ.ha-1, respectively. The ratio of energy outputs to energy inputs was 3.11. Direct, indirect, renewable and non-renewable energies accounted for 56.83%, 43.17%, 15.78% and 84.22% of the total input, respectively. Three econometric models were also developed to estimate the impact of energy inputs on yield. The results of the econometric models revealed that the impacts of chemical fertilizer and water inputs on yield were significant at the 1% probability level. Also, the shares of direct and non-renewable energy were found to be rather high. Cost analysis revealed that the total cost of soybean production was around $518.43 per ha. Accordingly, the benefit-cost ratio was estimated as 2.58. The energy use efficiency (output-to-input ratio) of 3.11 indicates that the inputs used in soybean production are used efficiently. However, due to the high rate of nitrogen fertilizer consumption, sustainable agriculture should be promoted, and extension staff could propose substituting chemical fertilizer with biological fertilizer or green manure.
Keywords: Cobb-Douglas function, economical analysis, energy efficiency, energy use patterns, soybean
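As an aside for readers, the energy indicators quoted above are tied together by simple arithmetic; the sketch below recomputes them from the reported figures using the standard definitions of farm energy analysis. The formulas and the implied output energy are inferred here for illustration and are not code or data from the authors.

```python
# Cross-check of the energy indicators reported above (standard definitions;
# the output energy is inferred from the reported net energy, not given directly).
input_energy = 23404.1                      # total energy input, MJ/ha
net_energy = 49412.71                       # reported net energy, MJ/ha
output_energy = input_energy + net_energy   # implied energy output, MJ/ha

energy_ratio = output_energy / input_energy        # output-to-input ratio
specific_energy = 8.03                              # MJ per kg of soybean (reported)
implied_yield = input_energy / specific_energy      # kg/ha
energy_productivity = implied_yield / input_energy  # kg of yield per MJ of input

print(round(energy_ratio, 2))         # ~3.11, as reported
print(round(implied_yield))           # ~2900 kg/ha
print(round(energy_productivity, 2))  # ~0.12 kg/MJ, as reported
```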
Procedia PDF Downloads 334
2717 Management Practices and Economic Performance of Smallholder Dairy Cattle Farms in Southern Vietnam
Authors: Ngoc-Hieu Vu
Abstract:
Although dairy production in Vietnam is a relatively new agricultural activity, milk production has increased remarkably in recent years. Smallholders are still the main drivers of this development, especially in the southern part of the country. However, information on their farming practices is very limited. Therefore, this study aimed to characterize husbandry practices, educational experiences, decision-making practices, constraints, income and expenses of smallholder dairy farms in Southern Vietnam. A total of 200 farms, located in the regions of Ho Chi Minh (HCM, N=80 farms), Lam Dong (N=40 farms), Binh Duong (N=40 farms) and Long An (N=40 farms), were included. Between October 2013 and December 2014, farmers were interviewed twice. On average, farms owned 3,200 m2, 2,000 m2, and 193 m2 of pasture, cropping and housing area, respectively. The average numbers of total herd, milking cows, dry cows, heifers, and calves were 20.4, 11.6, 4.7, 3.3, and 2.9 head, respectively. The number of lactating dairy cows was higher (p<0.001) in HCM (15.5) and Lam Dong (14.7) than in Binh Duong (6.7) and Long An (10.7). Animals were mainly crossbred Holstein-Friesian (HF) cows with at least 75% HF origin (84%), whereas a higher (P<0.001) percentage of purebred HF was found in HCM and Lam Dong and of crossbreds in Binh Duong and Long An. Animals were mainly raised in tie-stalls (94%) and machine-milked (80%). Farmers used their own replacement animals (76%) and both genetic and phenotypic information (67%) for selecting sires. Farmers were predominantly educated at primary school level (53%). Major constraints for dairy farming were the lack of capital (43%), diseases (17%), marketing (22%), lack of knowledge (8%) and feed (7%). Monthly profit per lactating cow was superior in Lam Dong (2,817 thousand VND) and HCM (2,798 thousand VND) compared to the other regions, Long An (2,597 thousand VND) and Binh Duong (1,775 thousand VND). Regional differences may be mainly attributed to environmental factors, urbanization, and particularly governmental support and the availability of extension and financial institutions. Results from this study provide important information on the farming practices of smallholders in Southern Vietnam that is useful in determining regions that need to be addressed by authorities in order to improve dairy production.
Keywords: dairy farms, milk yield, Southern Vietnam, socio-economics
Procedia PDF Downloads 465
2716 Evaluation of the Safety Status of Beef Meat During Processing at Slaughterhouse in Bouira, Algeria
Authors: A. Ameur Ameur, H. Boukherrouba
Abstract:
In red meat slaughterhouses, a significant number of organs and carcasses are seized because of the presence of lesions of various origins. The objective of this study is to characterize and evaluate the frequency of these lesions in the slaughterhouse of the Wilaya of Bouira. Of the 2,646 cattle slaughtered and inspected, 72% of the carcasses underwent no seizure, against 28% that underwent at least one. Lungs (325; 44%), livers (164; 22%) and hearts (149; 21%) were the main organs seized, while kidneys (38; 5%), udders (33; 4%) and whole carcasses (16; 2%) were seized less frequently. The main reason for seizure was hydatid cyst for most of the seized organs, including the lungs (64.5%), livers (51.8%) and hearts (23.2%), together with hydronephrosis for the kidneys (39.4%) and chronic mastitis (54%) for the udders. In second place came pneumonia (16%) for the lungs and chronic fascioliasis (25%) for the livers. A significant difference (p < 0.0001) was observed according to the sex, breed, origin and age of the seized cattle, and specific seizure patterns and pathologies were recorded according to breed. The local breed presented most of the hydatid cyst (75.2%), chronic fascioliasis (95%) and pyelonephritis (60%) cases, whereas the improved breed presented most of the respiratory lesions, including pneumonia (64%), as well as chronic tuberculosis (64%) and mastitis (76%). These results are an important step in the implementation of the concept of risk assessment as the scientific basis of food legislation, through the identification and characterization of the macroscopic lesions leading to withdrawals in meat, and in establishing the level of inclusion of these lesions within the recommended risk assessment systems (HACCP).
Keywords: slaughterhouses, meat safety, seizure patterns, HACCP
Procedia PDF Downloads 465
2715 Chatbots as Language Teaching Tools for L2 English Learners
Authors: Feiying Wu
Abstract:
Chatbots are computer programs that attempt to engage a human in a dialogue; they originated in the 1960s with MIT's Eliza but have become widespread more recently, as advances in language technology have produced chatbots of increasing linguistic quality and sophistication, giving them the potential to serve as tools for Computer-Assisted Language Learning (CALL). The aim of this article is to assess the feasibility of using two chatbots, Mitsuku and CleverBot, as pedagogical tools for learning English as a second language by simulating L2 learners with distinct English proficiencies. The input of the simulated learners is checked with AntWordProfiler to match each user's expected vocabulary proficiency. In total, there are four chat sessions, as each chatbot converses with both a beginner and an advanced learner. The evaluation focuses on the chatbots' responses from a linguistic standpoint, encompassing the vocabulary and sentence levels. The vocabulary level is assessed through the vocabulary range and the reaction to misspelled words; the sentence level through grammatical accuracy and responsiveness to poorly formed sentences. In addition, the assessment includes 25% lexically and grammatically incorrect input to determine the chatbots' ability to correct different linguistic forms. Based on statistical evidence and illustrative examples, and despite the small sample size, neither Mitsuku nor CleverBot proves ideal as an educational tool, given their performance in word range, grammatical accuracy, topic range, and corrective feedback for incorrect words and sentences; they are better suited as conversational tools for beginners of L2 English.
Keywords: chatbots, CALL, L2, corrective feedback
Procedia PDF Downloads 78
2714 Subtitling in the Classroom: Combining Language Mediation, ICT and Audiovisual Material
Authors: Rossella Resi
Abstract:
This paper describes a project carried out in an Italian school with English-learning pupils, combining three didactic tools which are attested to be relevant for the success of young learners' language curricula: the use of technology, intralingual and interlingual mediation (according to the CEFR), and the cultural dimension. The aim of this project was to test a hands-on technological translation activity like subtitling in a formal teaching context and to exploit its potential as a motivational tool for developing listening, writing, translation and cross-cultural skills among language learners. The activities proposed involved the use of professional subtitling software called Aegisub and culture-specific films. The workshop was optional, so motivation was entirely based on the pleasure of engaging with a realistic subtitling program and on the challenge of meeting the constraints that a real life/work situation might involve. Twelve pupils aged between 16 and 18 attended the afternoon workshop. The workshop was organized in three parts: (i) an introduction in which the learners were familiarized with the concept and constraints of subtitling and provided with a few basic rules on spotting and segmentation; during this session learners also had time to familiarize themselves with the main software features. (ii) The second part involved three subtitling activities in plenum or in groups. In the first activity the learners experienced the technical dimensions of subtitling: they were provided with a short video segment together with its transcription, to be segmented and time-spotted. The second activity also involved oral comprehension: learners had to understand and transcribe a video segment before subtitling it. The third activity embedded a translation task based on a provided transcription, including segmentation and spotting of subtitles. (iii) The workshop ended with a small final project. At this point learners were able to master a short subtitling assignment (transcription, translation, segmenting and spotting) on their own with a similar video interview. The results of these assignments were above expectations, since the learners were highly motivated by the authentic and original nature of the assignment. The subtitled videos were evaluated and watched in the regular classroom together with the other students who did not take part in the workshop.
Keywords: ICT, L2, language learning, language mediation, subtitling
Procedia PDF Downloads 416
2713 Competitiveness of Animation Industry: The Case of Thailand
Authors: T. Niracharapa
Abstract:
This research studied and examined the competitiveness of the animation industry in Thailand. Data were collected from articles, related reports and websites, news, research, and interviews with key persons from both the public and private sectors. The diamond model was used for the analysis. The major factors driving the Thai animation industry forward include a quality workforce, its creativity and strong industry associations. However, discontinuity in government support, together with constraints in infrastructure, marketing, IP creation and finance, kept the Thai animation industry less competitive in the global market.
Keywords: animation, competitiveness, government, Thailand, market
Procedia PDF Downloads 444
2712 Literacy in First and Second Language: Implication for Language Education
Authors: Inuwa Danladi Bawa
Abstract:
One of the challenges of African states in the development of education, in the past and the present, is the problem of literacy. Literacy in the first language is seen as a strong base for the development of the second language, which is most often the language of education. Language development is an offshoot of language planning, so the need to develop literacy in both the first and second language affects language education and predicts the extent of achievement of the entire education sector. The need to balance literacy acquisition in the first language so as to properly condition the acquisition of the second language is paramount. Likely constraints include, among many others, non-standardized, underdeveloped and undeveloped first languages. Solutions to some of these include the development of materials and the use of the stages and levels of literacy acquisition, in the belief that a child writes well in the second language if he or she is literate in the first language.
Keywords: first language, second language, literacy, English language, linguistics
Procedia PDF Downloads 453
2711 OptiBaha: Design of a Web Based Analytical Tool for Enhancing Quality of Education at AlBaha University
Authors: Nadeem Hassan, Farooq Ahmad
Abstract:
The quality of education has a direct impact on the individual, the family, society, the economy in general, and mankind as a whole. Because of this, thousands of research papers and articles have been written on the quality of education, and billions of dollars have been and continue to be spent on research and on enhancing the quality of education. Academic program accreditation agencies define the various criteria of quality of education; academic institutions obtain accreditation from these agencies to ensure that the degree programs offered at their institution meet international standards. This R&D aims to build a web-based analytical tool (OptiBaha) that finds the gaps in the AlBaha University education system by taking input from stakeholders, including students, faculty, staff and management. The input/online data collected by this tool will be analyzed against core areas of education as proposed by accreditation agencies, the CAC of ABET and the NCAAA of KSA, including student background, language, culture, motivation, curriculum, teaching methodology, assessment and evaluation, performance and progress, facilities, availability of teaching materials, faculty qualification, monitoring, policies and procedures, and more. Based on different analytical reports, gaps will be highlighted and remedial actions proposed. If the tool is implemented and made available as a continuous process, the quality of education at AlBaha University can be enhanced; it will also help in fulfilling the criteria of accreditation agencies. The tool will be generic in nature and can ultimately be used by any academic institution.
Keywords: academic quality, accreditation agencies, higher education, policies and procedures
Procedia PDF Downloads 301
2710 Creating and Questioning Research-Oriented Digital Outputs to Manuscript Metadata: A Case-Based Methodological Investigation
Authors: Diandra Cristache
Abstract:
The transition of traditional manuscript studies into the digital framework closely affects the methodological premises upon which manuscript descriptions are modeled, created, and questioned for the purpose of research. This paper intends to explore the issue by presenting a methodological investigation into the process of modeling, creating, and questioning manuscript metadata. The investigation is founded on a close observation of the Polonsky Greek Manuscripts Project, a collaboration between the Universities of Cambridge and Heidelberg. More than just providing a realistic ground for methodological exploration, along with a complete metadata set for computational demonstration, the case study also contributes to a broader purpose: outlining general methodological principles for making the most of manuscript metadata by means of research-oriented digital outputs. The analysis mainly focuses on the scholarly approach to manuscript descriptions, in the specific instance where the act of metadata recording does not have a programmatic research purpose. Close attention is paid to the encounter of 'traditional' practices in manuscript studies with the formal constraints of the digital framework: does the shift in practices (especially from the straight narrative of free writing towards the hierarchical constraints of the TEI encoding model) impact the structure of metadata and its capability to respond to specific research questions? It is argued that the flexible structure of TEI and traditional approaches to manuscript description lead to a proliferation of markup: does an 'encyclopedic' descriptive approach ensure the epistemological relevance of the digital outputs to metadata? To provide further insight into the computational approach to manuscript metadata, the metadata of the Polonsky project are processed with techniques of distant reading and data networking, resulting in a new group of digital outputs (relational graphs, geographic maps). The computational process and the digital outputs are thoroughly illustrated and discussed. Eventually, a retrospective analysis evaluates how the digital outputs respond to the scientific expectations of research and, the other way round, how the requirements of research questions feed back into the creation and enrichment of metadata in an iterative loop.
Keywords: digital manuscript studies, digital outputs to manuscripts metadata, metadata interoperability, methodological issues
Procedia PDF Downloads 140
2709 Simulation of Optimal Runoff Hydrograph Using Ensemble of Radar Rainfall and Blending of Runoffs Model
Authors: Myungjin Lee, Daegun Han, Jongsung Kim, Soojun Kim, Hung Soo Kim
Abstract:
Recently, localized heavy rainfall and typhoons have occurred frequently due to climate change, and the resulting damage is growing. Therefore, more accurate prediction of rainfall and runoff is needed. However, gauge rainfall has limited accuracy in space. Radar rainfall explains the spatial variability of rainfall better than gauge rainfall, but it is mostly underestimated and involves uncertainty. Therefore, an ensemble of radar rainfall was simulated using an error structure, in order to overcome this uncertainty relative to gauge rainfall. The simulated ensemble was used as the input data of rainfall-runoff models to obtain an ensemble of runoff hydrographs. Previous studies have discussed the accuracy of rainfall-runoff models: even if the same input data, such as rainfall, are used for runoff analysis with different models in the same basin, the models can give different results because of the uncertainty involved in the models themselves. Therefore, we used two models, the SSARR model (a lumped model) and the Vflo model (a distributed model), and tried to simulate the optimum runoff considering the uncertainty of each rainfall-runoff model. The study basin is located in the Han River basin, and we obtained one integrated, optimum runoff hydrograph using blending methods such as Multi-Model Super Ensemble (MMSE), Simple Model Average (SMA), and Mean Square Error (MSE) weighting. From this study, we could confirm the accuracy of the rainfall and the rainfall-runoff models using ensemble scenarios and various rainfall-runoff models, and this result can be used to study flood control measures under climate change. Acknowledgements: This work is supported by the Korea Agency for Infrastructure Technology Advancement (KAIA) grant funded by the Ministry of Land, Infrastructure and Transport (Grant 18AWMP-B083066-05).
Keywords: radar rainfall ensemble, rainfall-runoff models, blending method, optimum runoff hydrograph
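To make the blending step concrete, the sketch below combines two hypothetical model hydrographs with a Simple Model Average and with least-squares weights fitted against observations, which is one common reading of an MSE-based blending method. The discharge series and the exact weighting scheme are assumptions for illustration rather than the study's data or formulation.

```python
import numpy as np

# Hypothetical hourly discharges (m^3/s); the real study blends SSARR and Vflo output.
q_obs   = np.array([12., 30., 85., 140., 110., 70., 40., 25.])
q_ssarr = np.array([10., 26., 78., 150., 120., 75., 45., 28.])
q_vflo  = np.array([14., 34., 90., 128., 100., 64., 36., 22.])

# Simple Model Average (SMA): unweighted mean of the ensemble members.
q_sma = (q_ssarr + q_vflo) / 2.0

# MSE-based blending (assumed form): least-squares weights minimising the squared
# error against the observed hydrograph.
X = np.column_stack([q_ssarr, q_vflo])
w, *_ = np.linalg.lstsq(X, q_obs, rcond=None)
q_mse = X @ w

print("SMA RMSE:", np.sqrt(np.mean((q_sma - q_obs) ** 2)))
print("MSE RMSE:", np.sqrt(np.mean((q_mse - q_obs) ** 2)), "weights:", w)
```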
Procedia PDF Downloads 280
2708 Solving the Transportation Problem for Warehouses and Dealers in Bangalore City
Authors: S. Aditya, K. T. Nideesh, N. Guruprasad
Abstract:
Being a subclass of linear programming problems, the transportation problem is a classic Operations Research problem in which the objective is to determine the schedule for transporting goods from sources to destinations in a way that minimizes the shipping cost while satisfying supply and demand constraints. In this paper, we formulate the transportation problem for various warehouses and dealers situated in Bangalore city in order to reduce the transportation cost they currently incur. The problem is solved by obtaining an initial basic feasible solution through various methods and then proceeding to obtain the optimal cost.
Keywords: NW method, optimum utilization, transportation problem, Vogel's approximation method
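As an illustration of the first step mentioned above, the following sketch computes an initial basic feasible solution with the North-West corner rule; the 3×4 cost matrix, supplies and demands are invented for demonstration and are not the Bangalore data.

```python
import numpy as np

def northwest_corner(supply, demand):
    """Initial basic feasible solution by the North-West corner rule."""
    supply, demand = list(supply), list(demand)   # copy so the caller's lists are untouched
    alloc = np.zeros((len(supply), len(demand)))
    i = j = 0
    while i < len(supply) and j < len(demand):
        q = min(supply[i], demand[j])             # ship as much as possible in cell (i, j)
        alloc[i, j] = q
        supply[i] -= q
        demand[j] -= q
        if supply[i] == 0:                        # move down when the source is exhausted,
            i += 1                                # otherwise move right to the next dealer
        else:
            j += 1
    return alloc

# Hypothetical 3-warehouse x 4-dealer instance (balanced: total supply = total demand).
cost = np.array([[4, 6, 8, 13],
                 [13, 11, 10, 8],
                 [14, 4, 10, 13]])
supply = [50, 70, 30]
demand = [25, 35, 60, 30]

alloc = northwest_corner(supply, demand)
print(alloc)
print("initial shipping cost:", (alloc * cost).sum())
```

The same allocation matrix would then be improved toward the optimum, for example with the stepping-stone or MODI method, or obtained more cheaply from the start with Vogel's approximation method.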
Procedia PDF Downloads 438
2707 Technological Innovation and Efficiency of Production of the Greek Aquaculture Industry
Authors: C. Nathanailides, S. Anastasiou, A. Dimitroglou, P. Logothetis, G. Kanlis
Abstract:
In the present work, we reviewed historical data on the Greek marine aquaculture industry, including the adoption of new methods and technological innovation. The results indicate that the industry exhibited a rapid rise in production efficiency, employment and adoption of new technologies, which reduced disease outbreaks, production risk and the price of farmed fish. The improvements brought by total quality practices and technological input to the Greek aquaculture industry include improved survival, growth and body shape of farmed fish, which resulted from the development of new aquaculture feeds and the genetic selection of the broodstock. Improvements in the quality of the final product were also achieved through technological input in the methods and technology applied during harvesting, packaging, and transportation-preservation of farmed fish, ensuring high quality of the product from the fish farm to the consumer's plate. These parameters (health management, nutrition, genetics, harvesting and post-harvesting methods and technology) changed significantly over the last twenty years, and the results of these improvements are reflected in the production efficiency of the aquaculture industry and the quality of the final product. It is concluded that the Greek aquaculture industry exhibited rapid growth and adoption of technologies, and that supply stabilized after the global financial crisis; nevertheless, the development of the Greek aquaculture industry is currently limited by international trade sanctions, the credit crunch and increased taxation, not by limited technology or resources.
Keywords: innovation, aquaculture, total quality, management
Procedia PDF Downloads 372
2706 On the Topological Entropy of Nonlinear Dynamical Systems
Authors: Graziano Chesi
Abstract:
The topological entropy plays a key role in linear dynamical systems, allowing one to establish the existence of stabilizing feedback controllers for linear systems in the presence of communication constraints. This paper addresses the determination of a robust value of the topological entropy in nonlinear dynamical systems, specifically the largest value of the topological entropy over all linearized models in a region of interest of the state space. It is shown that a sufficient condition for establishing upper bounds on the sought robust value of the topological entropy can be given in terms of a semidefinite program (SDP), which belongs to the class of convex optimization problems.
Keywords: non-linear system, communication constraint, topological entropy
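For readers unfamiliar with the quantity being bounded, a brute-force sketch is given below: for a discrete-time linearized model x_{k+1} = A x_k, the topological entropy equals the sum of log2 of the unstable eigenvalue magnitudes, and the robust value is its maximum over linearizations in the region of interest. The example nonlinear map, the gridding, and the discrete-time setting are illustrative assumptions; the paper bounds this quantity with an SDP rather than by enumeration.

```python
import numpy as np

def topological_entropy(A):
    """Entropy of a linearised model x_{k+1} = A x_k:
    sum of log2 of the unstable eigenvalue magnitudes (bits per step)."""
    eigs = np.linalg.eigvals(A)
    return sum(max(0.0, np.log2(abs(lam))) for lam in eigs)

# Hypothetical nonlinear map: only its state-dependent Jacobian is needed here.
def jacobian(x):
    return np.array([[1.2 + 0.3 * x[0], 0.1],
                     [0.2,              0.8 + 0.5 * x[1]]])

# Robust value over a gridded region of interest: the largest entropy
# among all linearised models in the region.
grid = np.linspace(-1.0, 1.0, 21)
robust_h = max(topological_entropy(jacobian((a, b))) for a in grid for b in grid)
print(f"robust topological entropy over the region: {robust_h:.3f} bits/step")
```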
Procedia PDF Downloads 323
2705 Generalized Chaplygin Gas and Varying Bulk Viscosity in Lyra Geometry
Authors: A. K. Sethi, R. N. Patra, B. Nayak
Abstract:
In this paper, we have considered the Friedmann-Robertson-Walker (FRW) metric with a generalized Chaplygin gas possessing bulk viscosity, in the context of Lyra geometry. The viscosity is considered in two different ways (i.e., zero viscosity and a non-constant, ρ (rho)-dependent bulk viscosity) using a constant deceleration parameter, and it is concluded that, in a special case, the viscous generalized Chaplygin gas reduces to the modified Chaplygin gas. The presented model indicates the presence of Chaplygin gas in the Universe. Observational constraints are applied, and the physical and geometrical nature of the Universe is discussed.
Keywords: bulk viscosity, Lyra geometry, generalized Chaplygin gas, cosmology
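For context, the standard non-viscous generalized Chaplygin gas relations in an FRW background are recalled below; the paper's model, with bulk viscosity and Lyra geometry, modifies these, so they are shown only as the baseline the abstract builds on (B denotes an integration constant).

```latex
\begin{align}
  p &= -\frac{A}{\rho^{\alpha}}, \qquad 0 < \alpha \le 1, \\
  \dot{\rho} + 3H\,(\rho + p) &= 0
  \;\Longrightarrow\;
  \rho(a) = \left[A + \frac{B}{a^{3(1+\alpha)}}\right]^{\frac{1}{1+\alpha}}.
\end{align}
```

In this baseline form the gas behaves like pressureless matter at early times (small scale factor a) and like a cosmological constant at late times (large a).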
Procedia PDF Downloads 176
2704 Calculation of the Normalized Difference Vegetation Index and the Spectral Signature of Coffee Crops: Benefits of Image Filtering on Mixed Crops
Authors: Catalina Albornoz, Giacomo Barbieri
Abstract:
Crop monitoring has been shown to reduce vulnerability to spreading pests and pathologies in crops. Remote sensing with Unmanned Aerial Vehicles (UAVs) has made crop monitoring more precise, cost-efficient and accessible. Nowadays, remote monitoring involves calculating maps of vegetation indices using software that takes either true-color (RGB) or multispectral images as input. These maps are then used to segment the crop into management zones. Finally, the spectral signature of a crop (the reflected radiation as a function of wavelength) can be used as an input for decision-making and crop characterization. The calculation of vegetation indices using software such as Pix4D has high precision for monoculture plantations. However, this paper shows that using such software on mixed crops may lead to errors resulting in an incorrect segmentation of the field. In this work, the authors propose to filter out all elements other than the main crop before calculating the vegetation indices and the spectral signature. A filter based on the Sobel method for border detection is used for filtering a coffee crop. Results show that the segmentation into management zones changes with respect to the traditional situation in which no filter is applied. In particular, it is shown that the values of the spectral signature change by up to 17% per spectral band. Future work will quantify the benefits of filtering through the comparison between in situ measurements and the vegetation indices calculated through remote sensing.
Keywords: coffee, filtering, mixed crop, precision agriculture, remote sensing, spectral signature
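A minimal sketch of the two operations discussed above follows: the NDVI computed per pixel from the red and near-infrared bands, and a Sobel-based mask used to exclude strong borders (shade trees, paths, neighbouring crops) before zone statistics are taken. The random arrays, the band chosen for the edge mask and the threshold are placeholders, not the paper's data or exact filter settings.

```python
import numpy as np
from scipy import ndimage

def ndvi(nir, red, eps=1e-6):
    """Normalized Difference Vegetation Index, computed per pixel."""
    return (nir - red) / (nir + red + eps)

def sobel_mask(band, threshold=0.2):
    """Keep pixels away from strong borders detected by the Sobel operator,
    so that elements other than the main crop are excluded from statistics."""
    gx = ndimage.sobel(band, axis=0)
    gy = ndimage.sobel(band, axis=1)
    magnitude = np.hypot(gx, gy)
    return magnitude < threshold * magnitude.max()

# Placeholder reflectance rasters in [0, 1]; real input would be UAV band images.
nir = np.random.rand(128, 128)
red = np.random.rand(128, 128)

index = ndvi(nir, red)
mask = sobel_mask(red)                      # borders estimated on the red band here
print("mean NDVI (filtered):", index[mask].mean())
print("mean NDVI (unfiltered):", index.mean())
```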
Procedia PDF Downloads 388
2703 Code Embedding for Software Vulnerability Discovery Based on Semantic Information
Authors: Joseph Gear, Yue Xu, Ernest Foo, Praveen Gauravaran, Zahra Jadidi, Leonie Simpson
Abstract:
Deep learning methods have been seeing increasing application to the long-standing security research goal of automatic vulnerability detection for source code. Attention, however, must still be paid to the task of producing vector representations of source code (code embeddings) as input for these deep learning models. Graphical representations of code, most predominantly Abstract Syntax Trees and Code Property Graphs, have received some use in this task of late; however, for very large graphs representing very large code snippets, learning becomes prohibitively computationally expensive. This expense may be reduced by intelligently pruning the input to only vulnerability-relevant information; however, little research in this area has been performed. Additionally, most existing work comprehends code based solely on the structure of the graph, at the expense of the information contained in the graph's nodes. This paper proposes Semantic-enhanced Code Embedding for Vulnerability Discovery (SCEVD), a deep learning model which uses semantic-based feature selection for its vulnerability classification model. It uses information from the nodes as well as the structure of the code graph in order to select features which are most indicative of the presence or absence of vulnerabilities. The model is implemented and experimentally tested using the SARD Juliet vulnerability test suite to determine its efficacy. It is able to improve on existing code graph feature selection methods, as demonstrated by its improved ability to discover vulnerabilities.
Keywords: code representation, deep learning, source code semantics, vulnerability discovery
Procedia PDF Downloads 159
2702 Examination of Public Hospital Unions Technical Efficiencies Using Data Envelopment Analysis and Machine Learning Techniques
Authors: Songul Cinaroglu
Abstract:
Regional planning in health has gained momentum in developing countries in recent years. In Turkey, 89 different Public Hospital Unions (PHUs) were established at the provincial level. In this study, the technical efficiencies of the 89 PHUs were examined using Data Envelopment Analysis (DEA) and machine learning techniques, after dividing the PHUs into two clusters according to the similarity of their input and output indicators. The numbers of beds, physicians and nurses were taken as input variables, and the numbers of outpatients, inpatients and surgical operations as output indicators. Before performing DEA, the PHUs were grouped into two clusters. The first cluster represents PHUs which have higher population, demand and service density than the others. The difference between clusters was statistically significant in terms of all study variables (p < 0.001). After clustering, DEA was performed for the whole sample and for the two clusters separately. It was found that 11% of PHUs were efficient overall; in addition, 21% and 17% of them were efficient within the first and second clusters, respectively. PHUs representing urban parts of the country, with higher population and service density, are thus more efficient than the others. The random forest decision tree graph shows that the number of inpatients, a measure of service density, is a determinative factor of PHU efficiency. It is advisable for public health policy makers to use statistical learning methods in resource planning decisions to improve efficiency in health care.
Keywords: public hospital unions, efficiency, data envelopment analysis, random forest
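To show how such technical efficiency scores are obtained from bed, staff and patient counts, the sketch below solves the envelopment form of an input-oriented, constant-returns (CCR) DEA model with a linear program; the three-unit dataset is invented, and the CCR formulation is a standard choice rather than necessarily the exact model used in the study.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, k):
    """Input-oriented CCR efficiency of unit k.
    X: inputs (n_units x n_inputs), Y: outputs (n_units x n_outputs)."""
    n, m = X.shape
    s = Y.shape[1]
    # decision variables: [theta, lambda_1 .. lambda_n]
    c = np.zeros(n + 1)
    c[0] = 1.0                                   # minimise theta
    A_ub, b_ub = [], []
    for i in range(m):                           # sum_j lam_j * x_ji <= theta * x_ki
        A_ub.append(np.concatenate(([-X[k, i]], X[:, i])))
        b_ub.append(0.0)
    for r in range(s):                           # sum_j lam_j * y_jr >= y_kr
        A_ub.append(np.concatenate(([0.0], -Y[:, r])))
        b_ub.append(-Y[k, r])
    bounds = [(0, None)] * (n + 1)
    res = linprog(c, A_ub=np.array(A_ub), b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[0]

# Hypothetical PHU data: inputs = beds, physicians, nurses;
# outputs = outpatients, inpatients, surgical operations (figures invented).
X = np.array([[500, 120, 400], [300, 60, 250], [800, 200, 700]], dtype=float)
Y = np.array([[90000, 15000, 4000], [40000, 9000, 2500], [160000, 30000, 9000]], dtype=float)
for k in range(len(X)):
    print(f"PHU {k}: efficiency = {ccr_efficiency(X, Y, k):.3f}")
```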
Procedia PDF Downloads 126
2701 Using Soil Texture Field Observations as Ordinal Qualitative Variables for Digital Soil Mapping
Authors: Anne C. Richer-De-Forges, Dominique Arrouays, Songchao Chen, Mercedes Roman Dobarco
Abstract:
Most digital soil mapping (DSM) products rely on machine learning (ML) prediction models and/or on pedotransfer functions (PTF) whose calibration data come from soil analyses performed in labs. However, many other observations (often qualitative, nominal, or ordinal) could be used as proxies of lab measurements or as input data for ML or PTF predictions. DSM and ML are briefly described with some examples taken from the literature. Then, we explore the potential of an ordinal qualitative variable, i.e., the hand-feel soil texture (HFST), which estimates the mineral particle-size distribution (PSD), namely the % of clay (0-2 µm), silt (2-50 µm) and sand (50-2000 µm), in 15 classes. The PSD can also be measured in the laboratory (LAST) to determine the exact proportions of these particle sizes. However, due to cost constraints, HFST observations are much more numerous and spatially dense than LAST. Soil texture (ST) is a very important soil parameter to map, as it controls many soil properties and functions. This raises an essential question: is it possible to use HFST as a proxy of LAST for the calibration and/or validation of DSM predictions of ST? To answer this question, the first step is to compare HFST with LAST on a representative set where both types of information are available. This comparison was made on ca 17,400 samples representative of a French region (34,000 km2). The accuracy of HFST was assessed, and each HFST class was characterized by a probability distribution function (PDF) of its LAST values. This makes it possible to randomly replace HFST observations by LAST values while respecting the previously calculated PDF, and results in a very large increase in the observations available for the calibration or validation of PTF and ML predictions. Some preliminary results are shown. First, the comparison between HFST classes and LAST analyses showed that accuracies could be considered very good when compared to other studies. The causes of some inconsistencies were explored, and most of them were well explained by other soil characteristics. We then show some examples applying these relationships and the increase of data to several issues related to DSM. The first issue is: do the established PDFs enable the use of HFST class observations to improve the LAST soil texture prediction? For this objective, we replaced all topsoil HFST observations by values drawn from the PDFs (100 replicates). Results were promising for the PTF we tested (a PTF predicting soil water holding capacity). For the question related to the ML prediction of LAST soil texture over the region, we did the same kind of replacement, but we implemented a 10-fold cross-validation using points where LAST values were available. We obtained only preliminary results, but they were rather promising. We then show another example illustrating the potential of using HFST as validation data. As HFST observations are very numerous in many countries, these promising results pave the way to an important improvement of DSM products in all the countries of the world.
Keywords: digital soil mapping, improvement of digital soil mapping predictions, potential of using hand-feel soil texture, soil texture prediction
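The replacement step described above can be pictured with a short sketch: each hand-feel class is given a class-conditional distribution of lab values, and replicates of the dataset are generated by sampling from it. The three classes, the normal approximation of the PDFs and the clay-only example are invented for illustration; the study derives the actual PDFs from its roughly 17,400 paired samples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical class-conditional distributions of lab-measured clay content (%)
# for three hand-feel texture classes (means and standard deviations invented).
last_clay_pdf = {"sandy loam": (12.0, 3.0), "loam": (20.0, 4.0), "clay loam": (32.0, 5.0)}

def sample_last_from_hfst(hfst_class, n=1):
    """Replace an ordinal hand-feel class by plausible lab values drawn from
    the PDF established on the calibration set (here a normal approximation)."""
    mu, sd = last_clay_pdf[hfst_class]
    return rng.normal(mu, sd, size=n)

# 100 replicates of the replacement, mirroring the resampling scheme described above.
hfst_observations = ["loam", "clay loam", "sandy loam", "loam"]
replicates = np.array([[sample_last_from_hfst(c, 1)[0] for c in hfst_observations]
                       for _ in range(100)])
print(replicates.mean(axis=0))   # average imputed clay content per observation
```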
Procedia PDF Downloads 225
2700 Comparative Analysis of Two Modeling Approaches for Optimizing Plate Heat Exchangers
Authors: Fábio A. S. Mota, Mauro A. S. S. Ravagnani, E. P. Carvalho
Abstract:
In the present paper the design of plate heat exchangers is formulated as an optimization problem considering two mathematical modeling approaches. The number of plates is the objective function to be minimized, with some configuration parameters considered implicitly. Screening is the optimization method used to solve the problem: thermal and hydraulic constraints are verified, non-viable solutions are discarded, and the method searches for convergence to the optimum, in case it exists. A case study is presented to test the applicability of the developed algorithm. Results show coherence with the literature.
Keywords: plate heat exchanger, optimization, modeling, simulation
Procedia PDF Downloads 518
2699 AS-Geo: Arbitrary-Sized Image Geolocalization with Learnable Geometric Enhancement Resizer
Authors: Huayuan Lu, Chunfang Yang, Ma Zhu, Baojun Qi, Yaqiong Qiao, Jiangqian Xu
Abstract:
Image geolocalization has great application prospects in fields such as autonomous driving and virtual/augmented reality. In practical application scenarios, the size of the image to be located is not fixed, and it is impractical to train different networks for all possible sizes. When the image size does not match the input size of the descriptor extraction model, existing image geolocalization methods usually directly scale or crop the image in some common way. This results in the loss of information important to the geolocalization task, thus affecting the performance of the method. For example, excessive down-sampling can lead to blurred building contours, and inappropriate cropping can lead to the loss of key semantic elements, resulting in incorrect geolocalization results. To address this problem, this paper designs a learnable image resizer and proposes an arbitrary-sized image geolocalization method. (1) The designed learnable image resizer employs the self-attention mechanism to enhance the geometric features of the resized image. Firstly, it applies bilinear interpolation to the input image and its feature maps to obtain the initial resized image and the resized feature maps. Then, SKNet (selective kernel net) is used to approximate the best receptive field, thus preserving the geometric shapes of the original image, and SENet (squeeze-and-excitation net) is used to automatically select the feature maps with strong contour information, enhancing the geometric features. Finally, the enhanced geometric features are fused with the initial resized image to obtain the final resized images. (2) The proposed image geolocalization method embeds the above image resizer as a front layer of the descriptor extraction network. It not only enables the network to be compatible with arbitrary-sized input images but also enhances the geometric features that are crucial to the image geolocalization task. Moreover, a triplet attention mechanism is added after the first convolutional layer of the backbone network to optimize the utilization of the geometric elements extracted by that layer. Finally, the local features extracted by the backbone network are aggregated to form image descriptors for image geolocalization. The proposed method was evaluated on several mainstream datasets, such as Pittsburgh30K, Tokyo24/7, and Places365. The results show that the proposed method has excellent size compatibility and compares favorably to recent mainstream geolocalization methods.
Keywords: image geolocalization, self-attention mechanism, image resizer, geometric feature
Procedia PDF Downloads 214
2698 Modeling of in 738 LC Alloy Mechanical Properties Based on Microstructural Evolution Simulations for Different Heat Treatment Conditions
Authors: M. Tarik Boyraz, M. Bilge Imer
Abstract:
Conventionally cast nickel-based superalloys, such as the commercial alloy IN 738 LC, are widely used in the manufacturing of industrial gas turbine blades. With a carefully designed microstructure and the presence of alloying elements, the blades show improved mechanical properties at high operating temperatures and in corrosive environments. The aim of this work is to model and estimate these mechanical properties of the IN 738 LC alloy solely on the basis of simulations, for projected heat treatment or service conditions. The microstructure of IN 738 LC (size, fraction and frequency of the gamma prime (γ′) and carbide phases in the gamma (γ) matrix, and grain size) needs to be optimized to improve the high-temperature mechanical properties through the heat treatment process. This process can be performed at different soaking temperatures, soaking times and cooling rates. In this work, microstructural evolution studies were performed experimentally at various heat treatment process conditions, and these findings were used as input for further simulation studies. The operation time, soaking temperature and cooling rate provided by the experimental heat treatment procedures were used as microstructural simulation input. The results of this simulation were compared with the size, fraction and frequency of the γ′ and carbide phases, and the grain size, provided by SEM (EDS module and mapping), EPMA (WDS module) and optical microscopy before and after heat treatment. After iterative comparison of the experimental findings and simulations, an offset was determined to fit the experimental and theoretical findings. Thereby, it was possible to estimate the final microstructure without any necessity to carry out the heat treatment experiment. The output of this heat-treatment-based microstructure simulation was used as input to estimate the yield stress and creep properties. The yield stress was calculated mainly as a function of the precipitation, solid solution and grain boundary strengthening contributors in the microstructure. The creep rate was calculated as a function of stress, temperature and microstructural factors such as dislocation density, precipitate size and inter-particle spacing of the precipitates. The estimated yield stress values were compared with the corresponding experimental hardness and tensile test values. The ability to determine the best heat treatment conditions that achieve the desired microstructural and mechanical properties was thus developed for IN 738 LC based entirely on simulations.
Keywords: heat treatment, IN738LC, simulations, super-alloys
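Generic forms of the property models implied in this abstract are recalled below for orientation; the actual coefficients and any additional terms used by the authors are not given in the abstract, so these expressions are only the textbook superposition of strengthening contributions and a Norton-Arrhenius creep law.

```latex
\begin{align}
  \sigma_y &= \sigma_0 + \Delta\sigma_{ss} + \Delta\sigma_{ppt} + k_y\, d^{-1/2}, \\
  \dot{\varepsilon}_{cr} &= A\,\sigma^{n}\exp\!\left(-\frac{Q}{RT}\right),
\end{align}
```

where σ0 is the lattice friction stress, Δσss and Δσppt the solid-solution and γ′ precipitation contributions, d the grain size (Hall-Petch term), n the stress exponent and Q the creep activation energy.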
Procedia PDF Downloads 248
2697 A Novel Hybrid Deep Learning Architecture for Predicting Acute Kidney Injury Using Patient Record Data and Ultrasound Kidney Images
Authors: Sophia Shi
Abstract:
Acute kidney injury (AKI) is the sudden onset of kidney damage in which the kidneys cannot filter waste from the blood, requiring emergency hospitalization. The AKI patient mortality rate in the ICU is high, and the condition is virtually impossible for doctors to predict because it is so unexpected. Currently, there is no hybrid model predicting AKI that takes advantage of two types of data. De-identified patient data from the MIMIC-III database, and de-identified kidney images and corresponding patient records from the Beijing Hospital of the Ministry of Health, were collected. Using data features including serum creatinine among others, two numeric models using MIMIC and Beijing Hospital data were built, and an image-only model was built with the hospital ultrasounds. Convolutional neural networks (CNNs) were used: VGG and ResNet for the numeric data and ResNet for the image data. They were combined into a hybrid model by concatenating the feature maps of both types of models to create a new input. This input enters another CNN block and then two fully connected layers, ending in a binary output after passing through a softmax layer and additional processing. The hybrid model successfully predicted AKI; its highest AUROC was 0.953, with an accuracy of 90% and an F1-score of 0.91. This model can be deployed in urgent clinical settings such as the ICU to aid doctors by assessing the risk of AKI shortly after a patient's admission, so that preventative measures can be taken to diminish mortality risk and severe kidney damage.
Keywords: Acute kidney injury, Convolutional neural network, Hybrid deep learning, Patient record data, ResNet, Ultrasound kidney images, VGG
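The fusion idea described above (concatenating the feature maps of a record branch and an image branch before a common classification head) can be sketched as follows; the layer sizes, branch designs and framework choice are illustrative assumptions and do not reproduce the authors' VGG/ResNet architecture.

```python
import torch
import torch.nn as nn

class HybridAKI(nn.Module):
    """Toy version of the fusion idea: one branch for tabular record features,
    one CNN branch for ultrasound images; their features are concatenated and
    passed through a small head ending in a binary (softmax) output."""
    def __init__(self, n_record_features=16):
        super().__init__()
        self.record_branch = nn.Sequential(
            nn.Linear(n_record_features, 64), nn.ReLU())
        self.image_branch = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(4),
            nn.Flatten(), nn.Linear(8 * 4 * 4, 64), nn.ReLU())
        self.head = nn.Sequential(
            nn.Linear(128, 32), nn.ReLU(), nn.Linear(32, 2))   # logits for AKI / no AKI

    def forward(self, record, image):
        fused = torch.cat([self.record_branch(record), self.image_branch(image)], dim=1)
        return self.head(fused)

model = HybridAKI()
logits = model(torch.randn(4, 16), torch.randn(4, 1, 64, 64))   # batch of 4 patients
probs = torch.softmax(logits, dim=1)                            # class probabilities
print(probs.shape)                                              # torch.Size([4, 2])
```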
Procedia PDF Downloads 131
2696 Testing the Simplification Hypothesis in Constrained Language Use: An Entropy-Based Approach
Authors: Jiaxin Chen
Abstract:
Translations have been labeled as more simplified than non-translations, featuring less diversified and more frequent lexical items and simpler syntactic structures. Such simplified linguistic features have been identified in other bilingualism-influenced language varieties, including non-native and learner language use. Therefore, it has been proposed that translation could be studied within a broader framework of constrained language, and that simplification is one of the universal features shared by constrained language varieties due to similar cognitive-physiological and social-interactive constraints. Yet contradictory findings have also been presented. To address this issue, this study intends to adopt Shannon's entropy-based measures to quantify complexity in language use. Entropy measures the level of uncertainty or unpredictability in message content, and it has been adapted in linguistic studies to quantify linguistic variance, including morphological diversity and lexical richness. In this study, the complexity of lexical and syntactic choices will be captured by word-form entropy and POS-form entropy, and a comparison will be made between constrained and non-constrained language use to test the simplification hypothesis. The entropy-based method is employed because it captures both the frequency of linguistic choices and the evenness of their distribution, which are unavailable when using traditional indices. Another advantage of the entropy-based measure is that it is reasonably stable across languages and thus allows for a reliable comparison among studies on different language pairs. In terms of the data for the present study, one established corpus (CLOB) and two self-compiled corpora will be used to represent native written English and two constrained varieties (L2 written English and translated English), respectively. Each corpus consists of around 200,000 tokens. Genre (press) and text length (around 2,000 words per text) are comparable across corpora. More specifically, word-form entropy and POS-form entropy will be calculated as indicators of lexical and syntactic complexity, and ANOVA tests will be conducted to explore whether there is any corpus effect. It is hypothesized that both L2 written English and translated English have lower entropy compared to non-constrained written English. The similarities and divergences between the two constrained varieties may provide indications of the constraints shared by and peculiar to each variety.
Keywords: constrained language use, entropy-based measures, lexical simplification, syntactical simplification
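A minimal sketch of the word-form entropy measure follows; the two toy token sequences are invented, and the note on POS-form entropy merely indicates that the same formula would be applied to part-of-speech tags rather than word forms.

```python
import math
from collections import Counter

def shannon_entropy(tokens):
    """Shannon entropy (bits) of the empirical distribution of the tokens."""
    counts = Counter(tokens)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Word-form entropy: captures both how many distinct forms are used and how
# evenly they are distributed.
constrained = "the cat sat on the mat and the dog sat on the rug".split()
native      = "a restless tabby perched quietly upon an antique woven rug".split()
print(shannon_entropy(constrained))   # lower: fewer, more repeated word forms
print(shannon_entropy(native))        # higher: more varied word forms

# POS-form entropy would apply the same measure to part-of-speech tags
# (e.g., the output of a tagger) as a proxy for syntactic variety.
```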
Procedia PDF Downloads 94
2695 A Case Study on the Seismic Performance Assessment of the High-Rise Setback Tower Under Multiple Support Excitations on the Basis of TBI Guidelines
Authors: Kamyar Kildashti, Rasoul Mirghaderi
Abstract:
This paper describes the three-dimensional seismic performance assessment of a high-rise steel moment-frame setback tower, designed and detailed per the 2010 ASCE 7, under multiple support excitations. The vulnerability analyses are based on nonlinear time history analyses under a set of multi-directional strong ground motion records, which are scaled to the design-based site-specific spectrum in accordance with ASCE 41-13. The spatial variation of input motions between the far-distant supports of each part of the tower is considered by defining a time lag. The monotonic and cyclic plastic hinge behavior of prequalified steel connections, panel zones, as well as steel columns is obtained from predefined values presented in the TBI Guidelines, PEER/ATC 72 and FEMA P440A, so as to include stiffness and strength degradation. Inter-story drift ratios, residual drift ratios, as well as plastic hinge rotation demands under multiple support excitations are compared to those obtained from uniform support excitations. Performance objectives based on the acceptance criteria declared by the TBI Guidelines are compared between uniform and multiple support excitations. The results demonstrate that the discrepancy of input motions has detrimental effects on the local and global response of the tower.
Keywords: high-rise building, nonlinear time history analysis, multiple support excitation, performance-based design
Procedia PDF Downloads 285
2694 Aggregating Buyers and Sellers for E-Commerce: How Demand and Supply Meet in Fairs
Authors: Pierluigi Gallo, Francesco Randazzo, Ignazio Gallo
Abstract:
In recent years, many new and interesting models of successful online business have been developed. Many of these are based on competition between users, such as online auctions, where the product price is not fixed and tends to rise. Other models, including group buying, are based on cooperation between users and are characterized by a dynamic product price that tends to go down. There is not yet a business model in which both sellers and buyers are grouped in order to negotiate on a specific product or service. The present study investigates a new extension of the group-buying model, called the fair, which allows the aggregation of demand and supply for price optimization in a cooperative manner. Additionally, our system also aggregates products and destinations for shipping optimization. We introduced the following new relevant input parameters in order to implement a double-sided aggregation: (a) price-quantity curves provided by the seller; (b) waiting time, that is, the longer buyers wait, the greater the discount they get; (c) payment time, which determines whether the buyer pays before, during or after receiving the product; (d) the distance between the place where products are available and the place of shipment, provided in advance by the buyer or dynamically suggested by the system. To analyze the proposed model, we implemented a system prototype and a simulator that allows studying the effects of changing the input parameters. We analyzed the dynamic price model in fairs with one single seller and with a combination of selected sellers. The results are very encouraging and motivate further investigation of this topic.
Keywords: auction, aggregation, fair, group buying, social buying
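The role of the seller's price-quantity curve and the waiting-time discount can be illustrated with the short sketch below; the curve, the orders and the per-day discount rate are invented values, and the pricing rule is a simplified reading of the mechanism rather than the authors' simulator logic.

```python
# Illustrative sketch of the double-sided aggregation: buyers' orders are pooled,
# the seller's price-quantity curve sets the base unit price, and a waiting-time
# discount is applied (all figures and the rule itself are assumptions).

def unit_price(price_quantity_curve, total_quantity):
    """price_quantity_curve: list of (min_quantity, unit_price), sorted ascending."""
    price = price_quantity_curve[0][1]
    for min_q, p in price_quantity_curve:
        if total_quantity >= min_q:
            price = p                      # larger pooled quantity unlocks a lower price
    return price

def fair_price(price_quantity_curve, orders, waiting_days, daily_discount=0.005):
    total_q = sum(q for _, q in orders)    # aggregated demand of all buyers
    base = unit_price(price_quantity_curve, total_q)
    return base * (1.0 - daily_discount * waiting_days)   # longer wait, larger discount

curve = [(1, 10.0), (50, 8.5), (200, 7.0)]                 # seller's price-quantity curve
orders = [("buyer_a", 30), ("buyer_b", 90), ("buyer_c", 100)]
print(fair_price(curve, orders, waiting_days=10))          # 220 units -> 7.0 * 0.95
```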
Procedia PDF Downloads 294
2693 A 3-Dimensional Memory-Based Model for Planning Working Postures Reaching Specific Area with Postural Constraints
Authors: Minho Lee, Donghyun Back, Jaemoon Jung, Woojin Park
Abstract:
Current 3-dimensional (3D) posture prediction models commonly provide only a few optimal postures for achieving a specific objective. The problem with such models is that they are incapable of rapidly providing several optimal posture candidates for various situations. In order to solve this problem, this paper presents a 3D memory-based posture planning (3D MBPP) model, a new digital human model that can analyze the feasible postures in 3D space for reaching tasks that have postural constraints and a specific reaching space. The 3D MBPP model can be applied to types of work that are performed with constrained working postures and have a specific reaching space. Examples of such work include driving an excavator, driving automobiles, painting buildings, working at an office, pitching/batting, and boxing. For these types of work, only a limited amount of space is required to store all of the feasible postures, as the hand reach boundary can be determined prior to performing the task. This prevents the computation time from increasing exponentially, which has been one of the major drawbacks of memory-based posture planning models in 3D space. This paper validates the utility of the 3D MBPP model using a practical example of analyzing baseball batting postures. In baseball, batters swing with both feet fixed to the ground. This motion is appropriate for use with the 3D MBPP model, since the player must try to hit the ball in a constrained posture when the ball is located inside the strike zone (a limited area). The results from the analysis showed that the stored and the optimal postures vary depending on the ball's flying path, the hitting location, the batter's body size, and the batting objective. These results can be used to establish optimal postural strategies for achieving the batting objective and performing effective hitting. The 3D MBPP model can also be applied to various domains to determine optimal postural strategies and improve worker comfort.
Keywords: baseball, memory-based, posture prediction, reaching area, 3D digital human models
Procedia PDF Downloads 216
2692 Network Conditioning and Transfer Learning for Peripheral Nerve Segmentation in Ultrasound Images
Authors: Harold Mauricio Díaz-Vargas, Cristian Alfonso Jimenez-Castaño, David Augusto Cárdenas-Peña, Guillermo Alberto Ortiz-Gómez, Alvaro Angel Orozco-Gutierrez
Abstract:
Precise identification of the nerves is a crucial task performed by anesthesiologists for effective Peripheral Nerve Blocking (PNB). Nowadays, anesthesiologists use ultrasound imaging equipment to guide the PNB and detect nerve structures. However, visual identification of the nerves from ultrasound images is difficult, even for trained specialists, due to artifacts and low contrast. The recent advances in deep learning make neural networks a potential tool for accurate nerve segmentation systems, thus addressing the above issues from raw data. The widely used U-Net yields pixel-by-pixel segmentation by encoding the input image and decoding the attained feature vector into a semantic image. This work proposes a conditioning approach and encoder pre-training to enhance the nerve segmentation of traditional U-Nets. Conditioning is achieved by one-hot encoding the kind of target nerve at the network input, while the pre-training considers five well-known deep networks for image classification. The proposed approach is tested on a collection of 619 ultrasound images, where the best C-UNet architecture yields an 81% Dice coefficient, outperforming the 74% of the best traditional U-Net. The results prove that pre-trained models with the conditioning approach outperform their equivalent baselines by supporting the learning of new features and enriching the discriminative capability of the tested networks.
Keywords: nerve segmentation, U-Net, deep learning, ultrasound imaging, peripheral nerve blocking
Procedia PDF Downloads 106
2691 Response Analysis of a Steel Reinforced Concrete High-Rise Building during the 2011 Tohoku Earthquake
Authors: Naohiro Nakamura, Takuya Kinoshita, Hiroshi Fukuyama
Abstract:
The 2011 off the Pacific Coast of Tohoku Earthquake caused considerable damage over wide areas of eastern Japan, and a large number of earthquake observation records were obtained at various places. To design more earthquake-resistant buildings and improve earthquake disaster prevention, it is necessary to utilize these data to analyze and evaluate the behavior of buildings during earthquakes. This paper presents an earthquake response simulation analysis (hereafter a seismic response analysis) that was conducted using data recorded during the main earthquake (hereafter the main shock) as well as the earthquakes before and after it. The data were obtained at a high-rise steel-reinforced concrete (SRC) building in the bay area of Tokyo. We first give an overview of the building, along with the characteristics of the earthquake motion and of the building during the main shock. The data indicate that there was a change in the natural period before and after the earthquake. Next, we present the results of our seismic response analysis. First, the analysis model and conditions are shown, and then the analysis results are compared with the observational records. Using the analysis results, we then study the effect of soil-structure interaction on the response of the building. By identifying the characteristics of the building during the earthquake (i.e., the 1st natural period and the 1st damping ratio) with the Auto-Regressive eXogenous (ARX) model, we compare the analysis results with the observational records so as to evaluate the accuracy of the response analysis. In this study, a lumped-mass SR model was used to conduct the seismic response analysis using the observational data as input waves. The main results of this study are as follows: 1) The observational records of the 3/11 main shock put it between a level 1 and a level 2 earthquake. The ground response analysis showed that the maximum shear strain in the ground was about 0.1% and that the possibility of liquefaction occurring was low. 2) During the 3/11 main shock, the observed wave showed that the eigenperiod of the building became longer; this behavior could be generally reproduced in the response analysis. The prolonged eigenperiod was due to the nonlinearity of the superstructure, and the effect of the nonlinearity of the ground seems to have been small. 3) For the 4/11 aftershock, a continuous analysis was conducted in which the subject seismic wave was input after the 3/11 main shock. The analyzed values generally corresponded well with the observed values, which means that the effect of the nonlinearity caused by the main shock was retained by the building; it is important to consider this when conducting the response evaluation. 4) The first period and the damping ratio during a vibration were evaluated by an ARX model. Our results show that the response analysis model used in this study is generally good at estimating the change in the response of the building during a vibration.
Keywords: ARX model, response analysis, SRC building, the 2011 off the Pacific Coast of Tohoku Earthquake
Procedia PDF Downloads 164
2690 Survey Paper on Graph Coloring Problem and Its Application
Authors: Prateek Chharia, Biswa Bhusan Ghosh
Abstract:
Graph coloring is one of the prominent concepts in graph theory. It can be defined as a coloring of the various regions of a graph such that all the constraints are fulfilled. In this paper, various graph coloring approaches are described, such as greedy coloring, heuristic search for a maximum independent set, and graph coloring using an edge table. Graph coloring can be used in various real-life applications, such as student timetable generation, Sudoku viewed as a graph coloring problem, and GSM phone networks.
Keywords: graph coloring, greedy coloring, heuristic search, edge table, sudoku as a graph coloring problem
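As a small illustration of the greedy approach surveyed here, the sketch below colors a toy exam-conflict graph in which colors play the role of time slots; the graph and the use of networkx's greedy routine are illustrative choices, not material from the survey.

```python
import networkx as nx

# Toy timetable conflict graph: vertices are exams, edges join exams that share
# a student, so adjacent exams must receive different colors (time slots).
G = nx.Graph([("math", "physics"), ("math", "chemistry"),
              ("physics", "biology"), ("chemistry", "biology"),
              ("biology", "english")])

# Greedy coloring: visit vertices (here, largest degree first) and give each
# the smallest color not used by its already-colored neighbors.
coloring = nx.coloring.greedy_color(G, strategy="largest_first")
print(coloring)                                   # e.g. {'biology': 0, 'math': 0, ...}
print("time slots needed:", max(coloring.values()) + 1)
```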
Procedia PDF Downloads 540