Search results for: expensive function
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5683

2533 Thyroid-Stimulating Hormone as a Stress Biomarker in Thyroidectomy Patients: A Cohort Study

Authors: Jeonghun Lee

Abstract:

In this study, we investigated the relationship between stress and thyroid dysfunction in patients who underwent thyroidectomy. The study included 101 patients who underwent thyroidectomy from January 2015 to June 2020 and experienced hypothyroidism. The included patients had good drug compliance with the same dosage of levothyroxine (LT4). The male-to-female ratio was 1:4.6, and the mean age was 45.4 years at surgery and 50.2 years at stressful events. Eighteen patients underwent lobectomies and, of these, 12 did not take LT4. The mean follow-up period was 49 (range, 8-93) months. Statistical analyses were performed using the paired t-test, Wilcoxon signed-rank test, and McNemar test with PROC MIXED in SAS 9.4. Forty-five patients (44.6%) had hypothyroidism with thyroid-stimulating hormone (TSH) >10 μIU/mL. Distress was observed in 81 patients and eustress in 10 patients. TSH levels increased within a mean of 5.8 months (range, 1-12) in the 24 patients who specified the date of their life events. Even though each patient took the same dose of LT4, when the patients were under stress, both free T4 and T3 decreased and TSH increased, regardless of whether the patient experienced distress or eustress (P <0.001). After adjusting for the effects of free T4 and T3, TSH increased significantly in the patients after stress (P <0.001). For patients with thyroid cancer who are simultaneously experiencing life events, TSH may be used as a stress biomarker to enable the implementation of appropriate treatment and counseling strategies.
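
As a minimal sketch of the paired before/after comparisons described above (the study itself used PROC MIXED in SAS 9.4; the data below are synthetic, not the cohort's), the same paired t-test and Wilcoxon signed-rank test can be run in Python:

```python
# Illustrative only: synthetic pre-/post-stress TSH values, not the study data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
tsh_before = rng.lognormal(mean=0.8, sigma=0.5, size=24)              # baseline TSH (uIU/mL)
tsh_after = tsh_before * rng.lognormal(mean=0.6, sigma=0.3, size=24)  # post-stress rise

t_stat, t_p = stats.ttest_rel(tsh_after, tsh_before)   # paired t-test
w_stat, w_p = stats.wilcoxon(tsh_after, tsh_before)    # Wilcoxon signed-rank test
print(f"paired t-test p={t_p:.4g}, Wilcoxon p={w_p:.4g}")
```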

Keywords: endocrine, thyroid, thyroid function, biomarker, stress

Procedia PDF Downloads 67
2532 In vivo Spectroscopic Study on the Effects of Ionising and Non-Ionising Radiation on Some Biophysical Properties of Rat Blood

Authors: S. H. Allehyani, H. S. Ibrahim, F. M. Ali, E. Sayd, T. Abou Aiad

Abstract:

The present study aimed to analyse the radiation risk associated with the exposure of haemoglobin (Hb) of rat red blood cells (RBCs) to a 50-Hz, 6-kV/m electric field, a fast-neutron dose of 1 mSv, and mixed radiation from fast neutrons and an electric field, distributed over a period of three weeks at a rate of 5 days/week and 8 hours/day. Dielectric measurements in the frequency range of 1 kHz to 5 MHz and absorption spectra of the haemoglobin molecule were obtained for all of the samples. The dielectric relaxation results demonstrated an increase in the dielectric increment (∆ε) for the RBCs from all of the irradiated animals, which indicates an increase in the electric dipole. Moreover, the results revealed a decrease in the relaxation time (τ) and the molecular radius (r) of the irradiated molecules, which indicates that the increase in ∆ε is mainly due to a pronounced shift in the centre of mass of the charge on the electric dipole of the Hb molecule. The results from the absorption spectra indicate that the ratio of met-haemoglobin to oxy-haemoglobin is altered by irradiation. Moreover, the results from the delayed-effect studies show that the structure and function of the newly generated Hb molecules are altered and dissimilar to those of healthy Hb.
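
For reference, the reported quantities are those of a standard Debye-type relaxation analysis; assuming the single-relaxation-time model and Stokes' relation for the molecular radius (a textbook sketch, not necessarily the authors' exact fitting procedure):

```latex
\varepsilon^{*}(\omega) = \varepsilon_{\infty} + \frac{\Delta\varepsilon}{1 + i\omega\tau},
\qquad
\tau = \frac{4\pi\eta r^{3}}{k_{B}T}
```

where Δε is the dielectric increment, τ the relaxation time, r the effective molecular radius, η the medium viscosity, and kBT the thermal energy; a smaller fitted τ thus implies a smaller apparent r, as reported above.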

Keywords: rat red blood cell haemoglobin, dielectric properties, absorption spectra, biochemical analysis

Procedia PDF Downloads 353
2531 Automated Evaluation Approach for Time-Dependent Question Answering Pairs on Web Crawler Based Question Answering System

Authors: Shraddha Chaudhary, Raksha Agarwal, Niladri Chatterjee

Abstract:

This work demonstrates a web crawler-based, generalized, end-to-end open-domain Question Answering (QA) system. An efficient QA system requires a significant amount of domain knowledge to answer any question, with the aim of finding an exact and correct answer in the form of a number, a noun, a short phrase, or a brief piece of text for the user's question. Analysing the question, searching the relevant documents, and choosing an answer are three important steps in a QA system. This work uses a web scraper (Beautiful Soup) to extract K documents from the web. The value of K can be calibrated on the basis of a trade-off between time and accuracy. This is followed by a passage-ranking process using the MS MARCO dataset, trained on 500K queries, to extract the most relevant text passage and thus shorten the lengthy documents. Further, a QA system is used to extract the answers from the shortened documents based on the query and return the top 3 answers. In the evaluation of such systems, accuracy is judged by the exact match between predicted answers and gold answers. But automatic evaluation methods fail due to the linguistic ambiguities inherent in the questions. Moreover, reference answers are often not exhaustive or are out of date. Hence, correct answers predicted by the system are often judged incorrect according to the automated metrics. One such scenario arises from the original Google Natural Questions (GNQ) dataset, which was collected and made available in the year 2016. Use of any such dataset proves to be inefficient with respect to questions that have time-varying answers. For illustration, consider the query "Where will the next Olympics be held?" The gold answer for this query as given in the GNQ dataset is "Tokyo". Since the dataset was collected in 2016, and the next Olympics after 2016 were held in Tokyo in 2020, that answer was correct at the time. But if the same question is asked in 2022, then the answer is "Paris, 2024". Consequently, any evaluation based on the GNQ dataset will be incorrect. Such erroneous predictions are usually given to human evaluators for further validation, which is quite expensive and time-consuming. To address this erroneous evaluation, the present work proposes an automated approach for evaluating time-dependent question-answer pairs. In particular, it proposes a metric using the current timestamp along with the top-n predicted answers from a given QA system. To test the proposed approach, the GNQ dataset was used, and the system achieved an accuracy of 78% on a test dataset comprising 100 QA pairs. This test data was automatically extracted, using an analysis-based approach, from 10K QA pairs of the GNQ dataset. The results obtained are encouraging. The proposed technique appears to have the potential to develop into a useful scheme for gathering precise, reliable, and specific information in a real-time and efficient manner. Our subsequent experiments will be directed towards establishing the efficacy of the above system for a larger set of time-dependent QA pairs.
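
The paper does not give the metric's exact form; the sketch below only illustrates the idea of scoring a prediction as correct if any of the top-n answers is valid at the current timestamp. The `answer_timeline` lookup and its dates are hypothetical:

```python
from datetime import date

# Hypothetical timeline of dated gold answers for one time-varying question.
answer_timeline = {
    "where will the next olympics be held?": [
        (date(2016, 1, 1), "tokyo"),  # valid from the 2016 collection date
        (date(2021, 8, 9), "paris"),  # valid once the Tokyo games ended
    ],
}

def gold_answer_now(question, today):
    """Return the gold answer valid at `today` (latest entry not in the future)."""
    valid = [ans for d, ans in answer_timeline[question] if d <= today]
    return valid[-1] if valid else None

def time_aware_match(question, top_n_predictions, today):
    """Score 1 if any of the top-n predictions contains the currently valid answer."""
    gold = gold_answer_now(question, today)
    return int(any(gold in p.lower() for p in top_n_predictions))

print(time_aware_match("where will the next olympics be held?",
                       ["Paris, 2024", "Tokyo"], today=date(2022, 6, 1)))  # -> 1
```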

Keywords: web-based information retrieval, open domain question answering system, time-varying QA, QA evaluation

Procedia PDF Downloads 88
2530 HyDUS Project: Seeking a Wonder Material for Hydrogen Storage

Authors: Monica Jong, Antonios Banos, Tom Scott, Chris Webster, David Fletcher

Abstract:

Hydrogen, as a clean alternative to methane, is relatively easy to make, either from water using electrolysis or from methane using steam reformation. However, hydrogen is much trickier to store than methane, and without effective storage, it simply won’t pass muster as a suitable methane substitute. Physical storage of hydrogen is quite inefficient. Storing hydrogen as a compressed gas at pressures up to 900 times atmospheric is volumetrically inefficient and carries safety implications, whilst storing it as a liquid requires costly and constant cryogenic cooling to minus 253°C. This is where depleted uranium (DU) steps in as a possible solution. Across the periodic table, there are many metallic elements that will react with hydrogen to form a chemical compound known as a hydride (or metal hydride). From a chemical perspective, the ‘king’ of the hydride-forming metals is palladium, because it offers the highest volumetric hydrogen storage capacity. However, this material is simply too expensive and scarce to be used in a scaled-up bulk hydrogen storage solution. Depleted uranium is the second most volumetrically efficient hydride-forming metal after palladium. The UK has accrued a significant stockpile of DU as a by-product of manufacturing nuclear fuel over many decades, and it is currently without real commercial use. Uranium trihydride (UH3) contains three hydrogen atoms for every uranium atom and can chemically store hydrogen at ambient pressure and temperature at more than twice the density of pure liquid hydrogen for the same volume. To release the hydrogen from the hydride, it simply needs to be heated: at temperatures above 250°C, the hydride starts to thermally decompose, releasing hydrogen as a gas and leaving the uranium as a metal again. The reversible nature of this reaction allows the hydride to be formed and unformed again and again, enabling its use as a high-density hydrogen storage material that is already available in large quantities because of its stockpiling as a ‘waste’ by-product. Whilst the tritium storage credentials of uranium have been rigorously proven at the laboratory scale and at the fusion demonstrator JET for over 30 years, there is a need to prove the concept of depleted uranium hydrogen storage (HyDUS) at scales approaching those needed to flexibly supply our national power grid with energy. This is exactly the purpose of the HyDUS project, a collaborative venture involving EDF as the interested energy vendor, Urenco as the owner of the waste DU, and the University of Bristol with the UKAEA as the architects of the technology. The team will embark on building and proving the world’s first pilot-scale demonstrator of bulk chemical hydrogen storage using depleted uranium. Within 24 months, the team will attempt to prove both the technical and commercial viability of this technology as a longer-duration energy storage solution for the UK. The HyDUS project seeks to enable a true by-product-to-wonder-material story for depleted uranium, demonstrating that we can think sustainably about unlocking the potential value trapped inside nuclear waste materials.
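
The storage chemistry sketched above is the reversible uranium-hydrogen reaction, which in balanced form is

```latex
2\,\mathrm{U} + 3\,\mathrm{H_2} \;\rightleftharpoons\; 2\,\mathrm{UH_3}
```

with hydriding proceeding at ambient temperature and pressure and dehydriding above roughly 250°C, as described in the abstract.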

Keywords: hydrogen, long duration storage, storage, depleted uranium, HyDUS

Procedia PDF Downloads 130
2529 Two-Photon-Exchange Effects in the Electromagnetic Production of Pions

Authors: Hui-Yun Cao, Hai-Qing Zhou

Abstract:

High-precision measurements and experiments play an increasingly important role in particle physics and atomic physics. To analyse precise experimental data sets, correspondingly precise and reliable theoretical calculations are necessary. The form factors of elementary constituents such as the pion and the proton remain attractive issues in current Quantum Chromodynamics (QCD). In this work, the two-photon-exchange (TPE) effects in ep→enπ⁺ at small -t are discussed within a hadronic model. Under the pion-dominance approximation and the limit mₑ→0, the TPE contribution to the amplitude can be described by a scalar function. We calculate the TPE contributions to the amplitude and the unpolarized differential cross section with only the elastic intermediate state considered. The results show that the TPE corrections to the unpolarized differential cross section range from about -4% to -20% at Q²=1-1.6 GeV². After applying the TPE corrections to the experimental data sets for the unpolarized differential cross section, we analyze the TPE corrections to the separated cross sections σ(L,T,LT,TT). We find that the TPE corrections (at Q²=1-1.6 GeV²) to σL range from about -10% to -30%, those to σT are about 20%, and those to σ(LT,TT) are much larger. From these analyses, we conclude that the TPE contributions in ep→enπ⁺ at small -t are important for extracting the separated cross sections σ(L,T,LT,TT) and the electromagnetic form factor of π⁺ in experimental analyses.

Keywords: differential cross section, form factor, hadronic, two-photon

Procedia PDF Downloads 116
2528 Composition and Catalytic Behaviour of Biogenic Iron Containing Materials Obtained by Leptothrix Bacteria Cultivation in Different Growth Media

Authors: M. Shopska, D. Paneva, G. Kadinov, Z. Cherkezova-Zheleva, I. Mitov

Abstract:

Iron-containing materials are used as catalysts in different processes. Chemical methods of their synthesis use toxic and expensive chemicals, sophisticated devices, and energy-consuming processes that raise their cost; besides, dangerous waste products are formed. At present, such syntheses are out of date, and waste-free technologies are indispensable. Bioinspired technologies are consistent with ecological requirements. Different microorganisms participate in the biomineralization of iron, and some phytochemicals are involved, too. The methods for biogenic production of iron-containing materials are clean, simple, nontoxic, realized at ambient temperature and pressure, and cheaper. Biogenic iron materials embrace different iron compounds. Due to their origin, these substances are nanosized, amorphous or poorly crystalline, and porous, and have a number of useful properties like superparamagnetism (SPM), high magnetism, low toxicity, biocompatibility, absorption of microwaves, a high surface-area-to-volume ratio, and active surface sites with unusual coordination, which distinguish them from bulk materials. Biogenic iron materials are applied in heterogeneous catalysis in different roles – precursor, active component, support, immobilizer. The application of biogenic iron oxide materials gives rise to increased catalytic activity in comparison with those of abiotic origin. In our study, we investigated the catalytic behavior of biomasses obtained by cultivation of Leptothrix bacteria in three nutrition media – Adler, Fedorov, and Lieske. The biomass composition was studied by Moessbauer spectroscopy and transmission IRS. Catalytic experiments on CO oxidation were carried out using in situ DRIFTS. Our results showed that: i) the used biomasses contain α-FeOOH, γ-FeOOH, and γ-Fe2O3 in different ratios; ii) the biomass formed in Adler medium contains γ-FeOOH as the main phase; its CO conversion was about 50%, as evaluated by the decreased integrated band intensity in the gas-mixture spectra during the reaction, and the main phase in the spent sample is γ-Fe2O3; iii) the biomass formed in Lieske medium contains α-FeOOH; its CO conversion was about 20%, and the main phase in the spent sample is α-Fe2O3; iv) the biomass formed in Fedorov medium contains γ-Fe2O3 as the main phase, and its CO conversion in the test reaction was about 19%. The results showed that the catalytic activity up to 200°C resulted predominantly from α-FeOOH and γ-FeOOH, while the catalytic activity at temperatures higher than 200°C was due to the formation of γ-Fe2O3. The oxyhydroxides, which are the principal compounds in the biomass, have low catalytic activity in the studied reaction; the maghemite has relatively good catalytic activity; the hematite has activity commensurate with that of the oxyhydroxides. Moreover, it can be affirmed that catalytic activity is inherent in maghemite obtained by transformation of the biogenic lepidocrocite, i.e., maghemite with a biogenic precursor.
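
Two textbook reactions are consistent with the phase changes reported above: thermal dehydroxylation of the oxyhydroxides into the corresponding oxides (γ-FeOOH to γ-Fe2O3, α-FeOOH to α-Fe2O3), and the CO oxidation test reaction itself:

```latex
2\,\mathrm{FeOOH} \xrightarrow{\ \Delta\ } \mathrm{Fe_2O_3} + \mathrm{H_2O},
\qquad
\mathrm{CO} + \tfrac{1}{2}\,\mathrm{O_2} \rightarrow \mathrm{CO_2}
```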

Keywords: nanosized biogenic iron compounds, catalytic behavior in reaction of CO oxidation, in situ DRIFTS, Moessbauer spectroscopy

Procedia PDF Downloads 359
2527 Changing from Crude (Rudimentary) to Modern Method of Cassava Processing in the Ngwo Village of Njikwa Sub Division of North West Region of Cameroon

Authors: Loveline Ambo Angwah

Abstract:

The processing of cassava from tubers or roots into food using crude and rudimentary methods (hand peeling, grating, frying and sun drying) is a very cumbersome and difficult process. The crude methods are time-consuming and labour-intensive. On the other hand, modern processing methods, that is, using machines to perform the various processes such as washing, peeling, grinding, oven drying, fermentation and frying, are easier, less time-consuming, and less labour-intensive. Rudimentarily, cassava roots are processed into numerous products and utilized in various ways according to local customs and preferences. For the people of Ngwo village, cassava is transformed locally into a flour or powder form called ‘cumcum’. It is also soaked in water to give a kind of food called ‘water fufu’ and fried to give ‘garri’. The leaves are consumed as vegetables. Added to these, its relatively high yield and its ability to stay underground after maturity for long periods give cassava considerable advantage as a commodity used by poor rural folks in the community to fight poverty. It plays a major role in efforts to alleviate the food crisis because of its efficient production of food energy, year-round availability, tolerance to extreme stress conditions, and suitability to present farming and food systems in Africa. Improvement of cassava processing and utilization techniques would greatly increase labour efficiency, incomes, and living standards of cassava farmers and the rural poor, as well as enhance the shelf life of products, facilitate their transportation, increase marketing opportunities, and help improve human and livestock nutrition. This paper presents a general overview of the crude cassava processing and utilization methods now used by subsistence and small-scale farmers in Ngwo village of the North West region of Cameroon, and examines the opportunities for improving processing technologies. Cassava needs processing because the roots rot within 3-4 days of harvest and so cannot be stored for long. They are bulky, with about 70% moisture content, and therefore transportation of the tubers to markets is difficult and expensive. The roots and leaves contain varying amounts of cyanide, which is toxic to humans and animals, while raw cassava roots and uncooked leaves are unpalatable. Therefore, cassava must be processed into various forms in order to increase the shelf life of the products, facilitate transportation and marketing, reduce cyanide content and improve palatability.

Keywords: cassava roots, crude ways, food system, poverty

Procedia PDF Downloads 151
2526 Internationalization Strategies and Firm Productivity: Manufacturing Firm-Level Evidence from Ethiopia

Authors: Soressa Tolcha Jarra

Abstract:

Examining firm-level internationalization strategies and their effects on productivity is needed in order to understand both the role of firms’ participation in trading activities and the effects of firms’ internationalization strategies on firm-level productivity. Thus, this study investigates firms' importing of intermediates and export strategies and their impact on firm productivity, using an establishment-level panel dataset of Ethiopian manufacturing firms over the period 2011-2020. Methodologically, the firm’s joint decision to import intermediates and to export is estimated by system GMM using Wooldridge's approach. A translog production function is used to estimate firm-level productivity, with unobserved productivity assumed to follow a general Markov process. Firm size is used in a mediating role. The results indicate evidence of self-selection of more productive firms into exporting and importing intermediates, which is indicative of sizable export and import market entry costs. Furthermore, there is evidence in favor of the learning-by-exporting (LBE) and learning-by-importing (LBI) hypotheses for small and medium Ethiopian manufacturing firms. However, for large firms, there is only evidence in support of the learning-by-exporting (LBE) hypothesis.
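
For reference, the translog production function mentioned above has the standard form (a sketch; the authors' exact specification may differ):

```latex
\ln y_{it} = \beta_0 + \sum_{j}\beta_j \ln x_{jit}
+ \tfrac{1}{2}\sum_{j}\sum_{k}\beta_{jk}\,\ln x_{jit}\,\ln x_{kit}
+ \omega_{it} + \varepsilon_{it}
```

where y_it is the output of firm i in year t, the x_jit are inputs, ω_it is unobserved productivity following the general Markov process, and ε_it is noise.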

Keywords: Ethiopia, export, firm productivity, intermediate imports

Procedia PDF Downloads 9
2525 Drinking Water Quality Assessment Using Fuzzy Inference System Method: A Case Study of Rome, Italy

Authors: Yas Barzegar, Atrin Barzegar

Abstract:

Drinking water quality assessment is a major issue today; technology and practices are continuously improving, and Artificial Intelligence (AI) methods have proven their efficiency in this domain. The current research develops a hierarchical fuzzy model for predicting drinking water quality in Rome (Italy). The Mamdani fuzzy inference system (FIS) is applied with different defuzzification methods. The proposed model includes three intermediate fuzzy models and one final fuzzy model. Each fuzzy model consists of three input parameters and 27 fuzzy rules. The model is developed for water quality assessment with a dataset considering nine parameters (alkalinity, hardness, pH, Ca, Mg, fluoride, sulphate, nitrates, and iron). Fuzzy-logic-based methods have been demonstrated to be appropriate for addressing uncertainty and subjectivity in drinking water quality assessment. The FIS method is effective for managing complicated, uncertain water systems and predicting drinking water quality; it can provide an effective solution to complex systems and can be modified easily to improve performance.
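
The membership functions and rule base are not reproduced in the abstract; the reduced two-input sketch below only illustrates the Mamdani machinery involved (min implication, max aggregation, centroid defuzzification). All ranges and rules here are assumptions:

```python
# Minimal Mamdani FIS sketch (illustrative; the paper's models use three
# inputs and 27 rules each -- all ranges and rules here are assumptions).
import numpy as np

def tri(x, a, b, c):
    """Triangular membership with feet a, c and peak b (a < b < c)."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

quality = np.linspace(0, 100, 501)    # output universe: water quality score
q_poor = tri(quality, -100, 0, 50)    # shoulder: foot extends past 0
q_good = tri(quality, 50, 100, 200)   # shoulder: foot extends past 100

def assess(ph, nitrate_mg_l):
    # Fuzzify the crisp inputs (hypothetical 'acceptable' ranges).
    ph_ok = tri(np.array(ph), 6.5, 7.5, 8.5)
    no3_low = tri(np.array(nitrate_mg_l), -50.0, 0.0, 50.0)
    # Rule base: (pH ok AND nitrate low) -> good; otherwise -> poor.
    w_good = min(ph_ok, no3_low)                  # Mamdani min implication
    w_poor = 1.0 - w_good
    agg = np.maximum(np.minimum(w_good, q_good),  # max aggregation
                     np.minimum(w_poor, q_poor))
    return float((quality * agg).sum() / agg.sum())  # centroid defuzzification

print(f"quality score: {assess(7.2, 10.0):.1f}")
```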

Keywords: water quality, fuzzy logic, smart cities, water attribute, fuzzy inference system, membership function

Procedia PDF Downloads 62
2524 A Robust Model Predictive Control for a Photovoltaic Pumping System Subject to Actuator Saturation Nonlinearity and Parameter Uncertainties: A Linear Matrix Inequality Approach

Authors: Sofiane Bououden, Ilyes Boulkaibet

Abstract:

In this paper, a robust model predictive controller (RMPC) for uncertain nonlinear systems under actuator saturation is designed to control a DC-DC buck converter in a PV pumping application, a system subject to actuator saturation and parameter uncertainties. The considered nonlinear system contains a linear constant part perturbed by an additive state-dependent nonlinear term. Based on the saturating-actuator property, an appropriate linear feedback control law is constructed and used to minimize an infinite-horizon cost function within the framework of linear matrix inequalities. The proposed approach successfully provides a solution to the optimization problem that can stabilize the nonlinear plant. Furthermore, sufficient conditions for the existence of the proposed controller guarantee the robust stability of the system in the presence of polytopic uncertainties. In addition, simulation results demonstrate the efficiency of the proposed control scheme.
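
As a minimal illustration of the LMI framework invoked above, the feasibility problem below checks quadratic stability of a toy linear plant via the Lyapunov inequality in CVXPY; the authors' full RMPC synthesis (saturation, polytopic uncertainty and the infinite-horizon cost) involves a richer set of LMIs:

```python
# Minimal LMI feasibility sketch (Lyapunov stability), not the full RMPC design.
import cvxpy as cp
import numpy as np

A = np.array([[0.0, 1.0],
              [-2.0, -1.0]])          # toy stable plant (assumption)

P = cp.Variable((2, 2), symmetric=True)
constraints = [P >> np.eye(2),                        # P positive definite
               A.T @ P + P @ A << -1e-3 * np.eye(2)]  # Lyapunov LMI
prob = cp.Problem(cp.Minimize(0), constraints)
prob.solve(solver=cp.SCS)
print(prob.status, np.round(P.value, 3))
```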

Keywords: PV pumping system, DC-DC buck converter, robust model predictive controller, nonlinear system, actuator saturation, linear matrix inequality

Procedia PDF Downloads 164
2523 Developing a Web GIS Tool for the Evaluation of Soil Erosion of a Watershed

Authors: Y. Fekir, K. Mederbal, M. A. Hamadouche, D. Anteur

Abstract:

Soil erosion by water has become one of the biggest environmental problems in the world, threatening the majority of countries. There are several models for evaluating erosion. These models are still simplified representations of reality: they permit the analysis of complex systems, complement measurements by allowing extrapolation in time and space, and may combine different factors. The empirical soil-loss model proposed by Wischmeier and Smith, the Universal Soil Loss Equation (USLE), is widely used in many countries. It treats erosion as a multiplicative function of five factors: rainfall erosivity (the R factor), soil erodibility (K), topography (LS), erosion control practices (P), and vegetation cover and agricultural practices (C). In this work, we developed a tool based on Web GIS functionality to evaluate soil losses caused by erosion, taking these five factors into account. The tool allows the user to integrate all the data needed for the evaluation (DEM, land use, rainfall, etc.) in the form of digital layers in order to calculate the five factors of the USLE equation (R, K, C, P, LS). Accordingly, after treatment of the integrated dataset, a map of soil losses is produced as a result. We tested the proposed tool on a watershed located in the west of Algeria, for which a dataset was collected and prepared.
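
In equation form, the USLE computes the average annual soil loss A as

```latex
A = R \cdot K \cdot LS \cdot C \cdot P
```

where R is rainfall erosivity, K soil erodibility, LS the topographic (slope length-steepness) factor, C the cover-management factor and P the support-practice factor; in the tool, each factor is derived as a digital layer and the layers are combined to produce the soil-loss map.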

Keywords: USLE, erosion, Web GIS, Algeria

Procedia PDF Downloads 318
2522 A Preliminary Study on Factors Determining the Success of High Conservation Value Area in Oil Palm Plantations

Authors: Yanto Santosa, Rozza Tri Kwatrina

Abstract:

A High Conservation Value (HCV) area is an area with a conservation function within an oil palm plantation. Despite the important role of HCV areas in biodiversity conservation and the various studies on HCV, there has been a lack of research on the factors determining their success. A preliminary study was therefore conducted to identify the determinant factors of HCV that affect diversity. The line transect method was used to calculate the species diversity and richness of butterflies, birds, mammals, and herpetofauna; for mammals, camera traps were also used. The research sites comprised 12 HCV areas in 3 provinces of Indonesia (Central Kalimantan, Riau, and Palembang). The relationship between the HCV biophysical factors and the species number and species diversity of each wildlife class was identified using Chi-Square analysis with cross tabulation (contingency tables). The results revealed that species diversity varied by location. Four factors determine the success of an HCV area in relation to the number and diversity of wildlife species: land cover type for mammals, the width of the area and distance to rivers for birds, and distance to settlements for butterflies.

Keywords: wildlife diversity, oil palm plantation, high conservation value area, ecological factors

Procedia PDF Downloads 137
2521 A Stepped Care mHealth-Based Approach for Obesity with Type 2 Diabetes in Clinical Health Psychology

Authors: Gianluca Castelnuovo, Giada Pietrabissa, Gian Mauro Manzoni, Margherita Novelli, Emanuele Maria Giusti, Roberto Cattivelli, Enrico Molinari

Abstract:

Diabesity can be defined as a new global epidemic of obesity and overweight with many complications and chronic conditions. Such conditions include not only type 2 diabetes, but also cardiovascular diseases, hypertension, dyslipidemia, hypercholesterolemia, cancer, and various psychosocial and psychopathological disorders. The direct and indirect financial burden (considering also the clinical resources involved and the loss of productivity) is a real challenge for many Western health-care systems. Recently, the Lancet defined diabetes as a 21st-century challenge. In order to promote patient compliance in diabesity treatment while reducing costs, evidence-based interventions to improve weight loss, maintain a healthy weight, and reduce related comorbidities combine different treatment approaches: dietetic, nutritional, physical, behavioral, psychological, and, in some situations, pharmacological and surgical. Moreover, new technologies can provide useful solutions in this multidisciplinary approach, above all in maintaining long-term compliance and adherence in order to ensure clinical efficacy. Psychological therapies combined with diet and exercise plans can better help patients achieve weight loss outcomes, both inside hospitals and clinical centers and during outpatient follow-up sessions. In the management of chronic diseases, clinical psychology plays a key role due to the need to work on the psychological conditions of patients, their families and their caregivers. The mHealth approach can overcome limitations linked with the traditional, restricted and highly expensive inpatient treatment of many chronic pathologies: one of the best up-to-date applications is the management of obesity with type 2 diabetes, where mHealth solutions can provide remote opportunities for enhancing weight reduction and reducing complications from clinical, organizational and economic perspectives. A stepped-care mHealth-based approach is an interesting perspective in the chronic care management of obesity with type 2 diabetes. One promising future direction could be treating obesity, considered as a chronic multifactorial disease, using a stepped-care approach: (1) an mHealth-based or traditional lifestyle psychoeducational and nutritional approach; (2) multidisciplinary protocols driven by health professionals and tailored to each patient; (3) an inpatient approach including drug therapies and other multidisciplinary treatments; and (4) bariatric surgery with psychological and medical follow-up. In the chronic care management of globesity, mHealth solutions cannot substitute for traditional approaches, but they can supplement some steps in clinical psychology and medicine, both for obesity prevention and for weight loss management.

Keywords: clinical health psychology, mhealth, obesity, type 2 diabetes, stepped care, chronic care management

Procedia PDF Downloads 328
2520 Numerical Simulation of Unsteady Natural Convective Nanofluid Flow within a Trapezoidal Enclosure Using Meshfree Method

Authors: S. Nandal, R. Bhargava

Abstract:

The paper contains a numerical study of the unsteady magneto-hydrodynamic natural convection flow of nanofluids within a symmetrical wavy-walled trapezoidal enclosure. The length and height of the enclosure are both taken equal to L, and a two-phase nanofluid model is employed. The governing equations of nanofluid flow, along with the boundary conditions, are non-dimensionalized and solved using a meshfree technique, the element-free Galerkin method (EFGM). Meshfree numerical techniques such as EFGM do not require a predefined mesh for discretization of the domain. The bottom wavy wall of the enclosure is defined using a cosine function. The effects of various parameters, namely time t, amplitude of the bottom wavy wall a, Brownian motion parameter Nb, and thermophoresis parameter Nt, are examined on the rates of heat and mass transfer to get a visualization of cooling and heating effects. Such problems have important applications in heat exchangers and solar collectors, as wavy-walled enclosures enhance heat transfer in comparison to flat-walled enclosures.

Keywords: heat transfer, meshfree methods, nanofluid, trapezoidal enclosure

Procedia PDF Downloads 147
2519 New Two-Way Map-Reduce Join Algorithm: Hash Semi Join

Authors: Marwa Hussein Mohamed, Mohamed Helmy Khafagy, Samah Ahmed Senbel

Abstract:

MapReduce is a programming model used to handle and support massive data sets. The rapid increase in data size and the rise of big data make the analysis of such data a most important issue today. MapReduce is used to analyze data and extract useful information via two simple programmer-written functions, map and reduce, and it provides load balancing, fault tolerance and high scalability. Join is among the most important operations in data analysis, but MapReduce does not support joins directly. This paper explains two two-way map-reduce join algorithms, semi-join and per-split semi-join, and proposes a new algorithm, hash semi-join, which uses a hash table to increase performance by eliminating unused records as early as possible and by applying the join using the hash table, rather than using a map function to match join keys against the other table in the second phase. Using a hash table does not inflate memory usage, because only the matched records from the second table are saved. Our experimental results show that the hash semi-join algorithm has higher performance than the two other algorithms as the data size increases from 10 million records to 500 million, with running time increasing according to the size of the joined records between the two tables.
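
As a plain-Python sketch of the core idea (a stand-in for the map/reduce phases, with illustrative table data): build a hash table of join keys from the smaller table, then keep only the matching records of the other table as early as possible:

```python
# Sketch of the hash semi-join idea: hash the join keys of the smaller table,
# then stream the larger table and retain matches only.
def hash_semi_join(small_table, big_table, key_small, key_big):
    keys = {row[key_small] for row in small_table}   # hash table of join keys
    # Eliminate unused records as early as possible: only matching rows survive.
    return [row for row in big_table if row[key_big] in keys]

customers = [{"id": 1, "name": "Ana"}, {"id": 2, "name": "Bo"}]
orders = [{"cust_id": 1, "total": 30}, {"cust_id": 3, "total": 12}]
print(hash_semi_join(customers, orders, "id", "cust_id"))
# -> [{'cust_id': 1, 'total': 30}]
```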

Keywords: map reduce, hadoop, semi join, two way join

Procedia PDF Downloads 500
2518 Steepest Descent Method with New Step Sizes

Authors: Bib Paruhum Silalahi, Djihad Wungguli, Sugi Guritman

Abstract:

The steepest descent method is a simple gradient method for optimization. This method converges slowly towards the optimal solution because of the zigzag form of its steps. Barzilai and Borwein modified the algorithm so that it performs well for problems with large dimensions. The Barzilai and Borwein results have sparked a lot of research on the steepest descent method, including the alternate minimization gradient method and Yuan's method. Inspired by these previous works, we modified the step size of the steepest descent method. We then compare the modification against the Barzilai and Borwein method, the alternate minimization gradient method and Yuan's method for quadratic function cases, in terms of the number of iterations and the running time. The average results indicate that the steepest descent method with the new step sizes provides good results for small dimensions and is able to compete with the Barzilai and Borwein method and the alternate minimization gradient method for large dimensions. The new step sizes have faster convergence compared to the other methods, especially for cases with large dimensions.
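
For concreteness, the classical Barzilai-Borwein (BB1) step size against which such modifications are compared can be sketched on a toy quadratic (illustrative code, not the paper's implementation):

```python
# Steepest descent with Barzilai-Borwein step sizes on a toy quadratic
# f(x) = 0.5 x'Ax - b'x (sketch for comparison purposes only).
import numpy as np

A = np.diag([1.0, 10.0, 100.0])     # ill-conditioned quadratic
b = np.ones(3)
grad = lambda x: A @ x - b

x = np.zeros(3)
g = grad(x)
alpha = 1e-3                        # first iteration: plain small step
for k in range(100):
    x_new = x - alpha * g
    g_new = grad(x_new)
    s, y = x_new - x, g_new - g
    alpha = (s @ s) / (s @ y)       # BB1 step size
    x, g = x_new, g_new
    if np.linalg.norm(g) < 1e-8:
        break
print(k, x)                         # converges to A^{-1} b = [1, 0.1, 0.01]
```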

Keywords: steepest descent, line search, iteration, running time, unconstrained optimization, convergence

Procedia PDF Downloads 530
2517 A Novel Approach to Asynchronous State Machine Modeling on Multisim for Avoiding Function Hazards

Authors: Parisi L., Hamili D., Azlan N.

Abstract:

The aim of this study was to design and simulate a particular type of Asynchronous State Machine (ASM), namely a ‘traffic light controller’ (TLC), operated at a frequency of 0.5 Hz. The design task involved two main stages: firstly, designing a 4-bit binary counter using J-K flip-flops as the timing signal, and subsequently attaining the digital logic by deploying the ASM design process. The TLC was designed to show a sequence of three different colours, i.e. red, yellow and green, corresponding to set thresholds, using the least number of AND, OR and NOT gates possible. The software Multisim was used to design the circuit and simulate it for troubleshooting, so that it displayed the output sequence of the three colours on the traffic light in the correct order. A clock signal, an asynchronous 4-bit binary counter designed with J-K flip-flops, and the ASM were used to complete this sequence, which was programmed to repeat indefinitely. Eventually, the circuit was debugged and optimized, displaying the correct waveforms of the three outputs on the logic analyzer. However, hazards occurred when the frequency was increased to 10 MHz; this was attributed to excessively long feedback delays.
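
As a behavioural sketch of the timing chain described above (a Python stand-in; the colour thresholds are chosen purely for illustration, since the paper's actual thresholds and gate-level logic are not given in the abstract):

```python
# Behavioural sketch: a 4-bit ripple counter built from J-K flip-flops in
# toggle mode (J=K=1), driving hypothetical colour thresholds.
def jk_toggle(q):                  # J=K=1: the output toggles on each clock edge
    return not q

q = [False, False, False, False]   # four flip-flop outputs, LSB first
for tick in range(16):
    count = sum(bit << i for i, bit in enumerate(q))
    # Hypothetical ASM output logic: thresholds for illustration only.
    colour = "red" if count < 7 else "yellow" if count < 9 else "green"
    print(f"{count:2d} -> {colour}")
    # Ripple: stage i toggles when stage i-1 falls from 1 to 0.
    i, carry = 0, True
    while carry and i < 4:
        carry = q[i]               # a falling edge propagates when Q was 1
        q[i] = jk_toggle(q[i])
        i += 1
```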

Keywords: asynchronous state machine, traffic light controller, circuit design, digital electronics

Procedia PDF Downloads 416
2516 Stability Analysis of Three-Dimensional Flow and Heat Transfer over a Permeable Shrinking Surface in a Cu-Water Nanofluid

Authors: Roslinda Nazar, Amin Noor, Khamisah Jafar, Ioan Pop

Abstract:

In this paper, the steady laminar three-dimensional boundary layer flow and heat transfer of a copper (Cu)-water nanofluid in the vicinity of a permeable shrinking flat surface in an otherwise quiescent fluid are studied. A nanofluid mathematical model in which the effect of the nanoparticle volume fraction is taken into account is considered. The governing nonlinear partial differential equations are transformed into a system of nonlinear ordinary differential equations using a similarity transformation, which is then solved numerically using the function bvp4c from Matlab. Dual solutions (upper- and lower-branch solutions) are found for the similarity boundary layer equations for a certain range of the suction parameter. A stability analysis has been performed to show which branch solutions are stable and physically realizable. The numerical results for the skin friction coefficient and the local Nusselt number, as well as the velocity and temperature profiles, are obtained, presented and discussed in detail for a range of governing parameters.
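
Python offers a collocation solver analogous to Matlab's bvp4c; as a stand-in for this paper's transformed similarity equations (which are specific to the Cu-water model), the sketch below solves the classical Blasius boundary-layer problem f''' + ½ff'' = 0 with scipy:

```python
# Blasius similarity equation as an illustrative bvp4c-style problem.
import numpy as np
from scipy.integrate import solve_bvp

def fun(eta, y):
    # State: y[0] = f, y[1] = f', y[2] = f''.
    return np.vstack([y[1], y[2], -0.5 * y[0] * y[2]])

def bc(ya, yb):
    # f(0) = 0, f'(0) = 0, f'(inf) -> 1 (far field truncated at eta = 10).
    return np.array([ya[0], ya[1], yb[1] - 1.0])

eta = np.linspace(0, 10, 100)
y0 = np.zeros((3, eta.size))
y0[1] = eta / eta[-1]              # crude linear initial guess for f'
sol = solve_bvp(fun, bc, eta, y0)
print(sol.y[2, 0])                 # f''(0) ~ 0.332, the skin-friction parameter
```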

Keywords: heat transfer, nanofluid, shrinking surface, stability analysis, three-dimensional flow

Procedia PDF Downloads 267
2515 A Comparative Study of Additive and Nonparametric Regression Estimators and Variable Selection Procedures

Authors: Adriano Z. Zambom, Preethi Ravikumar

Abstract:

One of the biggest challenges in nonparametric regression is the curse of dimensionality. Additive models are known to overcome this problem by estimating only the individual additive effect of each covariate. However, if the model is misspecified, the accuracy of the estimator compared to the fully nonparametric one is unknown. In this work, the efficiency of completely nonparametric regression estimators such as loess is compared to that of estimators that assume additivity, in several situations including additive and non-additive regression scenarios. The comparison is done by computing the oracle mean squared error of the estimators with respect to the true nonparametric regression function. Then, a backward elimination selection procedure based on the Akaike Information Criterion is proposed, computed from either the additive or the nonparametric model. Simulations show that if the additive model is misspecified, the percentage of time it fails to select important variables can be higher than that of the fully nonparametric approach. A dimension reduction step is included when the nonparametric estimator cannot be computed due to the curse of dimensionality. Finally, the Boston housing dataset is analyzed using the proposed backward elimination procedure, and the selected variables are identified.
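
A parametric stand-in helps fix ideas: backward elimination drops, at each pass, the variable whose removal lowers the AIC the most, and stops when no removal improves it. Here OLS replaces the paper's additive/nonparametric fits, and the data are synthetic:

```python
# Sketch of backward elimination by AIC (OLS stand-in, synthetic data).
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)
X = pd.DataFrame(rng.normal(size=(200, 4)), columns=list("abcd"))
y = 2 * X["a"] - X["b"] + rng.normal(size=200)      # 'c' and 'd' are irrelevant

def aic_of(cols):
    return sm.OLS(y, sm.add_constant(X[list(cols)])).fit().aic

selected = list(X.columns)
while len(selected) > 1:
    drop = min(selected, key=lambda v: aic_of([c for c in selected if c != v]))
    if aic_of([c for c in selected if c != drop]) >= aic_of(selected):
        break                                       # no removal improves AIC
    selected.remove(drop)
print(selected)                                     # typically ['a', 'b']
```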

Keywords: additive model, nonparametric regression, variable selection, Akaike Information Criteria

Procedia PDF Downloads 251
2514 A Contrastive Analysis on Hausa and Yoruba Adjectival Phrases

Authors: Abubakar Maikudi

Abstract:

Contrastive analysis is a method of analyzing the structure of any two languages with a view to determining the possible differential aspects of their systems, irrespective of their genetic affinity or level of development. Contrastive analysis of two languages becomes useful when it adequately describes the sound structure and grammatical structure of the two languages, with comparative statements giving emphasis to the compatible items in the two systems. This research work uses comparative analysis theory to analyze adjectives and adjectival phrases in the Hausa and Yorùbá languages. The Hausa language belongs to the Chadic family of the Afro-Asiatic phylum, while the Yorùbá language belongs to the Benue-Congo family of the Niger-Congo phylum. The findings of the research clearly demonstrate that there are significant similarities in the adjectival phrase constructions of the two languages, i.e., nominal (head) and post-nominal (post-head) uses of the adjective, the predicative function of the adjective, the use of the reduplicative adjective, and the use of comparative and superlative adjectives. However, there are dissimilarities in the adjectival phrases of the two languages in gender/number agreement and in the pre-nominal (pre-head) use of adjectives.

Keywords: genetic affinity, contrastive analysis, phylum, pre-head, post-head

Procedia PDF Downloads 209
2513 An Ant Colony Optimization Approach for the Pollution Routing Problem

Authors: P. Parthiban, Sonu Rajak, N. Kannan, R. Dhanalakshmi

Abstract:

This paper deals with the Vehicle Routing Problem (VRP) with environmental considerations, which is called the Pollution Routing Problem (PRP). The objective is to minimize operational and environmental costs. The problem consists of routing a number of vehicles to serve a set of customers and determining the vehicles' speed on each route segment, and hence fuel consumption and driver wages, while respecting capacity constraints and time windows. In this context, we present an Ant Colony Optimization (ACO) approach combined with a Speed Optimization Algorithm (SOA) to solve the PRP. The proposed solution method consists of two stages: stage one solves a Vehicle Routing Problem with Time Windows (VRPTW) using ACO, and in the second stage an SOA is run on the resulting VRPTW solutions. Given a vehicle route, the SOA consists of finding the optimal speed on each arc of the route in order to minimize an objective function comprising fuel consumption costs and driver wages. The proposed algorithm was tested on benchmark problems; the preliminary results show that it is able to provide good solutions.
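
Given a route, the per-arc speed trade-off that the SOA resolves can be sketched as a one-dimensional minimization: fuel cost grows with speed while driver wages fall with it. The cost model and coefficients below are illustrative assumptions, not the PRP's actual emission model:

```python
# Stylized per-arc speed optimization (SOA idea); coefficients are assumed.
from scipy.optimize import minimize_scalar

def arc_cost(v, dist_km=100.0, fuel_price=1.5, wage=12.0, k1=0.05, k2=1.2e-5):
    fuel = dist_km * (k1 + k2 * v**2)        # litres: drag term grows with v^2
    hours = dist_km / v                      # driving time falls with speed
    return fuel_price * fuel + wage * hours  # total arc cost

res = minimize_scalar(arc_cost, bounds=(40.0, 110.0), method="bounded")
print(f"optimal speed: {res.x:.1f} km/h, cost: {res.fun:.2f}")
```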

Keywords: ant colony optimization, CO2 emissions, combinatorial optimization, speed optimization, vehicle routing

Procedia PDF Downloads 305
2512 Heteroscedastic Parametric and Semiparametric Smooth Coefficient Stochastic Frontier Application to Technical Efficiency Measurement

Authors: Rebecca Owusu Coffie, Atakelty Hailu

Abstract:

Variants of production frontier models have emerged; however, only a limited number of them are applied in empirical research. Hence, the effects of these alternative frontier models are not well understood, particularly within sub-Saharan Africa. In this paper, we apply recent advances in production frontier modelling to examine levels of technical efficiency and efficiency drivers. Specifically, we compare the heteroscedastic parametric and semiparametric stochastic smooth coefficient (SPSC) models. Using rice production data from Ghana, our empirical estimates reveal that alternative specifications of efficiency estimators result in either downward or upward bias in the technical efficiency estimates. Methodologically, we find that the SPSC model is more suitable and generates high efficiency estimates. Within the parametric framework, we find that parameterizing both the mean and the variance of the pre-truncated distribution gives the best model. For the drivers of technical efficiency, we observe that longer farm distances increase inefficiency through a reduction in labor productivity, whereas high soil quality increases productivity through increased land productivity.
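
For reference, the stochastic frontier structure being compared has the standard composed-error form (a sketch; the exact parameterizations differ across the compared models):

```latex
\ln y_i = x_i'\beta + v_i - u_i,
\qquad v_i \sim N(0,\sigma_v^2),
\qquad u_i \sim N^{+}\!\big(\mu(z_i),\,\sigma_u^2(z_i)\big)
```

where u_i ≥ 0 is the inefficiency term drawn from a truncated normal with pre-truncation mean μ and variance σ_u², and the heteroscedastic variants let μ and/or σ_u² depend on drivers z_i such as farm distance and soil quality.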

Keywords: pre-truncated, rice production, smooth coefficient, technical efficiency

Procedia PDF Downloads 426
2511 An Error Analysis of English Communication of Suan Sunandha Rajabhat University Students

Authors: Chantima Wangsomchok

Abstract:

The main purposes of this study are (1) to test the students’ communicative competence in six main functions: greeting, parting, thanking, offering, requesting and suggesting, (2) to employ error analysis of the students’ communicative competence in those functions, and (3) to compare the characteristics of the errors found in the investigation. The subjects of the study are 328 first-year undergraduates taking the Foundation English course in the first semester of the 2008 academic year at Suan Sunandha Rajabhat University. The study found that while the subjects showed high communicative competence in the use of three functions, greeting, thanking, and offering, they showed poor communicative competence in suggesting, requesting and parting. In addition, grammatical errors were most frequently found in the parting function and least frequently found in the thanking and requesting functions, respectively. By contrast, the students tended to show high pragmatic failure in the use of the greeting and suggesting functions.

Keywords: error analysis, functions of English language, communicative competence, cognitive science

Procedia PDF Downloads 420
2510 Strength Performance and Microstructure Characteristics of Natural Bonded Fiber Composites from Malaysian Bamboo

Authors: Shahril Anuar Bahari, Mohd Azrie Mohd Kepli, Mohd Ariff Jamaludin, Kamarulzaman Nordin, Mohamad Jani Saad

Abstract:

Formaldehyde released from wood-based panel composites can be highly toxic and may increase risks to human health as well as causing environmental problems. A new bio-composite product without synthetic adhesive or resin could be developed in order to reduce these problems. Apart from formaldehyde release, adhesives are also considered expensive, especially in the manufacturing of composite products. Natural bonded composites are panel products composed of any type of cellulosic material without the addition of synthetic resins; bonding is achieved by activating the chemical content of the cellulosic materials. The pulp and paper making method (chemical pulping) was used as a general guide in the composite manufacturing. This method generally reduces manufacturing cost and the risk of formaldehyde emission, and has the potential to be used as an alternative technology in the fiber composites industry. In this study, a natural bonded bamboo fiber composite was produced from virgin Malaysian bamboo fiber (Bambusa vulgaris). The bamboo culms were chipped and digested into fiber using this pulping method. The black liquor collected from the pulping process was used as a natural binding agent in the composition. The fibers were then mixed and blended with black liquor without any resin addition. The amount of black liquor used per composite board was 20%, with approximately 37% solid content. The composites were fabricated using a hot press at two board densities, 850 and 950 kg/m³, with two hot-pressing times, 25 and 35 minutes. Samples of the composites from the different densities and hot-pressing times were tested for strength performance in flexure and internal bonding (IB) according to British Standards. Modulus of elasticity (MOE) and modulus of rupture (MOR) were determined in the flexural test, while the tensile force perpendicular to the surface was recorded in the IB test. Results show that the strength performance of the composites with 850 kg/m³ density was significantly higher than at 950 kg/m³ density, especially for samples from the 25-minute hot-pressing time, and the strength of composites pressed for 25 minutes was generally greater than for 35 minutes. The maximum mean values of strength performance were recorded for composites with 850 kg/m³ density and 25 minutes pressing time: the maximum mean values for MOE, MOR and IB were 3251.84, 16.88 and 0.27 MPa, respectively. Only the MOE result conformed to the high density fiberboard (HDF) requirement (2700 MPa) in the British Standard for fiberboard specification, BS EN 622-5:2006. The microstructure characteristics of the composites can also be related to their strength performance: the fiber damage observed in composites at 950 kg/m³ density and overheating of the black liquor led to low strength properties, especially in the IB test.

Keywords: bamboo fiber, natural bonded, black liquor, mechanical tests, microstructure observations

Procedia PDF Downloads 241
2509 The Legal Position of Criminal Prevention in the Metaverse World

Authors: Andi Intan Purnamasari, Supriyadi, Sulbadana, Aminuddin Kasim

Abstract:

Law functions as social control, providing arrangements not only for legal certainty but also for justice and expediency. The three values achieved by law essentially function to bring comfort to each individual in carrying out daily activities. However, it is undeniable that global conditions have changed the orientation of people's lifestyles. Some people want to ensure their existence in the digital world, which is popularly known as the metaverse; some countries even project their cities as metaverse cities. The order of life is no longer limited to real space but extends to the cyber world, and not infrequently, legal events that occur in the cyber world force the law to define its position and even to prevent crime in cyberspace. This research conceptually examines the legal position on crime prevention in the metaverse world. When the law acts to regulate the situation in the virtual world, some people will of course feel disturbed, owing to the thought that the virtual world is a world in which an avatar can do things that cannot be done in the real world, or what can be called a world without boundaries. Therefore, when the law is present to provide boundaries, the virtual world itself is no longer a cyber world unlimited by space and time; it becomes a new order of life. A combination of legal research approaches will be used as the method in this research.

Keywords: crime, cyber, metaverse, law

Procedia PDF Downloads 135
2508 Does Mirror Therapy Improve Motor Recovery After Stroke? A Meta-Analysis of Randomized Controlled Trials

Authors: Hassan Abo Salem, Guo Feng, Xiaolin Huang

Abstract:

The objective of this study is to determine the effectiveness of mirror therapy for motor recovery and functional abilities after stroke. The following databases were searched from inception to May 2014: Cochrane Stroke, Cochrane Central Register of Controlled Trials, MEDLINE, EMBASE, CINAHL, AMED, PsycINFO, and PEDro. Two reviewers independently screened and selected all randomized controlled trials that evaluated the effect of mirror therapy in stroke rehabilitation. Twelve randomized controlled trials met the inclusion criteria; 10 studies examined the effect of mirror therapy on the upper limb and 2 on the lower limb. Mirror therapy had a positive effect on motor recovery and function; however, we found no consistent influence on activities of daily living, spasticity or balance. This meta-analysis suggests that mirror therapy has an additional effect on motor recovery but only a small positive effect on functional abilities after stroke. Further high-quality studies with greater statistical power are required in order to accurately determine the effectiveness of mirror therapy following stroke.

Keywords: mirror therapy, motor recovery, stroke, balance

Procedia PDF Downloads 540
2507 Structural, Magnetic, Electrical and Dielectric Properties of Pr0.8Na0.2MnO3 Manganite

Authors: H. Ben Khlifa, W. Cheikhrouhou, R. M'nassri

Abstract:

The orthorhombic Pr0.8Na0.2MnO3 ceramic was prepared in polycrystalline form by the Pechini sol–gel method, and its structural, magnetic, electrical, and dielectric properties were investigated experimentally. The structural study confirms that the sample is single phase. Magnetic measurements show that the sample is a charge-ordered manganite that undergoes two successive magnetic phase transitions with temperature: a charge-ordering transition at TCO = 212 K, followed by a paramagnetic (PM) to ferromagnetic (FM) transition around TC = 115 K. From an electrical point of view, a saturation region was observed in the conductivity-versus-temperature σ(T) curves, and the dc conductivity (σdc) reaches a maximum value at 240 K. The obtained results are in good agreement with the temperature dependence of the average normalized change (ANC). We found that the conduction mechanism is governed by small polaron hopping (SPH) in the high-temperature region and by variable range hopping (VRH) in the low-temperature region. Complex impedance analysis indicates the presence of a non-Debye relaxation phenomenon in the system. The compound was also modeled by an equivalent electrical circuit, confirming the contribution of the grain boundary to the transport properties.
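
For reference, the two conduction regimes invoked above are conventionally fitted with the adiabatic small-polaron-hopping and Mott variable-range-hopping laws:

```latex
\sigma_{dc}(T) = \frac{\sigma_0}{T}\,\exp\!\left(-\frac{E_a}{k_B T}\right)
\quad \text{(SPH)},
\qquad
\sigma_{dc}(T) = \sigma_0 \exp\!\left[-\left(\frac{T_0}{T}\right)^{1/4}\right]
\quad \text{(VRH)}
```

where E_a is the polaron activation energy and T_0 the Mott characteristic temperature (these standard forms are a sketch; the paper's fitted parameters are not quoted in the abstract).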

Keywords: manganites, preparation methods, magnetization, magnetocaloric effect, electrical and dielectric

Procedia PDF Downloads 150
2506 Blade-Coating Deposition of Semiconducting Polymer Thin Films: Light-To-Heat Converters

Authors: M. Lehtihet, S. Rosado, C. Pradère, J. Leng

Abstract:

Poly(3,4-ethylenedioxythiophene) polystyrene sulfonate (PEDOT:PSS) is a polymer mixture well known for its semiconducting properties and widely used in the coating industry for its visible transparency and high electronic conductivity (up to 4600 S/cm), as a transparent non-metallic electrode and in organic light-emitting diodes (OLEDs). It also possesses strong absorption in the near-infrared (NIR) range (λ between 900 nm and 2.5 µm). In the present work, we take advantage of this absorption to explore its potential use as a transparent light-to-heat converter. PEDOT:PSS aqueous dispersions are deposited onto a glass substrate using a blade-coating technique in order to produce uniform coatings with controlled thicknesses ranging from ≈400 nm to 2 µm. The blade-coating technique allows good control of deposit thickness and uniformity through the tuning of several experimental conditions (blade velocity, evaporation rate, temperature, etc.); it is a well-known, inexpensive liquid-coating technique for realizing thin film coatings on various substrates. For coatings on glass substrates destined for solar insulation applications, the ideal coating would be made of a material able to transmit the whole visible range while reflecting the NIR perfectly, but materials with such properties still have unsatisfactory opacity in the visible (for example, titanium dioxide nanoparticles); NIR-absorbing thin films are a more realistic alternative for such an application. Under solar illumination, PEDOT:PSS thin films heat up due to the absorption of NIR light and thus act as planar heaters while maintaining good transparency in the visible range. Whereas they screen some NIR radiation, they also generate heat, which is then conducted into the substrate, which in turn re-emits this energy by thermal emission in every direction. In order to quantify the heating power of these coatings, a sample (coating on glass) is placed in a black enclosure and illuminated with a solar simulator, a lamp emitting calibrated radiation very similar to the solar spectrum. The temperature of the rear face of the substrate is measured in real time using thermocouples, and a black-painted Peltier sensor measures the total entering flux (the sum of the transmitted and re-emitted fluxes). The heating power density of the thin films is estimated from a model of the thin film/glass substrate system, and we estimate the Solar Heat Gain Coefficient (SHGC) to quantify the light-to-heat conversion efficiency of such systems. Finally, the effects of additives such as dimethyl sulfoxide (DMSO) and optical scatterers (particles) on the performance are also studied, as the first can drastically alter the IR absorption properties of PEDOT:PSS and the second can increase the apparent optical path of light within the thin film material.
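
In this measurement scheme, the Solar Heat Gain Coefficient reduces to the ratio of the total entering flux recorded by the black-painted sensor to the incident solar-simulator flux (standard definition; the paper's exact model-based estimate may refine this):

```latex
\mathrm{SHGC} = \frac{\Phi_{\text{transmitted}} + \Phi_{\text{re-emitted, inward}}}{\Phi_{\text{incident}}}
```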

Keywords: PEDOT:PSS, blade-coating, heat, thin film, solar spectrum

Procedia PDF Downloads 145
2505 Statistical Inferences for the GQARCH-Itô-Jumps Model Based on the Realized Range Volatility

Authors: Fu Jinyu, Lin Jinguan

Abstract:

This paper introduces a novel approach that unifies two types of models: the continuous-time jump-diffusion used to model high-frequency data and the discrete-time GQARCH employed to model low-frequency financial data, by embedding the discrete GQARCH structure with jumps in the instantaneous volatility process. This model is named the “GQARCH-Itô-Jumps model.” We adopt realized range-based threshold estimation for the high-frequency financial data rather than realized return-based volatility estimators, which entail the loss of intra-day information on the price movement. Meanwhile, a quasi-likelihood function for the low-frequency GQARCH structure with jumps is developed for the parametric estimation. The asymptotic theory is mainly established for the proposed estimators in the case of finite-activity jumps. Moreover, simulation studies are implemented to check the finite-sample performance of the proposed methodology, and it is demonstrated how our proposed approaches can be practically applied to real financial data.
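
For context, the realized range statistic underlying the estimator is conventionally built from intra-period highs and lows with the Parkinson scaling (a sketch of the standard form; the paper's threshold version additionally truncates jump-contaminated intervals):

```latex
RRV_t = \frac{1}{4\ln 2}\sum_{i=1}^{n}\big(\ln H_{t,i} - \ln L_{t,i}\big)^{2}
```

where H_{t,i} and L_{t,i} are the high and low prices over the i-th intra-day interval of day t.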

Keywords: Itô process, GQARCH, leverage effects, threshold, realized range-based volatility estimator, quasi-maximum likelihood estimate

Procedia PDF Downloads 140
2504 Cerebrovascular Modeling: A Vessel Network Approach for Fluid Distribution

Authors: Karla E. Sanchez-Cazares, Kim H. Parker, Jennifer H. Tweedy

Abstract:

The purpose of this work is to develop a simple compartmental model of cerebral fluid balance, including blood and cerebrospinal fluid (CSF). At the first level, the cerebral arteries and veins are modelled as bifurcating trees with constant scaling factors between generations, connected through a homogeneous microcirculation. The arteries and veins are assumed to be non-rigid, and the cross-sectional area, resistance and mean pressure in each generation are determined as functions of the blood volume flow rate. From the mean pressure and further assumptions about the variation of wall permeability, the transmural fluid flux can be calculated. The results suggest the next level of modelling, in which the cerebral vasculature is divided into compartments: the large arteries, the small arteries, the capillaries and the veins, with effective compliances and permeabilities derived from the detailed vascular model. These vascular compartments are then linked to other compartments describing the different CSF spaces, the cerebral ventricles and the subarachnoid space. This compartmental model is used to calculate the distribution of fluid in the cranium. Known volumes and flows for normal conditions are used to determine reasonable parameters for the model, which can then be used to help understand pathological behaviour and suggest clinical interventions.
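
A minimal numerical sketch of the first modelling level, assuming Poiseuille flow in each vessel, 2^n identical vessels in parallel at generation n, and constant (hypothetical) scaling factors between generations:

```python
# Sketch of the bifurcating-tree resistance calculation; parameter values
# (root radius/length, viscosity, scaling factors) are assumptions.
import math

def tree_resistance(r0=2e-3, L0=5e-2, mu=3.5e-3,
                    n_gen=10, radius_scale=0.79, length_scale=0.79):
    total = 0.0
    r, L = r0, L0
    for n in range(n_gen):
        r_vessel = 8 * mu * L / (math.pi * r**4)   # Poiseuille resistance
        total += r_vessel / 2**n                   # 2^n vessels in parallel
        r, L = r * radius_scale, L * length_scale  # constant scaling factors
    return total                                   # Pa.s/m^3

print(f"tree resistance: {tree_resistance():.3e} Pa.s/m^3")
```

With the radius scale near 2^(-1/3) (Murray's law), successive generations contribute comparable resistance; the effective compartmental parameters described above would then be lumped from sums of this kind.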

Keywords: cerebrovascular, compartmental model, CSF model, vascular network

Procedia PDF Downloads 264