Search results for: SURF (Speeded-Up Robust Features)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5019

1419 Central Finite Volume Methods Applied in Relativistic Magnetohydrodynamics: Applications in Disks and Jets

Authors: Raphael de Oliveira Garcia, Samuel Rocha de Oliveira

Abstract:

We have developed a new computer program in Fortran 90 to obtain numerical solutions of a system of relativistic magnetohydrodynamics partial differential equations with predetermined gravitation (GRMHD), capable of simulating the formation of relativistic jets from the accretion disk of matter up to its ejection. We first studied one-dimensional finite volume methods, namely the Lax-Friedrichs, Lax-Wendroff and Nessyahu-Tadmor methods and Godunov-type methods that depend on Riemann problem solvers, applied to the Euler equations, in order to verify their main features and compare them. We then implemented the central finite volume method of Nessyahu-Tadmor, a numerical scheme whose formulation is free of Riemann problem solvers and of dimensional splitting even in two or more spatial dimensions, and applied it to the GRMHD equations. Finally, with the Nessyahu-Tadmor method it was possible to obtain stable numerical solutions - without spurious oscillations or excessive dissipation - of the magnetized accretion disk rotating around a central Schwarzschild black hole (BH) immersed in a magnetosphere, with ejection of matter in the form of a jet over a distance of fourteen times the radius of the BH, a record for astrophysical simulations of this kind. Our simulations also captured jet substructures. A great advantage of our code is that it allows the GRMHD equations to be simulated on a simple personal computer.
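
As a rough illustration of the family of schemes compared above, the following minimal sketch implements the first-order Lax-Friedrichs update for a one-dimensional scalar conservation law (Burgers' equation here); the grid, flux and CFL number are illustrative assumptions rather than the paper's GRMHD setup, and the Nessyahu-Tadmor scheme builds on this idea with a staggered, second-order reconstruction.

```python
import numpy as np

# Minimal sketch of the first-order Lax-Friedrichs update for a 1D
# scalar conservation law u_t + f(u)_x = 0, here Burgers' equation.
# Grid size, CFL number and initial profile are illustrative choices.

def lax_friedrichs_step(u, dx, dt, f):
    """One Lax-Friedrichs step with periodic boundaries."""
    up = np.roll(u, -1)                      # u_{i+1}
    um = np.roll(u, 1)                       # u_{i-1}
    return 0.5 * (up + um) - dt / (2.0 * dx) * (f(up) - f(um))

f = lambda u: 0.5 * u**2                     # Burgers' flux
x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.sin(2 * np.pi * x)                    # smooth initial condition
dx = x[1] - x[0]
dt = 0.4 * dx / np.abs(u).max()              # CFL-limited time step

for _ in range(100):
    u = lax_friedrichs_step(u, dx, dt, f)    # u now holds the evolved profile
```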

Keywords: finite volume methods, central schemes, fortran 90, relativistic astrophysics, jet

Procedia PDF Downloads 434
1418 Integrating Explicit Instruction and Problem-Solving Approaches for Efficient Learning

Authors: Slava Kalyuga

Abstract:

There are two major opposing points of view on the optimal degree of initial instructional guidance, usually discussed in the literature by advocates of the corresponding learning approaches. Using unguided or minimally guided problem-solving tasks prior to explicit instruction has been suggested by productive failure and several other instructional theories, whereas an alternative approach - using fully guided worked examples followed by problem solving - has been demonstrated to be the most effective strategy within the framework of cognitive load theory. The integrated approach discussed in this paper combines the above frameworks within a broader theoretical perspective, bringing together their best features and advantages in the design of learning tasks for STEM education. This paper presents a systematic review of the available empirical studies comparing the above alternative sequences of instructional methods, exploring the effects of several possible moderating factors. It concludes that different approaches and instructional sequences should coexist within complex learning environments. Selecting optimal sequences depends on such factors as the specific goals of learner activities, the types of knowledge to learn, the level of element interactivity (task complexity), and the level of learner prior knowledge. The paper offers an outline of a theoretical framework for the design of complex learning tasks in STEM education that integrates explicit instruction and inquiry (exploratory, discovery) learning approaches in ways that depend on a set of defined specific factors.

Keywords: cognitive load, explicit instruction, exploratory learning, worked examples

Procedia PDF Downloads 109
1417 Parametric Analysis of Lumped Devices Modeling Using Finite-Difference Time-Domain

Authors: Felipe M. de Freitas, Icaro V. Soares, Lucas L. L. Fortes, Sandro T. M. Gonçalves, Úrsula D. C. Resende

Abstract:

SPICE-based simulators are quite robust and widely used for the simulation of electronic circuits; their algorithms support linear and non-linear lumped components, and they can handle a large number of encapsulated elements. Despite the great potential of SPICE-based simulators in the analysis of quasi-static electromagnetic field interaction, i.e., at low frequency, they are limited when applied to microwave hybrid circuits in which there are both lumped and distributed elements. Usually, the spatial discretization of the finite-difference time-domain (FDTD) method is chosen according to the actual size of the element under analysis. After spatial discretization, the Courant stability criterion gives the maximum temporal discretization accepted for that spatial discretization and for the propagation velocity of the wave. This criterion guarantees the stability of the leapfrog scheme of the Yee algorithm; however, it is known that the stability of the complete FDTD procedure depends on factors beyond the Yee algorithm alone, because an FDTD program needs other algorithms in order to be useful in engineering problems. Examples of these algorithms are absorbing boundary conditions (ABCs), excitation sources, subcellular techniques, lumped elements, and non-uniform or non-orthogonal meshes. In this work, the influence of the stability of the FDTD method on the modeling of lumped elements such as resistive sources, resistors, capacitors, inductors and diodes is evaluated. This paper therefore proposes the electromagnetic modeling of electronic components in order to create models that satisfy the needs of circuit simulation at ultra-wide frequencies. The models of the resistive source, resistor, capacitor, inductor and diode are evaluated, among the mathematical models for lumped components in the lumped-element FDTD (LE-FDTD) method, through a parametric analysis of the size of the Yee cells that discretize the lumped components. In this way, we seek an ideal cell size so that the analysis in the FDTD environment agrees more closely with the expected circuit behavior while maintaining the stability conditions of the method. Based on the mathematical models and the theoretical basis of the required extensions of the FDTD method, the models are implemented in the Matlab® environment. Mur's boundary condition is used as the absorbing boundary of the FDTD method. The models are validated by comparing the electric field values and component currents obtained with the FDTD method against analytical results using circuit parameters.
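
To make the Courant-limited leapfrog explicit, here is a minimal sketch of a 1D FDTD (Yee) update; the cell size, source and simple perfectly conducting ends are illustrative assumptions and do not reproduce the paper's LE-FDTD component models or Mur boundaries.

```python
import numpy as np

# Minimal 1D FDTD (Yee) sketch illustrating the Courant-limited
# leapfrog update. The ends are left as perfect electric conductors;
# an absorbing boundary such as Mur's would replace them in practice.

c0 = 299_792_458.0                 # speed of light (m/s)
eps0 = 8.8541878128e-12            # vacuum permittivity
mu0 = 4e-7 * np.pi                 # vacuum permeability

nx, nt = 400, 1000
dx = 1e-3                          # 1 mm cells
dt = 0.99 * dx / c0                # Courant stability criterion in 1D

ez = np.zeros(nx)                  # E-field at integer nodes
hy = np.zeros(nx - 1)              # H-field at half-integer nodes

for n in range(nt):
    hy += dt / (mu0 * dx) * np.diff(ez)             # half-step H update
    ez[1:-1] += dt / (eps0 * dx) * np.diff(hy)      # leapfrogged E update
    ez[nx // 2] += np.exp(-((n - 30) / 10.0) ** 2)  # soft Gaussian source
# In LE-FDTD, a lumped element would add a current term to the E update
# of the cell(s) it occupies.
```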

Keywords: hybrid circuits, LE-FDTD, lumped element, parametric analysis

Procedia PDF Downloads 136
1416 Effect of Zingerone on High-Fructose Diet-Induced Metabolic Derangements in Growing Sprague-Dawley Rats

Authors: Nondumiso Lushozi, Busisani Lembede, Eliton Chivandi

Abstract:

Consumption of fructose increases the risk of obesity, non-alcoholic fatty liver disease (NAFLD) and metabolic syndrome in children. Zingerone, which is found in ginger, has antidiabetic and antiobesogenic properties. Therefore, the aim of the study was to investigate the potential of orally administered zingerone to protect growing Sprague-Dawley rats (mimicking growing children) against high-fructose diet-induced metabolic derangements. Forty 21-day-old female Sprague-Dawley rats were randomly allocated to the following four treatments, administered for 12 weeks: group I: standard rat chow (SR) + plain water (PW) + plain gelatine cube (PC); group II: SR + 20% (w/v) fructose solution (FS) + PC; group III: SR + FS + 100 mg/kg/day of fenofibrate in a gelatine cube; group IV: SR + FS + 20 mg/kg/day of zingerone in a gelatine cube. The rats' triglyceride, cholesterol, insulin and adiponectin concentrations, visceral fat, liver lipid content, homoeostasis model assessment of insulin resistance (HOMA-IR) and ability to handle glucose were determined. Oral administration of zingerone significantly increased visceral fat (P<0.001) and liver lipid content (P<0.001). The results revealed that administration of the 20% fructose solution did not induce metabolic dysfunction; however, the zingerone treatment increased visceral fat and liver lipid content. Since these lipid abnormalities are typical features of the metabolic syndrome, the current study suggests that zingerone did not protect against metabolic dysfunction in adolescent females.

Keywords: antidiabetic, metabolic syndrome, zingerone, antiobesogenic

Procedia PDF Downloads 103
1415 Japanese Quail Breeding: The Second in Poultry Industry

Authors: A. Smaï, H. Idouhar-Saadi, S. Zenia, F. Haddadj, A. Aboun, S. Doumandji

Abstract:

The quail is the smallest member of the order Galliformes. Its captive breeding has been practiced for centuries by the Japanese. Although the literature reports that laying ends at about 6 months of age, our work revealed good egg production by females up to 35 weeks of age. In the same vein, our study focused on various parameters such as weight, diet and the number of eggs laid, in order to better understand the production and reproduction potential of the domestic quail. Egg production started from the 8th week of age of the breeding stock; eggs were collected and counted daily until the age of 35 weeks. Biometric parameters such as weight, length, largest diameter, shape index and shell index were studied in order to analyse the physical condition of the eggs as a function of female age. Up to the age of 22 weeks, the eggs maintained good biometric features. Japanese quail are excellent egg producers; hatchability was also considered. They give excellent poultry yields, since they begin laying at two months of age and, in our study, females over 8 months could still provide abundant clutches. Results from other farms suggest similar conclusions. Indeed, one aspect remains to be developed: the analysis of the nutritional and therapeutic value of the eggs as a function of female age. Given their richness, quail eggs are a dietary supplement of animal origin with dietary value (they are claimed to contain 0 cholesterol). Raising quail requires minimal space for reproduction compared with other domestic birds; quail breeding is the second most important form of poultry breeding after the chicken. A farm working exclusively in egg production therefore requires minimal work and little space, as well as reduced costs.

Keywords: Japanese quail, reproduction, eggs, biometrics, reproductive age

Procedia PDF Downloads 268
1414 Impacts and Management of Oil Spill Pollution along the Chabahar Bay by ESI Mapping, Iran

Authors: M. Sanjarani, A. Danehkar, A. Mashincheyan, A. H. Javid, S. M. R. Fatemi

Abstract:

An oil spill in marine waters has a direct impact on coastal resources and communities. An Environmental Sensitivity Index (ESI) map is the first step in assessing the potential impact of an oil spill and minimizing the damage to coastal resources. In order to create environmental sensitivity maps for Chabahar Bay (Iran), information was collected in three different layers (shoreline classification, biological resources and human-use resources) by means of field observations and measurements of beach morphology, personal interviews with professionals from different areas and the collection of bibliographic information. This paper presents an attempt to prepare an ESI map of the Chabahar Bay coast for sensitivity to oil spills. Chabahar Bay is highly threatened by oil spills because of its port, dense mangrove forest, the only coral spot in the Oman Sea and many industrial activities. Mapping of the coastal resources, shoreline and coastal structures was carried out using satellite images and GIS technology. The coastal features were classified into three major categories: shoreline classification, biological resources and human-use resources. The important resources were classified into mangrove, exposed tidal flats, sandy beach, etc. The sensitivity of the shore was ranked from low to high (1 = low sensitivity, 10 = high sensitivity) based on the geomorphology of the Chabahar Bay coast, using NOAA standards (sensitivity to oil, ease of clean-up, etc.). Eight ESI types were found in the area, namely ESI 1A, 1C, 3A, 6B, 7, 8B, 9A and 10D. In the study area, 50% was classified as high sensitivity, less than 1% as medium and 49% as low sensitivity. ESI maps are useful to oil spill responders, coastal managers and contingency planners. The overall ESI mapping product can provide a valuable management tool not only for oil spill response but also for better integrated coastal zone management.

Keywords: ESI, oil spill, GIS, Chabahar Bay, Iran

Procedia PDF Downloads 343
1413 The New World Kirkpatrick Model as an Evaluation Tool for a Publication Writing Programme

Authors: Eleanor Nel

Abstract:

Research output is an indicator of institutional performance (and quality), resulting in increased pressure on academic institutions to perform in the research arena. Research output is further utilised to obtain research funding. As a result, academic institutions face significant pressure from governing bodies to provide evidence of the return on research investments. Research output has thus become a substantial discourse within institutions, mainly due to the processes linked to evaluating research output and the associated allocation of research funding. This focus on research outputs has often outpaced the development of robust, widely accepted tools for measuring research impact at institutions. A publication writing programme for enhancing research output was launched at a South African university in 2011. Significant amounts of time, money, and energy have since been invested in the programme. Although participants provided feedback after each session, no formal review was conducted to evaluate the research output directly associated with the programme. Concerns in higher education about training costs, learning results, and the effect on society have increased the focus on value for money and the need to improve training, research performance, and productivity. Furthermore, universities rely on efficient and reliable monitoring and evaluation systems, in addition to the need to demonstrate accountability. While publishing does not occur immediately, achieving a return on investment from the intervention is critical. A multi-method study, guided by the New World Kirkpatrick Model (NWKM), was conducted to determine the impact of the publication writing programme for the period 2011 to 2018. Quantitative results indicated a total of 314 academics participating in 72 workshops over the study period. To better understand the quantitative results, an open-ended questionnaire and semi-structured interviews were conducted with nine participants from a particular faculty as a convenience sample. The purpose of the research was to collect information to develop a comprehensive framework for impact evaluation that could be used to enhance the current design and delivery of the programme. The qualitative findings highlighted the critical role of a multi-stakeholder strategy in strengthening support before, during, and after a publication writing programme to improve the impact and research outputs. Furthermore, monitoring on-the-job learning is critical to ingrain the new skills academics have learned during the writing workshops and to encourage them to be accountable and empowered. The NWKM additionally provided essential pointers on how to link the results from publication writing programmes more effectively to institutional strategic objectives to improve research performance and quality, as well as on what should be included in a comprehensive evaluation framework.

Keywords: evaluation, framework, impact, research output

Procedia PDF Downloads 64
1412 Eco-Environmental Vulnerability Evaluation in Mountain Regions Using Remote Sensing and Geographical Information System: A Case Study of Pasol Gad Watershed of Garhwal Himalaya, India

Authors: Suresh Kumar Bandooni, Mirana Laishram

Abstract:

The Mid-Himalaya of Garhwal Himalaya in Uttarakhand (India) has complex physiographic features with diversified climatic conditions and is therefore susceptible to environmental vulnerability. Natural disasters and anthropogenic activities accelerate the rate of environmental vulnerability. To analyse the environmental vulnerability, we used geoinformatics technologies and numerical models, adopting a spatial principal component analysis (SPCA). The model consists of factors such as slope, land use/land cover, soil, forest fire risk, landslide susceptibility zone, human population density and vegetation index. From this model, the environmental vulnerability integrated index (EVSI) was calculated for the Pasol Gad watershed of Garhwal Himalaya for the years 1987, 2000 and 2013, and the vulnerability was classified into five levels, i.e. very low, low, medium, high and very high, by means of the cluster principle. The results for the eco-environmental vulnerability distribution in the study area show that the medium, high and very high levels dominate, caused mainly by anthropogenic activities and natural disasters. Therefore, proper management for the conservation of resources is an utmost necessity of the present century. It is strongly believed that participation at the community level, along with social workers, institutions and non-governmental organizations (NGOs), is essential to conserve and protect the environment.
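
A rough sketch of how a principal-component-based vulnerability index can be assembled from standardized layers is given below; the layer stand-ins and the variance-weighted combination are assumptions for illustration, not the authors' exact SPCA formulation.

```python
import numpy as np

# Illustrative sketch: derive a composite vulnerability index from
# standardized raster layers via PCA, then classify it into five
# levels. Random data stands in for the real slope, land use, etc.

rng = np.random.default_rng(0)
n_pixels = 10_000                            # flattened raster cells
layer_names = ["slope", "landuse", "soil", "fire_risk",
               "landslide", "pop_density", "ndvi"]
X = np.column_stack([rng.random(n_pixels) for _ in layer_names])
X = (X - X.mean(axis=0)) / X.std(axis=0)     # standardize each layer

cov = np.cov(X, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalues
order = np.argsort(eigvals)[::-1]            # sort descending
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

scores = X @ eigvecs                         # principal component scores
weights = eigvals / eigvals.sum()            # explained-variance weights
evsi = scores @ weights                      # composite index per pixel

# Five levels (very low .. very high) by quantile cut points.
levels = np.digitize(evsi, np.quantile(evsi, [0.2, 0.4, 0.6, 0.8]))
```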

Keywords: eco-environment vulnerability, spatial principal component analysis, remote sensing, geographic information system, institutions, Himalaya

Procedia PDF Downloads 242
1411 Query Task Modulator: A Computerized Experimentation System to Study Media-Multitasking Behavior

Authors: Premjit K. Sanjram, Gagan Jakhotiya, Apoorv Goyal, Shanu Shukla

Abstract:

In psychological research, laboratory experiments often face a trade-off between experimental control and mundane realism. With the advent of Immersive Virtual Environment Technology (IVET), this issue seems to be at bay. However, there is a growing challenge within IVET itself to design and develop systems or software that capture the psychological phenomena of everyday life. One such phenomenon of growing interest is 'media-multitasking'. To aid laboratory research on media-multitasking, this paper introduces the Query Task Modulator (QTM), a computerized experimentation system for studying media-multitasking behavior in a controlled laboratory environment. The system provides experimenters with a computerized platform for conducting experiments in which participants are engaged in a query task. The system has instant messaging, e-mail, and voice call features. The answers to queries are provided in an information panel on the left-hand side, where participants have to search for them and feed the information into the respective communication media blocks as fast as possible. On the whole, the system collects multitasking behavioral data. To analyze performance, a separate output table records the reaction times and responses of each participant individually. The information panel and all the media blocks appear in a single window in order to ensure the multi-modality feature of media-multitasking and equal emphasis on all the tasks (thus avoiding prioritization of a particular task). The paper discusses the development of QTM in the light of current techniques for studying media-multitasking.

Keywords: experimentation system, human performance, media-multitasking, query-task

Procedia PDF Downloads 543
1410 Landslide Susceptibility Analysis in the St. Lawrence Lowlands Using High Resolution Data and Failure Plane Analysis

Authors: Kevin Potoczny, Katsuichiro Goda

Abstract:

The St. Lawrence lowlands extend from Ottawa to Quebec City and are known for large deposits of sensitive Leda clay. Leda clay deposits are responsible for many large landslides, such as the 1993 Lemieux and 2010 St. Jude (4 fatalities) landslides. Due to the large extent and sensitivity of Leda clay, regional hazard analysis for landslides is an important tool in risk management. A 2018 regional study by Farzam et al. on the susceptibility of Leda clay slopes to landslide hazard uses 1 arc second topographical data. A qualitative method known as Hazus is used to estimate susceptibility by checking various criteria at a location and determining a susceptibility rating on a scale of 0 (no susceptibility) to 10 (very high susceptibility). These criteria are slope angle, geological group, soil wetness, and distance from waterbodies. Given the flat nature of the St. Lawrence lowlands, the current assessment fails to capture local slopes, such as at the St. Jude site. Additionally, the data did not allow failure planes to be analyzed accurately. This study substantially improves the analysis performed by Farzam et al. in two respects. First, regional assessment with high-resolution data allows identification of local sites that may previously have been classed as low susceptibility. This then provides the opportunity to conduct a more refined analysis of the failure plane of the slope. Slopes derived from 1 arc second data are relatively gentle (0-10 degrees) across the region; however, the 1- and 2-meter resolution 2022 HRDEM provided by NRCAN shows that short, steep slopes are present. At a regional level, 1 arc second data can underestimate the susceptibility of short, steep slopes, which can be dangerous as Leda clay landslides behave retrogressively and travel upwards into flatter terrain. At the location of the St. Jude landslide, slope differences are significant: 1 arc second data shows a maximum slope of 12.80 degrees and a mean slope of 4.72 degrees, while the HRDEM data shows a maximum slope of 56.67 degrees and a mean slope of 10.72 degrees. This equates to a difference of three susceptibility levels when the soil is dry and one susceptibility level when wet. GIS software is used to create a regional susceptibility map across the St. Lawrence lowlands at 1- and 2-meter resolutions. Failure planes are necessary to differentiate between small and large landslides, which have so far been ignored in regional analysis. Leda clay failures can only retrogress as far as their failure planes, so the regional analysis must be able to transition smoothly into a more robust local analysis. It is expected that slopes within the region, once previously assessed at low susceptibility scores, contain local areas of high susceptibility. The goal is to create opportunities for local failure plane analysis to be undertaken, which has not been possible before. Due to the low resolution of previous regional analyses, any slope near a waterbody could be considered hazardous; high-resolution regional analysis allows a more precise determination of hazard sites.
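
A minimal sketch of the slope computation underlying such resolution comparisons is shown below; the synthetic terrain and cell sizes are assumptions, whereas a real analysis would read the HRDEM rasters.

```python
import numpy as np

# Illustrative sketch: compute slope (degrees) from a DEM grid and
# compare statistics at fine and coarse resolutions. Synthetic
# terrain stands in for the 1-2 m HRDEM and 1 arc second data.

def slope_degrees(dem, cell_size):
    """Slope from finite-difference gradients of a DEM (meters)."""
    dz_dy, dz_dx = np.gradient(dem, cell_size)
    return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

rng = np.random.default_rng(1)
fine = rng.normal(0, 2.0, (512, 512)).cumsum(axis=0)   # synthetic DEM, 1 m cells
slope_fine = slope_degrees(fine, cell_size=1.0)

# Coarsen by block-averaging 30x30 cells (~30 m, roughly 1 arc second).
coarse = fine[:510, :510].reshape(17, 30, 17, 30).mean(axis=(1, 3))
slope_coarse = slope_degrees(coarse, cell_size=30.0)

print(f"fine:   max {slope_fine.max():.1f}, mean {slope_fine.mean():.1f} deg")
print(f"coarse: max {slope_coarse.max():.1f}, mean {slope_coarse.mean():.1f} deg")
```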

Keywords: hazus, high-resolution DEM, leda clay, regional analysis, susceptibility

Procedia PDF Downloads 56
1409 The Discussion on the Composition of Feng Shui by the Environmental Planning Viewpoint

Authors: Jhuang Jin-Jhong, Hsieh Wei-Fan

Abstract:

Climate change causes natural disasters persistently. Therefore, environmental planning objectives nowadays tend towards respecting nature and coexisting with it. As a result, natural environment analysis, e.g., the analysis of topography, soil, hydrology, climate and vegetation, is highly emphasized. On the other hand, Feng Shui has been a criterion for residential site selection in the East since ancient times and has also influenced site selection for castles and even temples and tombs. Its primary criterion for site selection is judging the quality of Long (mountain range), Sha (nearby mountains), Shui (hydrology), Xue (foundation) and Xiang (aspect), which are similar to the environmental variables of mountain range, topography, hydrology and aspect. For this reason, many researchers have attempted to probe the connection between the criteria of Feng Shui and environmental planning factors. Most studies, however, have only discussed the composition and spatial theory of Feng Shui; none has explained Feng Shui through the environmental field. Consequently, this study reviews the theory of Feng Shui from the environmental planning viewpoint and assembles the essential compositional factors of Feng Shui. From the literature review and a comparison of theoretical meanings, we find that the ideal principles for planning the Feng Shui environment can also be used for environmental planning. Therefore, this article uses 12 ideal environmental features from Feng Shui to characterize the natural aspects of the environment, makes comparisons with previous research, and classifies the environmental factors into climate, topography, hydrology, vegetation and soil.

Keywords: the composition of Feng Shui, environmental planning, site selection, main components of the Feng Shui environment

Procedia PDF Downloads 498
1408 Generating a Functional Grammar for Architectural Design from Structural Hierarchy in Combination of Square and Equal Triangle

Authors: Sanaz Ahmadzadeh Siyahrood, Arghavan Ebrahimi, Mohammadjavad Mahdavinejad

Abstract:

Islamic culture was responsible for a plethora of developments in astronomy and science in the medieval period, and likewise in geometry. Geometric patterns are prominent in a considerable number of cultures, but in Islamic culture the patterns have specific features that connect the Islamic faith to mathematics. In Islamic art, three fundamental shapes are generated from the circle: the triangle, the square and the hexagon. Owing to its own quiddity, each of these geometric shapes has a specific structure. Even though the geometric patterns were generated from such simple forms as the circle and the square, they can be combined, duplicated, interlaced, and arranged in intricate combinations. In order to explain the principles of geometric interaction between the square and the equal (equilateral) triangle, the first step of the definition illustrates all types of their linear forces individually, and the second step illustrates those between them. In this analysis, angles are created from the intersections of their directions. All angles are categorized into groups and the mathematical relationships among them are analyzed. Since most geometric patterns in Islamic art and architecture are based on the repetition of a single motif, the evaluation results obtained from a small portion are attributable to a large-scale domain, while the development of infinitely repeating patterns can represent unchanging laws. Geometric ornamentation in Islamic art offers the possibility of infinite growth and can accommodate the incorporation of other types of architectural layout as well, so the logic and mathematical relationships obtained from this analysis are applicable to designing architectural layers and developing plan designs.

Keywords: angle, equal triangle, square, structural hierarchy

Procedia PDF Downloads 181
1407 Detection of Trends and Break Points in Climatic Indices: The Case of Umbria Region in Italy

Authors: A. Flammini, R. Morbidelli, C. Saltalippi

Abstract:

The increase of air surface temperature at the global scale is a fact, with values of around 0.85 ºC since the late nineteenth century, as is a significant change in the main features of the rainfall regime. Nevertheless, the detected climatic changes are not equally distributed all over the world, but exhibit specific characteristics in different regions. Therefore, studying the evolution of climatic indices in different geographical areas with a prefixed standard approach becomes very useful for analyzing the existence of climatic trends and comparing results. In this work, a methodology to investigate climatic change and its effects on a wide set of climatic indices is proposed and applied at the regional scale to the case study of a Mediterranean area, the Umbria region in Italy. From the data of the available temperature stations, nine temperature indices were obtained, and the existence of trends was checked by applying the non-parametric Mann-Kendall test, while the non-parametric Pettitt test and the parametric Standard Normal Homogeneity Test (SNHT) were applied to detect the presence of break points. In addition, in order to characterize the rainfall regime, data from 11 rainfall stations were used, and a trend analysis was performed on cumulative annual rainfall depth, daily rainfall, rainy days, and dry-period length. The results show a general increase in all temperature indices, although with a trend pattern that depends on the index and the station, and a general decrease in cumulative annual rainfall and average daily rainfall, with a distribution of rainfall over the year that differs from the past.
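
For reference, a minimal sketch of the non-parametric Mann-Kendall test used here is given below (normal approximation without tie correction; series with ties or autocorrelation need the corrected variance). The example series is synthetic.

```python
import numpy as np
from scipy.stats import norm

# Illustrative sketch of the Mann-Kendall trend test: the statistic S
# counts concordant minus discordant pairs, and Z is its normal
# approximation (no-ties variance).

def mann_kendall(x):
    """Return the MK statistic S, the Z score and the two-sided p-value."""
    x = np.asarray(x)
    n = len(x)
    s = sum(np.sign(x[j] - x[i])
            for i in range(n - 1) for j in range(i + 1, n))
    var_s = n * (n - 1) * (2 * n + 5) / 18.0
    if s > 0:
        z = (s - 1) / np.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / np.sqrt(var_s)
    else:
        z = 0.0
    p = 2 * (1 - norm.cdf(abs(z)))
    return s, z, p

# Example: annual mean temperatures with a weak upward trend.
rng = np.random.default_rng(2)
years = np.arange(1960, 2020)
temps = 0.02 * (years - 1960) + rng.normal(0, 0.3, years.size)
print(mann_kendall(temps))
```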

Keywords: climatic change, temperature, rainfall regime, trend analysis

Procedia PDF Downloads 99
1406 A Novel Hybrid Deep Learning Architecture for Predicting Acute Kidney Injury Using Patient Record Data and Ultrasound Kidney Images

Authors: Sophia Shi

Abstract:

Acute kidney injury (AKI) is the sudden onset of kidney damage in which the kidneys cannot filter waste from the blood, requiring emergency hospitalization. The mortality rate of AKI patients in the ICU is high, and onset is virtually impossible for doctors to predict because it is so unexpected. Currently, there is no hybrid model predicting AKI that takes advantage of two types of data. De-identified patient data from the MIMIC-III database, and de-identified kidney images with corresponding patient records from the Beijing Hospital of the Ministry of Health, were collected. Using data features including serum creatinine, among others, two numeric models were built from the MIMIC and Beijing Hospital data, and an image-only model was built from the hospital ultrasounds. Convolutional neural networks (CNNs) were used: VGG and ResNet for the numeric data and ResNet for the image data. They were combined into a hybrid model by concatenating the feature maps of both types of models to create a new input. This input enters another CNN block and then two fully connected layers, ending in a binary output after a softmax layer and additional post-processing. The hybrid model successfully predicted AKI: its highest AUROC was 0.953, with an accuracy of 90% and an F1-score of 0.91. This model can be implemented in urgent clinical settings such as the ICU to aid doctors by assessing the risk of AKI shortly after the patient's admission, so that doctors can take preventative measures and diminish mortality risks and severe kidney damage.
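
A minimal sketch of the fusion idea described above, with two encoders whose feature maps are concatenated before a further CNN block and two fully connected layers, is shown below; the layer sizes are illustrative assumptions, not the paper's exact VGG/ResNet architecture.

```python
import torch
import torch.nn as nn

# Illustrative hybrid: an image branch and a record branch produce
# feature maps that are concatenated and passed through another CNN
# block and two fully connected layers, ending in binary logits.

class HybridAKI(nn.Module):
    def __init__(self, n_record_features=32):
        super().__init__()
        self.image_encoder = nn.Sequential(        # stand-in for ResNet
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((8, 8)),
        )
        self.record_encoder = nn.Sequential(       # stand-in numeric branch
            nn.Linear(n_record_features, 16 * 8 * 8), nn.ReLU(),
        )
        self.fusion = nn.Sequential(               # CNN block after concat
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(), nn.Flatten(),
            nn.Linear(32 * 8 * 8, 64), nn.ReLU(),
            nn.Linear(64, 2),                      # AKI / no-AKI logits
        )

    def forward(self, image, record):
        fi = self.image_encoder(image)                       # (B,16,8,8)
        fr = self.record_encoder(record).view(-1, 16, 8, 8)  # reshaped map
        fused = torch.cat([fi, fr], dim=1)                   # concatenate
        return self.fusion(fused)

model = HybridAKI()
logits = model(torch.randn(4, 1, 64, 64), torch.randn(4, 32))
probs = torch.softmax(logits, dim=1)               # softmax for prediction
```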

Keywords: acute kidney injury, convolutional neural network, hybrid deep learning, patient record data, ResNet, ultrasound kidney images, VGG

Procedia PDF Downloads 116
1405 Perceived and Performed E-Health Literacy: Survey and Simulated Performance Test

Authors: Efrat Neter, Esther Brainin, Orna Baron-Epel

Abstract:

Background: Connecting end-users to newly developed ICT technologies and channeling patients to new products requires an assessment of compatibility. The end user's assessment is conveyed in the concept of eHealth literacy. The study examined the association between perceived and performed eHealth literacy (EHL) in an age-heterogeneous sample in Israel. Methods: Participants included 100 Israeli adults (mean age 43, SD 13.9) who were first interviewed by phone and then tested on a computer simulation of health-related Internet tasks. Performed, perceived and evaluated EHL were assessed. Levels of successful task completion represented EHL performance, while evaluated EHL included observed motivation, confidence, and the amount of help provided. Results: The skills of accessing, understanding, appraising, applying, and generating new information showed decreasing rates of successful completion as task complexity increased. Generating new information, though correlated with all other skills, was the least correlated with them. Perceived and performed EHL were correlated (r=.40, P=.001), while the facets of performance (i.e., digital literacy and EHL) were highly correlated (r=.89, P<.001). Participants low and high in performed EHL differed significantly: low performers were older, had attained less education, had used the Internet for less time and perceived themselves as less healthy. They also encountered more difficulties, required more assistance, were less confident in their conduct and exhibited less motivation than high performers. Conclusions: The association in this age-heterogeneous sample was larger than in previous age-homogeneous samples. The moderate association between perceived and performed EHL indicates that the two are associated yet distinct, the latter requiring separate assessment. Features of future rapid performed-EHL tools are discussed.

Keywords: eHealth, health literacy, performance, simulation

Procedia PDF Downloads 220
1404 Isosorbide Bis-Methyl Carbonate: Opportunities for an Industrial Model Based on Biomass

Authors: Olga Gomez De Miranda, Jose R. Ochoa-Gomez, Stefaan De Wildeman, Luciano Monsegue, Soraya Prieto, Leire Lorenzo, Cristina Dineiro

Abstract:

The chemical industry is facing a new revolution. Just as processes based on the exploitation of fossil resources emerged with force in the nineteenth century, society currently demands a radical change that will lead to the complete and irreversible implementation of a circular, sustainable economic model. The implementation of biorefineries will be essential for this. There, renewable raw materials such as sugars and other biomass resources are exploited for the development of new materials that will partially replace their petroleum-derived homologs in a safer and environmentally more benign approach. Isosorbide (1,4:3,6-dianhydro-D-glucitol) is a primary bio-based derivative obtained from plant (poly)saccharides and a very interesting example of a useful chemical produced in biorefineries. It can, in turn, be converted to secondary monomers such as isosorbide bis-methyl carbonate (IBMC), whose main field of application is as a key biodegradable substitute for bisphenol-A in the manufacture of polycarbonates, or as an alternative to toxic isocyanates in the synthesis of new polyurethanes (non-isocyanate polyurethanes), both with a huge application market. The new products will present advantageous mechanical or optical properties, as well as improved non-toxicity and biodegradability, in comparison with their petro-derived alternatives. A robust production process for IBMC, a biomass-derived chemical, is presented here. It can be used with different raw material qualities, using dimethyl carbonate (DMC) as both co-reactant and solvent. It consists of the transesterification of isosorbide with DMC under mild operating conditions, using different basic catalysts that remain active whatever the characteristics and purity of the isosorbide. Appropriate isolation processes have also been developed to obtain crude IBMC yields higher than 90%, with oligomer production lower than 10%, independently of the quality of the isosorbide considered. All products are suitable for use in polycondensation reactions to obtain polymers. If higher IBMC quality is needed, a purification treatment based on nanofiltration membranes has also been developed. The IBMC reaction and isolation conditions established in the laboratory have been successfully modeled using appropriate software programs and scaled up to pilot scale (production of 100 kg of IBMC). It has been demonstrated that a highly efficient IBMC production process, able to be scaled up under suitable market conditions, has been obtained. The operating conditions involved in the production of IBMC feature mild temperatures and low energy needs, no additional solvents, and high operational efficiency, all in accordance with green manufacturing rules.

Keywords: biomass, catalyst, isosorbide bis-methyl carbonate, polycarbonate, polyurethane, transesterification

Procedia PDF Downloads 118
1403 Evolution of Predator-prey Body-size Ratio: Spatial Dimensions of Foraging Space

Authors: Xin Chen

Abstract:

It has been widely observed that marine food webs have significantly larger predator-prey body-size ratios than their terrestrial counterparts. A number of hypotheses have been proposed to account for this difference on the basis of primary productivity, trophic structure, biophysics, bioenergetics, habitat features, energy efficiency, etc. In this study, an alternative explanation is suggested based on the difference in the spatial dimensions of foraging arenas: terrestrial animals primarily forage in two-dimensional arenas, while marine animals mostly forage in three-dimensional arenas. Using two-dimensional and three-dimensional random walk simulations, it is shown that marine predators with three-dimensional foraging would normally have a greater foraging efficiency than terrestrial predators with two-dimensional foraging. Marine prey with three-dimensional dispersion usually forms greater swarms or aggregations than terrestrial prey with two-dimensional dispersion, which again favours greater predator foraging efficiency in marine animals. As an analytical tool, a Lotka-Volterra based adaptive dynamical model is developed with the predator-prey ratio embedded as an adaptive variable. The model predicts that high predator foraging efficiency and a high prey conversion rate will dynamically lead to the evolution of a greater predator-prey ratio. Therefore, marine food webs with three-dimensional foraging space, which generally have higher predator foraging efficiency, will evolve a greater predator-prey ratio than terrestrial food webs.
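
A minimal sketch of the kind of random-walk encounter simulation described above is given below; the step counts, prey density and detection radius are arbitrary assumptions rather than the authors' parameters, and the printed counts depend strongly on them.

```python
import numpy as np

# Illustrative sketch: count prey encounters of a random-walk forager
# in 2D versus 3D. Prey are replaced on capture to keep numbers fixed;
# distances ignore the periodic wrap-around for simplicity.

def encounters(dim, n_steps=20_000, n_prey=200, radius=0.5, box=50.0, seed=0):
    rng = np.random.default_rng(seed)
    prey = rng.uniform(0, box, (n_prey, dim))      # fixed prey positions
    steps = rng.normal(0, 1, (n_steps, dim))       # unit-scale random walk
    path = np.cumsum(steps, axis=0) % box          # wrap in periodic box
    hits = 0
    for p in path:
        d = np.linalg.norm(prey - p, axis=1)       # distance to each prey
        caught = d < radius
        hits += caught.sum()
        prey[caught] = rng.uniform(0, box, (caught.sum(), dim))
    return hits

print("2D encounters:", encounters(2))
print("3D encounters:", encounters(3))
```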

Keywords: predator-prey, body size, lotka-volterra, random walk, foraging efficiency

Procedia PDF Downloads 64
1402 The Superior Performance of Investment Bank-Affiliated Mutual Funds

Authors: Michelo Obrey

Abstract:

Traditionally, mutual funds have long been esteemed as stand-alone entities in the U.S. However, the prevalence of fund families' affiliation with financial conglomerates is eroding this striking feature. Mutual fund families' affiliation with financial conglomerates can potentially be an important source of superior performance, or of cost, for the affiliated mutual fund investors. On the one hand, financial conglomerate affiliation offers mutual funds access to abundant resources, better research quality, private material information, and business connections within the financial group. On the other hand, conflicts of interest are bound to arise between the financial conglomerate relationship and fund management. Using a sample of U.S. domestic equity mutual funds from 1994 to 2017, this paper examines whether fund family affiliation with an investment bank helps the affiliated mutual funds deliver superior performance through the private material information advantage possessed by the investment banks, or costs affiliated mutual fund shareholders due to the conflict of interest. Robust to alternative risk adjustments and cross-section regression methodologies, this paper finds that investment bank-affiliated mutual funds significantly outperform mutual funds that are not affiliated with an investment bank. Interestingly, the paper finds that the outperformance is confined to holding return, a return measure that captures investment talent uninfluenced by transaction costs, fees, and other expenses. Further analysis shows that the investment bank-affiliated mutual funds specialize in hard-to-value stocks, which are not more likely to be held by unaffiliated funds. Consistent with the information advantage hypothesis, the paper finds that affiliated funds holding covered stocks outperform affiliated funds without covered stocks, lending no support to the hypothesis that affiliated mutual funds attract superior stock-picking talent. Overall, the paper's findings are consistent with the idea that investment banks maximize fee income by monopolistically exploiting their private information, thus strategically transferring performance to their affiliated mutual funds. This paper contributes to the extant literature on the agency problem in mutual fund families. It adds to this stream of research by showing that the agency problem is prevalent not only in fund families but also in financial organizations, such as investment banks, that have affiliated mutual fund families. The results show evidence of the exploitation of synergies, such as private material information sharing, that benefit affiliated mutual fund investors. However, this research also has a normative dimension: such incestuous insider-trading behavior and exploitation of superior information not only negatively affect unaffiliated fund investors but also lead to an unfair and unlevel playing field in the financial market.

Keywords: mutual fund performance, conflicts of interest, informational advantage, investment bank

Procedia PDF Downloads 171
1401 Online Allocation and Routing for Blood Delivery in Conditions of Variable and Insufficient Supply: A Case Study in Thailand

Authors: Pornpimol Chaiwuttisak, Honora Smith, Yue Wu

Abstract:

Blood is a perishable product which suffers physical deterioration and has a specific fixed shelf life. Although its value during the shelf life is constant, fresh blood is preferred for treatment. However, transportation costs are a major factor to be considered by administrators of Regional Blood Centres (RBCs), which act as blood collection and distribution centres. A trade-off must therefore be reached between transportation costs and short-term holding costs. In this paper, we propose a number of algorithms for the online allocation and routing of blood supplies for use in conditions of variable and insufficient blood supply. A case study in northern Thailand provides an application of the allocation and routing policies tested. The proposed plan for the daily allocation and distribution of blood supplies consists of two components. Firstly, fixed routes are determined for the supply of hospitals which are far from an RBC. Over the planning period of one week, each hospital on the fixed routes is visited once. A robust allocation of blood is made to hospitals on the fixed routes that can be guaranteed on a suitably high percentage of days, despite variable supplies. Secondly, a variable daily route is employed for nearby hospitals, for which more than one visit per week may be needed to fulfil targets. The variable routing takes into account the amount of blood available for each day's deliveries, which is only known on the morning of delivery. For hospitals on the variable routes, the days and amounts of deliveries cannot be guaranteed but are designed to attain targets over the six-day planning horizon. In the conditions of blood shortage encountered in Thailand, and common in other developing countries, it is often the case that hospitals request more blood than is needed, in the knowledge that only a proportion of all requests will be met. Our proposal is for blood supplies to be allocated and distributed to each hospital according to equitable targets based on historical demand data, calculated with regard to expected daily blood supplies. We suggest several policies that could be chosen by the decision makers for the daily distribution of blood. The different policies provide different trade-offs between transportation and holding costs. Variations in the costs of transportation, such as the price of petrol, could make different policies the most beneficial at different times. We present an application of the policies to a realistic case study of the RBC in Chiang Mai province, in the northern region of Thailand. The analysis includes a total of more than 110 hospitals, with 29 hospitals considered in the variable route. The study is expected to be a pilot for other regions of Thailand. Computational experiments are presented. Concluding remarks include the benefits gained by the online methods and future recommendations.
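
As one simple illustration of target-proportional allocation under short supply, the sketch below splits a day's units across hospitals in proportion to historical-demand targets with largest-remainder rounding; the policy details and numbers are assumptions, not the paper's exact algorithms.

```python
import numpy as np

# Illustrative policy sketch: allocate the day's available supply in
# proportion to demand targets, rounding whole units with the
# largest-remainder rule. The targets below are hypothetical.

def allocate(supply, targets):
    targets = np.asarray(targets, dtype=float)
    shares = supply * targets / targets.sum()    # ideal fractional shares
    base = np.floor(shares).astype(int)
    remainder = supply - base.sum()              # units still to hand out
    order = np.argsort(shares - base)[::-1]      # largest fractions first
    base[order[:remainder]] += 1
    return base

targets = [120, 80, 40, 25, 15]                  # weekly demand per hospital
print(allocate(90, targets))                     # a short-supply day
```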

Keywords: online algorithm, blood distribution, developing country, insufficient blood supply

Procedia PDF Downloads 321
1400 Spatial Distribution and Time Series Analysis of COVID-19 Pandemic in Italy: A Geospatial Perspective

Authors: Muhammad Farhan Ul Moazzam, Tamkeen Urooj Paracha, Ghani Rahman, Byung Gul Lee, Nasir Farid, Adnan Arshad

Abstract:

The novel coronavirus disease (COVID-19) pandemic affected the whole globe, though there is still a lack of clinical studies of its epidemiological features. Observation suggests that most COVID-19 infected patients show mild to moderate symptoms and recover without any medical assistance, owing to an immune response that generates antibodies against the novel coronavirus. In this study, active cases, serious cases, recovered cases, deaths and total confirmed cases between 2nd March and 3rd June 2020 were analyzed using the geospatial inverse distance weighting (IDW) technique. As of 3rd June, the total number of COVID-19 cases in Italy was 231,238, with 33,310 deaths, 350 serious cases, 158,951 recovered cases, and 39,177 active cases, as reported by the Ministry of Health, Italy. Of the 231,238 cases reported between 2nd March and 3rd June, 38.68% occurred in the Lombardia region, with a death rate of 18%, higher than the national mortality rate, followed by Emilia-Romagna (14.89% deaths), Piemonte (12.68% deaths), and Veneto (10% deaths). Relative to the total cases in each region, the highest shares of recoveries were observed in Umbria (92.52%), followed by Basilicata (87%), Valle d'Aosta (86.85%), and Trento (84.54%). The evolution of COVID-19 in Italy has been concentrated in the major urban areas, i.e., Rome, Milan, Naples, Bologna, and Florence. Geospatial technology has played a vital role in this pandemic by tracking infected patients, active cases, and recovered cases, and geospatial techniques are very important for monitoring and planning to control the spread of the pandemic in the country.
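
For reference, a minimal sketch of the IDW estimator is given below: the value at an unsampled point is a distance-weighted average of known station values. The coordinates, counts and power parameter are hypothetical, not the Italian case data.

```python
import numpy as np

# Illustrative sketch of inverse distance weighting (IDW): estimate a
# value at a query point as a weighted average of known values, with
# weights proportional to 1 / distance**power.

def idw(xy_known, values, xy_query, power=2.0, eps=1e-12):
    d = np.linalg.norm(xy_known - xy_query, axis=1)
    if np.any(d < eps):                  # query coincides with a station
        return values[np.argmin(d)]
    w = 1.0 / d**power
    return np.sum(w * values) / np.sum(w)

stations = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
cases = np.array([500.0, 120.0, 80.0, 40.0])   # e.g. cases per province
print(idw(stations, cases, np.array([2.0, 3.0])))
```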

Keywords: COVID-19, public health, geospatial analysis, IDW, Italy

Procedia PDF Downloads 135
1399 Vertebral Pain Features in Women of Different Age Depending on Body Mass Index

Authors: Vladyslav Povoroznyuk, Tetiana Orlуk, Nataliia Dzerovych

Abstract:

Introduction: Back pain is an extremely common health care problem worldwide. Many studies show a link between obesity and the risk of lower back pain. The aim is to study the correlation and peculiarities of vertebral pain in women of different ages depending on their anthropometric indicators. Materials: 1886 women aged 25-89 years were examined. The patients were divided into groups according to age (25-44, 45-59, 60-74, 75-89 years old) and body mass index (BMI: up to 18.4 kg/m2 (underweight), 18.5-24.9 kg/m2 (normal), 25-30 kg/m2 (overweight) and more than 30.1 kg/m2 (obese)). Methods: The presence and intensity of pain in the thoracic and lumbar spine were evaluated using a visual analogue scale (VAS). BMI was calculated by the standard formula from body weight and height measurements. Statistical analysis was performed using parametric and non-parametric methods. Changes were considered significant at p < 0.05. Results: The intensity of pain in the thoracic spine was significantly higher in underweight women in the age groups of 25-44 years (p = 0.04) and 60-74 years (p = 0.005). The intensity of pain in the lumbar spine was significantly higher in women of 45-59 years (p = 0.001) and 60-74 years (p = 0.0003) with obesity. In women of 45-74 years, BMI was significantly positively correlated with the level of pain in the lumbar spine. Obesity significantly increases the relative risk of pain in the lumbar region (RR = 1.07 (95% CI: 1.03-1.12; p = 0.002)), while underweight significantly increases the risk of pain in the thoracic region (RR = 1.21 (95% CI: 1.00-1.46; p = 0.05)). Conclusion: In women, vertebral pain syndrome may be related to anthropometric characteristics (e.g., BMI). Underweight may indirectly influence the development of pain in the thoracic spine, increasing the risk of pain in this region by 1.21 times. Obesity influences the development of pain in the lumbar spine, increasing the risk by 1.07 times.
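
As a worked illustration of how such relative risks and 95% confidence intervals are computed from a 2x2 exposure-outcome table (the counts below are made up, not the study's data):

```python
import numpy as np

# Illustrative sketch: relative risk (RR) and its 95% CI from a 2x2
# table, using the standard log-RR standard error.

def relative_risk(a, b, c, d):
    """a,b: exposed with/without pain; c,d: unexposed with/without pain."""
    rr = (a / (a + b)) / (c / (c + d))
    se_log = np.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
    lo, hi = np.exp(np.log(rr) + np.array([-1.96, 1.96]) * se_log)
    return rr, lo, hi

rr, lo, hi = relative_risk(a=160, b=240, c=300, d=580)
print(f"RR = {rr:.2f} (95% CI: {lo:.2f}-{hi:.2f})")
```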

Keywords: body mass index, age, pain in thoracic and lumbar spine, women

Procedia PDF Downloads 354
1398 Assessment of Isatin as Surface Recognition Group: Design, Synthesis and Anticancer Evaluation of Hydroxamates as Novel Histone Deacetylase Inhibitors

Authors: Harish Rajak, Kamlesh Raghuwanshi

Abstract:

Histone deacetylases (HDACs) are promising targets for cancer treatment. Panobinostat (Farydak; Novartis; approved by the US FDA in 2015) and chidamide (Epidaza; Chipscreen Biosciences; approved by the China FDA in 2014) are novel HDAC inhibitors approved for the treatment of patients with multiple myeloma and peripheral T-cell lymphoma, respectively. Two other HDAC inhibitors, vorinostat (SAHA; approved by the US FDA in 2006) and romidepsin (FK228; approved by the US FDA in 2009), are already on the market for the treatment of cutaneous T-cell lymphoma. Several hydroxamic acid based HDAC inhibitors, i.e., belinostat, givinostat, PCI24781 and JNJ26481585, are in clinical trials. HDAC inhibitors consist of three pharmacophoric features: an aromatic cap group, a zinc binding group (ZBG), and a linker chain connecting the cap group to the ZBG. Herein, we report the synthesis, characterization and biological evaluation of HDAC inhibitors possessing a substituted isatin moiety as the cap group, which recognizes the surface of the active enzyme pocket, and a thiosemicarbazide moiety incorporated as the linker group connecting the cap group to the ZBG (hydroxamic acid). Several analogues were found to inhibit HDAC and the cellular proliferation of HeLa cervical cancer cells, with GI50 values in the micromolar range. Some of the compounds exhibited promising results in in vitro antiproliferative studies. Attempts were also made to establish the structure-activity relationship among the synthesized HDAC inhibitors.

Keywords: HDAC inhibitors, hydroxamic acid derivatives, isatin derivatives, antiproliferative activity, docking

Procedia PDF Downloads 297
1397 Investigations of Heavy Metals Pollution in Sediments of Small Urban Lakes in Karelia Republic

Authors: Aleksandr Medvedev, Zakhar Slukovsii

Abstract:

Waterbodies located within urban areas or near towns permanently undergo anthropogenic load. The extent of the load can be determined via investigations of the chemical composition of both water and sediments. Lakes, as a rule, are landscape depressions, and hence accumulate natural material delivered from the catchment area by rivers and temporary flows. As a result, lacustrine sediments (especially those of closed-basin lakes) are considered perfect archives, serving to reconstruct past sedimentation processes, assess modern contamination levels, and predict possible future changes. The purposes of the survey are to determine the heavy metal content of lake sediment cores retrieved from four urban lakes located in the southern part of the Republic of Karelia, and to identify the main sources of heavy metal input to these waterbodies. It is crucial to know the heavy metal content of the environment, because the chemical composition of a landscape may have a significant effect on living organisms and people's health. Sediment columns were sampled in the field at 2-cm intervals with a gravitational corer called «Limnos». The sediment samples were analyzed by inductively coupled plasma mass spectrometry (ICP-MS) for 8 chemical elements (Pb, Cd, Zn, Cr, Ni, Cu, Mn, V). The highest concentrations of trace elements were found in the upper and middle layers of the cores. It was also ascertained that the extent of contamination mostly depends on the remoteness of a lake from various pollution sources and on the features of those sources.

Keywords: bottom sediments, environmental pollution, heavy metals, lakes

Procedia PDF Downloads 130
1396 Electronic Device Robustness against Electrostatic Discharges

Authors: Clara Oliver, Oibar Martinez

Abstract:

This paper is intended to reveal the severity of electrostatic discharge (ESD) effects in electronic and optoelectronic devices by performing sensitivity tests based on the Human Body Model (HBM) standard. We explain the HBM standard in detail, together with the typical failure modes associated with electrostatic discharges. In addition, a prototype electrostatic charge generator featuring a compact high-voltage source has been designed, fabricated, and verified for stressing electronic devices. This prototype is inexpensive and enables a battery of pre-compliance tests aimed at detecting unexpected weaknesses to static discharges at the component level. Tests with different devices were performed to illustrate the behavior of the proposed generator. A set of discharges was applied according to the HBM standard to commercially available bipolar transistors, complementary metal-oxide-semiconductor transistors and light emitting diodes. It is observed that high current and voltage ratings in electronic devices do not necessarily guarantee that the device will withstand high levels of electrostatic discharge. We have also compared the results obtained from the HBM-based sensitivity tests with a real discharge generated by a human. For this purpose, the charge accumulated in the person is monitored, and a direct discharge against the devices is generated by touching them. Every test was performed under controlled relative humidity conditions. It is believed that this paper can be of interest to research teams involved in the development of electronic and optoelectronic devices who need to verify the reliability of their devices in terms of robustness to electrostatic discharges.
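
For orientation, the idealized HBM discharge is a 100 pF capacitor charged to the test voltage and discharged through 1.5 kOhm into the device, giving i(t) = (V/R)exp(-t/RC); the sketch below evaluates this waveform for an illustrative 2 kV level and ignores the parasitics that shape the real rise time.

```python
import numpy as np

# Idealized HBM discharge waveform: 100 pF through 1.5 kOhm into the
# device under test. The 2 kV level is an illustrative test voltage.

R, C = 1.5e3, 100e-12            # HBM network: 1.5 kOhm, 100 pF
V = 2000.0                       # 2 kV test level

t = np.linspace(0, 1e-6, 1000)   # 1 us window
i = (V / R) * np.exp(-t / (R * C))

print(f"peak current ~ {i[0]:.2f} A, time constant {R * C * 1e9:.0f} ns")
```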

Keywords: human body model, electrostatic discharge, sensitivity tests, static charge monitoring

Procedia PDF Downloads 134
1395 Fake Accounts Detection in Twitter Based on Minimum Weighted Feature Set

Authors: Ahmed ElAzab, Amira M. Idrees, Mahmoud A. Mahmoud, Hesham Hefny

Abstract:

Social networking sites such as Twitter and Facebook attract over 500 million users across the world, and for those users social life, and even practical life, has become interrelated with these platforms. Their interaction with social networking has affected their lives forever. Accordingly, social networking sites have become among the main channels responsible for the vast dissemination of different kinds of information during real-time events. This popularity of social networking has led to various problems, including the possibility of exposing users to incorrect information through fake accounts, which results in the spread of malicious content during live events. This situation can cause huge damage in the real world to society in general, including citizens, business entities, and others. In this paper, we present a classification method for detecting fake accounts on Twitter. The study determines the minimized set of the main factors that influence the detection of fake accounts on Twitter. The determined factors are then applied using different classification techniques, the results of these techniques are compared, and the most accurate algorithm is selected according to the accuracy of the results. The study has been compared with recent research in the same area, and this comparison supports the accuracy of the proposed approach. We claim that this study can be applied continuously on the Twitter social network to automatically detect fake accounts; moreover, it can be applied to different social networking sites, such as Facebook, with minor changes according to the nature of the social network, as discussed in this paper.
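
A minimal sketch of the comparison step, scoring several classifiers on a labeled account-feature matrix and keeping the most accurate, is given below; the synthetic features merely stand in for the paper's minimum weighted feature set (e.g., follower counts, tweet rates).

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC

# Illustrative sketch: cross-validate several classifiers on a
# synthetic account-feature matrix and select the most accurate.

rng = np.random.default_rng(3)
X = rng.random((1000, 5))                       # per-account features
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.2, 1000) > 0.9).astype(int)

models = {
    "tree": DecisionTreeClassifier(random_state=0),
    "naive_bayes": GaussianNB(),
    "svm": SVC(),
}
scores = {name: cross_val_score(m, X, y, cv=5).mean()
          for name, m in models.items()}
best = max(scores, key=scores.get)              # most accurate algorithm
print(scores, "->", best)
```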

Keywords: fake accounts detection, classification algorithms, twitter accounts analysis, features based techniques

Procedia PDF Downloads 387
1394 Rapid Classification of Soft Rot Enterobacteriaceae Phyto-Pathogens Pectobacterium and Dickeya Spp. Using Infrared Spectroscopy and Machine Learning

Authors: George Abu-Aqil, Leah Tsror, Elad Shufan, Shaul Mordechai, Mahmoud Huleihel, Ahmad Salman

Abstract:

Pectobacterium and Dickeya spp., which negatively affect a wide range of crops, are the main causes of aggressive diseases of agricultural crops. These diseases are responsible for huge economic losses in agriculture, including a severe decrease in the quality of stored vegetables and fruits. It is therefore important to detect these pathogenic bacteria at early stages of infection in order to control their spread and consequently reduce the economic losses. In addition, early detection is vital for producing non-infected propagative material for future generations. The molecular techniques currently used for identifying these bacteria at the strain level are expensive and laborious, and other techniques require a long detection time of ~48 h. Thus, there is a clear need for rapid, inexpensive, accurate, and reliable techniques for early detection of these bacteria. In this study, infrared spectroscopy, a well-established analytical technique, was used for rapid detection of Pectobacterium and Dickeya spp. at the strain level. The bacteria were isolated from potato plants and tubers with soft rot symptoms and measured by infrared spectroscopy. The obtained spectra were analyzed using different machine learning algorithms, and the performance of our approach for taxonomic classification among the bacterial samples was evaluated in terms of success rates. The success rates for correct classification at the genus, species, and strain levels were ~100%, 95.2%, and 92.6%, respectively.
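
As an illustration of the spectra-analysis step, the sketch below applies one common chemometric pipeline (standardization, PCA, and a linear SVM) with cross-validation. The spectra are simulated placeholders, and the paper's actual preprocessing and algorithms may differ.

# Classify bacterial isolates from preprocessed absorbance vectors.
# PCA followed by a linear SVM is one common pipeline for FTIR spectra.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
spectra = rng.random((120, 900))        # placeholder: 120 isolates x 900 wavenumbers
genus = rng.integers(0, 2, 120)         # placeholder: Pectobacterium vs. Dickeya

clf = make_pipeline(StandardScaler(), PCA(n_components=20), SVC(kernel="linear"))
acc = cross_val_score(clf, spectra, genus, cv=5).mean()
print(f"genus-level cross-validated accuracy: {acc:.2f}")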

Keywords: soft rot enterobacteriaceae (SRE), pectobacterium, dickeya, plant infections, potato, solanum tuberosum, infrared spectroscopy, machine learning

Procedia PDF Downloads 86
1393 Taphonomy and Paleoecology of Cenomanian Oysters (Mollusca: Bivalvia) from Egypt

Authors: Ahmed El-Sabbagh, Heba Mansour, Magdy El-Hedeny

Abstract:

This study presents the taphonomic alteration and paleoecology of Cenomanian oysters from the Musabaa Salama area, southwestern Sinai, Egypt. Three oyster zones can be recognized in the studied area: a lower zone of Amphidonte (Ceratostreon) flabellatum (lower-middle Cenomanian), a middle zone of Ilymatogyra (Afrogyra) africana (upper Cenomanian), and an upper zone of Exogyra (Costagyra) olisiponensis (upper Cenomanian). Taphonomic features, including disarticulation, fragmentation, encrustation, and bioerosion, were subjected to multivariate statistical analyses. The analyses showed that the distributions of the identified ichnospecies were broadly similar across the oyster zones of the Musabaa Salama section. With rare exceptions, Entobia cretacea, Gastrochaenolites torpedo, and Maeandropolydora decipiens are common to abundant ichnospecies within all three recorded oyster zones. In contrast, with some exceptions, E. ovula, E. retiformis, and Rogerella pattei are frequent to common ichnospecies within the identified oyster zones. Other ichnospecies, including Caulostrepsis cretacea, G. orbicularis, Trypanites solitarius, E. geometrica, and C. taeniola, are mostly recorded in rare to frequent occurrences. Careful investigation of the host shells and the preserved encrusters and/or bioerosion sculptures provided data concerning: 1) the substrate characteristics, 2) the timing of encrustation and bioerosion, 3) the rate of sedimentation, 4) the planktonic productivity level, and 5) the general bathymetry and the rate of transgression across the substrate.
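
One way to reproduce the multivariate treatment of the ichnospecies distributions is hierarchical clustering of an ichnospecies-by-zone abundance matrix, as sketched below. The counts here are illustrative placeholders, not the measured data.

# Cluster ichnospecies by their abundance profile across the three
# oyster zones; similar profiles fall into the same cluster.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

ichnospecies = ["Entobia cretacea", "Gastrochaenolites torpedo",
                "Maeandropolydora decipiens", "E. ovula", "Rogerella pattei"]
# rows = ichnospecies, columns = oyster zones (lower, middle, upper)
abundance = np.array([[40, 35, 38],     # placeholder counts
                      [36, 30, 33],
                      [34, 32, 31],
                      [15, 18, 12],
                      [14, 10, 16]])

Z = linkage(abundance, method="average", metric="euclidean")
groups = fcluster(Z, t=2, criterion="maxclust")
for name, g in zip(ichnospecies, groups):
    print(g, name)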

Keywords: oysters, Cenomanian, taphonomy, palaeoecology, Sinai, Egypt

Procedia PDF Downloads 292
1392 Application of Bayesian Model Averaging and Geostatistical Output Perturbation to Generate Calibrated Ensemble Weather Forecast

Authors: Muhammad Luthfi, Sutikno Sutikno, Purhadi Purhadi

Abstract:

Weather forecasting must continually be improved to provide communities with accurate and objective predictions. To this end, numerical weather forecasting was extensively developed to reduce the subjectivity of forecasts. Yet Numerical Weather Prediction (NWP) outputs are issued without taking dynamic weather behavior and local terrain features into account; consequently, NWP outputs cannot accurately forecast weather quantities, particularly for medium- and long-range forecasts. The aim of this research is to aid and extend the development of ensemble forecasting for the Meteorology, Climatology, and Geophysics Agency of Indonesia. The ensemble method is an approach that combines various deterministic forecasts to produce a more reliable one. However, such a forecast is biased and uncalibrated due to its underdispersive or overdispersive nature. As a parametric method, Bayesian Model Averaging (BMA) generates a calibrated ensemble forecast and constructs a predictive PDF for a specified period. BMA can utilize an ensemble of any size but does not take spatial correlation into account, whereas spatial dependencies between the site of interest and nearby sites are influenced by dynamic weather behavior. Meanwhile, Geostatistical Output Perturbation (GOP) accounts for spatial correlation when generating future weather quantities; although built from a single deterministic forecast, it too can generate an ensemble of any size. This research applies both BMA and GOP to generate calibrated ensemble forecasts of daily temperature at a few meteorological sites near Indonesia's international airport.
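
For temperature, the BMA predictive PDF is conventionally a weighted mixture of normal densities, each centred on a bias-corrected ensemble member. A minimal sketch follows, assuming fixed example values for the weights, bias coefficients, and spread; in practice all of these would be fitted (typically by EM) on training data.

# BMA predictive density: p(y | f_1..f_K) = sum_k w_k * N(y; a + b*f_k, sigma^2)
import numpy as np
from scipy.stats import norm

forecasts = np.array([27.1, 26.4, 28.0])   # ensemble members (deg C), placeholder
w = np.array([0.5, 0.3, 0.2])              # BMA weights, sum to 1 (placeholder)
a, b = 0.4, 0.98                           # bias-correction coefficients (placeholder)
sigma = 1.1                                # common predictive std dev (placeholder)

def bma_pdf(y):
    # weighted mixture of normals around the bias-corrected members
    return np.sum(w * norm.pdf(y, loc=a + b * forecasts, scale=sigma))

for y in np.linspace(22, 32, 5):
    print(f"T = {y:4.1f} C  density = {bma_pdf(y):.4f}")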

Keywords: Bayesian Model Averaging, ensemble forecast, geostatistical output perturbation, numerical weather prediction, temperature

Procedia PDF Downloads 268
1391 Incorporating Lexical-Semantic Knowledge into Convolutional Neural Network Framework for Pediatric Disease Diagnosis

Authors: Xiaocong Liu, Huazhen Wang, Ting He, Xiaozheng Li, Weihan Zhang, Jian Chen

Abstract:

The utilization of electronic medical record (EMR) data to establish disease diagnosis models has become an important research topic in biomedical informatics. Deep learning can automatically extract features from massive data, which has brought breakthroughs in the study of EMR data. The challenge is that deep learning lacks semantic knowledge, which limits its practicality in medicine. This research proposes a method of incorporating lexical-semantic knowledge from abundant entities into a convolutional neural network (CNN) framework for pediatric disease diagnosis. First, medical terms are vectorized into Lexical Semantic Vectors (LSV), which are concatenated with the embedded word vectors of word2vec to enrich the feature representation. Second, the semantic distribution of medical terms serves as a Semantic Decision Guide (SDG) for the optimization of the deep learning model. The study evaluates the performance of the LSV-SDG-CNN model on four Chinese EMR datasets, with CNN, LSV-CNN, and SDG-CNN designed as baseline models for comparison. The experimental results show that the LSV-SDG-CNN model outperforms the baselines on all four datasets; the best configuration yielded an F1 score of 86.20%. The results clearly demonstrate that the CNN is effectively guided and optimized by lexical-semantic knowledge, and the LSV-SDG-CNN model improves disease classification accuracy by a clear margin.
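
The feature-representation step, concatenating each token's word2vec embedding with its Lexical Semantic Vector before convolution, can be sketched as below. The dimensions and module names are illustrative assumptions, and the Semantic Decision Guide used for optimization is not reproduced here.

# Concatenate word2vec embeddings with per-token lexical-semantic
# vectors (LSV), then apply a 1-D convolution and global max pooling.
import torch
import torch.nn as nn

class LSVCNN(nn.Module):
    def __init__(self, vocab=5000, w2v_dim=100, lsv_dim=50, n_classes=4):
        super().__init__()
        self.w2v = nn.Embedding(vocab, w2v_dim)   # would be initialized from word2vec
        self.conv = nn.Conv1d(w2v_dim + lsv_dim, 128, kernel_size=3, padding=1)
        self.fc = nn.Linear(128, n_classes)

    def forward(self, tokens, lsv):
        # tokens: (batch, seq) token ids; lsv: (batch, seq, lsv_dim)
        x = torch.cat([self.w2v(tokens), lsv], dim=-1)   # (batch, seq, w2v+lsv)
        x = torch.relu(self.conv(x.transpose(1, 2)))     # (batch, 128, seq)
        x = x.max(dim=2).values                          # global max pooling
        return self.fc(x)

model = LSVCNN()
logits = model(torch.randint(0, 5000, (8, 40)), torch.randn(8, 40, 50))
print(logits.shape)   # torch.Size([8, 4])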

Keywords: convolutional neural network, electronic medical record, feature representation, lexical semantics, semantic decision

Procedia PDF Downloads 118
1390 Studying the Establishment of Knowledge Management Background Factors at Islamic Azad University, Behshahr Branch

Authors: Mohammad Reza Bagherzadeh, Mohammad Hossein Taheri

Abstract:

Knowledge management is one of the great breakthroughs of the information and knowledge era, and given its outstanding features, successful organizations tend to adopt it. Establishing knowledge management in universities is therefore of special importance. The present research aims to shed light on the background factors of knowledge management establishment at Islamic Azad University, Behshahr Branch (northern Iran). Considering three factors (the information technology system, the knowledge process system, and organizational culture) as fundamentals of the knowledge management infrastructure, each factor was evaluated individually. The research was conducted as a descriptive survey; participants included all staff and faculty members, and a sample size proportional to the population was determined according to the Krejcie and Morgan table. The measurement tool was a survey questionnaire whose reliability was calculated as 0.83 using Cronbach's alpha. For data analysis, descriptive statistics such as frequency tables and percentages, column charts, means, and standard deviations were used; for inferential statistics, the Kolmogorov-Smirnov test and the one-sample t-test were applied. The findings show that although organizational culture, one of the three background factors for establishing knowledge management at Islamic Azad University, Behshahr Branch, is in good condition, the other two factors, the information technology system and the knowledge process system, are in adverse condition. As a result, the necessary conditions for establishing knowledge management at the university are not yet in place.
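
The inferential steps mentioned above can be sketched as follows, with simulated Likert responses in place of the survey data; the use of the 5-point scale midpoint (3) as the test value is an assumption.

# Check one factor's scores for normality (Kolmogorov-Smirnov on
# standardized scores) and compare the mean against the scale midpoint
# with a one-sample t-test. Responses are simulated placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# 5-point Likert scores for one factor, e.g. "information technology system"
scores = rng.integers(1, 6, 80).astype(float)   # placeholder responses

ks_stat, ks_p = stats.kstest(stats.zscore(scores), "norm")
t_stat, t_p = stats.ttest_1samp(scores, popmean=3.0)   # 3 = scale midpoint

print(f"KS p = {ks_p:.3f}  (normality check)")
print(f"t = {t_stat:.2f}, p = {t_p:.3f}  (mean vs. midpoint)")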

Keywords: knowledge management, information technology, knowledge processes, organizational culture, educational institutions

Procedia PDF Downloads 504