Search results for: criteria of similarity
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3324

2514 A Review of Feature Selection Methods Implemented in Neural Stem Cells

Authors: Natasha Petrovska, Mirjana Pavlovic, Maria M. Larrondo-Petrie

Abstract:

Neural stem cells (NSCs) are multi-potent, self-renewing cells that generate new neurons. Three subtypes of NSCs can be distinguished along the stages of the NSC lineage: quiescent neural stem cells (qNSCs), activated neural stem cells (aNSCs), and neural progenitor cells (NPCs), but their gene expression signatures are not yet fully understood. Single-cell analyses have started to elucidate the complex structure of NSC populations. Nevertheless, there is a lack of thorough molecular interpretation of the NSC lineage heterogeneity and an increasing need for tools to analyze and improve the efficiency and accuracy of single-cell sequencing data. Feature selection and ordering can identify and classify the gene expression signatures of these subtypes and can discover novel subpopulations during the NSC activation and differentiation processes. The aim here is to review the implementation of feature selection techniques on NSC subtypes and the classification techniques that have been used for the identification of gene expression signatures.
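As a concrete, drastically simplified illustration of the kind of feature selection reviewed here, the sketch below ranks genes of a hypothetical single-cell expression matrix by variance across cells, a common first-pass criterion for picking informative genes; all values and dimensions are invented for illustration, not taken from any study discussed in this review.

```python
import numpy as np

# Toy gene-expression matrix: 6 cells x 4 genes (hypothetical counts).
# Rows are single cells, columns are genes.
X = np.array([
    [5.0, 0.1, 3.0, 0.0],
    [4.8, 0.2, 7.0, 0.1],
    [5.1, 0.1, 1.0, 0.0],
    [5.0, 0.2, 9.0, 0.1],
    [4.9, 0.1, 2.0, 0.0],
    [5.2, 0.2, 8.0, 0.1],
])

# Rank genes by variance across cells: highly variable genes are the
# usual first-pass feature set for separating subtypes such as
# qNSC / aNSC / NPC in downstream clustering.
gene_var = X.var(axis=0)
ranked = np.argsort(gene_var)[::-1]   # most variable gene first
selected = ranked[:2]                  # keep the top-2 genes
```

Real pipelines use dispersion- or model-based criteria rather than raw variance, but the selection step has this shape: score every gene, rank, truncate.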

Keywords: feature selection, feature similarity, neural stem cells, genes, feature selection methods

Procedia PDF Downloads 152
2513 The Effect of Vertical Integration on Operational Performance: Evaluating Physician Employment in Hospitals

Authors: Gary Young, David Zepeda, Gilbert Nyaga

Abstract:

This study investigated whether vertical integration of hospitals and physicians is associated with better care for patients with cardiac conditions. A dramatic change in the U.S. hospital industry is the integration of hospitals and physicians through hospital acquisition of physician practices. Yet, there is little evidence regarding whether this form of vertical integration leads to better operational performance of hospitals. The study was conducted as an observational investigation based on a pooled, cross-sectional database. The study sample comprised hospitals in the State of California. The time frame for the study was 2010 to 2012. The key performance measure was hospitals’ degree of compliance with performance criteria set out by the federal government for managing patients with cardiac conditions. These criteria relate to the types of clinical tests and medications that hospitals should follow for cardiac patients, but hospital compliance requires the cooperation of a hospital’s physicians. Data for this measure were obtained from a federal website that presents performance scores for U.S. hospitals. The key independent variable was the percentage of cardiologists that a hospital employs (versus cardiologists who are affiliated with but not employed by the hospital). Data for this measure were obtained from the State of California, which requires hospitals to report financial and operational data each year, including numbers of employed physicians. Other characteristics of hospitals (e.g., information technology for cardiac care, volume of cardiac patients) were also evaluated as possible complements or substitutes for physician employment by hospitals. Additional sources of data included the American Hospital Association and the U.S. Census. Empirical models were estimated with generalized estimating equations (GEE). Findings suggest that physician employment is positively associated with better hospital performance for cardiac care.
However, findings also suggest that information technology is a substitute for physician employment.
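The estimation step can be illustrated in miniature. The sketch below uses hypothetical hospital-level numbers and ordinary least squares as a stand-in for the full GEE fit (a GEE with Gaussian family, identity link, and independence working correlation reduces to exactly OLS); the data, variable names, and the clean linear relationship are all invented.

```python
import numpy as np

# Hypothetical hospital-level data: share of employed cardiologists (x)
# and compliance score with federal cardiac-care criteria (y).
x = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])
y = np.array([0.70, 0.74, 0.78, 0.82, 0.86, 0.90])

# Stand-in for the GEE fit: least squares on an intercept + slope design.
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
intercept, slope = beta   # a positive slope mirrors the reported association
```

A real GEE additionally models within-cluster correlation (e.g., repeated yearly observations per hospital) through a working correlation matrix, which OLS ignores.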

Keywords: physician employment, hospitals, vertical integration, cardiac care

Procedia PDF Downloads 395
2512 Demand-Oriented Supplier Integration in Agile New Product Development Projects

Authors: Guenther Schuh, Stephan Schroeder, Marcel Faulhaber

Abstract:

Companies have faced increasing pressure in recent years to innovate faster, more cheaply, and more radically, due to shrinking product lifecycles and higher volatility of markets and customer demands. Established companies in particular struggle to meet these demands, and many producing companies are adapting their development processes in response. One approach taken by many companies is the use of agile, highly iterative development processes to reduce development times and costs as well as to increase the number of fulfilled customer requirements and the realized level of innovation. At the same time, decreasing depth of added value, an increasing focus on core competencies, and growing product complexity result in a high dependency on suppliers and external development partners during product development. Thus, a successful introduction of agile development methods into the development of physical products also requires a successful integration of the necessary external partners and suppliers into the new processes and procedures, and an adaptation of the organizational interfaces to external partners to the new circumstances and requirements of agile development processes. For effective and efficient product development, the design of customer-supplier relationships should be demand-oriented. The characteristics of the procurement object have a significant influence on the required design; examples are the complexity of the technical interfaces between the supply object and the final product, or the importance of the supplied component for the major product functionalities. Thus, this paper presents an approach to derive general requirements on the design of supplier integration from the characteristics of supply objects. First, the most relevant evaluation criteria and characteristics were identified based on a thorough literature review.
Subsequently, the resulting requirements on the design of supplier integration were derived for the different possible values of these criteria.

Keywords: iterative development processes, agile new product development, procurement, supplier integration

Procedia PDF Downloads 172
2511 Topology Enhancement of a Straight Fin Using a Porous Media Computational Fluid Dynamics Simulation Approach

Authors: S. Wakim, M. Nemer, B. Zeghondy, B. Ghannam, C. Bouallou

Abstract:

Designing the optimal heat exchanger remains an essential objective. Parametric optimization involves evaluating the heat exchanger dimensions to find those that best satisfy certain objectives; this method contributes to an enhanced design rather than an optimized one. By contrast, topology optimization finds the optimal structure that satisfies the design objectives. The rapid development of metal additive manufacturing has allowed topology optimization to find its way into engineering applications, especially in the aerospace field, to optimize metal structures. Using topology optimization in 3D heat and mass transfer problems requires huge computational time, and coupling it with CFD simulations can reduce it. However, existing CFD models cannot be coupled with topology optimization: the CFD model must allow a uniform mesh to be created despite the complexity of the initial geometry, and must allow cells to be swapped from fluid to solid and vice versa. In this paper, a porous media approach compatible with topology optimization criteria is developed. It consists of modeling the fluid region of the heat exchanger as porous media with high porosity and, similarly, the solid region as porous media with low porosity. The switching from fluid to solid cells required by topology optimization is done simply by changing each cell's porosity using a user-defined function. This model is tested on a plate-and-fin heat exchanger and validated by comparing its results to experimental data and simulation results. Furthermore, this model is used to perform a material reallocation based on local criteria to optimize a plate-and-fin heat exchanger under a constant heat duty constraint. The optimized fin uses 20% less material than the original while the pressure drop is reduced by about 13%.
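The cell-swapping idea can be sketched independently of any CFD solver: represent the design domain as a porosity field on a uniform grid and flip flagged cells between a high (fluid) and a low (solid) porosity value. The grid size, porosity values, and function name below are hypothetical stand-ins for the user-defined function described in the abstract.

```python
import numpy as np

# Porosity field on a uniform grid: fluid cells get high porosity,
# solid cells low porosity, so a topology-optimization step can "swap"
# a cell between fluid and solid just by changing its porosity value.
EPS_FLUID, EPS_SOLID = 0.99, 0.01

def swap_cells(porosity, to_solid_mask):
    """Mimic the user-defined function: turn flagged fluid cells solid."""
    out = porosity.copy()
    out[to_solid_mask] = EPS_SOLID
    return out

porosity = np.full((4, 4), EPS_FLUID)   # start with an all-fluid domain
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                    # solidify a 2x2 core
porosity = swap_cells(porosity, mask)
solid_fraction = (porosity == EPS_SOLID).mean()
```

In the actual approach, the porosity value of each cell also enters the momentum equations as a resistance term, which is what makes low-porosity cells behave as solid material.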

Keywords: computational methods, finite element method, heat exchanger, porous media, topology optimization

Procedia PDF Downloads 154
2510 The Impacts of Green Logistics Management Practices on Sustainability Performance in Nigeria

Authors: Ozoemelam Ikechukwu Lazarus, Nizamuddin B. Zainuddin, Abdul Kafi

Abstract:

Numerous studies have been carried out on Green Logistics Management Practices (GLMPs) across the globe, but the practices and performance of green supply chains in Africa in particular have not gained enough scholarly attention. Moreover, the majority of supply chain sustainability research focuses on environmental sustainability. Logistics has been a major cause of supply chain resource waste and environmental damage. Many sectors of the economy that engage in logistical operations rely heavily on vehicles, which emit pollutants into the environment. Due to urbanization and industrialization, the logistical operations of manufacturing companies represent a serious hazard to society and human life, even as the sector has become one of the fastest expanding in the world today. Logistics companies face numerous difficulties when attempting to implement green practices along their supply chains. In Nigeria, manufacturing companies aspire to implement reverse logistics in response to stakeholders’ requirements to reduce negative environmental consequences. However, implementation is constrained by a framework of criteria and necessitates careful analysis of how such criteria interact with each other in the presence of uncertainty. This study integrates most of the green logistics management practices (GLMPs) in Nigerian firms to improve generalizability and credibility. It examines the effect of Green Logistics Management Practices on environmental performance, social performance, market performance, and financial performance in the logistics industry. It seeks to identify the critical success factors in order to develop a model that incorporates factors from the technology, organization, human, and environment perspectives to inform the adoption and use of technologies for logistics supply chain social sustainability in Nigeria. It uses an exploratory research approach to collect and analyse the data.

Keywords: logistics, management, sustainability, environment, operations

Procedia PDF Downloads 62
2509 Selection of Pichia kudriavzevii Strain for the Production of Single-Cell Protein from Cassava Processing Waste

Authors: Phakamas Rachamontree, Theerawut Phusantisampan, Natthakorn Woravutthikul, Peerapong Pornwongthong, Malinee Sriariyanun

Abstract:

A total of 115 yeast strains isolated from local cassava processing wastes were measured for crude protein content. Among these strains, strain MSY-2 possessed the highest protein concentration (>3.5 mg protein/mL). Using molecular identification tools, it was identified as a strain of Pichia kudriavzevii based on similarity of the D1/D2 domain of the 26S rDNA region. In this study, Response Surface Methodology (RSM) was applied to optimize protein production by the MSY-2 strain. The tested parameters were the carbon content, nitrogen content, and incubation time. The regression coefficient (R² = 0.7194) indicates that the model explains a large share of the variation, supporting the significance of the model. Under the optimal condition, up to 3.77 g of protein per liter of culture was produced, and the MSY-2 strain contained 66.8 g of protein per 100 g of cell dry weight. These results revealed the plausibility of applying the novel yeast strain in single-cell protein production.
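The response-surface step can be sketched for a single factor: fit a second-order polynomial to measured yields and read the optimum off the stationary point. The numbers below are invented (and lie exactly on a parabola, unlike real data), not the study's measurements.

```python
import numpy as np

# Hypothetical RSM data: incubation time (days) vs protein yield (g/L).
t = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
yield_gL = np.array([1.75, 3.0, 3.75, 4.0, 3.75])

# Fit a second-order (quadratic) response surface in one factor and
# locate its stationary point -- the optimum predicted by the model.
c2, c1, c0 = np.polyfit(t, yield_gL, 2)
t_opt = -c1 / (2.0 * c2)                 # vertex of the fitted parabola
y_opt = np.polyval([c2, c1, c0], t_opt)  # predicted yield at the optimum
```

A full RSM design fits a quadratic in all three factors (carbon, nitrogen, time) including interaction terms, but the optimum is located the same way: where all partial derivatives of the fitted surface vanish.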

Keywords: single cell protein, response surface methodology, yeast, cassava processing waste

Procedia PDF Downloads 403
2508 Two Dimensional Steady State Modeling of Temperature Profile and Heat Transfer of Electrohydrodynamically Enhanced Micro Heat Pipe

Authors: H. Shokouhmand, M. Tajerian

Abstract:

A numerical investigation of laminar forced convection flows through a square cross-section micro heat pipe under an applied electrohydrodynamic (EHD) field has been carried out. In the present study, pentane is selected as the working fluid. Temperature and velocity profiles and heat transfer enhancement in the micro heat pipe under the EHD field have been numerically calculated for two-dimensional, single-phase fluid flow in the steady state regime. In this model, only the Coulomb force is considered. The study has been carried out for Reynolds numbers from 10 to 100 and EHD force fields up to 8 kV. The coupled, non-linear equations governing the model (continuity, momentum, and energy equations) have been solved simultaneously by CFD numerical methods. The steady state behavior of the affecting parameters, e.g., friction factor, average temperature, Nusselt number, and heat transfer enhancement criteria, has been evaluated. It has been observed that with increasing Reynolds number the effect of the EHD force becomes more significant, and for smaller Reynolds numbers the rate of heat transfer enhancement is higher. By obtaining and plotting the mentioned parameters, it has been shown that the EHD field enhances the heat transfer process. The numerical results show that with an increasing EHD force field the absolute values of the Nusselt number and friction factor increase and the average temperature of the fluid flow decreases. However, the Nusselt number increases faster than the friction factor, which makes applying an EHD force field for heat transfer enhancement in micro heat pipes acceptable and applicable. The numerical results of the model are in good agreement with the experimental results available in the literature.

Keywords: micro heat pipe, electrohydrodynamic force, Nusselt number, average temperature, friction factor

Procedia PDF Downloads 271
2507 Static Application Security Testing Approach for Non-Standard Smart Contracts

Authors: Antonio Horta, Renato Marinho, Raimir Holanda

Abstract:

Considered an evolution of the blockchain, the Ethereum platform, besides allowing transactions of its cryptocurrency named Ether, allows the programming of decentralised applications (DApps) and smart contracts. However, this functionality has raised new types of threats, and the exploitation of smart contract vulnerabilities has caused companies to experience big losses. This research intends to determine the number of contracts that are at risk of being drained. Through a deep investigation, more than two hundred thousand smart contracts currently available on the Ethereum platform were scanned, and it was estimated how much money is at risk. The experiment was based on a query run on Google BigQuery in July 2022, which returned 50,707,133 contracts published on the Ethereum platform. After applying the filtering criteria, the experiment retained 430,584 smart contracts to download and analyse. The filtering criteria consisted of filtering out ERC20 and ERC721 contracts, contracts without transactions, and contracts without balance. Of these 430,584 smart contracts, only 268,103 had source codes published on Etherscan; however, using a hashing process, we discovered that there were duplicated contracts. After removing the duplicates, the process ended up with 20,417 source codes, which were analysed using the open source SAST tool SmartBugs with the Oyente and Securify algorithms. In the end, there was nearly $100,000 at risk of being drained from the potentially vulnerable smart contracts. It is important to note that the tools used in this study may generate false positives, which may affect the count of vulnerable contracts. To address this point, our next step in this research is to develop an application that tests each contract in a parallel environment to verify the vulnerability.
Finally, this study aims to alert users and companies to the risk of not properly creating and analysing their smart contracts before publishing them on the platform. Like any other application, smart contracts are at risk of having vulnerabilities which, in this case, may result in direct financial losses.
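The deduplication step described above (many Etherscan addresses share identical source code) can be sketched as a content-hashing pass run before the SAST tools. The addresses and contract snippets below are invented placeholders.

```python
import hashlib

# Hypothetical downloaded contract sources; duplicates are common because
# the same source code is deployed at many addresses.
sources = {
    "0xaaa": "contract A { function f() public {} }",
    "0xbbb": "contract A { function f() public {} }",   # duplicate of 0xaaa
    "0xccc": "contract B { function g() public {} }",
}

# Deduplicate by SHA-256 of the source text before running SAST tools,
# mirroring the hashing process described in the study.
unique = {}
for addr, src in sources.items():
    digest = hashlib.sha256(src.encode()).hexdigest()
    unique.setdefault(digest, addr)   # keep the first address seen per hash

n_unique = len(unique)                # number of distinct source codes
```

Normalizing whitespace and comments before hashing would catch near-duplicates as well; a raw hash, as here, only collapses byte-identical sources.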

Keywords: blockchain, reentrancy, static application security testing, smart contracts

Procedia PDF Downloads 88
2506 Analysis on the Converged Method of Korean Scientific and Mathematical Fields and Liberal Arts Programme: Focusing on the Intervention Patterns in Liberal Arts

Authors: Jinhui Bak, Bumjin Kim

Abstract:

The purpose of this study is to analyze how the scientific and mathematical fields (STEM) and the liberal arts (A) work together in the STEAM program. In future STEAM programs, the humanities should act not just as a 'tool' for science, technology, and mathematics, but as 'core' content with equivalent status. STEAM was first introduced to the Republic of Korea in 2011, when the Ministry of Education emphasized fostering creative convergence talent. Many programs have since been developed under the name STEAM, but with the majority of programs focusing on technology education, arts and humanities are treated as secondary. As a result, arts is most likely to be treated as an option that teachers running a STEAM program can exclude. If what we ultimately pursue through STEAM education is fostering STEAM literacy, we should no longer reduce arts to a tool for STEM. Based on this awareness, this study analyzed over 160 STEAM programs in middle and high schools, which were produced and distributed by the Ministry of Education and the Korea Science and Technology Foundation from 2012 to 2017. The framework of analysis referenced two criteria presented in related prior studies: normative convergence and technological convergence. In addition, we divided arts into fine arts and liberal arts, focused on the Korean language course within the liberal arts, and analyzed which curriculum standards were selected and through what kind of process the Korean language department participated in teaching and learning. To ensure the reliability of the analysis results, the two researchers cross-checked their individual analyses and accepted results only where they were consistent. We also conducted a reliability check on the analysis results with three middle and high school teachers involved in the STEAM education program.
Analyzing 10 programs selected randomly from the analyzed programs yielded a Cronbach's α of .853, a reliable level. The results of this study are summarized as follows. First, the convergence ratio of the liberal arts was lowest in moral education, at 14.58%. Second, normative convergence, at 28.19%, is lower than technological convergence. Third, the Korean language achievement criteria selected for the programs were limited to functional areas such as listening, speaking, reading, and writing. This means that the convergence of Korean language content serves only as a necessary tool to communicate opinions or promote scientific products. We intend to compare these results with STEAM programs in the United States and elsewhere to explore what elements or key concepts are required for the Korean language achievement criteria and curriculum. This is meaningful in that the humanities field (A), including Korean, provides basic data that can be fused into 'equivalent qualifications' with science (S), technical engineering (TE), and mathematics (M).
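The reliability statistic used above can be reproduced in miniature. The sketch below computes Cronbach's α from a rater-by-item score matrix using the standard formula; the ratings are invented, not the study's data, and happen to give an α in the same "reliable" range.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (raters x items) score matrix.

    alpha = k/(k-1) * (1 - sum(item variances) / variance of totals)
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_var = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

# Hypothetical ratings: 3 reviewers x 4 analyzed programs.
ratings = [[4, 5, 4, 5],
           [4, 4, 4, 5],
           [3, 4, 3, 4]]
alpha = cronbach_alpha(ratings)
```

Values of α above roughly 0.7-0.8 are conventionally read as acceptable internal consistency, which is why the study's .853 is described as reliable.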

Keywords: Korean STEAM Programme, liberal arts, STEAM curriculum, STEAM Literacy, STEM

Procedia PDF Downloads 157
2505 Multi-Model Super Ensemble Based Advanced Approaches for Monsoon Rainfall Prediction

Authors: Swati Bhomia, C. M. Kishtawal, Neeru Jaiswal

Abstract:

Traditionally, monsoon forecasts have encountered many difficulties that stem from numerous issues such as a lack of adequate upper air observations, the mesoscale nature of convection, proper resolution, radiative interactions, planetary boundary layer physics, mesoscale air-sea fluxes, representation of orography, etc. Uncertainties in any of these areas lead to large systematic errors. Global circulation models (GCMs), which are developed independently at different institutes and each carry a somewhat different representation of the above processes, can be combined to reduce the collective local biases in space, in time, and across variables. This is the basic concept behind the multi-model superensemble, which comprises a training phase and a forecast phase. The training phase learns from the recent past performance of the models and is used to determine statistical weights from a least squares minimization via a simple multiple regression. These weights are then used in the forecast phase. Superensemble forecasts carry higher skill than the simple ensemble mean, the bias-corrected ensemble mean, and the best of the participating member models. This approach is a powerful post-processing method for the estimation of weather forecast parameters, reducing the direct model output errors. Although it can be applied successfully to continuous parameters like temperature, humidity, wind speed, and mean sea level pressure, in this paper the approach is applied to rainfall, a parameter quite difficult to handle with standard post-processing methods due to its high temporal and spatial variability.
The present study aims at the development of advanced superensemble schemes comprising 1-5 day daily precipitation forecasts from five state-of-the-art global circulation models (GCMs), i.e., the European Centre for Medium-Range Weather Forecasts (Europe), the National Center for Environmental Prediction (USA), the China Meteorological Administration (China), the Canadian Meteorological Centre (Canada), and the U.K. Meteorological Office (U.K.), obtained from the THORPEX Interactive Grand Global Ensemble (TIGGE), one of the most complete data sets available. The novel approaches include a dynamical model selection approach, in which the superior models are selected from the participating member models at each grid point and for each forecast step in the training period. A multi-model superensemble trained on similar conditions is also discussed in the present study, based on the assumption that training on similar conditions may provide better forecasts than the sequential training used in conventional multi-model ensemble (MME) approaches. Further, a variety of methods from the literature that incorporate a 'neighborhood' around each grid point, to allow for spatial error or uncertainty, have also been tested with the above-mentioned approaches. Comparison of these schemes against observations verifies that the newly developed approaches provide a more unified and skillful prediction of the summer monsoon (June to September) rainfall than the conventional multi-model approach and the member models.
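The training-phase regression described above can be sketched for a single grid point: regress observed anomalies on the member-model forecast anomalies to obtain the weights, then apply them in the forecast phase. All rainfall numbers below are invented (and constructed so each "model" differs from the observations only by a constant bias, which the anomaly formulation removes exactly).

```python
import numpy as np

# Training phase of the superensemble: learn model weights by least
# squares against observed rainfall (hypothetical numbers, one grid point,
# 5 training days x 3 member models).
obs = np.array([10.0, 4.0, 7.0, 12.0, 5.0])
fc = np.array([
    [11.0,  9.0, 12.0],
    [ 5.0,  3.0,  6.0],
    [ 8.0,  6.0,  9.0],
    [13.0, 11.0, 14.0],
    [ 6.0,  4.0,  7.0],
])

# Work with anomalies about the training means, as in the standard
# superensemble formulation, so constant model biases drop out.
fc_anom = fc - fc.mean(axis=0)
obs_anom = obs - obs.mean()
weights, *_ = np.linalg.lstsq(fc_anom, obs_anom, rcond=None)

# Forecast phase: observed mean plus the weighted sum of new anomalies.
day0 = np.array([9.0, 7.0, 10.0])
superensemble_fc = obs.mean() + (day0 - fc.mean(axis=0)) @ weights
```

In the full scheme this regression is solved independently at every grid point, which is exactly where the dynamical-model-selection and similar-conditions-training variants intervene.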

Keywords: multi-model superensemble, dynamical model selection, similarity criteria, neighborhood technique, rainfall prediction

Procedia PDF Downloads 139
2504 Diversity in Finance Literature Revealed through the Lens of Machine Learning: A Topic Modeling Approach on Academic Papers

Authors: Oumaima Lahmar

Abstract:

This paper aims to define a structured topography for finance researchers seeking to navigate the body of knowledge in their extrapolation of finance phenomena. To make sense of the body of knowledge in finance, a probabilistic topic modeling approach is applied to 6,000 abstracts of academic articles published in three top finance journals between 1976 and 2020. This approach combines machine learning techniques and natural language processing to statistically identify the conjunctions between research articles and their shared topics, each described by relevant keywords. The topic modeling analysis reveals 35 coherent topics that depict the finance literature well and provide a comprehensive structure for the ongoing research themes. Comparing the extracted topics to the Journal of Economic Literature (JEL) classification system, a significant similarity was highlighted between the characterizing keywords. On the other hand, we identify other topics that do not match the JEL classification despite being relevant in the finance literature.
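The core mechanics, turning abstracts into a document-term matrix and extracting keyword groups, can be sketched without the full probabilistic machinery. The toy below uses truncated SVD (latent semantic analysis) as a lightweight stand-in for the LDA-style model the paper applies; the four "abstracts" are invented, and a real pipeline would use far larger corpora and a proper probabilistic model.

```python
import numpy as np

# Toy "abstracts" from two invented research themes (asset pricing vs
# corporate governance), standing in for the 6,000 real abstracts.
docs = [
    "asset pricing risk return portfolio",
    "portfolio risk diversification return",
    "merger acquisition governance board",
    "board governance shareholder merger",
]
vocab = sorted({w for d in docs for w in d.split()})
tf = np.array([[d.split().count(w) for w in vocab] for d in docs], float)

# Truncated SVD (latent semantic analysis): each right-singular vector
# groups co-occurring keywords -- a crude analogue of a "topic".
U, s, Vt = np.linalg.svd(tf, full_matrices=False)
topic1 = np.abs(Vt[0])
top_words_t1 = [vocab[i] for i in np.argsort(topic1)[::-1][:3]]
```

Because the two invented themes share no vocabulary, the leading singular vector is supported entirely on one theme's keywords, which is the separation a topic model is meant to recover.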

Keywords: finance literature, textual analysis, topic modeling, perplexity

Procedia PDF Downloads 170
2503 Value Index, a Novel Decision Making Approach for Waste Load Allocation

Authors: E. Feizi Ashtiani, S. Jamshidi, M. H. Niksokhan, A. Feizi Ashtiani

Abstract:

Waste load allocation (WLA) policies may use multi-objective optimization methods to find the most appropriate and sustainable solutions. These usually intend to simultaneously minimize two criteria, total abatement costs (TC) and environmental violations (EV). If other criteria, such as inequity, need to be minimized as well, more binary optimizations must be introduced through different scenarios. In order to reduce the calculation steps, this study presents the value index as an innovative decision making approach. Since the value index contains both the environmental violations and the treatment costs, it can be maximized simultaneously with the equity index. This implies that defining different scenarios for environmental violations is no longer required. Furthermore, the solution is not necessarily the point with minimized total costs or environmental violations. This idea is tested on the Haraz River in northern Iran. Here, the dissolved oxygen (DO) level of the river is simulated by the Streeter-Phelps equation in MATLAB software. The WLA is determined for fish farms using multi-objective particle swarm optimization (MOPSO) in two scenarios. In the first, the trade-off curves of TC-EV and TC-inequity are plotted separately, as in the conventional approach. In the second, the value-equity curve is derived. The comparative results show that the solutions are in a similar range of inequity with lower total costs. This is due to the freedom in environmental violation attained in the value index. As a result, the conventional approach can well be replaced by the value index, particularly for problems optimizing these objectives. This shortens the process of reaching the best solutions and may yield a better classification for scenario definition. It is also concluded that decision makers would do better to focus on the value index and weight its components to find the most sustainable alternatives based on their requirements.
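The Streeter-Phelps simulation mentioned above can be sketched (in Python rather than MATLAB). The classical solution gives the DO deficit downstream of a discharge as D(t) = (k_d L_0 / (k_a - k_d)) (e^(-k_d t) - e^(-k_a t)) + D_0 e^(-k_a t); the rate constants, loads, and saturation level below are hypothetical, not the Haraz River calibration.

```python
import numpy as np

# Streeter-Phelps oxygen-sag curve: DO deficit D(t) downstream of a
# waste discharge (hypothetical rate constants and loads).
kd, ka = 0.3, 0.6      # deoxygenation / reaeration rates (1/day)
L0, D0 = 20.0, 1.0     # initial BOD (mg/L) and initial DO deficit (mg/L)
DO_sat = 9.0           # saturation DO (mg/L)

def deficit(t):
    return (kd * L0 / (ka - kd)) * (np.exp(-kd * t) - np.exp(-ka * t)) \
           + D0 * np.exp(-ka * t)

t = np.linspace(0.0, 10.0, 1001)       # travel time in days
DO = DO_sat - deficit(t)
critical_DO = DO.min()                  # lowest oxygen along the reach
t_critical = t[DO.argmin()]             # location (time) of the sag minimum
```

In a WLA setting, each allocation scenario changes L0 at the discharge points, and the resulting critical DO is what the environmental-violation criterion is evaluated against.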

Keywords: waste load allocation (WLA), value index, multi objective particle swarm optimization (MOPSO), Haraz River, equity

Procedia PDF Downloads 422
2502 A National Systematic Review on Determining Prevalence of Mobbing Exposure in Turkish Nurses

Authors: Betül Sönmez, Aytolan Yıldırım

Abstract:

Objective: This systematic review aims to methodically analyze studies regarding the prevalence of mobbing behavior, the individuals performing this behavior, and the effects of mobbing on Turkish nurses. Background: Worldwide reports of mobbing cases have increased in past years, a trend also observable in Turkey. It has been demonstrated that among healthcare workers, mobbing is significantly widespread among nurses, and the number of studies carried out in this regard has also increased. Method: The main criterion for choosing articles for this systematic review was that they studied nurses located in Turkey, regardless of date. In November 2014, a search using the keywords 'mobbing, bullying, psychological terror/violence, emotional violence, nurses, healthcare workers, Turkey' in PubMed, Science Direct, Ebscohost, the National Thesis Centre database, and the Google search engine led to 71 studies in this field; 33 studies did not meet the inclusion criteria specified for this study. Results: The findings were obtained from the results of 38 studies carried out in the past 13 years in Turkey, a large sample consisting of 8,877 nurses. Analysis of the incidence of mobbing behavior revealed a broad spectrum, ranging from no or slight experiences to 100% exposure. The most frequently observed mobbing behaviors include attacks on personality, blocking of communication, and attacks on professional and social reputation. Victims mostly experienced mobbing from their managers, and the most common consequences of these actions were psychological. Conclusions: The results of studies with various scales indicate exposure of nurses to similar mobbing behavior. The high frequency of exposure to mobbing in such a large sample highlights the importance of considering this issue in terms of the individual and institutional consequences that adversely affect the performance of nurses.

Keywords: mobbing, bullying, workplace violence, nurses, Turkey

Procedia PDF Downloads 277
2501 The Grammatical Dictionary Compiler: A System for Kartvelian Languages

Authors: Liana Lortkipanidze, Nino Amirezashvili, Nino Javashvili

Abstract:

The purpose of a grammatical dictionary is to provide information on the morphological and syntactic characteristics of the base word in the dictionary entry. Electronic grammatical dictionaries are used as a tool of automated morphological analysis for text processing. The Georgian Grammatical Dictionary should contain grammatical information for each word: part of speech, type of declension/conjugation, grammatical forms of the word (paradigm), and alternative variants of the base word/lemma. In this paper, we present a system for compiling the Georgian Grammatical Dictionary automatically. We propose dictionary-based methods for extending grammatical lexicons. The input lexicon contains only a small number of words with identical grammatical features. The extension is based on similarity measures between features of words; more precisely, we add to the extended lexicon words that are similar to those already in the grammatical dictionary. Our dictionaries are corpus-based, and for the compilation we introduce a method for the lemmatization of unknown words, i.e., words of which neither the full form nor the lemma is in the grammatical dictionary.
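One simple way to lemmatize unknown words from a seed lexicon, offered here as a generic sketch rather than the paper's actual method, is to learn suffix rewrite rules from known (form, lemma) pairs and apply the longest matching rule to unseen forms. The mini-lexicon below uses invented romanized forms, not real Georgian paradigm data.

```python
# Hypothetical mini-lexicon: full form -> (lemma, paradigm id).
# The forms are invented romanized stand-ins, not real Georgian entries.
lexicon = {
    "kalebi":   ("kali",   "N1"),
    "kals":     ("kali",   "N1"),
    "c'ignebi": ("c'igni", "N1"),
}

# Learn suffix rewrite rules from the known (form, lemma) pairs:
# strip the shared prefix, map the form's ending to the lemma's ending.
rules = {}
for form, (lemma, _) in lexicon.items():
    i = 0
    while i < min(len(form), len(lemma)) and form[i] == lemma[i]:
        i += 1
    rules[form[i:]] = lemma[i:]      # e.g. "ebi" -> "i"

def guess_lemma(word):
    """Apply the longest matching suffix rule to an unknown word."""
    for suf in sorted(rules, key=len, reverse=True):
        if suf and word.endswith(suf):
            return word[: -len(suf)] + rules[suf]
    return word                       # no rule matches: leave unchanged

lemma = guess_lemma("datebi")         # unknown form with a known ending
```

Real systems disambiguate between competing rules using corpus frequencies and the paradigm features stored in the dictionary, rather than longest-suffix alone.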

Keywords: acquisition of lexicon, Georgian grammatical dictionary, lemmatization rules, morphological processor

Procedia PDF Downloads 148
2500 Locating Potential Site for Biomass Power Plant Development in Central Luzon Philippines Using GIS-Based Suitability Analysis

Authors: Bryan M. Baltazar, Marjorie V. Remolador, Klathea H. Sevilla, Imee Saladaga, Loureal Camille Inocencio, Ma. Rosario Concepcion O. Ang

Abstract:

Biomass energy is a traditional source of sustainable energy that has been widely used in developing countries. The Philippines, specifically Central Luzon, has an abundant source of biomass and could therefore supply abundant agricultural residues (rice husks) as feedstock for a biomass power plant. However, locating a potential site for biomass development is a complex process involving physical, environmental, socio-economic, and risk factors that are usually diverse and conflicting. Moreover, biomass distribution is highly dispersed geographically. Thus, this study develops an integrated method combining Geographic Information Systems (GIS) with methods for energy planning, Multi-Criteria Decision Analysis (MCDA) and the Analytic Hierarchy Process (AHP), for locating a suitable site for biomass power plant development in Central Luzon, Philippines, considering different constraints and factors. Using MCDA, a three-level hierarchy of factors and constraints was produced, with corresponding weights determined by experts using AHP. Applying the results, a suitability map for biomass power plant development in Central Luzon was generated. It showed that the central part of the region has the highest potential for biomass power plant development, owing to characteristics of the area such as the abundance of rice fields, generally flat land surfaces, accessible roads and grid networks, and low risk of flooding and landslide. This study recommends the use of higher-accuracy resource maps and further analysis in selecting the optimum site for biomass power plant development that would account for the cost and transportation of biomass residues.
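The AHP weighting step can be sketched directly: expert judgments are entered as a reciprocal pairwise comparison matrix, and the priority weights are the normalized principal eigenvector. The factor names and the comparison values below are hypothetical, not the experts' actual judgments in this study.

```python
import numpy as np

# AHP weight derivation: principal eigenvector of a pairwise comparison
# matrix (hypothetical judgments for three siting factors).
# Factor order: biomass supply, road access, flood risk.
A = np.array([
    [1.0,   3.0,   5.0],
    [1/3.,  1.0,   3.0],
    [1/5.,  1/3.,  1.0],
])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)               # index of the principal eigenvalue
w = np.abs(eigvecs[:, k].real)
weights = w / w.sum()                      # normalized priority weights

# Consistency index CI = (lambda_max - n) / (n - 1); small values mean
# the pairwise judgments are close to internally consistent.
n = A.shape[0]
CI = (eigvals[k].real - n) / (n - 1)
```

In the GIS step, each factor raster is multiplied by its weight and the layers are summed to produce the suitability map.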

Keywords: analytic hierarchy process, biomass energy, GIS, multi-criteria decision analysis, site suitability analysis

Procedia PDF Downloads 425
2499 Polymerase Chain Reaction Analysis and Random Amplified Polymorphic DNA of Agrobacterium Tumefaciens

Authors: Abeer M. Algeblawi

Abstract:

Fifteen isolates of Agrobacterium tumefaciens were obtained from crown gall samples collected from six locations: Tripoli, Alzahra, Ain-Zara, Alzawia, and Alazezia in Libya, from Grape (Vitis vinifera L.), Pear (Pyrus communis L.), and Peach (Prunus persica L.) trees, and Alexandria in Egypt, from Guava (Psidium guajava L.) trees, Artichoke (Cynara cardunculus L.), and Sugar beet (Beta vulgaris L.). Total DNA was extracted from eight isolates, and Polymerase Chain Reaction (PCR) analysis together with the Random Amplified Polymorphic DNA (RAPD) technique was used for the identification of six isolates. High similarity (55.5%) was observed among the eight A. tumefaciens isolates (Agro1, Agro2, Agro3, Agro4, Agro5, Agro6, Agro7, and Agro8). PCR amplification products were obtained using two specific primers (virD2A and virD2C) for the six isolates of A. tumefaciens collected from different hosts. Visible bands specific to A. tumefaciens of 220 bp, 224 bp, and 338 bp were produced with total DNA extracted from bacterial cells.

Keywords: Agrobacterium tumefaciens, crown gall, identification, molecular characterization, PCR, RAPD

Procedia PDF Downloads 144
2498 NFResNet: Multi-Scale and U-Shaped Networks for Deblurring

Authors: Tanish Mittal, Preyansh Agrawal, Esha Pahwa, Aarya Makwana

Abstract:

Multi-scale and U-shaped networks are widely used in various image restoration problems, including deblurring. Keeping in mind their wide range of applications, we present a comparison of these architectures and their effects on image deblurring. We also introduce a new block, called NFResblock, which consists of a Fast Fourier Transformation layer and a series of modified Non-Linear Activation Free blocks. Based on these architectures and additions, we introduce NFResNet and NFResNet+, modified multi-scale and U-Net architectures, respectively. We train these architectures with three different loss functions: Charbonnier loss, Edge loss, and Frequency Reconstruction loss. Extensive experiments on the Deep Video Deblurring dataset, along with ablation studies for each component, are presented in this paper. The proposed architectures achieve a considerable increase in Peak Signal-to-Noise Ratio (PSNR) and Structural Similarity Index (SSIM) values.
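
Two of the quantities named above have standard definitions that can be sketched as follows; the epsilon value and the [0, 1] image scaling are assumptions, and the paper's exact formulation may differ:

```python
import numpy as np

def charbonnier_loss(pred, target, eps=1e-3):
    """Charbonnier loss: a smooth, differentiable variant of L1,
    the mean of sqrt(diff^2 + eps^2)."""
    diff = pred - target
    return float(np.mean(np.sqrt(diff * diff + eps * eps)))

def psnr(pred, target, max_val=1.0):
    """Peak Signal-to-Noise Ratio (dB) for images scaled to [0, max_val];
    undefined for identical inputs (mse = 0)."""
    mse = np.mean((pred - target) ** 2)
    return float(10.0 * np.log10(max_val ** 2 / mse))
```

Note that for identical images the Charbonnier loss bottoms out at eps rather than zero, which is what keeps its gradient well behaved near perfect reconstruction.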

Keywords: multi-scale, U-Net, deblurring, FFT, resblock, NAF-block, NFResNet, Charbonnier loss, edge loss, frequency reconstruction loss

Procedia PDF Downloads 136
2497 Multidirectional Product Support System for Decision Making in Textile Industry Using Collaborative Filtering Methods

Authors: A. Senthil Kumar, V. Murali Bhaskaran

Abstract:

In the field of information technology, people use various tools and software for official and personal purposes. Nowadays, buyers and sellers struggle to choose data access and extraction tools when trading their products, and they also worry about quality factors such as price, durability, color, size, and availability. The main purpose of this research study is to address these unsolved problems. The proposed Multidirectional Rank Prediction (MDRP) decision-making algorithm supports effective strategic decisions at all levels of data extraction; it is applied to a real-time textile dataset and the results are analyzed. Finally, the results are compared with existing measurement methods such as PCC, SLCF, and VSS, and the resulting accuracy is higher than that of the existing rank prediction methods.
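
Of the baseline measures mentioned, Pearson's Correlation Coefficient is standard enough to sketch as a similarity measure between two users' co-rated item vectors; the rating vectors below are hypothetical:

```python
import math

def pearson_similarity(a, b):
    """Pearson's Correlation Coefficient between two equal-length
    rating vectors; returns 0.0 when either vector is constant."""
    n = len(a)
    mean_a = sum(a) / n
    mean_b = sum(b) / n
    num = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    den = math.sqrt(sum((x - mean_a) ** 2 for x in a)
                    * sum((y - mean_b) ** 2 for y in b))
    return num / den if den else 0.0
```

In a collaborative filter, this score would weight neighbors' ratings when predicting a rank for an unseen product.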

Keywords: Knowledge Discovery in Databases (KDD), Multidirectional Rank Prediction (MDRP), Pearson’s Correlation Coefficient (PCC), Vector Space Similarity (VSS)

Procedia PDF Downloads 286
2496 Serum MicroRNA and Inflammatory Mediators: Diagnostic Biomarkers for Endometritis in Arabian Mares

Authors: Sally Ibrahim, Mohamed Hedia, Mohamed Taqi, Mohamed Derbala, Karima Mahmoud, Youssef Ahmed, Sayed Ismail, Mohamed El-Belely

Abstract:

The identification and quantification of serum microRNA (miRNA) from mares with endometritis might serve as a useful and implementable clinical biomarker for the early diagnosis of endometritis. The aims of the current study were (I) to study the expression pattern of eca-miR-155, eca-miR-223, eca-miR-17, eca-miR-200a, and eca-miR-205, and (II) to determine the levels of interleukin 6 (IL-6) and prostaglandins (PGF₂α and PGE₂) in the serum of Arabian mares with healthy and abnormal uterine status (endometritis). This study was conducted on 80 Arabian mares (4-14 years old) at stud farms, divided into 48 sub-fertile mares suspected of endometritis and 32 fertile mares. The criteria for enrollment in the endometritis group were that a mare had been bred three or more times unsuccessfully in the breeding season or had a history of more than one year of reproductive failure, together with two or more of the following: abnormal clinical findings; abnormal fluid in the uterus on transrectal ultrasonographic examination (echogenic or ≥2 cm in diameter); positive endometrial cytology; and bacterial and/or fungal growth. Serum samples were collected for measuring IL-6, PGF₂α, and PGE₂ concentrations, as well as for serum miRNA isolation and quantitative real-time PCR. Serum concentrations of IL-6, PGE₂, and PGF₂α were higher (P ≤ 0.001) in mares with endometritis than in the healthy controls. The expression of eca-miR-155, eca-miR-223, eca-miR-17, eca-miR-200a, and eca-miR-205 was likewise increased (P ≤ 0.001) in mares with endometritis. To the best of our knowledge, this is the first study to show that serum miRNA and serum inflammatory mediators (IL-6, PGE₂, and PGF₂α) could be used as non-invasive gold-standard biomarkers and might therefore serve as an important additional diagnostic tool for endometritis in Arabian mares. Moreover, estimating the serum concentrations of these miRNAs, IL-6, PGE₂, and PGF₂α is a promising tool to recommend during the breeding soundness examination of mares.

Keywords: Arabian Mares, endometritis, inflammatory mediators, serum miRNA

Procedia PDF Downloads 180
2495 Plant Leaf Recognition Using Deep Learning

Authors: Aadhya Kaul, Gautam Manocha, Preeti Nagrath

Abstract:

Our environment comprises a wide variety of plants that resemble each other, and this similarity can make the identification process tedious, increasing the workload of botanists all over the world. Since botanists cannot be available at all times for such laborious identification, there is a need for a quick classification model. Along with identifying a plant, it is also necessary to classify it as healthy or diseased, since a good diet depends on food from healthy plants. A large number of techniques have been applied to classify plants as healthy or diseased. This paper proposes one such method: anomaly detection with autoencoders, applied to a collection of leaf images. An autoencoder model is built using Keras, the original leaf images are reconstructed, and a threshold on the reconstruction loss is found in order to classify plant leaves as healthy or diseased. A dataset of plant leaves is used to judge the reconstruction performance of the convolutional autoencoder, and an average accuracy of 71.55% is obtained.
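
The thresholding step of the anomaly-detection pipeline described above can be sketched as follows; training the Keras autoencoder itself is omitted, and the threshold value and toy images are illustrative assumptions:

```python
import numpy as np

def classify_by_reconstruction_error(originals, reconstructions, threshold):
    """Per-image reconstruction MSE between originals and autoencoder outputs;
    images whose error exceeds the threshold are flagged as diseased."""
    errors = np.mean((originals - reconstructions) ** 2, axis=(1, 2))
    return errors > threshold, errors

# Illustrative batch of two 4x4 grayscale "leaves": one reconstructed well
# (small residual), one poorly (large residual).
originals = np.zeros((2, 4, 4))
reconstructions = np.stack([np.full((4, 4), 0.01), np.full((4, 4), 0.5)])
flags, errors = classify_by_reconstruction_error(originals, reconstructions, 0.01)
```

In practice the threshold would be chosen from the error distribution of healthy leaves in a validation set, since the autoencoder is trained to reconstruct only the healthy class well.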

Keywords: convolutional autoencoder, anomaly detection, web application, FLASK

Procedia PDF Downloads 163
2494 Investigation of Genetic Diversity of Tilia tomentosa Moench. (Silver Lime) in Duzce-Turkey

Authors: Ibrahim Ilker Ozyigit, Ertugrul Filiz, Seda Birbilener, Semsettin Kulac, Zeki Severoglu

Abstract:

In this study, we performed a genetic diversity analysis of Tilia tomentosa genotypes using randomly amplified polymorphic DNA (RAPD) primers. A total of 28 genotypes were used, comprising 25 members from the urban ecosystem and 3 genotypes from the forest ecosystem as an outgroup. Eight RAPD primers produced a total of 53 bands, of which 48 (90.6%) were polymorphic. The percentage of polymorphic loci (P), observed number of alleles (Na), effective number of alleles (Ne), Nei's (1973) gene diversity (h), and Shannon's information index (I) were 94.29%, 1.94, 1.60, 0.34, and 0.50, respectively. Unweighted pair-group method with arithmetic average (UPGMA) cluster analysis revealed two major groups. The genotypes of the urban and forest ecosystems showed genetic similarity ranging from 28% to 92%, and the forest genotypes did not separate from the urban ones in the UPGMA tree. Urban and forest genotypes also clustered together in principal component analysis (PCA).
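
The two diversity indices reported (h and I) have standard per-locus definitions that can be sketched as follows; the band-presence allele frequencies used here are hypothetical, not the study's data:

```python
import math

def nei_h(p):
    """Nei's (1973) gene diversity at a biallelic locus: h = 1 - sum(p_i^2)."""
    return 1.0 - (p ** 2 + (1.0 - p) ** 2)

def shannon_i(p):
    """Shannon's information index at a biallelic locus: I = -sum(p_i * ln p_i)."""
    q = 1.0 - p
    return -sum(f * math.log(f) for f in (p, q) if f > 0)

# Averaged across hypothetical allele frequencies, one per locus.
freqs = [0.5, 0.3, 0.8]
h_mean = sum(nei_h(p) for p in freqs) / len(freqs)
i_mean = sum(shannon_i(p) for p in freqs) / len(freqs)
```

For dominant markers such as RAPD bands, the allele frequencies themselves would first be estimated from band presence/absence across genotypes.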

Keywords: Tilia tomentosa, genetic diversity, urban ecosystem, RAPD, UPGMA

Procedia PDF Downloads 510
2493 Multi-Criteria Bid/No Bid Decision Support Framework for General Contractors: A Case of Pakistan

Authors: Nida Iftikhar, Jamaluddin Thaheem, Bilal Iftikhar

Abstract:

In the construction industry, adequate and effective decision-making can mean the difference between success and failure. Bidding is the most important element of the construction business, since it is the means by which contractors obtain work; it is effectively the only way for a contractor firm to survive in the market and achieve its objective of earning profits by winning tenders. The capability to select the most appropriate ventures defines not only the success and wellbeing of contractor firms but also their survival and sustainability in the industry. Construction practitioners are usually on their own when deciding whether to bid for a project, and the solutions on offer are typically experience-based and highly subjective. This research examines the local construction industry of Pakistan in order to identify the critical success factors that contractors consider when making bidding decisions, to list and evaluate these factors in order of importance, to categorize them into decision-support and decision-oppose groups, and to develop a framework that helps contractors in the decision-making process. Literature review, questionnaires, and structured interviews were used to identify and quantify the factors affecting the bid/no-bid decision. Ranking analysis and the analytic hierarchy process (AHP), a multi-criteria decision-making method, were used for the analysis. Profitability, need for work, and the financial health of the client were found to be the most decisive factors, while project size, project type, ability to fulfill the tender conditions imposed by the client, and the relationship with and identity and reputation of the client had the least impact on the bid/no-bid decision. Further, to verify the developed framework, case studies were conducted to evaluate bid/no-bid decision-making in building procurement. This is the first study of its kind in the context of the local construction industry, and it recommends using a holistic decision-making framework for such business-critical deliberations.

Keywords: bidding, bid decision-making, construction procurement, contractor

Procedia PDF Downloads 191
2492 Associations between Physical Activity and Risk Factors for Type II Diabetes in Prediabetic Adults

Authors: Rukia Yosuf

Abstract:

Diabetes is a national healthcare crisis involving both macrovascular and microvascular complications. We hypothesized that higher levels of physical activity are associated with lower total and visceral fat mass, lower systolic blood pressure, and increased insulin sensitivity. Participant inclusion criteria were: age 21-50 years, BMI ≥ 30 kg/m², hemoglobin A1C 5.7-6.4, fasting glucose 100-125 mg/dL, and HOMA-IR ≥ 2.5. Exclusion criteria were: history of diabetes, hypertension, HIV, renal disease, hearing loss, alcohol intake over four drinks daily, use of organic nitrates or PDE5 inhibitors, and decreased cardiac function. Total physical activity was measured using accelerometers, body composition using DXA, and insulin resistance via fsIVGTT. For clinical and biochemical cardiometabolic risk factors, blood pressure and heart rate were obtained using a calibrated sphygmomanometer, and anthropometric measures, fasting glucose, insulin, lipid profile, C-reactive protein, and BMP were analyzed using standard procedures. We found that, in this heterogeneous group of prediabetic adults, participants with more physical activity had greater insulin sensitivity, lower blood pressure, less visceral adipose tissue, and lower total mass. Total physical activity levels showed small but significant correlations with systolic blood pressure, visceral fat, lean mass, and insulin sensitivity. After adjusting for race, age, and gender using multiple regression, these associations were no longer significant, likely reflecting our small sample size. Further research into prediabetes could help reduce the overall population of diabetics. In the future, the sample size could be increased and cross-sectional and longitudinal studies conducted in various populations with prediabetes.
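
The HOMA-IR inclusion criterion above follows the standard formula computed from fasting glucose and insulin; a minimal sketch (405 is the conventional constant when glucose is in mg/dL):

```python
def homa_ir(fasting_glucose_mg_dl, fasting_insulin_uu_ml):
    """HOMA-IR = (fasting glucose [mg/dL] x fasting insulin [uU/mL]) / 405."""
    return fasting_glucose_mg_dl * fasting_insulin_uu_ml / 405.0

# The study's inclusion cutoff was HOMA-IR >= 2.5; e.g. a fasting glucose
# of 110 mg/dL with insulin of 12 uU/mL qualifies.
qualifies = homa_ir(110.0, 12.0) >= 2.5
```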

Keywords: diabetes, kidney disease, nephrology, prediabetes

Procedia PDF Downloads 187
2491 Numerical Solutions of Boundary Layer Flow over an Exponentially Stretching/Shrinking Sheet with Generalized Slip Velocity

Authors: Roslinda Nazar, Ezad Hafidz Hafidzuddin, Norihan M. Arifin, Ioan Pop

Abstract:

In this paper, the problem of steady laminar boundary layer flow and heat transfer over a permeable exponentially stretching/shrinking sheet with generalized slip velocity is considered. The similarity transformations are used to transform the governing nonlinear partial differential equations to a system of nonlinear ordinary differential equations. The transformed equations are then solved numerically using the bvp4c function in MATLAB. Dual solutions are found for a certain range of the suction and stretching/shrinking parameters. The effects of the suction parameter, stretching/shrinking parameter, velocity slip parameter, critical shear rate, and Prandtl number on the skin friction and heat transfer coefficients as well as the velocity and temperature profiles are presented and discussed.

Keywords: boundary layer, exponentially stretching/shrinking sheet, generalized slip, heat transfer, numerical solutions

Procedia PDF Downloads 432
2490 Analysis of Pavement Lifespan - Cost and Emissions of Greenhouse Gases: A Comparative Study of 10-year vs 30-year Design

Authors: Claudeny Simone Alves Santana, Alexandre Simas De Medeiros, Marcelino Aurélio Vieira Da Silva

Abstract:

The aim of the study was to assess the performance of pavements over time, considering the principles of Life Cycle Assessment (LCA) and the ability to withstand vehicle loads and the associated environmental impacts. Within the study boundary, pavement design was conducted using the Mechanistic-Empirical Method, adopting criteria based on pavement cracking and wheel-path rutting while also considering factors such as soil characteristics, material thickness, and the distribution of forces exerted by vehicles. The Ecoinvent® 3.6 database and SimaPro® software were employed to calculate emissions, and SICRO 3 information was used to estimate costs. The study then sought to identify the service with the greatest impact on greenhouse gas emissions. The results were compared for design life periods of 10 and 30 years, considering structural performance and load-bearing capacity; environmental impacts in terms of CO₂ emissions per standard axle and construction costs in dollars per standard axle were also analyzed. Based on these analyses, it was possible to determine which pavement exhibited superior performance over time against technical, environmental, and economic criteria. One finding indicated that the mechanical characteristics of the soils used in the pavement layers directly influence the thickness of the pavement and the quantity of greenhouse gases emitted, with a difference of approximately 7,000 kg CO₂ eq; the transportation service was identified as having the most significant negative impact. The study can contribute to future project guidelines and assist in decision-making when selecting the most suitable pavement in terms of durability, load-bearing capacity, and sustainability.

Keywords: life cycle assessment, greenhouse gases, urban paving, service cost

Procedia PDF Downloads 73
2489 Matrix Method Posting

Authors: Varong Pongsai

Abstract:

The objective of this paper is to introduce a new method of accounting posting called Matrix Method Posting, based on the matrix operations of pure mathematics. Although accounting is classified as a social science, many accounting operations are expressed with mathematical signs and operations, which suggests that mathematical operations can be applied to accounting. This paper therefore attempts to map mathematical logic onto accounting logic. Following the context of discovery, a deductive approach is employed to establish a logical concept shared by mathematics and accounting. The result shows that matrices can be used to perform accounting operations, because matrix algebra and accounting logic share the concept of balancing two sides during operations. Moreover, matrix posting has further benefits: it can help financial analysts calculate financial ratios conveniently, and the matrix determinant, itself a signature operation, can help auditors check the correctness of clients' records; a determinant not equal to 0 points to a problem in the clients' recording process. Finally, the matrix approach may simplify the treatment of mergers and consolidations well beyond present-day practice.
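
One plausible reading of the posting idea is a transaction matrix whose rows record debits and whose columns record credits; the accounts and amounts below are illustrative assumptions, and the check shown is the two-sided balancing property the abstract refers to, not the determinant test:

```python
import numpy as np

# Accounts: 0 = Cash, 1 = Sales, 2 = Expenses (hypothetical example).
# Entry T[i, j] records an amount debited to account i and credited to account j.
T = np.array([
    [0.0, 500.0, 0.0],   # cash sale: debit Cash 500, credit Sales 500
    [0.0, 0.0, 0.0],
    [120.0, 0.0, 0.0],   # expense paid in cash: debit Expenses 120, credit Cash 120
])

debits = T.sum(axis=1)    # total debits per account (row sums)
credits = T.sum(axis=0)   # total credits per account (column sums)
net = debits - credits    # net position per account

# Double-entry balancing: total debits always equal total credits,
# so the net positions sum to zero by construction.
```

Because every amount appears once as a debit (row) and once as a credit (column), the trial-balance identity holds automatically, which is the "balancing 2 sides" similarity the paper draws on.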

Keywords: matrix method posting, deductive approach, determinant, accounting application

Procedia PDF Downloads 367
2488 Mutual Information Based Image Registration of Satellite Images Using PSO-GA Hybrid Algorithm

Authors: Dipti Patra, Guguloth Uma, Smita Pradhan

Abstract:

Registration is a fundamental task in image processing. It is used to transform different sets of data into one coordinate system, where the data are acquired at different times, from different viewing angles, and/or with different sensors. Registration geometrically aligns two images (the reference and target images). Registration techniques are applied to satellite images, where they are important for comparing or integrating data obtained from different measurements. In this work, mutual information is used as the similarity metric for registration of satellite images, and the transformation is assumed to be rigid. An attempt is made to optimize the transformation function: the proposed hybrid PSO-GA registration technique combines Particle Swarm Optimization and a Genetic Algorithm to find the best optimum values of the transformation parameters. Performance comparisons in experiments on satellite images show that the proposed hybrid PSO-GA algorithm outperforms the other algorithms in terms of mutual information and registration accuracy.
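
The mutual-information similarity metric can be sketched from a joint intensity histogram; the bin count and test images here are assumptions:

```python
import numpy as np

def mutual_information(img1, img2, bins=32):
    """Mutual information I(X;Y) estimated from the joint intensity histogram."""
    joint, _, _ = np.histogram2d(img1.ravel(), img2.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of img1
    py = pxy.sum(axis=0, keepdims=True)   # marginal of img2
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```

A perfectly aligned pair maximizes this score, so an optimizer (here, the proposed hybrid PSO-GA) would search the rigid-transformation parameters for the transform that maximizes it.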

Keywords: image registration, genetic algorithm, particle swarm optimization, hybrid PSO-GA algorithm, mutual information

Procedia PDF Downloads 408
2487 Companies’ Internationalization: Multi-Criteria-Based Prioritization Using Fuzzy Logic

Authors: Jorge Anibal Restrepo Morales, Sonia Martín Gómez

Abstract:

A model based on a logical framework was developed to quantify SMEs' internationalization capacity. To do so, linguistic variables such as human talent, infrastructure, innovation strategies, FTAs, marketing strategies, and finance were integrated. It is argued that a company’s management of international markets depends on internal factors, especially the capabilities and resources available. This study considers internal factors the biggest business challenge because they force companies to develop an adequate set of capabilities; at this stage, importance and strategic relevance have to be defined in order to build competitive advantages. A fuzzy inference system is proposed to model the resources, skills, and capabilities that determine the success of internationalization. Data: 157 linguistic variables were used, defined by international trade entrepreneurs, experts, consultants, and researchers. Using expert judgment, the variables were condensed into 18 factors that explain SMEs’ export capacity. The proposed model is applied in a case study of the textile and clothing cluster in Medellin, Colombia. In the model implementation, a general index of 28.2 was obtained for internationalization capability. This result confirms that the sector’s current capabilities and resources are not sufficient for a successful integration into the international market. The model specifies the factors and variables that need to be worked on in order to improve export capability. In the case of the textile companies, the lack of continuous recording of information stands out; likewise, there are very few studies directed towards developing long-term plans, and there is little consistency in export criteria. This method emerges as an innovative management tool linked to internal organizational spheres and their different abilities.
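
A minimal sketch of the fuzzy-membership idea follows, using triangular membership functions over a hypothetical 0-100 capability scale; the partitions are assumptions, and only the 28.2 index comes from the study:

```python
def tri(x, a, b, c):
    """Triangular membership function with support (a, c) and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Hypothetical partition of a 0-100 internationalization-capability scale.
def memberships(x):
    return {
        "low": tri(x, 0, 10, 50),
        "medium": tri(x, 25, 50, 75),
        "high": tri(x, 50, 90, 100),
    }

# The study reports an overall index of 28.2, which this partition
# would label predominantly "low".
mu = memberships(28.2)
```

In a full fuzzy inference system, such memberships for each of the 18 factors would feed a rule base and be defuzzified into the overall capability index.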

Keywords: business strategy, exports, internationalization, fuzzy set methods

Procedia PDF Downloads 294
2486 High-Capacity Image Steganography using Wavelet-based Fusion on Deep Convolutional Neural Networks

Authors: Amal Khalifa, Nicolas Vana Santos

Abstract:

Steganography has been known for centuries as an efficient approach to covert communication. Due to its popularity and ease of access, image steganography has attracted researchers seeking secure techniques for hiding information within an innocent-looking cover image. In this research, we propose a novel deep-learning approach to digital image steganography. The proposed method, DeepWaveletFusion, uses convolutional neural networks (CNNs) to hide a secret image inside a cover image of the same size. Two CNNs are trained back-to-back to merge the Discrete Wavelet Transforms (DWT) of the two colored images and, eventually, to blindly extract the hidden image. Based on two different image similarity metrics, a weighted gain function is used to guide the learning process and maximize the quality of the retrieved secret image while maintaining acceptable imperceptibility. Experimental results verified the high recoverability of DeepWaveletFusion, which outperformed similar deep-learning-based methods.
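
The DWT stage that DeepWaveletFusion builds on can be sketched with a single-level 2-D Haar transform; the paper's actual wavelet choice and the network-based fusion are not reproduced here:

```python
import numpy as np

def haar_dwt2(img):
    """Single-level 2-D Haar DWT of an even-sized grayscale image,
    returning the (LL, LH, HL, HH) subbands at half resolution."""
    a = (img[0::2, :] + img[1::2, :]) / 2   # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2      # approximation
    lh = (a[:, 0::2] - a[:, 1::2]) / 2      # horizontal detail
    hl = (d[:, 0::2] + d[:, 1::2]) / 2      # vertical detail
    hh = (d[:, 0::2] - d[:, 1::2]) / 2      # diagonal detail
    return ll, lh, hl, hh
```

Hiding information in the detail subbands (LH, HL, HH) rather than in raw pixels is what lets wavelet-domain schemes trade recoverability against imperceptibility.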

Keywords: deep learning, steganography, image, discrete wavelet transform, fusion

Procedia PDF Downloads 90
2485 Significance of Tridimensional Volume of Tumor in Breast Cancer Compared to Conventional TNM Stage

Authors: Jaewoo Choi, Ki-Tae Hwang, Eunyoung Ko

Abstract:

Backgrounds/Aims: Patients with breast cancer are currently staged according to the TNM system. Nevertheless, actual tumor volume can be misestimated under this system, which can lead to inappropriate diagnosis. A tridimensional volume-stage derived from the ellipsoid formula is presented as a useful measure. Methods: The medical records of 480 consecutive breast cancer patients treated between January 2001 and March 2013 were retrospectively reviewed. All patients were divided into three groups according to tumor volume by receiver operating characteristic analysis; the ranges of the volume-stages were: V1 below 2.5 cc, V2 above 2.5 cc and up to 10.9 cc, and V3 above 10.9 cc. We analyzed outcomes by volume-stage and compared disease-free survival (DFS) and overall survival (OS) between size-stage and volume-stage across intrinsic factors. Results: In the T2 stage, there were patients whose tumor volume was smaller than 4.2 cc, the maximum possible volume of a T1 tumor. These patients in T1c had poorer DFS than the smaller-volume T2 group (mean DFS 48.7 vs. 51.8, p = 0.011), and likewise poorer OS (mean OS 51.1 vs. 55.3, p = 0.006). The cumulative survival curves for V1 and V2 showed hazard ratios similar to those for T1 and T2 in DFS (HR 1.9 vs. 1.9), as did V3 compared with T3 (HR 3.5 vs. 2.6). Conclusion: This study demonstrated that tumor volume is a feasible prognostic indicator for patients with breast cancer. We propose that volume-stage be considered as an additional stage indicator, particularly in early breast cancer.
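
The ellipsoid formula the abstract refers to is presumably V = (π/6)·a·b·c over three orthogonal diameters; a sketch follows (note that a sphere-like tumor 2 cm across, the T1 size limit, yields the 4.2 cc figure cited):

```python
import math

def ellipsoid_volume(a, b, c):
    """Tumor volume from three orthogonal diameters: V = (pi / 6) * a * b * c."""
    return math.pi / 6.0 * a * b * c

# A 2 cm spherical tumor (T1 size limit) comes to about 4.2 cc,
# the maximum T1 volume discussed in the abstract.
v_t1_max = ellipsoid_volume(2.0, 2.0, 2.0)
```

This also illustrates why a flat 5 x 1 x 1 cm T2 lesion (about 2.6 cc) can be smaller by volume than a compact T1 tumor, motivating the proposed volume-stage.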

Keywords: breast cancer, tridimensional volume of tumor, TNM stage, volume stage

Procedia PDF Downloads 403