Search results for: classical microbiological analysis
27627 Solvent Effects on Anticancer Activities of Medicinal Plants
Authors: Jawad Alzeer
Abstract:
Natural products are well recognized as sources of drugs for several human ailments. To investigate the impact of variable extraction techniques on the cytotoxic effects of medicinal plant extracts, 5 well-known medicinal plants from Palestine were extracted with 90% ethanol, 80% methanol, acetone, coconut water, apple vinegar, grape vinegar, or 5% acetic acid. The resulting extracts were screened for cytotoxic activities against three different cancer cell lines (B16F10, MCF-7, and HeLa) using a standard resazurin-based cytotoxicity assay and Nile Blue A as the positive control. Highly variable toxicities and tissue sensitivities were observed, depending upon the solvent used for extraction. Acetone consistently gave lower extraction yields but higher cytotoxicity, whereas other solvent systems gave much higher extraction yields with lower cytotoxicity. Interestingly, coconut water was found to offer a potential alternative to classical organic solvents; it consistently gave the highest extraction yields and, in the case of S. officinalis L., extracts highly toxic towards MCF-7 cells derived from human breast cancer. These results demonstrate that the cytotoxicity of plant extracts can be inversely proportional to the yield, and that solvent selection plays an important role in both factors.
Keywords: plant extract, natural products, anticancer drug, cytotoxicity
Procedia PDF Downloads 454
27626 Experimental Study of the Behavior of Elongated Non-spherical Particles in Wall-Bounded Turbulent Flows
Authors: Manuel Alejandro Taborda Ceballos, Martin Sommerfeld
Abstract:
Transport phenomena and the dispersion of non-spherical particles in turbulent flows are found everywhere in industrial applications and processes. Powder handling, pollution control, pneumatic transport, and particle separation are just some examples where the particles encountered are not only spherical. These types of multiphase flows are wall-bounded and mostly highly turbulent. The particles found in these processes are rarely spherical but may have various shapes (e.g., fibers and rods). Although research related to the behavior of regular non-spherical particles in turbulent flows has been carried out for many years, it is still necessary to refine models, especially near walls, where fiber-wall interactions completely change the particle behavior. Imaging-based experimental studies on dispersed particle-laden flows have been applied for many decades for detailed experimental analysis. These techniques have the advantage of providing field information in two or three dimensions, but they have a lower temporal resolution compared to point-wise techniques such as PDA (phase-Doppler anemometry) and derivations therefrom. The imaging techniques applied in dispersed two-phase flows are extensions of classical PIV (particle image velocimetry) and PTV (particle tracking velocimetry), with the main emphasis on the simultaneous measurement of the velocity fields of both phases. In a similar way, such data should also provide adequate information for validating the proposed models. Available experimental studies on the behavior of non-spherical particles are uncommon and mostly based on planar light-sheet measurements. Especially for elongated non-spherical particles, however, three-dimensional measurements are needed to fully describe their motion and to provide sufficient information for the validation of numerical computations. To provide detailed experimental results allowing a validation of numerical calculations of non-spherical particle dispersion in turbulent flows, a water channel test facility was built around a horizontal closed water channel. Into this horizontal main flow, a small cross-jet laden with fiber-like particles was injected, driven solely by gravity. The dispersion of the fibers was measured by applying imaging techniques based on an LED array for backlighting and high-speed cameras. For obtaining the fluid velocity fields, almost neutrally buoyant tracer particles were used. The discrimination between tracer and fibers was done based on image size, which was also the basis for determining fiber orientation with respect to the inertial coordinate system. The synchronous measurement of fluid velocity and fiber properties also allowed the collection of statistics of fiber orientation, velocity fields of tracer and fibers, the angular velocity of the fibers, and the orientation between fiber and instantaneous relative velocity. Consequently, an experimental study of the behavior of elongated non-spherical particles in wall-bounded turbulent flows was achieved, and a comprehensive analysis was developed, especially for the near-wall region, where hydrodynamic wall interaction effects (e.g., collision or lubrication) and abrupt changes of particle rotational velocity exist. This allowed the behavior of non-spherical particles to be predicted numerically afterwards within the frame of the Euler/Lagrange approach, where the particles are treated as "point particles".
Keywords: crossflow, non-spherical particles, particle tracking velocimetry, PIV
Procedia PDF Downloads 86
27625 Comparative Study of Dynamic Effect on Analysis Approaches for Circular Tanks Using Codal Provisions
Authors: P. Deepak Kumar, Aishwarya Alok, P. R. Maiti
Abstract:
Liquid storage tanks have become widespread during recent decades due to their extensive usage. Analysis of liquid-containing tanks is known to be complex due to the hydrodynamic forces exerted on the tank. The objective of this research is to carry out an analysis of the liquid domain along with structural interaction for various geometries of circular tanks considering seismic effects. An attempt has been made to determine the hydrodynamic pressure distribution on the tank wall considering the impulsive and convective components of the liquid mass. To get a better picture, a comparative study of Draft IS 1893 Part 2, ACI 350.3, and Eurocode 8 for circular-shaped tanks has been performed. Further, the differences in the magnitude of shear and moment at the base as obtained from static (IS 3370 IV) and dynamic (Draft IS 1893 Part 2) analysis of a ground-supported circular tank highlight the need to move from the old code to the newer one, which is more accurate and reliable.
Keywords: liquid filled containers, circular tanks, IS 1893 (part 2), seismic analysis, sloshing
Procedia PDF Downloads 353
27624 Argumentative and Enunciative Analysis of Spanish Political Discourse
Authors: Cristina Diez
Abstract:
One of the most important challenges of discourse analysis is to find the linguistic mechanisms of subjectivity. The present article aims to raise the need for an argumentative and enunciative analysis to reach the subjective tissue of language. The intention is to prove that the instructions inscribed in the language itself are those that indicate how a statement is to be interpreted, and that the argumentative value is implied at the semantic level. For that, the theory of argumentation of Ducrot and Anscombre will be implemented. First, a reflection on the study of subjectivity and enunciation in language will be presented, followed by concrete proposals on the linguistic mechanisms that speakers use, either consciously or unconsciously, to finally focus on the argumentative tools that political discourse uses in order to influence the audience.
Keywords: argumentation, enunciation, discourse analysis, subjectivity
Procedia PDF Downloads 201
27623 Study on 3D FE Analysis on Normal and Osteoporosis Mouse Models Based on 3-Point Bending Tests
Authors: Tae-min Byun, Chang-soo Chon, Dong-hyun Seo, Han-sung Kim, Bum-mo Ahn, Hui-suk Yun, Cheolwoong Ko
Abstract:
In this study, a 3-point bending computational analysis of normal and osteoporosis mouse models was performed based on the micro-CT image information of the femurs. The finite element analysis (FEA) found average maximum forces of 1.68 N (normal group) and 1.39 N (osteoporosis group), and average stiffnesses of 4.32 N/mm (normal group) and 3.56 N/mm (osteoporosis group). In comparison with the 3-point bending test results, the maximum force and the stiffness differed by about 9.4 times in the normal group and about 11.2 times in the osteoporosis group. The difference between the analysis and the test was very large, and this result points to needed improvements in the material properties applied to the computational analysis of this study. In the next study, the material properties of the mouse femur will be supplemented through additional computational analyses and tests.
Keywords: 3-point bending test, mouse, osteoporosis, FEA
Procedia PDF Downloads 351
27622 Failure Analysis and Verification Using an Integrated Method for Automotive Electric/Electronic Systems
Authors: Lei Chen, Jian Jiao, Tingdi Zhao
Abstract:
Failures of automotive electric/electronic systems, which are universally considered to be safety-critical and software-intensive, may cause catastrophic accidents. The analysis and verification of failures in these kinds of systems is a big challenge given increasing system complexity. Model checking is often employed to allow formal verification by ensuring that the system model conforms to specified safety properties. The system-level effects of failures are established, and the effects on system behavior are observed through the formal verification. A hazard analysis technique called Systems-Theoretic Process Analysis is capable of identifying design flaws which may cause potentially hazardous failures, including software and system design errors and unsafe interactions among multiple system components. This paper provides a concept for using model checking integrated with Systems-Theoretic Process Analysis to perform failure analysis and verification of automotive electric/electronic systems. As a result, safety requirements are optimized, and failure propagation paths are found. Finally, an automotive electric/electronic system case study is used to verify the effectiveness and practicability of the method.
Keywords: failure analysis and verification, model checking, system-theoretic process analysis, automotive electric/electronic system
Procedia PDF Downloads 120
27621 Supercomputer Simulation of Magnetic Multilayers Films
Authors: Vitalii Yu. Kapitan, Aleksandr V. Perzhu, Konstantin V. Nefedev
Abstract:
The necessity of studying magnetic multilayer structures is explained by the prospects of their practical application as a technological base for creating new storage media. Magnetic multilayer films have many unique features that contribute to increasing the density of information recording and the speed of storage devices. Multilayer structures are structures of alternating magnetic and nonmagnetic layers. Within the framework of the classical Heisenberg model, lattice spin systems with direct short- and long-range exchange interactions were investigated by Monte Carlo methods. The thermodynamic characteristics of multilayer structures, such as the temperature behavior of magnetization, energy, and heat capacity, were investigated, as were the processes of magnetization reversal of multilayer structures in external magnetic fields. The developed software is based on the new, promising programming language Rust, an experimental language developed by Mozilla and positioned as an alternative to C and C++. For the Monte Carlo simulation, the Metropolis algorithm and its parallel implementation using MPI, as well as the Wang-Landau algorithm, were used. We plan to study magnetic multilayer films with asymmetric Dzyaloshinskii–Moriya (DM) interaction, interface effects, and skyrmion textures. This work was supported by the state task of the Ministry of Education and Science of Russia #3.7383.2017/8.9.
Keywords: Monte Carlo methods, Heisenberg model, multilayer structures, magnetic skyrmion
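As a concrete illustration of the sampling scheme named in the abstract, here is a minimal single-layer sketch of a Metropolis sweep for a classical Heisenberg model, written in Python for brevity (the authors' own code is in Rust); the lattice size, exchange coupling J, and temperature T are placeholder values, not the paper's parameters.

```python
import numpy as np

def random_unit_vectors(n):
    """Uniformly distributed unit vectors on the sphere."""
    v = np.random.normal(size=(n, 3))
    return v / np.linalg.norm(v, axis=1, keepdims=True)

def site_energy(spins, i, j, J, L):
    """Exchange energy of site (i, j) with its four nearest neighbours."""
    nbrs = spins[(i + 1) % L, j] + spins[(i - 1) % L, j] + \
           spins[i, (j + 1) % L] + spins[i, (j - 1) % L]
    return -J * np.dot(spins[i, j], nbrs)

def metropolis_sweep(spins, J, T, L):
    for _ in range(L * L):
        i, j = np.random.randint(L, size=2)
        old = spins[i, j].copy()
        e_old = site_energy(spins, i, j, J, L)
        spins[i, j] = random_unit_vectors(1)[0]   # propose a new spin direction
        d_e = site_energy(spins, i, j, J, L) - e_old
        # Accept with the Metropolis probability min(1, exp(-dE/T)); k_B = 1.
        if d_e > 0 and np.random.rand() >= np.exp(-d_e / T):
            spins[i, j] = old                     # reject: restore the old spin

L, J, T = 16, 1.0, 0.5                            # assumed toy parameters
spins = random_unit_vectors(L * L).reshape(L, L, 3)
for _ in range(1000):
    metropolis_sweep(spins, J, T, L)
print("magnetization:", np.linalg.norm(spins.mean(axis=(0, 1))))
```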
Procedia PDF Downloads 166
27620 Application of ANN for Estimation of Power Demand of Villages in Sulaymaniyah Governorate
Abstract:
Before designing an electrical system, the estimation of load is necessary for unit sizing and demand-generation balancing. The system could be a stand-alone system for a village, a grid-connected system, or renewable energy integrated into a grid connection; this is especially relevant as there are non-electrified villages in developing countries. In the classical model, the energy demand was found by estimating the number of household appliances multiplied by their ratings and the duration of their operation, but in this paper, information that exists for electrified villages is used to predict the demand, as villages have broadly similar lifestyles. This paper describes a method used to predict the average energy consumed in each two-month period by every consumer living in a village using an Artificial Neural Network (ANN). The input data were collected using a regional survey of samples of consumers representing typical types of living, household appliances, and energy consumption, and the output data were collected from the administration office of Piramagrun for each corresponding consumer. The results of this study show that the average demand for different consumers from four villages in different months throughout the year is approximately 12 kWh/day. The model estimates the average demand per day for every consumer with a mean absolute percentage error of 11.8%. The MathWorks software package MATLAB, version 7.6.0, which contains the Neural Network Toolbox, was used.
Keywords: artificial neural network, load estimation, regional survey, rural electrification
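The following is a minimal sketch of the regression setup described in the abstract (a small feedforward ANN scored by mean absolute percentage error, MAPE), using Python/scikit-learn as a stand-in for the paper's MATLAB Neural Network Toolbox; the feature columns and synthetic targets are assumptions, not the survey data.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

# Placeholder survey features per consumer (household size, appliance counts,
# ratings, ...); the actual survey variables are not listed in the abstract.
rng = np.random.default_rng(0)
X = rng.random((200, 6))
y = 12.0 + 4.0 * X @ rng.random(6)            # bimonthly-average demand, kWh/day

scaler = StandardScaler().fit(X)
ann = MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
ann.fit(scaler.transform(X), y)

pred = ann.predict(scaler.transform(X))
mape = np.mean(np.abs((y - pred) / y)) * 100  # the abstract reports 11.8%
print(f"MAPE: {mape:.1f}%")
```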
Procedia PDF Downloads 123
27619 Discovering New Organic Materials through Computational Methods
Authors: Lucas Viani, Benedetta Mennucci, Soo Young Park, Johannes Gierschner
Abstract:
Organic semiconductors have attracted the attention of the scientific community in the past decades due to their unique physicochemical properties, allowing new designs and alternative device fabrication methods. To date, organic electronic devices are largely based on conjugated polymers, mainly due to their easy processability. In recent years, due to moderate energy transport (ET) and charge transport (CT) efficiencies and the ill-defined nature of polymeric systems, the focus has been shifting to small conjugated molecules with well-defined chemical structure, easier control of intermolecular packing, and enhanced CT and ET properties. This has led to the synthesis of new small molecules, followed by the growth of their crystalline structure, and ultimately by device preparation. This workflow is commonly followed without a clear knowledge of the ET and CT properties of the macroscopic systems, which may lead to financial and time losses, since not all materials will deliver the properties and efficiencies demanded by current standards. In this work, we present a theoretical workflow designed to predict the key ET properties of these new materials prior to synthesis, thus speeding up the discovery of new promising materials. It is based on quantum mechanical, hybrid, and classical methodologies, starting from a single molecule structure and finishing with the prediction of its packing structure and of properties of interest such as static and averaged excitonic couplings and exciton diffusion length.
Keywords: organic semiconductor, organic crystals, energy transport, excitonic couplings
Procedia PDF Downloads 253
27618 Non-Dominated Sorting Genetic Algorithm (NSGA-II) for the Redistricting Problem in Mexico
Authors: Antonin Ponsich, Eric Alfredo Rincon Garcia, Roman Anselmo Mora Gutierrez, Miguel Angel Gutierrez Andrade, Sergio Gerardo De Los Cobos Silva, Pedro Lara Velázquez
Abstract:
The electoral zone design problem consists of redrawing the boundaries of legislative districts for electoral purposes in such a way that federal or state requirements are fulfilled. In Mexico, this process has historically been carried out by the National Electoral Institute (INE) by optimizing an integer nonlinear programming model, in which population equality and compactness of the designed districts are considered as two conflicting objective functions, while contiguity is included as a hard constraint. The solution technique used by the INE is a Simulated Annealing (SA) based algorithm, which handles the multi-objective nature of the problem through an aggregation function. The present work represents the first attempt to apply a classical Multi-Objective Evolutionary Algorithm (MOEA), the second version of the Non-dominated Sorting Genetic Algorithm (NSGA-II), to this hard combinatorial problem. First results show that, when compared with the SA algorithm, the NSGA-II obtains promising results. The MOEA manages to produce well-distributed solutions over a wide-spread front, even though convergence troubles on some instances constitute an issue, which should be corrected in future adaptations of MOEAs to the redistricting problem.
Keywords: multi-objective optimization, NSGA-II, redistricting, zone design problem
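To make the algorithmic core concrete, below is a small Python sketch of the fast non-dominated sorting step that gives NSGA-II its name (after Deb et al.); the two minimization objectives are illustrative stand-ins for population deviation and non-compactness, not the INE model itself.

```python
def dominates(a, b):
    """True if solution a dominates b: no worse in all objectives, better in one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(objectives):
    """Partition solutions (tuples of objective values) into Pareto fronts."""
    n = len(objectives)
    dominated_by = [[] for _ in range(n)]   # indices that i dominates
    dom_count = [0] * n                     # how many solutions dominate i
    fronts = [[]]
    for i in range(n):
        for j in range(n):
            if dominates(objectives[i], objectives[j]):
                dominated_by[i].append(j)
            elif dominates(objectives[j], objectives[i]):
                dom_count[i] += 1
        if dom_count[i] == 0:
            fronts[0].append(i)             # first Pareto front
    k = 0
    while fronts[k]:
        nxt = []
        for i in fronts[k]:
            for j in dominated_by[i]:
                dom_count[j] -= 1
                if dom_count[j] == 0:
                    nxt.append(j)
        fronts.append(nxt)
        k += 1
    return fronts[:-1]

# Two objectives to minimize, e.g. population deviation and non-compactness:
print(non_dominated_sort([(1, 5), (2, 2), (3, 1), (4, 4)]))
# -> [[0, 1, 2], [3]]  (the first three are mutually non-dominated)
```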
Procedia PDF Downloads 367
27617 Analysis of a Strengthening of a Building Reinforced Concrete Structure
Authors: Nassereddine Attari
Abstract:
Each strengthening or repair operation requires special consideration and the use of methods, tools, and techniques appropriate to the situation and to the specific problems of each structure. The aim of this paper is to study the seismic pathology of a reinforced concrete building and to assess its vulnerability using a non-linear pushover analysis, and to develop capacity curves for a medium-capacity building in order to estimate its damaged condition.
Keywords: pushover analysis, earthquake, damage, strengthening
Procedia PDF Downloads 430
27616 Analytical Derivative: Importance on Environment and Water Analysis/Cycle
Authors: Adesoji Sodeinde
Abstract:
Analytical derivatives have recently undergone explosive growth in the area of separation techniques, as well as in the detectability of certain compounds/concentrated ions. The gloomy and depressing scenario which characterized the application of analytical derivatives in the areas of water analysis, the water cycle, and the environment should not be allowed to continue unabated. Thanks to technological advancement, various chemical/biochemical separation techniques are widely used in medical and forensic areas and to measure and assess the environmental and socio-economic impact of alternative control strategies. This technological improvement was duly established in the area of comparison between certain separation/detection techniques to bring about vital results in forensics (for example, gas-liquid chromatography provides the evidence given in courts of law during the prosecution of drunk drivers). Water quality analysis, pH, and water temperature analysis can be performed in the field, and the concentration of dissolved free amino acids (DFAA) can also be detected through separation techniques. Some important derivatives/ions used in separation techniques include: water analysis: total water hardness (EDTA to determine Ca and Mg ions); gas-liquid chromatography: gases such as helium (He) or nitrogen (N); water cycle: animal bone charcoal, activated carbon, and ultraviolet (UV) light.
Keywords: analytical derivative, environment, water analysis, chemical/biochemical analysis
Procedia PDF Downloads 338
27615 A Series Solution of Fuzzy Integro-Differential Equation
Authors: Maryam Mosleh, Mahmood Otadi
Abstract:
Hybrid differential equations have a wide range of applications in science and engineering. In this paper, the homotopy analysis method (HAM) is applied to obtain a series solution of fuzzy integro-differential equations. Using the homotopy analysis method, it is possible to find the exact solution or an approximate solution of the problem. Comparisons are made between the improved predictor-corrector method, the homotopy analysis method, and the exact solution. Finally, we illustrate our approach with some numerical examples.
Keywords: fuzzy number, parametric form of a fuzzy number, fuzzy integro-differential equation, homotopy analysis method
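As context for the method named above, the zeroth-order deformation equation at the core of HAM (after Liao) is shown below in its generic form; the particular linear operator L, nonlinear operator N, and initial guess u_0 for the fuzzy integro-differential problem are not given in the abstract, so this is a generic sketch rather than the authors' exact construction.

```latex
(1 - q)\,\mathcal{L}\left[\phi(t;q) - u_0(t)\right]
  = q\,\hbar\,\mathcal{N}\left[\phi(t;q)\right], \qquad q \in [0, 1]
```

Here q is the embedding parameter and hbar the convergence-control parameter; expanding phi(t;q) = u_0(t) + sum_{m>=1} u_m(t) q^m and evaluating at q = 1 yields the series solution, whose convergence can be tuned through hbar.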
Procedia PDF Downloads 557
27614 Using HABIT to Establish the Chemicals Analysis Methodology for Maanshan Nuclear Power Plant
Authors: J. R. Wang, S. W. Chen, Y. Chiang, W. S. Hsu, J. H. Yang, Y. S. Tseng, C. Shih
Abstract:
In this research, the HABIT analysis methodology was established for the Maanshan nuclear power plant (NPP). The Final Safety Analysis Report (FSAR), reports, and other data were used in this study. The HABIT methodology was used to evaluate the control room habitability under a CO2 storage burst. The HABIT result was below the R.G. 1.78 failure criteria, which indicates that Maanshan NPP habitability can be maintained. Additionally, a sensitivity study of the parameters (wind speed, atmospheric stability classification, air temperature, and control room intake flow rate) was also performed in this research.
Keywords: PWR, HABIT, habitability, Maanshan
Procedia PDF Downloads 445
27613 Prompt Design for Code Generation in Data Analysis Using Large Language Models
Authors: Lu Song Ma Li Zhi
Abstract:
With the rapid advancement of artificial intelligence technology, large language models (LLMs) have become a milestone in the field of natural language processing, demonstrating remarkable capabilities in semantic understanding, intelligent question answering, and text generation. These models are gradually penetrating various industries, particularly showcasing significant application potential in the data analysis domain. However, retraining or fine-tuning these models requires substantial computational resources and ample downstream task datasets, which poses a significant challenge for many enterprises and research institutions. Without modifying the internal parameters of the large models, prompt engineering techniques can rapidly adapt these models to new domains. This paper proposes a prompt design strategy aimed at leveraging the capabilities of large language models to automate the generation of data analysis code. By carefully designing prompts, data analysis requirements can be described in natural language, which the large language model can then understand and convert into executable data analysis code, thereby greatly enhancing the efficiency and convenience of data analysis. This strategy not only lowers the threshold for using large models but also significantly improves the accuracy and efficiency of data analysis. Our approach includes requirements for the precision of natural language descriptions, coverage of diverse data analysis needs, and mechanisms for immediate feedback and adjustment. Experimental results show that with this prompt design strategy, large language models perform exceptionally well in multiple data analysis tasks, generating high-quality code and significantly shortening the data analysis cycle. This method provides an efficient and convenient tool for the data analysis field and demonstrates the enormous potential of large language models in practical applications.
Keywords: large language models, prompt design, data analysis, code generation
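As an illustration of the strategy, here is a hypothetical prompt template in Python; the template wording, field names, and the placeholder model call are assumptions for the sketch, not the prompts evaluated in the paper.

```python
# Hypothetical prompt builder: a precise natural-language task description
# plus dataset context, sent to an LLM that returns executable analysis code.

PROMPT_TEMPLATE = """You are a data analyst. Write executable Python (pandas)
code for the task below. Return only code, no explanations.

Dataset columns: {columns}
Task: {task}
Constraints: handle missing values; print the final result."""

def build_prompt(columns, task):
    return PROMPT_TEMPLATE.format(columns=", ".join(columns), task=task)

prompt = build_prompt(
    columns=["region", "month", "sales"],
    task="Compute total sales per region and plot the top 5 regions.",
)
# code = some_llm_client.generate(prompt)  # placeholder call, not a real API;
#                                          # run, inspect, and feed errors back
#                                          # for the feedback/adjustment loop
print(prompt)
```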
Procedia PDF Downloads 39
27612 Li2S Nanoparticles Impact on the First Charge of Li-ion/Sulfur Batteries: An Operando XAS/XES Coupled With XRD Analysis
Authors: Alice Robba, Renaud Bouchet, Celine Barchasz, Jean-Francois Colin, Erik Elkaim, Kristina Kvashnina, Gavin Vaughan, Matjaz Kavcic, Fannie Alloin
Abstract:
With their high theoretical energy density (~2600 Wh·kg⁻¹), lithium/sulfur (Li/S) batteries are highly promising, but these systems are still poorly understood due to the complex mechanisms/equilibria involved. Replacing S8 by Li2S as the active material allows the use of safer negative electrodes, like silicon, instead of lithium metal. S8 and Li2S have different conductivity and solubility properties, resulting in a profoundly changed activation process during the first cycle. In particular, during the first charge, a high polarization and a lack of reproducibility between tests are observed. Differences observed between raw Li2S material (micron-sized) and that electrochemically produced in a battery (nano-sized) may indicate that the electrochemical process depends on the particle size. The major focus of the presented work is therefore to deepen the understanding of the charge mechanism of the Li2S material, and more precisely to characterize the effect of the initial Li2S particle size both on the mechanism and on the electrode preparation process. To do so, Li2S nanoparticles were synthesized in two ways, by a liquid-path synthesis and by dissolution in ethanol, allowing Li2S nanoparticle/carbon composites to be made. Preliminary chemical and electrochemical tests show that starting with Li2S nanoparticles could effectively suppress the high initial polarization but also influences the electrode slurry preparation. Indeed, it has been shown that the classical formulation process (a slurry composed of polyvinylidene fluoride polymer dissolved in N-methyl-2-pyrrolidone) cannot be used with Li2S nanoparticles. This reveals a completely different behavior of the Li2S material with regard to polymers and organic solvents at the nanometric scale. The coupling of two operando characterizations, X-ray diffraction (XRD) and X-ray absorption and emission spectroscopy (XAS/XES), has then been carried out in order to interpret the poorly understood first charge. This study discloses that the initial particle size of the active material has a great impact on the working mechanism and particularly on the different equilibria involved during the first charge of Li2S-based Li-ion batteries. These results explain the electrochemical differences, and particularly the polarization differences, observed during the first charge between micrometric and nanometric Li2S-based electrodes. Finally, this work could lead to better active material design and thus to more efficient Li2S-based batteries.
Keywords: Li-ion/sulfur batteries, Li2S nanoparticles effect, operando characterizations, working mechanism
Procedia PDF Downloads 266
27611 Meta-Analysis of the Impact of Positive Psychological Capital on Employees Outcomes: The Moderating Role of Tenure
Authors: Hyeondal Jeong, Yoonjung Baek
Abstract:
This research examines the effects of positive psychological capital (PsyCap) on employee outcomes (satisfaction, commitment, organizational citizenship behavior, innovation behavior, and individual creativity) through a meta-analysis of articles published in the Republic of Korea. The results show that positive psychological capital has a positive effect on the behavior of employees. Heterogeneity was identified among the studies included in the analysis, so contextual factors such as team tenure were analyzed. The moderating effect of team tenure was not statistically significant. The implications are discussed based on the analysis results.
Keywords: positive psychological capital, satisfaction, commitment, OCB, creativity, meta-analysis
Procedia PDF Downloads 315
27610 Time-Evolving Wave Packet in Phase Space
Authors: Mitsuyoshi Tomiya, Kentaro Kawamura, Shoichi Sakamoto
Abstract:
In chaotic billiard systems, scar-like localization has been found in time-evolving wave packets. We may call it the "dynamical scar" to distinguish it from the original scars in stationary states. It also appears in the vicinity of classical unstable periodic orbits when the wave packets are launched along those orbits, contrary to the hypothesis that the waves become homogeneous all around the billiard. The time-evolving wave packets are therefore investigated numerically in phase space, and the Wigner function is adopted to detect them there. Two-dimensional Poincaré sections of the four-dimensional phase space are introduced to clarify the dynamical behavior of the wave packets. The Poincaré sections of the coordinates (x or y) and the momenta (Px or Py) can visualize the dynamical behavior of the wave packets, including the behavior in the momentum degrees of freedom. For example, in "dynamical scar" states, a slightly larger momentum component comes first, and then smaller and smaller components follow. The sections made in momentum space (Px or Py) elucidate specific trajectories that make a larger contribution to the "dynamical scar" states; this is the fixed-point observation of the momentum degrees at a specific fixed point (x0, y0) in the phase space. Accumulations are also calculated to search for the "dynamical scar" in the Poincaré sections, and the scars are found as bright spots in the momentum degrees of the phase space.
Keywords: chaotic billiard, Poincaré section, scar, wave packet
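For reference, the phase-space detector named in the abstract is, in its standard one-dimensional form (the billiard case uses the two-coordinate analogue, which is what makes the phase space four-dimensional):

```latex
W(x, p, t) = \frac{1}{\pi\hbar} \int_{-\infty}^{\infty}
  \psi^{*}(x + y, t)\,\psi(x - y, t)\, e^{2ipy/\hbar}\, \mathrm{d}y
```

Fixing a point (x0, y0) and plotting W over the remaining momentum coordinates gives the kind of fixed-point Poincaré section described above.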
Procedia PDF Downloads 452
27609 The Origin Variability of the Obturator Artery
Authors: Halimah Al Hifzi, Waseem Al-Talalwah, Shorok Al Dorazi, Hassan Al Mousa, Zainab Al-Hashim, Roger Soames
Abstract:
The obturator artery is one of the branches of the anterior division of the internal iliac artery. It passes along the lateral wall of the pelvis to escape into the thigh region via the obturator foramen. Based on previous research studies, it has been found to be extremely variable in origin and course. It may arise from the internal or the external iliac artery. The current study includes 82 dissected specimens to investigate the origin of the obturator artery and explain its clinical importance. The obturator artery arises from the internal iliac artery in 75% of cases, either from its anterior division (46.9%) or its posterior division (25%); further, in 3.1% of cases it arises neither from the anterior nor the posterior division of the internal iliac artery, but between them. In 25% of cases, the obturator artery arises from the external iliac artery. An aneurysmectomy of the posterior division therefore carries a high risk of insufficient vascular supply to dependent structures such as the proximal adductor attachments and the hip joint. Vascular surgeons thus have to pay attention to the posterior division being an origin of the obturator artery beside its usual three classical branches: the superior gluteal, iliolumbar, and lateral sacral arteries. Further, an obturator artery arising from the external iliac system is in great danger of laceration in cases of anterior pelvic fracture, which may lead to life-threatening haemorrhagic shock.
Keywords: obturator artery, external iliac artery, internal iliac artery, anterior division, posterior division, superior gluteal, iliolumbar and lateral sacral arteries, pubic fracture, aneurysm, shock
Procedia PDF Downloads 357
27608 Efficiency of the Slovak Commercial Banks Applying the DEA Window Analysis
Authors: Iveta Řepková
Abstract:
The aim of this paper is to estimate the efficiency of the Slovak commercial banks employing the Data Envelopment Analysis (DEA) window analysis approach during the period 2003-2012. The research is based on unbalanced panel data of the Slovak commercial banks, and an undesirable output was included in the analysis of banking efficiency. It was found that the most efficient banks were Poštová banka, UniCredit Bank, and Istrobanka in the CCR model, and Slovenská sporiteľňa, Istrobanka, and UniCredit Bank in the BCC model. On the contrary, the least efficient banks were found to be Privatbanka and CitiBank. We found that the largest banks in the Slovak banking market were less efficient than medium-sized and small banks. The results also show that average efficiency increased during the period 2003-2008 and then decreased during the period 2010-2011 as a result of the financial crisis.
Keywords: data envelopment analysis, efficiency, Slovak banking sector, window analysis
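For readers unfamiliar with the underlying model, below is a compact Python sketch of the input-oriented CCR efficiency score used in DEA, solved as a linear program with scipy; the bank data are invented placeholders, not the paper's panel. Window analysis then repeats this scoring over overlapping multi-year windows, treating each bank-year as a separate decision-making unit.

```python
import numpy as np
from scipy.optimize import linprog

def ccr_efficiency(X, Y, o):
    """Input-oriented CCR score for DMU o. X: (n_dmu, n_inputs),
    Y: (n_dmu, n_outputs). Solves
        min theta  s.t.  sum_j lam_j x_j <= theta * x_o,
                         sum_j lam_j y_j >= y_o,  lam >= 0."""
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]                  # variables: [theta, lam_1..lam_n]
    A_in = np.c_[-X[o], X.T]                     # inputs:  X.T lam - theta x_o <= 0
    A_out = np.c_[np.zeros(Y.shape[1]), -Y.T]    # outputs: -Y.T lam <= -y_o
    A_ub = np.vstack([A_in, A_out])
    b_ub = np.r_[np.zeros(X.shape[1]), -Y[o]]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun                               # theta = 1 means efficient

X = np.array([[20.0, 300], [40, 500], [30, 450]])   # e.g. staff, deposits
Y = np.array([[10.0, 2], [15, 3], [12, 1]])         # e.g. loans, profit
for o in range(3):
    print(f"bank {o}: efficiency = {ccr_efficiency(X, Y, o):.3f}")
```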
Procedia PDF Downloads 357
27607 On Exploring Search Heuristics for Improving the Efficiency in Web Information Extraction
Authors: Patricia Jiménez, Rafael Corchuelo
Abstract:
Nowadays, the World Wide Web is the most popular source of information, relying on billions of on-line documents. Web mining is used to crawl through these documents, collect the information of interest, and process it by applying data mining tools, in order to use the gathered information in the best interest of a business; this enables companies to promote theirs. Unfortunately, it is not easy to extract the information a web site provides automatically when it lacks an API that allows transforming the user-friendly data provided in web documents into a structured, machine-readable format. Rule-based information extractors are tools intended to extract the information of interest automatically and offer it in a structured format that allows mining tools to process it. However, the performance of an information extractor strongly depends on the search heuristic employed, since bad choices regarding how to learn a rule may easily result in a loss of effectiveness and/or efficiency. Improving search heuristics regarding efficiency is of utmost importance in the field of Web Information Extraction, since typical datasets are very large. In this paper, we employ an information extractor based on a classical top-down algorithm that uses the so-called Information Gain heuristic introduced by Quinlan and Cameron-Jones. Unfortunately, the Information Gain suffers from some well-known problems, so we analyse an intuitive alternative, Termini, that is clearly more efficient; we also analyse other proposals in the literature and conclude that none of them outperforms the previous alternative.
Keywords: information extraction, search heuristics, semi-structured documents, web mining
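As background, the heuristic in question is the information gain used in FOIL-style top-down rule learning (Quinlan and Cameron-Jones); here is a minimal Python sketch under illustrative example counts, where variable names are assumptions for the sketch.

```python
import math

def info(p, n):
    """Bits needed to signal a positive example under the rule's coverage."""
    return -math.log2(p / (p + n))

def information_gain(p_old, n_old, p_new, n_new):
    """FOIL-style gain of specialising a rule: the positives still covered,
    times the reduction in information from the old to the new coverage."""
    return p_new * (info(p_old, n_old) - info(p_new, n_new))

# A candidate condition that keeps 40 of 50 positives while cutting the
# negatives covered from 100 to 10 yields a large gain:
print(information_gain(p_old=50, n_old=100, p_new=40, n_new=10))
```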
Procedia PDF Downloads 335
27606 Identifying of Hybrid Lines for Lpx-B1 Gene in Durum Wheat
Authors: Özlem Ateş Sönmezoğlu, Begüm Terzi, Ahmet Yıldırım, Ramazan Özbey
Abstract:
The basic criterion that determines durum wheat quality is its suitability for pasta processing, that is, its pasta-making quality. A bright yellow color is a desired property in pasta products. Durum wheat pasta-making quality is affected by grain pigment content and by oxidative enzymes, which adversely affect the bright yellow color. Of the oxidative enzymes, lipoxygenase (LOX) is the most effective one in the oxidative bleaching of yellow pigments in durum wheat products. Thus, wheat cultivars that are high in yellow pigments but low in LOX enzyme activity should be preferred for the production of pasta with high color quality. The aim of this study was to reduce the lipoxygenase activities of backcross durum wheat lines that were previously improved for their protein quality. For this purpose, two advanced lines with different parents (TMB2 and TMB3) were used as recurrent parents, and Gediz-75, a wheat with low LOX enzyme activity, was used as the gene source. In all of the generations, backcrossed plants carrying the targeted gene region (Lpx-B1.1) were selected using SSR markers by the marker-assisted selection method. As a result, the study will be completed in three years instead of the six years required in a classical backcross breeding study, leading to the development of high-quality candidate varieties. This research has been financially supported by TÜBİTAK (Project No: 112T910).
Keywords: durum wheat, lipoxygenase, LOX, Lpx-B1.1, MAS, Triticum durum
Procedia PDF Downloads 308
27605 Comparing Test Equating by Item Response Theory and Raw Score Methods with Small Sample Sizes on a Study of the ARTé: Mecenas Learning Game
Authors: Steven W. Carruthers
Abstract:
The purpose of the present research is to equate two test forms as part of a study to evaluate the educational effectiveness of the ARTé: Mecenas art history learning game. The researcher applied Item Response Theory (IRT) procedures to calculate item, test, and mean-sigma equating parameters. With the sample size n=134, test parameters indicated "good" model fit but low Test Information Functions and more acute than expected equating parameters. Therefore, the researcher applied equipercentile equating and linear equating to raw scores and compared the equated form parameters and effect sizes from each method. Item scaling in IRT enables the researcher to select a subset of well-discriminating items. The mean-sigma step produces a mean-slope adjustment from the anchor items, which was used to scale the score on the new form (Form R) to the reference form (Form Q) scale. In equipercentile equating, scores are adjusted to align the proportion of scores in each quintile segment. Linear equating produces a mean-slope adjustment, which was applied to all core items on the new form. The study followed a quasi-experimental design with purposeful sampling of students enrolled in a college-level art history course (n=134) and a counterbalancing design to distribute both forms on the pre- and post-tests. The Experimental Group (n=82) was asked to play ARTé: Mecenas online and complete Level 4 of the game within a two-week period; 37 participants completed Level 4. Over the same period, the Control Group (n=52) did not play the game. The researcher examined between-group differences in post-test scores on test Form Q and Form R by full-factorial two-way ANOVA. The raw score analysis indicated a 1.29% direct effect of form, which was statistically non-significant but may be practically significant. The researcher repeated the between-group differences analysis with all three equating methods. For the IRT mean-sigma adjusted scores, form had a direct effect of 8.39%; mean-sigma equating with a small sample may have resulted in inaccurate equating parameters. Equipercentile equating aligned test means and standard deviations, but the resultant skewness and kurtosis worsened compared to the raw score parameters; form had a 3.18% direct effect. Linear equating produced the lowest form effect, approaching 0%. Using linearly equated scores, the researcher conducted an ANCOVA to examine the effect size in terms of prior knowledge. The between-group effect size for the Control Group versus the Experimental Group participants who completed the game was 14.39%, with a 4.77% effect size attributed to pre-test score. Playing and completing the game increased art history knowledge, and individuals with low prior knowledge tended to gain more from pre- to post-test. Ultimately, researchers should approach test equating based on their theoretical stance on Classical Test Theory and IRT and the respective assumptions. Regardless of the approach or method, test equating requires a representative sample of sufficient size. With small sample sizes, the application of a range of equating approaches can expose item and test features for review, inform interpretation, and identify paths for improving instruments for future study.
Keywords: effectiveness, equipercentile equating, IRT, learning games, linear equating, mean-sigma equating
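Of the three methods compared, linear equating is the simplest to state: new-form scores are mapped onto the reference scale by matching means and standard deviations. A small Python sketch with illustrative (not the study's) scores:

```python
import numpy as np

def linear_equate(x, mu_x, sd_x, mu_y, sd_y):
    """Linear equating: y*(x) = sd_y/sd_x * (x - mu_x) + mu_y."""
    return sd_y / sd_x * (x - mu_x) + mu_y

rng = np.random.default_rng(1)
form_r = rng.normal(22, 5, 67)   # new form (Form R) raw scores, illustrative
form_q = rng.normal(24, 6, 67)   # reference form (Form Q) raw scores

equated = linear_equate(form_r, form_r.mean(), form_r.std(ddof=1),
                        form_q.mean(), form_q.std(ddof=1))
print(equated.mean(), equated.std(ddof=1))  # now match Form Q's mean and SD
```

The mean-sigma method applies the same slope/intercept idea to IRT item parameters estimated on the anchor items rather than to raw scores.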
Procedia PDF Downloads 192
27604 Dynamic Analysis of Transmission Line Towers
Authors: L. Srikanth, D. Neelima Satyam
Abstract:
Transmission line towers are among the important lifeline structures in the distribution of power from the source to various places for several purposes. The predominant external loads which act on these towers are wind and earthquake loads. In the present study, the tower is analyzed using the Indian Standards IS 875:1987 (wind load), IS 802:1995 (structural steel), and IS 1893:2002 (earthquake), and a dynamic analysis of the tower has been performed considering the ground motion of the 2001 Bhuj earthquake (India). The dynamic analysis considered a tower system consisting of two towers spaced 800 m apart, each 35 m in height. The analysis was performed using a numerical time-stepping finite difference method, namely the central difference method, implemented in a MATLAB program developed to obtain the normalized ground motion parameters, including acceleration, frequency, and velocity, which are important in designing the tower. The tower is also analyzed using response spectrum analysis.
Keywords: response spectra, dynamic analysis, central difference method, transmission tower
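The central difference scheme named in the abstract can be sketched compactly for a single-degree-of-freedom system stepping m*u'' + c*u' + k*u = -m*ag(t) through time; the Python version below (the paper used MATLAB) uses toy structural properties and a synthetic accelerogram standing in for the Bhuj record.

```python
import numpy as np

def central_difference(m, c, k, ag, dt):
    """Explicit central difference time stepping with zero initial conditions."""
    n = len(ag)
    u = np.zeros(n)
    p = -m * ag                               # effective earthquake load
    a0 = p[0] / m                             # initial acceleration (u0 = v0 = 0)
    u_prev = u[0] + a0 * dt**2 / 2            # fictitious displacement at t = -dt
    k_hat = m / dt**2 + c / (2 * dt)
    a_coef = m / dt**2 - c / (2 * dt)
    b_coef = k - 2 * m / dt**2
    for i in range(n - 1):
        p_hat = p[i] - a_coef * u_prev - b_coef * u[i]
        u_prev, u[i + 1] = u[i], p_hat / k_hat
    return u

dt = 0.01                                     # must satisfy dt < T_n / pi
t = np.arange(0, 10, dt)
ag = 0.3 * 9.81 * np.sin(2 * np.pi * 1.5 * t)  # toy ground acceleration, m/s^2
u = central_difference(m=1.0e4, c=2.0e3, k=4.0e6, ag=ag, dt=dt)
print("peak displacement:", np.abs(u).max())
```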
Procedia PDF Downloads 398
27603 Valence and Arousal-Based Sentiment Analysis: A Comparative Study
Authors: Usama Shahid, Muhammad Zunnurain Hussain
Abstract:
This research paper presents a comprehensive analysis of a sentiment analysis approach that employs valence and arousal as its foundational pillars, in comparison to traditional techniques. Sentiment analysis is an indispensable task in natural language processing that involves the extraction of opinions and emotions from textual data. The valence and arousal dimensions, representing the positivity/negativity and intensity of emotions, respectively, enable the creation of four quadrants, each representing a specific emotional state. The study seeks to determine the impact of utilizing these quadrants to identify distinct emotional states on the accuracy and efficiency of sentiment analysis, in comparison to traditional techniques. The results reveal that the valence and arousal-based approach outperforms the other approaches, particularly in identifying nuanced emotions that may be missed by conventional methods. The study's findings are crucial for applications such as social media monitoring and market research, where the accurate classification of emotions and opinions is paramount. Overall, this research highlights the potential of using valence and arousal as a framework for sentiment analysis and offers invaluable insights into the benefits of incorporating specific types of emotions into the analysis. These findings have significant implications for researchers and practitioners in the field of natural language processing, as they provide a basis for the development of more accurate and effective sentiment analysis tools.
Keywords: sentiment analysis, valence and arousal, emotional states, natural language processing, machine learning, text analysis, sentiment classification, opinion mining
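A minimal Python sketch of the four-quadrant mapping described above, assuming valence and arousal scores normalized to [-1, 1]; the quadrant labels are illustrative, since the abstract does not name them.

```python
def quadrant(valence, arousal):
    """Map a (valence, arousal) pair to one of four emotional-state quadrants."""
    if valence >= 0 and arousal >= 0:
        return "excited/happy"      # high-arousal positive
    if valence < 0 and arousal >= 0:
        return "angry/anxious"      # high-arousal negative
    if valence < 0:
        return "sad/bored"          # low-arousal negative
    return "calm/content"           # low-arousal positive

for v, a in [(0.8, 0.6), (-0.7, 0.9), (-0.5, -0.4), (0.6, -0.8)]:
    print((v, a), "->", quadrant(v, a))
```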
Procedia PDF Downloads 101
27602 Input-Output Analysis in Laptop Computer Manufacturing
Authors: H. Z. Ulukan, E. Demircioğlu, M. Erol Genevois
Abstract:
The scope of this paper and the aim of the proposed model were to apply monetary Input-Output (I-O) analysis to point out the importance of reusing know-how and other requirements in order to reduce the production costs in a manufacturing process for a laptop computer. The I-O approach using the monetary input-output model is employed to demonstrate the impacts of different factors in a manufacturing process. A sensitivity analysis showing the correlation between these different factors is also presented. It is expected that the recommended model would have an advantageous effect in the cost minimization process.
Keywords: input-output analysis, monetary input-output model, manufacturing process, laptop computer
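The monetary I-O model referred to here is the Leontief relation x = Ax + d, so x = (I - A)^(-1) d; a small Python sketch with an invented three-sector laptop supply chain shows both the model and the kind of coefficient sensitivity the paper examines.

```python
import numpy as np

# Invented technical coefficients for a toy three-sector supply chain;
# the paper's actual sectors and data are not given in the abstract.
A = np.array([[0.10, 0.20, 0.05],    # components sector
              [0.15, 0.05, 0.10],    # assembly sector
              [0.05, 0.10, 0.05]])   # services/know-how sector
d = np.array([100.0, 250.0, 80.0])   # final demand (monetary units)

L = np.linalg.inv(np.eye(3) - A)     # Leontief inverse
x = L @ d                            # total output required per sector
print("total output:", x.round(1))

# Sensitivity: reusing know-how lowers the services input needed by assembly.
A2 = A.copy()
A2[2, 1] -= 0.05
x2 = np.linalg.inv(np.eye(3) - A2) @ d
print("output saving per sector:", (x - x2).round(1))
```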
Procedia PDF Downloads 391
27601 Development and Verification of the Idom Shielding Optimization Tool
Authors: Omar Bouhassoun, Cristian Garrido, César Hueso
Abstract:
Radiation shielding design is an optimization problem with multiple (constrained) objective functions (radiation dose, weight, price, etc.) that depend on several parameters (material, thickness, position, etc.). The classical approach to shielding design consists of a brute-force trial-and-error process subject to previous designer experience. The result is therefore an empirical solution, but not an optimal one, which can degrade the overall performance of the shielding. In order to automate the shielding design procedure, the IDOM Shielding Optimization Tool (ISOT) has been developed. This software combines optimization algorithms with the capabilities to read/write input files, run calculations, and parse output files for different radiation transport codes. In the first stage, the software was established to adjust the input files for two well-known Monte Carlo codes (MCNP and Serpent) and to optimize the result (weight, volume, price, dose rate) using multi-objective genetic algorithms. Nevertheless, its modular implementation easily allows the inclusion of more radiation transport codes and optimization algorithms. The work related to the development of ISOT and its verification on a simple 3D multi-layer shielding problem using both MCNP and Serpent will be presented. ISOT looks very promising for achieving an optimal solution to complex shielding problems.
Keywords: optimization, shielding, nuclear, genetic algorithm
Procedia PDF Downloads 110
27600 Prediction of Music Track Popularity: A Machine Learning Approach
Authors: Syed Atif Hassan, Luv Mehta, Syed Asif Hassan
Abstract:
Hit song science is a field of investigation wherein machine learning techniques are applied to music tracks in order to extract features from audio signals which can capture information that could explain the popularity of the respective tracks. Record companies invest huge amounts of money into recruiting fresh talent and churning out new music each year. Gaining insight into the basis of why a song becomes popular will result in tremendous benefits for the music industry. This paper aims to extract basic musical and more advanced acoustic features from songs while also taking into account external factors that play a role in making a particular song popular. We use a dataset derived from popular Spotify playlists divided by genre. We use ten genres (blues, classical, country, disco, hip-hop, jazz, metal, pop, reggae, rock), chosen on the basis of the clear to ambiguous delineation in the typical sound of their genres. We feed these features into three different classifiers, namely an SVM with RBF kernel, a deep neural network, and a recurrent neural network, to build separate predictive models, choosing the best-performing model at the end. Predicting song popularity is particularly important for the music industry, as it would allow record companies to produce better content for the masses, resulting in a more competitive market.
Keywords: classifier, machine learning, music tracks, popularity, prediction
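Of the three classifiers listed, the SVM with RBF kernel is sketched below in Python/scikit-learn; the random feature matrix stands in for the audio and external features described in the abstract, so the accuracy printed is meaningless beyond illustrating the pipeline.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Placeholder data: 20 per-track features (in practice tempo, spectral,
# and timbral descriptors plus external factors) and a popularity label.
rng = np.random.default_rng(0)
X = rng.random((500, 20))
y = (X[:, 0] + X[:, 3] > 1.0).astype(int)     # toy popular/unpopular label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_tr)           # RBF kernels need scaled inputs

clf = SVC(kernel="rbf", C=1.0, gamma="scale")
clf.fit(scaler.transform(X_tr), y_tr)
print("held-out accuracy:", clf.score(scaler.transform(X_te), y_te))
```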
Procedia PDF Downloads 663
27599 Visualization of Malaysia Universities Websites Based On Social Network Analysis
Authors: N. A. Ismail, Abdul Arif, Sharul Hafiz, Lu S. J., Tham W. S., Wong S. K.
Abstract:
This paper investigates the visualization of Malaysian university websites. Twenty (20) public university websites in Malaysia have been chosen as samples to explore and visualize the link relationships between their academic websites using social network analysis methods such as inlink, degree, weight, betweenness, and modularity class. All of the connections and relations demonstrate the power to influence, the comprehensive strength, and the variety of subject types that are present in the universities. The experimental results also show that Universiti Malaysia Sabah (UMS) is the biggest provider of backlinks.
Keywords: academic websites, link analysis, social network analysis, experimental result
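The link measures listed above can be computed with standard tooling; below is a Python/networkx sketch on a toy directed hyperlink graph, where the node names and edge weights are placeholders for the twenty university sites.

```python
import networkx as nx

# Toy hyperlink graph: an edge (u, v, w) means site u links to site v w times.
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("UMS", "UM", 3), ("UM", "UMS", 5), ("UKM", "UMS", 2),
    ("UPM", "UM", 1), ("UMS", "UKM", 4), ("UM", "UPM", 2),
])

in_links = dict(G.in_degree())                  # inlink counts
weights = dict(G.in_degree(weight="weight"))    # weighted inlinks
betweenness = nx.betweenness_centrality(G)      # brokerage position

print("inlinks:    ", in_links)
print("weighted:   ", weights)
print("betweenness:", {k: round(v, 2) for k, v in betweenness.items()})
```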
Procedia PDF Downloads 471
27598 Numerical and Experimental Investigations of Cantilever Rectangular Plate Structure on Subsonic Flutter
Authors: Mevlüt Burak Dalmış, Kemal Yaman
Abstract:
In this study, the flutter characteristics of a cantilever rectangular plate structure under an incompressible flow regime are investigated by comparing the results of the commercial flutter analysis program ZAERO© with wind tunnel tests conducted in the Ankara Wind Tunnel (ART). A rectangular polycarbonate (PC) plate, 5x125x1000 mm in dimensions, is used for both the numerical and experimental investigations. The analysis and test results are very compatible with each other. A comparison between two different solution methods of ZAERO© (the g- and k-methods) is also made. It is seen that the k-method gives a closer result than the other one; however, the g-method results are on the conservative side, and it is better to use the conservative results, namely the g-method results. Even if the modal analysis results are used for the flutter analysis of this simple structure, a modal test should be conducted in order to validate the modal analysis results and to obtain accurate flutter analysis results for more complicated structures.
Keywords: flutter, plate, subsonic flow, wind tunnel
Procedia PDF Downloads 518