Search results for: sequential linear programming
3024 Task Value and Research Culture of Southern Luzon State University
Authors: Antonio V. Romana, Rizaide A. Salayo, Maria Lavinia E. Fetalino
Abstract:
This study assessed the subjective task value and research culture of SLSU faculty. It used a sequential explanatory mixed-method research design. In the quantitative phase, questionnaires on research culture and task value were used; in the qualitative phase, the data were coded and thematized to interpret the outcomes of the focus group discussion (FGD). Results showed that among the dimensions of subjective task value, intrinsic value ranked highest while utility value ranked lowest. It is worth mentioning that all subjective task value dimensions were rated "Agree." From the FGD, faculty members valued research and wanted to be involved in this undertaking. However, the limited number of faculty researchers, heavy teaching workloads, inadequate information on the research process, lack of self-confidence, and low incentives for research hindered their writing and engagement with research. Thus, a policy brief was developed. It is recommended that the institution conduct a series of research seminar workshops for faculty members, plan regular research idea exchange activities, and revisit the university's research thrusts and agenda to align them with faculty members' specializations and expertise. In addition, the university may lessen workloads and hire additional faculty members so that educators may focus on their research work. Finally, cash incentives may still be considered, given that faculty members have varied experiences in carrying out research tasks.
Keywords: task value, interest value, attainment value, utility value, research culture
Procedia PDF Downloads 65
3023 The Effect of Traffic on Harmful Metals and Metalloids in the Street Dust and Surface Soil from Urban Areas of Tehran, Iran: Levels, Distribution and Chemical Partitioning Based on Single and Sequential Extraction Procedures
Authors: Hossein Arfaeinia, Ahmad Jonidi Jafari, Sina Dobaradaran, Sadegh Niazi, Mojtaba Ehsanifar, Amir Zahedi
Abstract:
Street dust and surface soil samples were collected from very heavy, heavy, medium and low traffic areas and a natural site in Tehran, Iran. These samples were analyzed for physical-chemical features and for the total content and chemical speciation of selected metals and metalloids (Zn, Al, Sr, Pb, Cu, Cr, Cd, Co, Ni, and V) to study the effect of traffic on their mobility and accumulation in the environment. The pH, electrical conductivity (EC), carbonate and organic carbon (OC) values were similar in soil and dust samples from similar traffic areas. Traffic increases EC in dust/soil matrices but has no effect on the concentrations of metals and metalloids in soil samples. Rises in metal and metalloid levels with traffic were found in dust samples. Moreover, traffic increases the percentage of the acid-soluble fraction and the Fe- and Mn-oxide-associated fractions of Pb and Zn. The mobilization of Cu, Zn, Pb and Cr in dust samples was easier than in soil. The speciation of metals and metalloids except Cd is mainly affected by physicochemical features in soil, whereas total metal and metalloid content affected the speciation in dust samples (except for chromium and nickel).
Keywords: street dust, surface soil, traffic, metals, metalloids, chemical speciation
Procedia PDF Downloads 259
3022 Task Scheduling and Resource Allocation in Cloud-based on AHP Method
Authors: Zahra Ahmadi, Fazlollah Adibnia
Abstract:
Scheduling of tasks and the optimal allocation of resources in the cloud must account for the dynamic nature of tasks and the heterogeneity of resources. Applications based on scientific workflows are among the most widely used in this field and are characterized by high processing power and storage demands. To increase their efficiency, it is necessary to schedule the tasks properly and select the best virtual machine in the cloud. The goals of the system are effective factors in task scheduling and resource selection, which depend on various criteria such as time, cost, current workload and processing power. Multi-criteria decision-making methods are a good choice in this field. In this research, a new method of task scheduling and resource allocation in a heterogeneous environment based on a modified AHP algorithm is proposed. In this method, input tasks are scheduled according to two criteria: execution time and size. Resource allocation combines the AHP algorithm with a first-come, first-served policy. Resources are prioritized using the criteria of main memory size, processor speed and bandwidth. To modify the AHP algorithm, the Linear Max-Min and Linear Max normalization methods are used, as these have a great impact on the ranking. The simulation results show a decrease in the average response time, turnaround time and execution time of input tasks in the proposed method compared to similar (baseline) methods.
Keywords: hierarchical analytical process, work prioritization, normalization, heterogeneous resource allocation, scientific workflow
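To make the normalization step concrete, here is a minimal Python sketch of the Linear Max and Linear Max-Min normalizations followed by a weighted ranking of candidate virtual machines. The decision matrix, criteria and weights are invented for illustration and are not taken from the paper.

```python
import numpy as np

# Hypothetical decision matrix: rows = virtual machines, columns = criteria
# (main memory size in GB, processor speed in MIPS, bandwidth in Mbps).
vms = np.array([
    [8.0, 2000.0, 100.0],
    [16.0, 1500.0, 500.0],
    [4.0, 2500.0, 250.0],
])
weights = np.array([0.3, 0.5, 0.2])  # assumed AHP-derived criteria weights

def linear_max(matrix):
    """Linear Max normalization: divide each column by its maximum."""
    return matrix / matrix.max(axis=0)

def linear_max_min(matrix):
    """Linear Max-Min normalization: rescale each column to [0, 1]."""
    lo, hi = matrix.min(axis=0), matrix.max(axis=0)
    return (matrix - lo) / (hi - lo)

# Weighted sum of normalized scores; a higher score means a higher-priority VM.
for name, norm in [("Linear Max", linear_max), ("Linear Max-Min", linear_max_min)]:
    scores = norm(vms) @ weights
    print(f"{name}: scores={scores.round(3)}, ranking={np.argsort(scores)[::-1]}")
```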
Procedia PDF Downloads 145
3021 The Influence of Wasta on Organizational Practices in Kuwait
Authors: Abrar Al-Enzi
Abstract:
Despite being used every day across the Arab World, Wasta, a type of social capital, has received little attention from scholars, even in the Middle East. In simple terms, Wasta means granting deserved or undeserved privileges to others through personal contacts. This paper suggests that Wasta is an important determinant of how some employees are recruited and of who receives privileges and favors in organizations. Wasta is said to accelerate career advancement and other work practices for employees, whether or not they deserve it or are suitable for it. The overall goal of this paper is to examine how Wasta influences human resource management practices by reviewing the history of Wasta, the importance attached to using it, and how it affects employees as well as organizations in terms of recruitment and work practices. Accordingly, the question addressed is: Does Wasta influence human resource management, knowledge sharing and innovation in Kuwait, which in turn affect employees' commitment within organizations? A mixed-method sequential exploratory research design will be used to explore the topic through initial exploratory interviews, paper-based and online surveys (quantitative method) and semi-structured interviews (qualitative method). The reason for this choice is that qualitative and quantitative methods complement each other when combined, providing a clearer picture of the topic.
Keywords: human resource management practices, Kuwait, social capital, Wasta
Procedia PDF Downloads 208
3020 Airport Investment Risk Assessment under Uncertainty
Authors: Elena M. Capitanul, Carlos A. Nunes Cosenza, Walid El Moudani, Felix Mora Camino
Abstract:
The construction of a new airport or the extension of an existing one requires massive investment, and public-private partnerships have often been considered to make such projects feasible. One characteristic of these projects is uncertainty with respect to financial and environmental impacts over the medium to long term. Another is the multistage nature of these types of projects. While many airport development projects have been a success, some others have turned into a nightmare for their promoters. This communication puts forward a new approach to airport investment risk assessment. The approach takes explicitly into account the degree of uncertainty in activity-level predictions and proposes milestones for the different stages of the project to minimize risk. Uncertainty is represented through fuzzy dual theory, and risk management is performed using dynamic programming. An illustration of the proposed approach is provided.
Keywords: airports, fuzzy logic, risk, uncertainty
Procedia PDF Downloads 413
3019 Leveraging Power BI for Advanced Geotechnical Data Analysis and Visualization in Mining Projects
Authors: Elaheh Talebi, Fariba Yavari, Lucy Philip, Lesley Town
Abstract:
The mining industry generates vast amounts of data, necessitating robust data management systems and advanced analytics tools to support better decision-making in the development of mining production and the maintenance of safety. This paper highlights the advantages of Power BI, a powerful business intelligence tool, over traditional Excel-based approaches for effectively managing and harnessing mining data. Power BI enables professionals to connect and integrate multiple data sources, ensuring real-time access to up-to-date information. Its interactive visualizations and dashboards offer an intuitive interface for exploring and analyzing geotechnical data. Advanced analytics is a collection of data analysis techniques used to improve decision-making; leveraging some of the most complex techniques in data science, it is applied to everything from detecting data errors and ensuring data accuracy to directing the development of future project phases. However, while Power BI is a robust tool, the specific visualizations required by geotechnical engineers may exceed its built-in capabilities. This paper therefore examines the use of Python or R programming within the Power BI dashboard to enable advanced analytics, additional functionalities, and customized visualizations. The dashboard provides comprehensive tools for analyzing and visualizing key geotechnical data, including spatial representation on maps, field and laboratory test results, and subsurface rock and soil characteristics. Advanced visualizations such as borehole logs and stereonets were implemented using Python programming within the Power BI dashboard, enhancing the understanding and communication of geotechnical information. Moreover, the dashboard's flexibility allows for the incorporation of additional data and visualizations depending on the project scope and available data, such as pit designs, rockfall analyses, rock mass characterization, and drone data. This further enhances the dashboard's usefulness in later project phases, including operation, development, closure, and rehabilitation, and helps minimize the need for multiple software programs within a project. This geotechnical dashboard in Power BI serves as a user-friendly solution for analyzing, visualizing, and communicating both new and historical geotechnical data, aiding informed decision-making and efficient project management throughout all project stages. Its ability to generate dynamic reports and share them with clients collaboratively further enhances decision-making processes and facilitates effective communication within geotechnical projects in the mining industry.
Keywords: geotechnical data analysis, power BI, visualization, decision-making, mining industry
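As an illustration of the Python-in-Power-BI mechanism the abstract relies on, the sketch below shows a minimal script for a Power BI Python visual that plots poles to planes on a stereonet. Inside Power BI the selected fields arrive as a pandas DataFrame named `dataset`; the column names (`strike`, `dip`) and the use of the third-party mplstereonet package are assumptions for illustration, not the paper's implementation.

```python
import pandas as pd
import matplotlib.pyplot as plt
import mplstereonet  # third-party package; registers the 'stereonet' projection

# In a Power BI Python visual, the selected fields arrive as a pandas
# DataFrame named `dataset` and the matplotlib figure is rendered in the
# report. A sample frame is fabricated here so the script also runs
# standalone; the column names (strike, dip) are assumed, not the paper's.
try:
    dataset
except NameError:
    dataset = pd.DataFrame({"strike": [30, 45, 60, 220, 240],
                            "dip": [40, 35, 50, 60, 55]})

fig, ax = plt.subplots(subplot_kw=dict(projection="stereonet"))
ax.pole(dataset["strike"], dataset["dip"], "ko", markersize=4)  # poles to planes
ax.grid()
plt.show()
```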
Procedia PDF Downloads 92
3018 Experiences of Timing Analysis of Parallel Embedded Software
Authors: Muhammad Waqar Aziz, Syed Abdul Baqi Shah
Abstract:
Execution time analysis is fundamental to the successful design and execution of real-time embedded software. In such analysis, the Worst-Case Execution Time (WCET) of a program is a key measure, on the basis of which system tasks are scheduled. WCET analysis of embedded software is also needed for system understanding and to guarantee its behavior. WCET analysis can be performed statically (without executing the program) or dynamically (through measurement). Traditionally, research on WCET analysis has assumed sequential code running on single-core platforms. However, as computation steadily moves towards a combination of parallel programs and multi-core hardware, new challenges in WCET analysis need to be addressed. In this article, we report our experiences of performing WCET analysis of Parallel Embedded Software (PES) running on a multi-core platform. The primary purpose was to investigate how WCET estimates of PES can be computed statically and how they can be derived dynamically. Our experiences, as reported in this article, include the challenges we faced, possible solutions to these challenges, and the workarounds that were developed. The article also provides observations on the benefits and drawbacks of deriving WCET estimates using the said methods and offers useful recommendations for further research in this area.
Keywords: embedded software, worst-case execution-time analysis, static flow analysis, measurement-based analysis, parallel computing
Procedia PDF Downloads 324
3017 Induced Pulsation Attack Against Kalman Filter Driven Brushless DC Motor Control System
Authors: Yuri Boiko, Iluju Kiringa, Tet Yeap
Abstract:
Using modeling and simulation tools, we introduce a novel bias injection attack, named the 'Induced Pulsation Attack', which targets cyber-physical systems with a closed-loop-controlled brushless DC (BLDC) motor and a Kalman filter in the feedback loop. The attack engages a linear function with a constant gradient to distort the coefficient of the injected bias, which falsifies the Kalman filter's estimates of the rotor's angular speed. As a result, this manipulation inside the control system causes periodic pulsations, in the form of a high-magnitude asymmetric sine wave, in both the current and the voltage of the circuit windings. It is shown that by varying the gradient of the linear function, one can control both the frequency and the structure of the induced pulsations. It is also demonstrated that terminating the attack at any point leads to additional compensating effort from the controller to restore the speed to its equilibrium value. This compensation effort produces an exponentially decaying wave, which we call the 'attack withdrawal syndrome' wave. The conditions for maximizing or minimizing the impact of the attack withdrawal syndrome are determined. Linking the termination of the attack to the end of a full period of the induced pulsation wave is shown to nullify the attack withdrawal syndrome wave, thereby improving the attack's covertness.
Keywords: cyber-attack, induced pulsation, bias injection, Kalman filter, BLDC motor, control system, closed loop, P-controller, PID-controller, saw-function, cyber-physical system
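A minimal sketch of the attack idea on a scalar Kalman filter: a bias whose coefficient grows linearly with constant gradient g is added to the speed measurement, dragging the filter's estimate away from the true rotor speed. All plant, noise and attack values are illustrative, not the paper's.

```python
import numpy as np

# Scalar Kalman filter tracking a rotor speed held at equilibrium, with a
# bias injected into the sensor reading. The bias coefficient grows
# linearly with constant gradient g, as in the described attack.
rng = np.random.default_rng(0)
true_speed = 100.0                 # rad/s, equilibrium rotor speed
q, r = 0.01, 0.5                   # process and measurement noise variances
x_hat, p = 100.0, 1.0              # filter estimate and covariance
g, dt = 2.0, 0.001                 # attack gradient (rad/s per s) and time step

for k in range(5000):
    bias = g * (k * dt)                            # linearly growing injected bias
    z = true_speed + rng.normal(0.0, np.sqrt(r)) + bias
    p += q                                         # predict (static speed model)
    k_gain = p / (p + r)                           # update
    x_hat += k_gain * (z - x_hat)
    p *= (1.0 - k_gain)

print(f"filter estimate after attack: {x_hat:.2f} rad/s (true value {true_speed})")
```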
Procedia PDF Downloads 71
3016 Design Study on a Contactless Material Feeding Device for Electro Conductive Workpieces
Authors: Oliver Commichau, Richard Krimm, Bernd-Arno Behrens
Abstract:
Growing demands on the production rates of modern presses lead to higher stroke rates. Commonly used material feeding devices for presses, such as grippers and roll-feeding systems, can only achieve high stroke rates with high gripping forces to avoid stick-slip. These forces are limited by the sensitivity of the workpiece surfaces: stick-slip leads to scratches on the surface and to mispositioning of the workpiece. In this paper, a new contactless feeding device is presented, which develops higher feeding forces without damaging the workpiece surface through gripping forces. It is based on the principle of the linear induction motor. A primary part creates a magnetic field and induces eddy currents in the electrically conductive material. A Lorentz force acts on the workpiece in the feeding direction as the mutual reaction between the eddy currents and the magnetic induction. In this study, the FEA model of this approach is presented. Calculations with this model were used to identify the influence of various design parameters on the performance of the feeder, showing the promising capabilities and the limits of this technology. To validate the study, a prototype of the feeding device was built. An experimental setup was used to measure the pulling forces and placement accuracy of the experimental feeder in order to give an outlook on a potential industrial application of this approach.
Keywords: conductive material, contactless feeding, linear induction, Lorentz-Force
Procedia PDF Downloads 179
3015 Polyampholytic Resins: Advances in Ion Exchanging Properties
Authors: N. P. G. N. Chandrasekara, R. M. Pashley
Abstract:
Ion exchange (IEX) resins are commonly available as cationic or anionic resins but not as polyampholytic resins. This is probably because sequential acid and base washing cannot completely regenerate polyampholytic resins whose chemically attached anionic and cationic groups lie in close proximity. The 'Sirotherm' process, developed by the Commonwealth Scientific and Industrial Research Organisation (CSIRO) in Melbourne, Australia, was originally based on a physical mixture of weakly basic (WB) and weakly acidic (WA) ion-exchange resin beads. These resins were regenerated thermally: they were capable of removing salts from an aqueous solution at higher temperatures, compared with the salt sorbed at ambient temperatures, with a significant reduction of sorption capacity with increasing temperature. A new process for the efficient regeneration of mixed-bead resins using ammonium bicarbonate with heat was studied recently, and this chemical/thermal regeneration technique is capable of completely regenerating polyampholytic resins. Even so, the low IEX capacities of polyampholytic resins restrict their commercial applications. Recently, we established another novel process for increasing the IEX capacity of a typical polyampholytic resin by enlarging its internal pore area. In this paper we discuss the chemical/thermal regeneration of a polyampholytic (WA/WB) resin and a novel process for enhancing its ion exchange capacity. We also show how effective this method is for fully recycled regeneration, with the potential to substantially reduce chemical waste.
Keywords: capacity, ion exchange, polyampholytic resin, regeneration
Procedia PDF Downloads 376
3014 Analyzing the Practicality of Drawing Inferences in Automation of Commonsense Reasoning
Authors: Chandan Hegde, K. Ashwini
Abstract:
Commonsense reasoning is the simulation of the human ability to make decisions in the situations we encounter every day. Several decades have passed since the introduction of this subfield of artificial intelligence, yet it has made only limited progress. Modern computing aids have also remained of little help in this regard, owing to the absence of a strong methodology for developing commonsense reasoning. Among the several reasons for this lack of progress, drawing inferences from a commonsense knowledge base stands out. This review paper provides a detailed analysis of the representation of reasoning uncertainties and of feasible prospects for programming aids for drawing inferences. The difficulties in deducing and systematizing commonsense reasoning, and the substantial progress made in reasoning that influences this study, are also discussed. Additionally, the paper discusses the possible impact of an effective inference technique on commonsense reasoning.
Keywords: artificial intelligence, commonsense reasoning, knowledge base, uncertainty in reasoning
Procedia PDF Downloads 187
3013 Structural and Thermodynamic Properties of MnNi
Authors: N. Benkhettoua, Y. Barkata
Abstract:
We present first-principles studies of the structural and thermodynamic properties of MnNi, based on calculated total energies obtained with an all-electron full-potential linear muffin-tin orbital method (FP-LMTO) within the LDA. The quasi-harmonic Debye model implemented in the Gibbs program is used to account for the effect of temperature on the structural and calorific properties.
Keywords: magnetic materials, structural properties, thermodynamic properties, metallurgical and materials engineering
Procedia PDF Downloads 556
3012 A Finite Element/Finite Volume Method for Dam-Break Flows over Deformable Beds
Authors: Alia Alghosoun, Ashraf Osman, Mohammed Seaid
Abstract:
A coupled two-layer finite volume/finite element method is proposed for solving the dam-break flow problem over deformable beds. The governing equations consist of the well-balanced two-layer shallow water equations for the water flow and a linear elastic model for the bed deformations. Deformations of the topography can be caused by a sudden localized force or simply by a class of sliding displacements of the bathymetry. This deformation of the bed is a source of perturbations on the water surface, generating water waves that propagate with different amplitudes and frequencies. Coupling conditions at the interface are also investigated in the current study, and a two-mesh procedure is proposed for the transfer of information through the interface. In the present work a new procedure is implemented at the soil-water interface, using the finite element and two-layer finite volume meshes with a conservative distribution of the forces at their intersections. The finite element method employs quadratic elements on an unstructured triangular mesh, and the finite volume method uses the Rusanov scheme to reconstruct the numerical fluxes. The coupled numerical method is highly efficient, accurate and well balanced, and it can handle complex geometries as well as rapidly varying flows. Numerical results are presented for several test examples of dam-break flows over deformable beds. A mesh convergence study is performed for both methods; the overall model provides new insight into these problems at minimal computational cost.
Keywords: dam-break flows, deformable beds, finite element method, finite volume method, hybrid techniques, linear elasticity, shallow water equations
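For readers unfamiliar with the flux reconstruction named above, here is a single-layer, rigid-bed sketch of a finite volume scheme with the Rusanov (local Lax-Friedrichs) flux for the 1D shallow water equations on a dam-break problem. It omits the two-layer coupling, the bed elasticity and the finite element part entirely.

```python
import numpy as np

# 1D shallow water equations over a flat, rigid bed, solved with a finite
# volume scheme and the Rusanov (local Lax-Friedrichs) numerical flux.
g = 9.81

def physical_flux(h, hu):
    u = hu / h
    return np.array([hu, hu * u + 0.5 * g * h * h])

def rusanov_flux(ul, ur):
    hl, hul = ul
    hr, hur = ur
    s = max(abs(hul / hl) + np.sqrt(g * hl),     # largest local wave speed
            abs(hur / hr) + np.sqrt(g * hr))
    return 0.5 * (physical_flux(hl, hul) + physical_flux(hr, hur)
                  - s * np.array([hr - hl, hur - hul]))

# Dam-break initial data: still water, higher level on the left half.
nx, dx, t_end = 200, 1.0, 5.0
h = np.where(np.arange(nx) < nx // 2, 2.0, 1.0)
hu = np.zeros(nx)
t = 0.0
while t < t_end:
    dt = 0.4 * dx / (np.abs(hu / h) + np.sqrt(g * h)).max()   # CFL condition
    flux = [rusanov_flux((h[i], hu[i]), (h[i + 1], hu[i + 1]))
            for i in range(nx - 1)]
    for i in range(1, nx - 1):
        dh, dhu = flux[i] - flux[i - 1]
        h[i] -= dt / dx * dh
        hu[i] -= dt / dx * dhu
    t += dt

print("water depth near the dam:", h[95:105].round(3))
```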
Procedia PDF Downloads 181
3011 Reed: An Approach Towards Quickly Bootstrapping Multilingual Acoustic Models
Authors: Bipasha Sen, Aditya Agarwal
Abstract:
A multilingual automatic speech recognition (ASR) system is a single entity capable of transcribing multiple languages that share a common phone space. The performance of such a system is highly dependent on the compatibility of the languages. State-of-the-art speech recognition systems are built using sequential architectures based on recurrent neural networks (RNNs), which limit computational parallelization in training. This poses a significant challenge in terms of the time taken to bootstrap and validate the compatibility of multiple languages when building a robust multilingual system. Complex architectural choices based on self-attention networks have been made to improve parallelization and thereby reduce training time. In this work, we propose Reed, a simple system based on 1D convolutions that uses very short context to improve training time. To improve the performance of our system, we use raw time-domain speech signals directly as input. This enables the convolutional layers to learn feature representations rather than relying on handcrafted features such as MFCCs. We report improvements in training and inference times by factors of at least 4x and 7.4x, respectively, with comparable WERs against standard RNN-based baseline systems on SpeechOcean's multilingual low-resource dataset.
Keywords: convolutional neural networks, language compatibility, low resource languages, multilingual automatic speech recognition
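A hedged sketch of the kind of short-context 1D convolutional acoustic model the abstract describes, written in PyTorch. The channel widths, kernel sizes, strides and the 64-class phone inventory are assumptions for illustration, not Reed's actual configuration.

```python
import torch
import torch.nn as nn

# Short-context 1D convolutional acoustic model over raw waveform, in the
# spirit of the abstract; all layer sizes are illustrative assumptions.
class RawWaveformEncoder(nn.Module):
    def __init__(self, n_phones: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 64, kernel_size=11, stride=5), nn.ReLU(),   # learns filterbank-like features
            nn.Conv1d(64, 128, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv1d(128, 128, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv1d(128, n_phones, kernel_size=1),                 # per-frame phone logits
        )

    def forward(self, wav: torch.Tensor) -> torch.Tensor:
        return self.net(wav)            # (batch, n_phones, frames)

model = RawWaveformEncoder()
wav = torch.randn(2, 1, 16000)          # two one-second clips at 16 kHz
print(model(wav).shape)                 # frame count depends on the strides
```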
Procedia PDF Downloads 123
3010 Design of Multi-Loop Controller for Minimization of Energy Consumption in the Distillation Column
Authors: Vinayambika S. Bhat, S. Shanmuga Priya, I. Thirunavukkarasu, Shreeranga Bhat
Abstract:
An attempt has been made to design a decoupling controller for systems with multiple inputs and multiple outputs and with dead time. The decoupler is designed for a 3×3 chemical process industry plant transfer function with dead time, and a Quantitative Feedback Theory (QFT) based controller is designed for a 2×2 distillation column transfer function. The developed control techniques were simulated using MATLAB/Simulink. The stability of the process was analyzed, together with the effect of various perturbations. Time-domain specifications such as settling time, overshoot and oscillations were analyzed to demonstrate the efficiency of the decoupling method. Load disturbance rejection and its performance were also tested. The QFT control technique was synthesized from the stability and performance specifications, in the presence of uncertainty in the time constant of the plant transfer function, through a sequential loop-shaping technique. Further, the energy efficiency of the distillation column was improved by proper tuning of the controller. Distillation accounts for about 3% of the world's total energy consumption, so a suitable control technique is very important from an economic point of view. Real-time implementation of the process is in progress in our laboratory.
Keywords: distillation, energy, MIMO process, time delay, robust stability
Procedia PDF Downloads 414
3009 Optimization of Sequential Thermophilic Bio-Hydrogen/Methane Production from Mono-Ethylene Glycol via Anaerobic Digestion: Impact of Inoculum to Substrate Ratio and N/P Ratio
Authors: Ahmed Elreedy, Ahmed Tawfik
Abstract:
This investigation aims to assess the effect of the inoculum-to-substrate ratio (ISR) and the nitrogen-to-phosphorus balance on simultaneous biohydrogen and methane production from the anaerobic decomposition of mono-ethylene glycol (MEG). ISRs in the range of 2.65 to 13.23 gVSS/gCOD were applied, and N/P ratios from 4.6 to 8.5 were tested, both under thermophilic conditions (55°C). The maximum methane and hydrogen yields (MY and HY) of 151.86±10.8 and 22.27±1.1 mL/gCODinitial were recorded at ISRs of 5.29 and 3.78 gVSS/gCOD, respectively. The ammonification process, in terms of net ammonia produced, was found to be ISR- and COD/N-ratio dependent, reaching its peak value of 515.5±31.05 mgNH4-N/L at an ISR of 13.23 gVSS/gCOD and a COD/N ratio of 11.56. The optimum HY was enhanced more than 1.45-fold as the N/P ratio declined from 8.5 to 4.6, whereas the MY improved 1.6-fold as the N/P ratio increased from 4.6 to 5.5, with no significant impact at an N/P ratio of 8.5. The results revealed that methane production was strongly influenced by initial ammonia, compared to initial phosphate. Likewise, ammonia generation deteriorated markedly from 535.25±41.5 to 238.33±17.6 mgNH4-N/L as the N/P ratio increased from 4.6 to 8.5. In the kinetic study, the Modified Gompertz equation was successfully fitted to the experimental outputs (R² > 0.9761).
Keywords: mono-ethylene glycol, biohydrogen and methane, inoculum to substrate ratio, nitrogen to phosphorus balance, ammonification
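The Modified Gompertz equation referred to above is commonly written as H(t) = P*exp(-exp(Rm*e/P*(lam - t) + 1)), with P the ultimate yield, Rm the maximum production rate and lam the lag time. A minimal fitting sketch on synthetic data (not the study's measurements):

```python
import numpy as np
from scipy.optimize import curve_fit

# Modified Gompertz model for cumulative gas production: P is the ultimate
# yield (mL/gCOD), Rm the maximum production rate, lam the lag time.
def modified_gompertz(t, P, Rm, lam):
    return P * np.exp(-np.exp(Rm * np.e / P * (lam - t) + 1.0))

# Synthetic illustration only, not the study's measurements.
t = np.linspace(0.0, 120.0, 25)                        # hours
y = modified_gompertz(t, 150.0, 4.0, 10.0)
y += np.random.default_rng(1).normal(0.0, 3.0, t.size)

(P, Rm, lam), _ = curve_fit(modified_gompertz, t, y, p0=[100.0, 1.0, 5.0])
ss_res = ((y - modified_gompertz(t, P, Rm, lam)) ** 2).sum()
ss_tot = ((y - y.mean()) ** 2).sum()
print(f"P={P:.1f} mL/gCOD, Rm={Rm:.2f} mL/gCOD/h, lag={lam:.1f} h, "
      f"R^2={1 - ss_res / ss_tot:.4f}")
```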
Procedia PDF Downloads 382
3008 The Efficiency Analysis in the Health Sector: Marmara Region
Authors: Hale Kirer Silva Lecuna, Beyza Aydin
Abstract:
Health is one of the main components of human capital and sustainable development, and it is very important for economic growth. Health economics, an indisputable part of economics, has five broad strands: health and development, the financing of health services, economic regulation in health, the allocation of resources, and the efficiency of health services. A well-developed and efficient health sector plays a major role in raising the level of development of countries. The most crucial pillars of the health sector are hospitals, which are divided into public and private. The main purpose of hospitals is to provide efficient services, and the aim is therefore to meet patients' satisfaction by increasing service quality. Health-related studies in Turkey date back to the Ottoman and Seljuk Empires; more recently, Turkey implemented 'Health Sector Transformation Programs' under various titles between 2003 and 2010. Our aim in this paper is to measure how effective these transformation programs were for the health sector, to see how much they increased the efficiency of hospitals over the years, to assess the return on these investments, to comment and make suggestions on the results, and to provide a new reference for the literature. Within this framework, the public and private hospitals in Balıkesir, Bilecik, Bursa, Çanakkale, Edirne, Istanbul, Kırklareli, Kocaeli, Sakarya, Tekirdağ and Yalova are examined using Data Envelopment Analysis (DEA) for the years 2000 to 2019. DEA is a linear-programming-based technique that gives relatively good results in multivariate studies: it estimates an efficiency frontier and compares units against it. Constant returns to scale and variable returns to scale are the two most commonly used DEA models, and both come in input- and output-oriented forms. The number of personnel, number of specialist physicians, number of practitioners, number of beds and number of examinations are used as input variables; the number of surgeries, the in-patient ratio and the crude mortality rate are used as output variables. Eleven hospitals from the Marmara region were included in the study. These hospitals worked effectively in only 7 provinces (Balıkesir, Bilecik, Bursa, Edirne, İstanbul, Kırklareli, Yalova) in 2001, before any transformation program was implemented. After the transformation program was implemented, for example in 2014 and 2016, hospitals in 10 provinces (Balıkesir, Bilecik, Bursa, Çanakkale, Edirne, İstanbul, Kocaeli, Kırklareli, Tekirdağ, Yalova) were found to be effective. In 2015, ineffective results were observed for Sakarya, Tekirdağ and Yalova; however, since these scores are closer to 1 after the transformation program, we can say that the program had positive effects. For Sakarya alone, no effective results were achieved in any year. Overall, the results show that the transformation program had a positive effect on hospital efficiency.
Keywords: data envelopment analysis, efficiency, health sector, Marmara region
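For concreteness, the input-oriented, constant-returns-to-scale (CCR) DEA model solves one linear program per hospital: minimize theta subject to sum_j lam_j*x_ij <= theta*x_io for each input and sum_j lam_j*y_rj >= y_ro for each output. A minimal sketch with invented data for five hospitals, two inputs and one output (not the study's dataset):

```python
import numpy as np
from scipy.optimize import linprog

# Input-oriented CCR DEA: one LP per hospital o. The five hospitals, two
# inputs (beds, specialists) and one output (surgeries) are invented.
X = np.array([[120, 200, 90, 150, 80],        # beds, per hospital
              [15, 30, 10, 25, 12]], float)   # specialist physicians
Y = np.array([[300, 520, 210, 330, 260]], float)  # surgeries

def ccr_efficiency(o):
    m, n = X.shape
    s = Y.shape[0]
    c = np.r_[1.0, np.zeros(n)]               # variables: [theta, lam_1..lam_n]
    A_ub = np.zeros((m + s, 1 + n))
    b_ub = np.zeros(m + s)
    A_ub[:m, 0] = -X[:, o]                    # sum lam_j x_ij - theta x_io <= 0
    A_ub[:m, 1:] = X
    A_ub[m:, 1:] = -Y                         # -sum lam_j y_rj <= -y_ro
    b_ub[m:] = -Y[:, o]
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(None, None)] + [(0.0, None)] * n)
    return res.fun                            # theta* in (0, 1]; 1 = efficient

for o in range(X.shape[1]):
    print(f"hospital {o}: efficiency = {ccr_efficiency(o):.3f}")
```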
Procedia PDF Downloads 130
3007 Development of a Direct Immunoassay for Human Ferritin Using Diffraction-Based Sensing Method
Authors: Joel Ballesteros, Harriet Jane Caleja, Florian Del Mundo, Cherrie Pascual
Abstract:
Diffraction-based sensing was utilized to quantify human ferritin in blood serum, providing an alternative to the label-based immunoassays currently used in clinical diagnostics and research. The diffraction intensity was measured with the diffractive optics technology (dotLab™) system. Two methods were evaluated in this study: a direct immunoassay and a direct sandwich immunoassay. In the direct immunoassay, human ferritin was captured by anti-human-ferritin antibodies immobilized on an avidin-coated sensor, while the direct sandwich immunoassay had an additional step in which a detector antibody binds to the analyte complex. Both methods were repeatable, with coefficients of variation below 15%. The direct sandwich immunoassay had a linear response from 10 to 500 ng/mL, wider than the 100-500 ng/mL of the direct immunoassay. The direct sandwich immunoassay also had a higher calibration sensitivity, with a value of 0.002 Diffractive Intensity (ng mL⁻¹)⁻¹ compared to the 0.004 Diffractive Intensity (ng mL⁻¹)⁻¹ of the direct immunoassay. The limit of detection (LOD) and limit of quantification (LOQ) of the direct immunoassay were found to be 29 ng/mL and 98 ng/mL, respectively, while the direct sandwich immunoassay had an LOD of 2.5 ng/mL and an LOQ of 8.2 ng/mL. In terms of accuracy, the direct immunoassay had a percent recovery of 88.8-93.0% in PBS, while the direct sandwich immunoassay had 94.1-97.2%. Based on these results, the direct sandwich immunoassay is the better diffraction-based immunoassay in terms of accuracy, LOD, LOQ, linear range and sensitivity. The direct sandwich immunoassay was then used to determine human ferritin in blood serum, with the results validated by chemiluminescent magnetic immunoassay (CMIA). The calculated Pearson correlation coefficient was 0.995 and the p-values of the paired-sample t-test were less than 0.5, showing that the results of the direct sandwich immunoassay were comparable to those of CMIA and that it could be utilized as an alternative analytical method.
Keywords: biosensor, diffraction, ferritin, immunoassay
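LOD and LOQ figures of this kind are conventionally obtained from the calibration curve as LOD = 3.3*sigma/S and LOQ = 10*sigma/S, where S is the slope and sigma the residual standard deviation. A minimal sketch with invented calibration points (not the study's data):

```python
import numpy as np

# Calibration-curve route to LOD/LOQ. The standards below are invented
# for illustration only.
conc = np.array([10.0, 50.0, 100.0, 250.0, 500.0])   # ng/mL
signal = np.array([0.05, 0.22, 0.43, 1.02, 2.05])    # diffractive intensity

slope, intercept = np.polyfit(conc, signal, 1)
residuals = signal - (slope * conc + intercept)
sigma = np.sqrt((residuals ** 2).sum() / (conc.size - 2))  # n-2: two fitted parameters

print(f"LOD = {3.3 * sigma / slope:.1f} ng/mL, LOQ = {10 * sigma / slope:.1f} ng/mL")
```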
Procedia PDF Downloads 354
3006 Some Pertinent Issues and Considerations on CBSE
Authors: Anil Kumar Tripathi, Ratneshwer Gupta
Abstract:
All software engineering research and best industry practices aim at providing software products with a high degree of quality and functionality at low cost and in less time. These requirements are addressed by Component-Based Software Engineering (CBSE) as well. CBSE, which deals with software construction by the assembly of components, is a revolutionary extension of software engineering. CBSE must define and describe processes that assure the timely completion of high-quality software systems composed of a variety of pre-built software components. Though these features provide distinct and visible benefits in software design and programming, they also raise some challenging problems. The aim of this work is to summarize the pertinent issues and considerations in CBSE, building an understanding in the form of concepts and observations that may lead to new ways of dealing with the problems and challenges of CBSE.
Keywords: software component, component based software engineering, software process, testing, maintenance
Procedia PDF Downloads 401
3005 Umbrella Reinforcement Learning – A Tool for Hard Problems
Authors: Egor E. Nuzhin, Nikolay V. Brilliantov
Abstract:
We propose an approach for addressing Reinforcement Learning (RL) problems. It combines the idea of umbrella sampling, borrowed from the Monte Carlo techniques of computational physics and chemistry, with optimal control methods, and is realized on the basis of neural networks. This results in a powerful algorithm designed to solve hard RL problems: problems with long-delayed rewards, sticking in state traps, and a lack of terminal states. It outperforms prominent algorithms such as PPO, RND, iLQR and VI, which are among the most efficient for hard problems. The new algorithm deals with a continuous ensemble of agents and an expected return that includes the ensemble entropy. This yields a quick and efficient search for the optimal policy, in terms of the exploration-exploitation trade-off, in the state-action space.
Keywords: umbrella sampling, reinforcement learning, policy gradient, dynamic programming
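The abstract's "expected return that includes the ensemble entropy" is, in its simplest generic form, an entropy-regularized policy gradient. The toy sketch below illustrates only that idea on a two-action bandit; it is not the umbrella-sampling algorithm proposed in the paper.

```python
import torch
import torch.nn as nn

# Generic entropy-regularized policy gradient on a toy two-action bandit;
# an illustration of folding an entropy term into the objective only.
torch.manual_seed(0)
logits = nn.Parameter(torch.zeros(2))      # policy over two actions
opt = torch.optim.Adam([logits], lr=0.1)
rewards = torch.tensor([1.0, 0.0])         # action 0 pays off
beta = 0.01                                # entropy weight

for step in range(200):
    dist = torch.distributions.Categorical(logits=logits)
    actions = dist.sample((64,))           # an "ensemble" of 64 agents
    ret = rewards[actions]                 # their returns
    loss = -(dist.log_prob(actions) * ret).mean() - beta * dist.entropy()
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final action probabilities:",
      torch.softmax(logits, -1).detach().numpy().round(3))
```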
Procedia PDF Downloads 21
3004 Altered TP53 Mutations in de Novo Acute Myeloid Leukemia Patients in Iran
Authors: Naser Shagerdi Esmaeli, Mohsen Hamidpour, Parisa Hasankhani Tehrani
Abstract:
Background: The TP53 mutation is frequently detected in acute myeloid leukemia (AML) patients with a complex karyotype (CK), but the stability of this mutation during the clinical course remains unclear. Materials and Methods: In this study, TP53 mutations were identified in 7% of 500 patients with de novo AML and in 58.8% of patients with CK in Tabriz, Iran. TP53 mutations were closely associated with older age, lower white blood cell (WBC) and platelet counts, the FAB M6 subtype, unfavorable-risk cytogenetics and CK, but negatively associated with NPM1 mutation, FLT3/ITD and DNMT3A mutation. Results: Multivariate analysis demonstrated that TP53 mutation was an independent poor prognostic factor for overall survival and disease-free survival in the total cohort and in the subgroup of patients with CK. A scoring system incorporating TP53 mutation and nine other prognostic factors, including age, WBC count, cytogenetics and gene mutations, into the survival analysis proved very useful for stratifying AML patients. A sequential study of 420 samples showed that TP53 mutations were stable during AML evolution; the mutation was acquired in only 1 of 126 TP53 wild-type patients, in whom therapy-related AML originating from a different clone emerged. Conclusion: TP53 mutations are associated with distinct clinico-biological features and poor prognosis in de novo AML patients and are rather stable during disease progression.
Keywords: acute myeloblastic leukemia, TP53, FLT3/ITD, Iran
Procedia PDF Downloads 107
3003 Weyl Type Theorem and the Fuglede Property
Authors: M. H. M. Rashid
Abstract:
Given a Hilbert space H and the algebra B(H) of bounded linear operators on H, let δAB denote the generalized derivation defined by A and B. The main objective of this article is to study Weyl type theorems for the generalized derivation of pairs (A,B) satisfying the Fuglede property.
Keywords: Fuglede property, Weyl's theorem, generalized derivation, Aluthge transform
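For reference, the standard textbook definitions behind the abstract's notation (not spelled out in the abstract itself) are:

```latex
% Generalized derivation induced by A, B \in B(H):
\[
  \delta_{A,B}(X) = AX - XB, \qquad X \in B(H),
\]
% and the pair (A,B) satisfies the Fuglede (Fuglede--Putnam) property if
\[
  AX = XB \ \Longrightarrow\ A^{*}X = XB^{*} \qquad \text{for every } X \in B(H).
\]
```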
Procedia PDF Downloads 128
3002 Experimental Studies of the Reverse Load-Unloading Effect on the Mechanical, Linear and Nonlinear Elastic Properties of n-AMg6/C60 Nanocomposite
Authors: Aleksandr I. Korobov, Natalia V. Shirgina, Aleksey I. Kokshaiskiy, Vyacheslav M. Prokhorov
Abstract:
The paper presents the results of an experimental study of the effect of reverse mechanical loading-unloading on the mechanical, linear and nonlinear elastic properties of the n-AMg6/C60 nanocomposite. Samples were obtained by grinding polycrystalline AMg6 alloy with 0.3 wt % of C60 fullerite in a planetary mill under an argon atmosphere. The resulting product consisted of 200-500 μm agglomerates of nanoparticles; the X-ray coherent scattering (CSL) method showed that the average nanoparticle size is 40-60 nm. The resulting preform was extruded at high temperature, with the C60 fullerite modifications inhibiting recrystallization at the grain boundaries. For the n-AMg6/C60 samples, the loading curve was measured: the dependence of the mechanical stress σ on the strain ε of the sample under a multi-cycle loading-unloading process up to failure. A hysteresis dependence σ = σ(ε) was observed, and an insignificant residual strain ε < 0.005 was recorded. At σ ≈ 500 MPa and ε ≈ 0.025 the sample failed, and the fracture was brittle. Microhardness was measured before and after failure; the loading-unloading process was found to increase the microhardness. The effect of reversible mechanical stress on the linear and nonlinear elastic properties of the n-AMg6/C60 nanocomposite was studied experimentally by an ultrasonic method on the automated Ritec RAM-5000 SNAP SYSTEM. The velocities of the longitudinal and shear bulk waves were measured with the pulse method, and all second-order elastic coefficients and their dependence on the reversible mechanical stress applied to the sample were calculated. The nonlinear elastic properties under reversible loading-unloading were studied with the spectral method: at arbitrary strains of the sample (up to failure), the dependence of the amplitude of the second longitudinal acoustic harmonic at 2f = 10 MHz on the amplitude of the first harmonic at f = 5 MHz was measured. Based on these measurements, the values of the nonlinear acoustic parameter of the n-AMg6/C60 sample at different mechanical stresses were determined. The results can be used in solid-state physics and materials science, and for the development of new techniques for the nondestructive testing of structural materials using nonlinear acoustic diagnostics. This study was supported by the Russian Science Foundation (project №14-22-00042).
Keywords: nanocomposite, generation of acoustic harmonics, nonlinear acoustic parameter, hysteresis
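In second-harmonic measurements of this kind, the nonlinear acoustic parameter is conventionally estimated from A2 = (beta*k^2*x/8)*A1^2, i.e. beta = 8*A2/(k^2*x*A1^2), with k the wavenumber of the fundamental and x the propagation distance. A sketch with placeholder numbers (not the measured values from the study):

```python
import numpy as np

# Conventional second-harmonic estimate of the nonlinear acoustic parameter.
# All numbers are placeholders, not the measured values from the study.
f = 5e6                    # fundamental frequency, Hz (first harmonic)
c = 6000.0                 # assumed longitudinal sound speed, m/s
x = 0.02                   # propagation distance, m
A1, A2 = 1.0e-9, 2.5e-12   # displacement amplitudes at f and 2f, m

k = 2.0 * np.pi * f / c    # wavenumber of the fundamental
beta = 8.0 * A2 / (k**2 * x * A1**2)
print(f"nonlinear acoustic parameter beta = {beta:.1f}")
```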
Procedia PDF Downloads 151
3001 An Investigation of Simultaneous Mixed Emotion Experiences for Self and Other in Early Childhood
Authors: Esther Burkitt, Dawn Watling
Abstract:
Background: Four types of patterns of simultaneous mixed emotions have been identified in middle childhood, adolescence and adulthood. The present study applied an analogue emotion scale, which permits the intensity of opposite-valence emotions to be measured over time rather than through bipolar ratings, together with an exhaustive coding scheme, to investigate whether children in early childhood experience the previously identified and additional types of mixed emotional experience. Methods: To explore the presence of simultaneous mixed emotion experiences in early childhood, 112 children (59 girls) aged 5 years 1 month to 7 years 2 months (mean = 6 years 1 month; SD = 10 months) were recruited across the UK. They were allocated, by alternation by gender on class lists, to one of two conditions, hearing vignettes describing mixed-emotion events concerning either an age- and gender-matched protagonist or themselves (other, n = 57; self, n = 55). Findings: New flexuous, vertical and other types of experience were identified alongside the sequential, prevalent, highly parallel and inverse types identified in older populations. Conclusions: The analogue emotion scale uncovered a broader range of simultaneous mixed emotional experiences than previously identified. The value of exploring the utility of these findings for emotion assessments is discussed, along with suggestions for exploring the impact of educational and cultural influences on children's mixed emotional experiences.
Keywords: childhood, emotion, graphing, self
Procedia PDF Downloads 34
3000 Evaluation of the Internal Quality for Pineapple Based on the Spectroscopy Approach and Neural Network
Authors: Nonlapun Meenil, Pisitpong Intarapong, Thitima Wongsheree, Pranchalee Samanpiboon
Abstract:
In Thailand, once pineapples are harvested, they must be classified into two classes based on their sweetness: sweet and unsweet. This paper studies and develops the assessment of the internal quality of pineapples using a low-cost compact spectroscopy sensor together with a neural network (NN). Batavia pineapples were used in the experiments, yielding 100 samples. The extracted juice of each sample was used to determine the soluble solid content (SSC), labeling the samples into sweet and unsweet classes. For the experimental equipment, the sensor cover was specifically designed to hold the sensor and light source so as to read the reflectance at a depth of 5 mm into the pineapple flesh. Using the spectroscopy sensor, visible and near-infrared reflectance (Vis-NIR) data were collected. The NN was used to classify the pineapple classes. Before the classification step, the preprocessing methods of class balancing, data shuffling and standardization were applied. The reflectance values at 510 nm and 900 nm from the middle parts of the pineapples were used as features for the NN. With a Sequential model and the ReLU activation function, 100% accuracy on the training set and 76.67% accuracy on the test set were achieved. These results show that a low-cost compact spectroscopy sensor can achieve favorable results in classifying the sweetness of the two classes of pineapples.
Keywords: neural network, pineapple, soluble solid content, spectroscopy
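A minimal Keras sketch of the described pipeline: two reflectance features (510 nm and 900 nm), standardization, and a small Sequential network with ReLU. The synthetic data, layer sizes and training settings are illustrative assumptions, not the paper's.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from tensorflow import keras

# Synthetic stand-in for the reflectance dataset: 100 samples, two features.
rng = np.random.default_rng(0)
X = rng.uniform(0.1, 0.9, size=(100, 2))                 # reflectance at 510/900 nm
y = (X[:, 0] + 0.5 * X[:, 1] > 0.8).astype("float32")    # proxy for sweet/unsweet by SSC

X = StandardScaler().fit_transform(X)                    # standardization step
model = keras.Sequential([
    keras.layers.Input(shape=(2,)),
    keras.layers.Dense(16, activation="relu"),
    keras.layers.Dense(8, activation="relu"),
    keras.layers.Dense(1, activation="sigmoid"),         # sweet vs unsweet
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
hist = model.fit(X, y, epochs=50, validation_split=0.3, verbose=0)
print("validation accuracy:", hist.history["val_accuracy"][-1])
```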
Procedia PDF Downloads 76
2999 Calculation of Pressure-Varying Langmuir and Brunauer-Emmett-Teller Isotherm Adsorption Parameters
Authors: Trevor C. Brown, David J. Miron
Abstract:
Gas-solid physical adsorption methods are central to the characterization and optimization of effective surface area, pore size and porosity for applications such as heterogeneous catalysis and gas separation and storage. Properties such as adsorption uptake, capacity, equilibrium constants and Gibbs free energy depend on the composition and structure of both the gas and the adsorbent. However, challenges remain in accurately calculating these properties from experimental data. Gas adsorption experiments involve measuring the amounts of gas adsorbed over a range of pressures under isothermal conditions. Constant-parameter models, such as the Langmuir and Brunauer-Emmett-Teller (BET) theories, are used to extract information on adsorbate and adsorbent properties from the isotherm data. These models typically do not provide accurate interpretations across the full range of pressures and temperatures. The Langmuir adsorption isotherm is a simple approximation for modelling equilibrium adsorption data and has been effective in estimating surface areas and catalytic rate laws, particularly for high-surface-area solids. The Langmuir isotherm assumes the systematic filling of identical adsorption sites up to monolayer coverage. The BET model is based on the Langmuir isotherm and allows for the formation of multiple layers; these additional layers do not interact with the first layer, and their energetics equal those of the adsorbate as a bulk liquid. The BET method is widely used to measure the specific surface area of materials. Both the Langmuir and BET models assume that the affinity of the gas is identical for all adsorption sites, so the calculated adsorbent uptake at the monolayer and the equilibrium constant are independent of coverage and pressure. Accurate representations of adsorption data have been achieved by extending the Langmuir and BET models to include pressure-varying uptake capacities and equilibrium constants. These parameters are determined using a novel regression technique called flexible least squares for time-varying linear regression. For isothermal adsorption, the adsorption parameters are assumed to vary slowly and smoothly with increasing pressure. The flexible least squares for pressure-varying linear regression (FLS-PVLR) approach assumes two distinct types of discrepancy term, dynamic and measurement, for all parameters in the linear equation used to simulate the data. Dynamic terms account for pressure variation in successive parameter vectors, and measurement terms account for differences between observed and theoretically predicted outcomes via linear regression. The resulting pressure-varying parameters are optimized by minimizing both the dynamic and the measurement residual squared errors. This methodology has been validated by simulating adsorption data for n-butane and isobutane on activated carbon at 298 K, 323 K and 348 K, and for nitrogen on mesoporous alumina at 77 K, with pressure-varying Langmuir and BET adsorption parameters (equilibrium constants and uptake capacities). This modelling provides information on the adsorbent (accessible surface area and micropore volume), the adsorbate (molecular areas and volumes) and the thermodynamics (Gibbs free energies) of the adsorption sites.
Keywords: Langmuir adsorption isotherm, BET adsorption isotherm, pressure-varying adsorption parameters, adsorbate and adsorbent properties and energetics
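As a baseline for the pressure-varying treatment described above, the constant-parameter Langmuir isotherm q = q_m*K*P/(1 + K*P) can be fitted in a few lines; the FLS-PVLR method generalizes exactly this kind of fit by letting q_m and K drift smoothly with pressure. Synthetic data only:

```python
import numpy as np
from scipy.optimize import curve_fit

# Constant-parameter Langmuir isotherm: the fixed-parameter baseline that
# the paper's pressure-varying (FLS-PVLR) treatment generalizes.
def langmuir(P, q_m, K):
    return q_m * K * P / (1.0 + K * P)

P = np.linspace(1.0, 100.0, 20)                        # kPa
q = langmuir(P, 5.0, 0.08)                             # mmol/g, synthetic
q += np.random.default_rng(2).normal(0.0, 0.05, P.size)

(q_m, K), _ = curve_fit(langmuir, P, q, p0=[1.0, 0.01])
print(f"monolayer capacity q_m = {q_m:.2f} mmol/g, K = {K:.3f} kPa^-1")
```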
Procedia PDF Downloads 234
2998 Intelligent Rescheduling Trains for Air Pollution Management
Authors: Kainat Affrin, P. Reshma, G. Narendra Kumar
Abstract:
Timetable optimization is needed for the real-time rescheduling and routing of trains. Trains are scheduled in parallel with road transport vehicles to the same destinations. Because the number of trains is restricted by single-track operation, customers usually opt for road transport instead, and air pollution increases as the density of road vehicles grows. Use of an alternative mode of transport such as rail helps to reduce air pollution. This paper aims at attracting passengers to rail transport through the proper rescheduling of trains, using a hybrid of a stop-skip algorithm and an iterative convex programming algorithm. Bidirectional rescheduling of trains is achieved on a single track with dynamic dual times and varying stops. Introducing more trains attracts customers to rail transport more frequently, thereby decreasing pollution. The results are simulated using Network Simulator (NS-2).
Keywords: air pollution, AODV, re-scheduling, WSNs
Procedia PDF Downloads 361
2997 Object-Oriented Program Comprehension by Identification of Software Components and Their Connexions
Authors: Abdelhak-Djamel Seriai, Selim Kebir, Allaoua Chaoui
Abstract:
During the last decades, object-oriented programming has been used massively to build large-scale systems. However, the evolution and maintenance of such systems become laborious tasks because object-oriented programming does not offer a precise view of the functional building blocks of the system; this lack is caused by the fine granularity of classes and objects. In this paper, we use a post-object-oriented technology, namely software components, to propose an approach based on identifying the functional building blocks of an object-oriented system by analyzing its source code. These functional blocks are specified as software components, and the result is a multi-layer component-based software architecture.
Keywords: software comprehension, software component, object oriented, software architecture, reverse engineering
Procedia PDF Downloads 412
2996 Prediction of Terrorist Activities in Nigeria using Bayesian Neural Network with Heterogeneous Transfer Functions
Authors: Tayo P. Ogundunmade, Adedayo A. Adepoju
Abstract:
Terrorist attacks in liberal democracies bring about several damaging results: for example, they sabotage public support for the governments they target, disturb the peace of a protected environment underwritten by the state, and prevent individuals from contributing to the advancement of the country. Hence, seeking techniques to understand the factors involved in terrorism, and how to deal with those factors in order to stop or reduce terrorist activities, is a top priority for the government of every country. This research aims to develop an efficient deep-learning-based predictive model for the prediction of future terrorist activities in Nigeria, addressing the low prediction accuracy of existing solution methods. The proposed AI-based predictive model, as a counterterrorism tool, will be useful to governments and law enforcement agencies for protecting the lives of individuals in society and improving the quality of life in general. A Heterogeneous Bayesian Neural Network (HETBNN) model was derived with a Gaussian (normal) error distribution. Three primary transfer functions (HOTTFs), as well as two derived transfer functions (HETTFs) arising from the convolution of the HOTTFs, were used, namely: the symmetric saturated linear transfer function (SATLINS), the hyperbolic tangent transfer function (TANH), the hyperbolic tangent sigmoid transfer function (TANSIG), the symmetric saturated linear and hyperbolic tangent transfer function (SATLINS-TANH), and the symmetric saturated linear and hyperbolic tangent sigmoid transfer function (SATLINS-TANSIG). Data on terrorist activities in Nigeria, gathered through questionnaires for the purpose of this study, were used. Mean Square Error (MSE), Mean Absolute Error (MAE) and test error were the forecast evaluation criteria. The results showed that the HETTFs performed better in terms of prediction, and the factors associated with terrorist activities in Nigeria were determined. The proposed predictive deep-learning-based model will be useful to governments and law enforcement agencies as an effective counterterrorism mechanism: to understand the parameters of terrorism and to design strategies to deal with terrorism before an incident actually happens and potentially causes the loss of precious lives. The proposed AI-based model will reduce the likelihood of terrorist activities and is particularly helpful for security agencies in predicting future terrorist activities.
Keywords: activation functions, Bayesian neural network, mean square error, test error, terrorism
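The three primary transfer functions are standard (MATLAB-style) activations; how the derived HETTFs combine them is described only as a "convolution" in the abstract, so the sketch below shows a discrete convolution of the two curves as one plausible reading. All of this is illustrative, not the paper's implementation.

```python
import numpy as np

# MATLAB-style definitions: satlins clips to [-1, 1]; tansig is numerically
# equal to tanh. The derived SATLINS-TANSIG curve is computed here as a
# discrete convolution of the two primaries, one plausible reading of the
# abstract's wording.
def satlins(x):
    return np.clip(x, -1.0, 1.0)                   # symmetric saturated linear

def tansig(x):
    return 2.0 / (1.0 + np.exp(-2.0 * x)) - 1.0    # = tanh(x)

x = np.linspace(-3.0, 3.0, 301)
dx = x[1] - x[0]
satlins_tansig = np.convolve(satlins(x), tansig(x), mode="same") * dx

print("satlins:", satlins(x[::75]).round(3))
print("tansig: ", tansig(x[::75]).round(3))
print("derived:", satlins_tansig[::75].round(3))
```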
Procedia PDF Downloads 165
2995 Planktivorous Fish Schooling Responses to Current at Natural and Artificial Reefs
Authors: Matthew Holland, Jason Everett, Martin Cox, Iain Suthers
Abstract:
High spatial-resolution distribution of planktivorous reef fish can reveal behavioural adaptations to optimise the balance between feeding success and predator avoidance. We used a multi-beam echosounder to record bathymetry and the three-dimensional distribution of fish schools associated with natural and artificial reefs. We utilised generalised linear models to assess the distribution, orientation, and aggregation of fish schools relative to the structure, vertical relief, and currents. At artificial reefs, fish schooled more closely to the structure and demonstrated a preference for the windward side, particularly when exposed to strong currents. Similarly, at natural reefs fish demonstrated a preference for windward aspects of bathymetry, particularly when associated with high vertical relief. Our findings suggest that under conditions with stronger current velocity, fish can exercise their preference to remain close to structure for predator avoidance, while still receiving an adequate supply of zooplankton delivered by the current. Similarly, when current velocity is low, fish tend to disperse for better access to zooplankton. As artificial reefs are generally deployed with the goal of creating productivity rather than simply attracting fish from elsewhere, we advise that future artificial reefs be designed as semi-linear arrays perpendicular to the prevailing current, with multiple tall towers. This will facilitate the conversion of dispersed zooplankton into energy for higher trophic levels, enhancing reef productivity and fisheries.
Keywords: artificial reef, current, forage fish, multi-beam, planktivorous fish, reef fish, schooling
Procedia PDF Downloads 158