Search results for: compound model
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 17660

12350 Applying the Regression Technique for Prediction of the Acute Heart Attack

Authors: Paria Soleimani, Arezoo Neshati

Abstract:

Myocardial infarction is one of the leading causes of death in the world, and some of these deaths occur even before the patient reaches the hospital. Myocardial infarction occurs as a result of impaired blood supply. Because most of these deaths are due to coronary artery disease, awareness of the warning signs of a heart attack is essential. Some heart attacks are sudden and intense, but most start slowly, with mild pain or discomfort, so early detection and successful treatment of these symptoms is vital for saving patients. A system designed to assist physicians in the early diagnosis of acute heart attacks is therefore clearly important and useful. The purpose of this study is to determine how well a predictive model performs when based only on patient-reportable clinical history factors, without diagnostic tests or physical exams. Such a prediction model could be applied outside the hospital setting to give patients accurate advice and encourage them to seek care in appropriate situations. For this purpose, data were collected on 711 heart patients in Iranian hospitals, and 28 clinical factors that can be reported by patients were studied. Three logistic regression models were built on the basis of these 28 features to predict the risk of heart attack. The best-performing logistic regression model had a C-index of 0.955 and an accuracy of 94.9%. Severe chest pain, back pain, cold sweats, shortness of breath, nausea, and vomiting were selected as the main features.
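
A minimal sketch of this kind of symptom-based risk model, assuming simulated data (the feature set and the generating process below are illustrative assumptions, not the study's 711-patient dataset; only the cohort size and the C-index-as-AUC readout follow the abstract):

```python
# Hedged sketch: logistic regression on patient-reportable symptoms,
# evaluated with the C-index (equivalent to ROC AUC for binary outcomes).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 711  # same cohort size as the study; the data themselves are simulated
# Six binary symptom indicators (severe chest pain, back pain, cold sweats,
# shortness of breath, nausea, vomiting) -- an assumed subset of the 28 factors
X = rng.integers(0, 2, size=(n, 6)).astype(float)
# Simulated outcome: risk rises with chest pain and shortness of breath
logits = -1.5 + 2.0 * X[:, 0] + 1.2 * X[:, 3] + rng.normal(0, 0.5, n)
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
c_index = roc_auc_score(y, model.predict_proba(X)[:, 1])  # C-index == AUC
print(round(c_index, 3))
```

Feature selection across several candidate models, as the study describes, would then compare such fits on held-out data rather than on the training set.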

Keywords: Coronary heart disease, Acute heart attacks, Prediction, Logistic regression

Procedia PDF Downloads 447
12349 Human Talent Management: A Research Agenda

Authors: Mehraj Udin Ganaie, Mohammad Israrul Haque

Abstract:

The purpose of this paper is to enhance the theoretical and conceptual understanding of human talent management (HTM). With the help of extensive review of existing literature, we proposed a conceptual framework and few propositions to elucidate the influential relationship of competency focus, talent pooling, talent investment, and talenting orientation with value creation of a firm. It is believed that human talent management model will enhance the understanding of talent management orientation among practitioners and academicians. Practitioners will be able to align HTM orientation with business strategy wisely to yield better value for business (Shareholders, Employees, Owners, Customers, agents, and other stakeholders). Future research directions will explain how human talent management researchers will work on the integration of relationship and contribute towards the maturity of talent management by further exploring and validating the model empirically to enhance the body of knowledge.

Keywords: talent management orientation, competency focus, talent pooling, talent investment, talenting orientation

Procedia PDF Downloads 382
12348 Research on Measuring Operational Risk in Commercial Banks Based on Internal Control

Authors: Baobao Li

Abstract:

Operational risk covers all operations of commercial banks and is closely related to a bank’s internal control. In commercial banks’ management practice, however, internal control is usually kept separate from operational risk measurement. With the increase in operational risk events in recent years, operational risk is receiving more and more attention from regulators and bank managements. This paper first discusses the relationship between internal control and operational risk management and uses a CVaR-POT model to measure operational risk; it then puts forward a modified measurement method that uses operational risk assessment results to adjust the measurement results of the CVaR-POT model, and analyses the necessity and rationality of this method. The method takes the influence of internal control into consideration, improves the accuracy and effectiveness of operational risk measurement, and saves economic capital for commercial banks, avoiding the drawbacks of applying mainstream models one-sidedly.
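
The POT (peaks-over-threshold) core of such a model can be sketched as follows: fit a generalized Pareto distribution (GPD) to loss exceedances over a high threshold and compute VaR/CVaR from the standard POT formulas. The simulated heavy-tailed losses and the 90th-percentile threshold below are illustrative assumptions, not the paper's bank data, and the internal-control adjustment step is omitted:

```python
# Hedged sketch of the POT step of a CVaR-POT operational risk measurement.
import numpy as np
from scipy.stats import genpareto

rng = np.random.default_rng(1)
losses = rng.pareto(3.0, size=5000) * 10.0    # simulated heavy-tailed losses
u = np.quantile(losses, 0.90)                 # threshold choice is a modelling judgement
excess = losses[losses > u] - u
xi, _, beta = genpareto.fit(excess, floc=0)   # GPD shape xi, scale beta

q = 0.999
n, n_u = len(losses), len(excess)
# Standard POT estimators for the q-quantile (VaR) and expected shortfall (CVaR)
var_q = u + (beta / xi) * (((n / n_u) * (1 - q)) ** (-xi) - 1)
cvar_q = var_q / (1 - xi) + (beta - xi * u) / (1 - xi)  # valid for xi < 1
print(var_q < cvar_q)
```

The paper's modification would then scale or shift these raw measurements using internal-control assessment scores.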

Keywords: commercial banks, internal control, operational risk, risk measurement

Procedia PDF Downloads 396
12347 New Requirements of the Fifth Dimension of War: Planning of Cyber Operation Capabilities

Authors: Mehmet Kargaci

Abstract:

The transformation of technology and strategy has been the main driver of the evolution of war. In addition to the land, maritime, air, and space domains, cyberspace has become the fifth domain with the emergence of the internet. The current security environment is more complex and uncertain than ever before, and warfare has evolved from conventional to irregular, asymmetric, and hybrid forms. Weak actors such as terrorist organizations and other non-state actors have increasingly conducted cyber-attacks against strong adversaries, while states have developed cyber capabilities to defend critical infrastructure against cyber threats. Cyber warfare will be key in the future security environment. Although what to do has found its place in operational plans, how to do it has been lacking or ignored with regard to cyber defense and attack. The purpose of this article is to put forward a model for how to employ cyber capabilities in a conventional war. First, cyber operation capabilities are discussed. Second, the necessities of the cyberspace environment are set out and a model is developed for planning an operation using cyber operation capabilities. Finally, an assessment of the applicability of cyber operation capabilities and recommendations are presented.

Keywords: cyber war, cyber threats, cyber operation capabilities, operation planning

Procedia PDF Downloads 333
12346 A Mixing Matrix Estimation Algorithm for Speech Signals under the Under-Determined Blind Source Separation Model

Authors: Jing Wu, Wei Lv, Yibing Li, Yuanfan You

Abstract:

The separation of speech signals has become a research hotspot in the field of signal processing in recent years, with many applications in teleconferencing, hearing aids, machine speech recognition, and so on. The sounds received are usually noisy, and identifying the sounds of interest and obtaining clear signals in such an environment is a problem worth exploring: the problem of blind source separation. This paper focuses on under-determined blind source separation (UBSS), for which sparse component analysis is generally used. The method is divided into two parts: first, a clustering algorithm estimates the mixing matrix from the observed signals; then the signals are separated based on the known mixing matrix. This paper studies the mixing matrix estimation problem and proposes an improved algorithm to estimate the mixing matrix for speech signals in the UBSS model. The traditional potential-function algorithm is not accurate for mixing matrix estimation, especially at low signal-to-noise ratio (SNR). In response, this paper applies an improved potential function method, which not only avoids the influence of insufficient prior information in traditional clustering algorithms but also improves the estimation accuracy of the mixing matrix. Mixing four speech signals into two channels is taken as an example. Simulation results show that the approach not only improves estimation accuracy but also applies to any mixing matrix.
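
The clustering step can be sketched as follows: when sources are sparse, two-channel observations scatter along the directions of the mixing matrix columns, so estimating those directions recovers the mixing matrix up to scale and permutation. The angular potential function below (a kernel density over observation angles) is a simplified stand-in for the paper's improved potential method, and the synthetic sparse sources, kernel width, and grid resolution are all illustrative assumptions:

```python
# Hedged sketch: estimate a 2x4 mixing matrix from sparse mixtures by
# locating peaks of an angular potential (density) function.
import numpy as np

rng = np.random.default_rng(2)
angles_true = np.array([0.3, 0.9, 1.6, 2.3])               # column directions (radians)
A = np.vstack([np.cos(angles_true), np.sin(angles_true)])  # 2 x 4 mixing matrix

# Sparse sources: at most a few sources active at each instant
S = rng.laplace(size=(4, 5000)) * (rng.random((4, 5000)) < 0.15)
X = A @ S                                                  # 2 x N observations

# Each nonzero observation votes for its direction, folded to [0, pi)
x = X[:, np.abs(X).sum(axis=0) > 1e-6]
theta = np.mod(np.arctan2(x[1], x[0]), np.pi)
grid = np.linspace(0, np.pi, 720, endpoint=False)
sigma = 0.05                                               # kernel width (assumed)
pot = np.exp(-((theta[None, :] - grid[:, None]) ** 2) / (2 * sigma**2)).sum(axis=1)

# The four highest local maxima of the potential give the column angles
idx = [i for i in range(len(grid))
       if pot[i] > pot[i - 1] and pot[i] > pot[(i + 1) % len(grid)]]
idx = sorted(idx, key=lambda i: pot[i], reverse=True)[:4]
est = np.sort(grid[idx])
A_hat = np.vstack([np.cos(est), np.sin(est)])              # estimated mixing matrix
print(np.round(est, 2))
```

A full pipeline would follow this with source recovery (e.g. shortest-path or l1 minimization) using `A_hat`.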

Keywords: DBSCAN, potential function, speech signal, the UBSS model

Procedia PDF Downloads 133
12345 Beta-Cyclodextrin Inclusion Complexes for Antifungal Food Packaging Applications

Authors: Cristina Munoz-Shuguli, Francisco Rodriguez, Julio Bruna, M. Jose Galotto, Abel Guarda

Abstract:

Microbial contamination of fruits due to the presence of fungi is the most important cause of their deterioration and loss. The development of active food packaging materials with antifungal properties has been proposed as an innovative strategy to prevent this problem. Natural compounds such as essential oils or their derivatives, also called volatile compounds (VC), can be incorporated into food packaging materials to control fungal growth during fruit packaging. However, if a VC is incorporated directly into the packaging material, it is released very quickly because of its high volatility. For this reason, forming inclusion complexes by encapsulating VC in beta-cyclodextrin (β-CD) and incorporating them into packaging materials is an alternative that maintains an antifungal atmosphere around the packaged fruit for longer. In this context, the aim of this work was to develop inclusion complexes based on β-CD and VC (β-CD:VC) for later use in the development of antifungal food packaging materials. β-CD:VC inclusion complexes were obtained at two different molar ratios, 2:1 and 1:1, through the co-precipitation method. The entrapment efficiency of β-CD:VC, as well as the release of the antifungal compound to the headspace from inclusion complexes exposed to different relative humidities (25, 50, and 97%), was determined by gas chromatography (GC). Thermal and antimicrobial properties of β-CD:VC were also determined, through thermogravimetric analysis (TGA) and antifungal assays against Botrytis cinerea, respectively. GC results showed that β-CD:VC 2:1 had a higher entrapment efficiency than β-CD:VC 1:1, with values of 75.5 ± 3.71% and 59.6 ± 1.51%, respectively, probably because during the synthesis of β-CD:VC 1:1 there was less molecular space for the movement of VC molecules. Furthermore, the release of VC from β-CD:VC was directly related to relative humidity: a larger amount of VC was released when the inclusion complexes were exposed to high humidity, possibly owing to interactions between water molecules and the hydrophilic β-CD wall. On the other hand, the better thermal stability of VC in the inclusion complexes confirmed its effective encapsulation in β-CD. Finally, antimicrobial assays showed that the inclusion complexes had high antifungal activity at very low concentrations. The results of this work therefore suggest β-CD:VC inclusion complexes as potential candidates for the development of antifungal fruit packaging materials whose activity is relative-humidity dependent.

Keywords: Botrytis cinerea, fruit packaging, headspace release, volatile compounds

Procedia PDF Downloads 120
12344 Additional Method for the Purification of Lanthanide-Labeled Peptide Compounds Pre-Purified by Weak Cation Exchange Cartridge

Authors: K. Eryilmaz, G. Mercanoglu

Abstract:

Aim: Purification of the final product, the last step in the synthesis of lanthanide-labeled peptide compounds, can be accomplished by different methods. The two most commonly used are C18 solid phase extraction (SPE) and weak cation exchanger cartridge elution. The SPE C18 method yields a high-purity final product, while elution from the weak cation exchanger cartridge is pH dependent and ineffective in removing colloidal impurities. The aim of this work is to develop an additional purification method for lanthanide-labeled peptide compounds in cases where the desired radionuclidic and radiochemical purity of the final product cannot be achieved because of a pH problem or colloidal impurity. Material and Methods: For colloidal impurity formation, 3 mL of water for injection (WFI) was added to 30 mCi of 177LuCl3 solution and allowed to stand for 1 day. 177Lu-DOTATATE was synthesized using an EZAG ML-EAZY module (10 mCi/mL). After synthesis, the final product was mixed with the colloidal impurity solution (total volume: 13 mL, total activity: 40 mCi). The resulting mixture was trapped on an SPE-C18 cartridge. The cartridge was washed with 10 mL saline to remove impurities to the waste vial. The product trapped on the cartridge was eluted with 2 mL of 50% ethanol and collected into the final product vial by passing through a 0.22 μm filter. The final product was diluted with 10 mL of saline. Radiochemical purity before and after purification was analysed by an HPLC method (column: ACE C18-100A, 3 µm, 150 x 3.0 mm; mobile phase: water-acetonitrile-trifluoroacetic acid (75:25:1); flow rate: 0.6 mL/min). Results: UV and radioactivity detector results in the HPLC analysis showed that colloidal impurities were completely removed from the 177Lu-DOTATATE/colloidal impurity mixture by the purification method. Conclusion: The improved purification method can be used as an additional step to remove impurities that may result from lanthanide-peptide syntheses in which the weak cation exchange purification technique is used as the last step. Purification of the final product and GMP compliance (final aseptic filtration and sterile disposable system components) are its two major advantages.

Keywords: lanthanide, peptide, labeling, purification, radionuclide, radiopharmaceutical, synthesis

Procedia PDF Downloads 159
12343 An ANN-Based Predictive Model for Diagnosis and Forecasting of Hypertension

Authors: Obe Olumide Olayinka, Victor Balanica, Eugen Neagoe

Abstract:

The effects of hypertension are often lethal; thus its early detection and prevention are very important for everybody. In this paper, a neural network (NN) model was developed and trained on a dataset of hypertension causative parameters in order to forecast the likelihood of hypertension occurring in patients. Our research goal was to analyze the potential of the presented NN to predict, over a period of time, the risk of hypertension, or the risk of developing it, for patients whether or not they are currently hypertensive. The results of the analysis for a given patient can support doctors in taking proactive measures to avert the occurrence of hypertension, such as recommendations regarding patient behavior that lower the hypertension risk. Moreover, the paper envisages three example scenarios: determining the age at which the patient becomes hypertensive, i.e. the threshold hypertensive age; analyzing what happens when the threshold hypertensive age is fixed and the patient's weight is varied; and setting the ideal weight for the patient and analyzing what happens to the threshold hypertensive age.
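
The first scenario can be sketched as follows: train a small feed-forward network on causative parameters, then sweep age to find the first age at which the predicted hypertension risk crosses 0.5 (the "threshold hypertensive age"). The two features (age, weight), the simulated dataset, and the fixed 90 kg weight are illustrative assumptions, not the paper's data or architecture:

```python
# Hedged sketch: threshold-hypertensive-age scenario with a small NN.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(3)
n = 2000
age = rng.uniform(20, 80, n)
weight = rng.uniform(50, 120, n)
# Simulated ground truth: hypertension risk grows with age and weight
p = 1 / (1 + np.exp(-(0.08 * (age - 55) + 0.03 * (weight - 85))))
y = (rng.random(n) < p).astype(int)
X = np.column_stack([age, weight])

clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000,
                    random_state=0).fit(X, y)

# Scenario: fix weight at 90 kg and sweep age to find where risk crosses 0.5
ages = np.arange(20.0, 81.0)
risk = clf.predict_proba(np.column_stack([ages, np.full_like(ages, 90.0)]))[:, 1]
threshold_age = ages[np.argmax(risk >= 0.5)]
print(threshold_age)
```

The weight-variation and ideal-weight scenarios are the same sweep with the roles of age and weight exchanged.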

Keywords: neural network, hypertension, data set, training set, supervised learning

Procedia PDF Downloads 390
12342 The Impact of Job Meaningfulness on the Relationships between Job Autonomy, Supportive Organizational Climate, and Job Satisfaction

Authors: Sashank Nyapati, Laura Lorente-Prieto, Maria Peiro

Abstract:

The general objective of this study is to analyse the mediating role of meaningfulness in the relationships between job autonomy and job satisfaction and between supportive organizational climate and job satisfaction. Theories such as the Job Characteristics Model, Conservation of Resources theory, and the Job Demands-Resources theory were used as the theoretical framework. Data were obtained from the 5th European Working Conditions Survey (EWCS); the sample was composed of 1005 and 1000 workers from Spain and Portugal, respectively. The analysis was conducted using the SOBEL macro for SPSS (a multiple regression mediation model) developed by Preacher and Hayes in 2003. Results indicated that meaningfulness partially mediates both the job autonomy-job satisfaction and the supportive organizational climate-job satisfaction relationships. The effects are large enough to draw substantial conclusions, especially that job meaningfulness plays an essential, if indirect, role in the amount of satisfaction that one experiences at work. Some theoretical and practical implications are discussed.
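
The Sobel logic behind such a macro can be sketched as two regressions plus a z statistic for the indirect effect: regress the mediator on the predictor (path a), regress the outcome on both (path b), and test a*b. The variable names follow the study's model (autonomy, meaningfulness, satisfaction), but the data and effect sizes below are simulated assumptions:

```python
# Hedged sketch of a simple mediation (Sobel) test.
import numpy as np

def ols(X, y):
    """Return coefficients and their standard errors for y = X @ beta."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(y) - X.shape[1])
    se = np.sqrt(np.diag(sigma2 * np.linalg.inv(X.T @ X)))
    return beta, se

rng = np.random.default_rng(4)
n = 1000
autonomy = rng.normal(size=n)
meaningfulness = 0.5 * autonomy + rng.normal(size=n)                 # path a
satisfaction = 0.3 * autonomy + 0.4 * meaningfulness + rng.normal(size=n)

ones = np.ones(n)
a_coefs, a_se = ols(np.column_stack([ones, autonomy]), meaningfulness)
b_coefs, b_se = ols(np.column_stack([ones, autonomy, meaningfulness]),
                    satisfaction)
a, sa = a_coefs[1], a_se[1]          # effect of autonomy on the mediator
b, sb = b_coefs[2], b_se[2]          # effect of the mediator on satisfaction
sobel_z = (a * b) / np.sqrt(b**2 * sa**2 + a**2 * sb**2)
print(abs(sobel_z) > 1.96)           # indirect effect significant at 5%?
```

Partial mediation, as reported in the study, corresponds to a significant a*b together with a still-significant direct path.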

Keywords: meaningfulness, job autonomy, supportive organizational climate, job satisfaction

Procedia PDF Downloads 534
12341 Internet Purchases in European Union Countries: Multiple Linear Regression Approach

Authors: Ksenija Dumičić, Anita Čeh Časni, Irena Palić

Abstract:

This paper examines the influence of economic and Information and Communication Technology (ICT) development on the recent increase in Internet purchases by individuals across European Union member states. After a growing trend in Internet purchases in the EU27 was observed, all-possible-regressions analysis was applied using nine independent variables for 2011. Two linear regression models were then studied in detail. The simple linear regression analysis confirmed the research hypothesis that Internet purchases in the analysed EU countries are positively correlated with the statistically significant variable Gross Domestic Product per capita (GDPpc). A multiple linear regression model with four regressors reflecting ICT development level also indicates that ICT development is crucial for explaining Internet purchases by individuals, confirming the research hypothesis.
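
The two models can be sketched as ordinary least squares fits: a simple regression of Internet purchases on GDP per capita, and a multiple regression on four ICT indicators. The country-level values below are simulated assumptions, not the EU27 data:

```python
# Hedged sketch: simple vs. multiple linear regression with R^2 readout.
import numpy as np

rng = np.random.default_rng(5)
n = 27                                  # number of EU27 member states
gdp_pc = rng.uniform(10, 90, n)         # GDP per capita (index, assumed)
ict = rng.uniform(0, 1, (n, 4))         # four ICT development indicators (assumed)
purchases = (5 + 0.4 * gdp_pc
             + 20 * ict @ np.array([0.5, 0.3, 0.1, 0.1])
             + rng.normal(0, 2, n))     # % of individuals buying online

def fit_r2(X, y):
    """OLS with intercept; returns coefficients and R^2."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return beta, 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))

beta_simple, r2_simple = fit_r2(gdp_pc, purchases)   # purchases ~ GDPpc
beta_multi, r2_multi = fit_r2(ict, purchases)        # purchases ~ 4 ICT regressors
print(round(r2_simple, 2), round(r2_multi, 2))
```

An all-possible-regressions search, as in the paper, would fit every subset of the nine candidate regressors and rank them by a criterion such as adjusted R^2.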

Keywords: European Union, Internet purchases, multiple linear regression model, outlier

Procedia PDF Downloads 300
12340 Proposal for a Framework for Teaching Entrepreneurship and Innovation Using the Methods and Current Methodologies

Authors: Marcelo T. Okano, Jaqueline C. Bueno, Oduvaldo Vendrametto, Osmildo S. Santos, Marcelo E. Fernandes, Heide Landi

Abstract:

Developing countries are increasingly finding that entrepreneurship and innovation are the ways to speed up their development and to initiate or encourage technological development. Educational institutions such as universities, colleges, and colleges of technology have two main roles in this process: to guide and train entrepreneurs, and to provide technological knowledge and encourage innovation. This completes the triple helix model of innovation, linking universities, government, and industry. But the teaching of entrepreneurship and innovation cannot rely only on the traditional model of blackboard, chalk, and classroom. New methods and methodologies such as Canvas, elevator pitching, and design thinking require students to get involved and to experience simulations of business, expressing their ideas and discussing them. The objective of this research project is to identify the main methods and methodologies used for teaching entrepreneurship and innovation, to propose a framework, to test it, and to conduct a case study. To achieve this objective, a survey of the literature was first carried out on entrepreneurship and innovation, business modeling, business planning, the Canvas business model, design thinking, and related subjects. Secondly, the framework for teaching entrepreneurship and innovation was developed on the basis of this bibliographic research. Thirdly, the framework was tested in a higher-education IT management class for a semester. Finally, the results are detailed in a case study of an IT management course. As important results, the framework improved the students' understanding of business administration, allowing them to manage their own ventures; methods such as Canvas and the business plan helped students to plan and shape their ideas and businesses; pitching to entrepreneurs and investors brought market reality to the students; and prototyping allowed the company groups to develop their projects. The proposed framework thus allows entrepreneurship and innovation education to leave the classroom, bringing the reality of business roundtables to the university with the support of investors and real entrepreneurs.

Keywords: entrepreneurship, innovation, Canvas, traditional model

Procedia PDF Downloads 575
12339 Unveiling the Domino Effect: Barriers and Strategies in the Adoption of Telecommuting as a Post-Pandemic Workspace

Authors: Divnesh Lingam, Devi Rengamani Seenivasagam, Prashant Chand, Caleb Yee, John Chief, Rajeshkannan Ananthanarayanan

Abstract:

Amidst the COVID-19 outbreak in 2020, remote work emerged as a vital business continuity measure. This study investigates telecommuting as a modern work model, exploring its benefits and obstacles, and uses Interpretive Structural Modelling to uncover the barriers hindering telecommuting adoption. A validated set of thirteen barriers is examined through departmental surveys, revealing their interrelationships. The resulting model highlights interactions and dependencies, forming a foundational framework: by addressing the dominant barriers, a domino effect on the subservient barriers is demonstrated. This research invites further exploration and proposes management strategies for successful telecommuting adoption and for reshaping the traditional workspace.
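
The core ISM computation can be sketched as follows: from a binary matrix of "barrier i influences barrier j" judgements, build the reachability matrix by transitive closure, then partition barriers into levels, where the top level collects the most dependent (driven) barriers and the bottom level the dominant drivers. The 4-barrier chain below is an illustrative assumption (the study uses thirteen barriers):

```python
# Hedged sketch of Interpretive Structural Modelling (ISM) level partitioning.
import numpy as np

# adj[i, j] = True means barrier i influences barrier j (diagonal: self-reachable)
adj = np.array([[1, 1, 0, 0],
                [0, 1, 1, 0],
                [0, 0, 1, 1],
                [0, 0, 0, 1]], dtype=bool)

# Reachability matrix via transitive closure (Warshall's algorithm)
reach = adj.copy()
for k in range(len(reach)):
    reach |= np.outer(reach[:, k], reach[k, :])

# Level partition: a barrier is top-level when its reachability set (within the
# remaining barriers) is contained in its antecedent set
levels, remaining = [], set(range(len(reach)))
while remaining:
    level = {i for i in remaining
             if {j for j in remaining if reach[i, j]}
             <= {j for j in remaining if reach[j, i]}}
    levels.append(sorted(level))
    remaining -= level
print(levels)  # top level first; last level holds the dominant driver barriers
```

In the chain 0 → 1 → 2 → 3, barrier 0 is the dominant driver whose removal cascades through the rest, which is the "domino effect" the abstract describes.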

Keywords: barriers, interpretive structural modelling, post-pandemic, telecommuting

Procedia PDF Downloads 91
12338 Stochastic Optimization of a Vendor-Managed Inventory Problem in a Two-Echelon Supply Chain

Authors: Bita Payami-Shabestari, Dariush Eslami

Abstract:

The purpose of this paper is to develop a multi-product economic production quantity model under a vendor-managed inventory policy with restrictions including limited warehouse space, budget, number of orders, average shortage time, and maximum permissible shortage. Since the costs cannot be predicted with certainty, the data are assumed to behave under an uncertain environment. The problem is first formulated as a bi-objective multi-product economic production quantity model and is then solved with three multi-objective decision-making (MODM) methods. The three methods are compared on the optimal values of the two objective functions and on central processing unit (CPU) time, using statistical analysis and multi-attribute decision-making (MADM). The results demonstrate that the augmented ε-constraint method performs better than global criteria and goal programming in terms of the optimal values of the two objective functions and CPU time. A sensitivity analysis illustrates the effect of parameter variations on the optimal solution. The contribution of this research is the use of random cost data in developing a multi-product economic production quantity model under a vendor-managed inventory policy with several constraints.
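
The ε-constraint idea at the heart of the best-performing method can be sketched on a toy problem: optimize one objective while bounding the other, and sweep the bound to trace the Pareto front. The two-product LP below is an illustrative assumption, not the paper's EPQ model, and the "augmentation" term (a small weight on the constrained objective to avoid weakly efficient points) is omitted for brevity:

```python
# Hedged sketch of the epsilon-constraint step for a bi-objective problem.
from scipy.optimize import linprog

# Decision variables: production quantities q1, q2 (assumed toy model)
# Objective 1 (maximize profit): 3*q1 + 5*q2  -> linprog minimizes -(3 q1 + 5 q2)
# Objective 2 (minimize warehouse space used): 2*q1 + 4*q2, bounded by eps
# Shared constraint: q1 + q2 <= 100 (budget), q >= 0
pareto = []
for eps in [100, 200, 300]:
    res = linprog(c=[-3, -5],
                  A_ub=[[2, 4],      # space usage <= eps
                        [1, 1]],     # budget
                  b_ub=[eps, 100],
                  bounds=[(0, None), (0, None)])
    pareto.append((round(-res.fun, 1), eps))   # (best profit, space bound)
print(pareto)
```

Each (profit, eps) pair is one point of the Pareto front; comparing such solutions across methods and CPU times mirrors the paper's MADM comparison.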

Keywords: economic production quantity, random cost, supply chain management, vendor-managed inventory

Procedia PDF Downloads 127
12337 Immobilizing Quorum Sensing Inhibitors on Biomaterial Surfaces

Authors: Aditi Taunk, George Iskander, Kitty Ka Kit Ho, Mark Willcox, Naresh Kumar

Abstract:

Bacterial infections of biomaterial implants and medical devices account for 60-70% of all hospital-acquired infections (HAIs). Treatment or removal of these infected devices results in high patient mortality and morbidity along with increased hospital expenses, and the lack of effective strategies combined with the rapid development of antibacterial resistance has made device-related infections extremely difficult to treat. In this project we have therefore developed biomaterial surfaces using antibacterial compounds that inhibit biofilm formation by interfering with the bacterial communication mechanism known as quorum sensing (QS). This study focuses on the covalent attachment of potent QS-inhibiting compounds, halogenated furanones (FUs) and dihydropyrrol-2-ones (DHPs), onto glass surfaces. The FUs were attached by photoactivating azide groups on the surface, and the acid-functionalized DHPs were immobilized on amine surfaces via EDC/NHS coupling. The modified surfaces were tested in vitro against pathogenic organisms such as Staphylococcus aureus and Pseudomonas aeruginosa using confocal laser scanning microscopy (CLSM). Successful attachment of the compounds to the substrates was confirmed by X-ray photoelectron spectroscopy (XPS) and contact angle measurements. The antibacterial efficacy was assessed, and a significant reduction in bacterial adhesion and biofilm formation was observed on the FU- and DHP-coated surfaces. The activity of the coating depended on the type of substituent present on the phenyl group of the DHP compound: for example, the ortho-fluorophenyl DHP (DHP-2) exhibited a 79% reduction in bacterial adhesion against S. aureus, and the para-fluorophenyl DHP (DHP-3) exhibited a 70% reduction against P. aeruginosa. The results were comparable to those for DHP-coated surfaces prepared in an earlier study via the Michael addition reaction. FUs and DHPs thus retain their in vitro antibacterial efficacy after covalent attachment via azide chemistry, a promising strategy for developing efficient antibacterial biomaterials to reduce device-related infections.

Keywords: antibacterial biomaterials, biomedical device-related infections, quorum sensing, surface functionalization

Procedia PDF Downloads 267
12336 Two-Dimensional Modeling of Spent Nuclear Fuel Using FLUENT

Authors: Imane Khalil, Quinn Pratt

Abstract:

In a nuclear reactor, an array of fuel rods containing stacked uranium dioxide pellets clad with zircaloy is the heat source for a thermodynamic cycle that converts heat to electricity. After the fuel is used in a nuclear reactor, the assemblies are stored underwater in a spent nuclear fuel pool at the nuclear power plant while heat generation and radioactive decay rates decrease, before being placed in packages for dry storage or transportation. A computational model of a Boiling Water Reactor spent fuel assembly is built in FLUENT, the computational fluid dynamics package. Heat transfer simulations were performed on the two-dimensional 9x9 spent fuel assembly to predict the maximum cladding temperature for different inputs to the FLUENT model, and uncertainty quantification is used to predict the heat transfer and the maximum temperature profile inside the assembly.

Keywords: spent nuclear fuel, conduction, heat transfer, uncertainty quantification

Procedia PDF Downloads 218
12335 The Principle of Methodological Rationality and Security of Organisations

Authors: Jan Franciszek Jacko

Abstract:

This investigation presents the principle of methodological rationality in decision making and discusses the impact of methodologically rational or irrational decisions by an organisation's members on its security. The study formulates and partially justifies some research hypotheses regarding this impact. A thought experiment is used, following Max Weber's ideal types method: two idealised situations ("models") are compared. In Model A, all decision-makers follow methodologically rational decision-making procedures; in Model B, these agents follow methodologically irrational decision-making practices. Analysing and comparing the two models allows the formulation of research hypotheses regarding the impact of methodologically rational and irrational attitudes of an organisation's members on its security. In addition to this method, phenomenological analyses of rationality and irrationality are applied.

Keywords: methodological rationality, rational decisions, security of organisations, philosophy of economics

Procedia PDF Downloads 138
12334 The Influence of Air Temperature Controls in Estimation of Air Temperature over Homogeneous Terrain

Authors: Fariza Yunus, Jasmee Jaafar, Zamalia Mahmud, Nurul Nisa’ Khairul Azmi, Nursalleh K. Chang

Abstract:

Variation of air temperature from one place to another is caused by air temperature controls. In general, the most important control of air temperature is elevation. Another significant independent variable in estimating air temperature is the location of the meteorological stations, and distance to the coastline and land use type also contribute significant variations in air temperature. On the other hand, over homogeneous terrain, direct interpolation of discrete air temperature points works well for estimating air temperature values in unsampled areas, the estimation being based solely on the discrete points. However, this study shows that air temperature controls also play significant roles in estimating air temperature over the homogeneous terrain of Peninsular Malaysia. An Inverse Distance Weighting (IDW) interpolation technique was adopted to generate continuous air temperature data. The study compared two different datasets: observed mean monthly data T, and the estimation error T–T’, where T’ is the value estimated from a multiple regression model. The multiple regression model considered eight independent variables, namely elevation, latitude, longitude, distance to the coastline, and four land use types (water bodies, forest, agriculture, and built-up areas), to represent the role of air temperature controls. Cross-validation analysis was conducted to review the accuracy of the estimated values. The final results show that estimation based on T–T’ produced lower errors for mean monthly air temperature over homogeneous terrain in Peninsular Malaysia.
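
The IDW step can be sketched directly from its definition: each unsampled point takes a weighted average of station values, with weights proportional to inverse distance raised to a power. The station coordinates, temperatures, and the power parameter p = 2 below are illustrative assumptions:

```python
# Hedged sketch of Inverse Distance Weighting (IDW) interpolation.
import numpy as np

def idw(stations, values, targets, p=2.0):
    """Estimate values at targets as inverse-distance-weighted station means."""
    d = np.linalg.norm(targets[:, None, :] - stations[None, :, :], axis=2)
    d = np.maximum(d, 1e-12)            # avoid division by zero at a station
    w = 1.0 / d**p
    return (w * values).sum(axis=1) / w.sum(axis=1)

stations = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # station coordinates
temps = np.array([26.0, 28.0, 24.0])                       # observed T (deg C)
est = idw(stations, temps, np.array([[0.5, 0.5], [0.0, 0.0]]))
print(np.round(est, 2))
```

In the study's better-performing variant the same interpolation is applied to the residuals T-T', and the regression prediction T' is added back at each target point.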

Keywords: air temperature control, interpolation analysis, Peninsular Malaysia, regression model, air temperature

Procedia PDF Downloads 372
12333 Piezo-Extracted Model Based Chloride/ Carbonation Induced Corrosion Assessment in Reinforced Concrete Structures

Authors: Ashok Gupta, V. Talakokula, S. Bhalla

Abstract:

Rebar corrosion is one of the main causes of damage and premature failure of reinforced concrete (RC) structures worldwide, causing enormous costs for inspection, maintenance, restoration, and replacement. Early detection of corrosion and timely remedial action on the affected portion can therefore facilitate optimum utilization of the structure, imparting longevity to it. The recent advent of the electro-mechanical impedance (EMI) technique using piezo (PZT) sensors for structural health monitoring (SHM) has provided maintenance engineers with a new paradigm for diagnosing the onset of damage at the incipient stage itself. This paper presents a model-based approach for corrosion assessment based on the equivalent parameters extracted from the impedance spectrum of the concrete-rebar system using the EMI technique via PZT sensors.

Keywords: impedance, electro-mechanical, stiffness, mass, damping, equivalent parameters

Procedia PDF Downloads 542
12332 Identification of Key Parameters for Benchmarking of Combined Cycle Power Plants Retrofit

Authors: S. Sabzchi Asl, N. Tahouni, M. H. Panjeshahi

Abstract:

Benchmarking a process with respect to energy consumption, without carrying out a full retrofit study, can save both engineering time and money. The first step towards this goal is to develop a conceptual-mathematical model that can easily be applied to a group of similar processes. In this research, we aimed to identify a set of key parameters for a model intended for benchmarking combined cycle power plants. For this purpose, three similar combined cycle power plants were studied. The results showed that ambient temperature, pressure, and relative humidity, the number of HRSG evaporator pressure levels, and relative power in part-load operation are the main key parameters. The relationships between these parameters and the power produced by the gas and steam turbines, gas turbine and plant efficiency, and the temperature and mass flow rate of the stack flue gas were also investigated.

Keywords: combined cycle power plant, energy benchmarking, modelling, retrofit

Procedia PDF Downloads 304
12331 Forthcoming Big Data on Smart Buildings and Cities: An Experimental Study on Correlations among Urban Data

Authors: Yu-Mi Song, Sung-Ah Kim, Dongyoun Shin

Abstract:

Cities are complex systems of diverse and inter-tangled activities. These activities and their complex interrelationships create diverse urban phenomena, which in turn considerably influence the lives of citizens. This research aimed to develop a method to reveal the causes and effects among diverse urban elements, in order to enable a better understanding of urban activities and, from that, better urban planning strategies. Specifically, this study was conducted to solve a data-recommendation problem found on a Korean public data homepage. First, a correlation analysis was conducted to find the correlations among randomly chosen urban datasets. Then, based on the results of that analysis, a weighted network linking each urban dataset to related ones was provided to users. It is expected that the data weights thereby obtained will provide insights into cities and show how diverse urban activities influence each other and induce feedback.
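The described pipeline, pairwise correlation followed by a weighted data network, can be sketched in a few lines. The example below assumes the urban datasets are aligned time series; the dataset names, figures, and the 0.8 threshold are all invented for illustration.

```python
# Minimal sketch of the described pipeline: compute pairwise Pearson
# correlations among datasets and keep strong links as weighted edges
# of a data network. All data below are invented.

from itertools import combinations

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

datasets = {  # illustrative monthly figures
    "bus_ridership":  [100, 120, 130, 150, 170, 160],
    "air_quality_pm": [80, 76, 70, 66, 60, 62],
    "retail_sales":   [10, 12, 13, 15, 17, 16],
}

# Weighted edges: a pair is linked when |r| exceeds a chosen threshold.
edges = {}
for a, b in combinations(datasets, 2):
    r = pearson(datasets[a], datasets[b])
    if abs(r) >= 0.8:
        edges[(a, b)] = round(r, 3)

print(edges)
```

Recommending the neighbours of a dataset in this weighted network, ranked by |r|, is one plausible way to realize the data-recommendation service mentioned above.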

Keywords: big data, machine learning, ontology model, urban data model

Procedia PDF Downloads 416
12330 Magnetic Navigation in Underwater Networks

Authors: Kumar Divyendra

Abstract:

Underwater Sensor Networks (UWSNs) have wide applications in areas such as water quality monitoring and marine wildlife management. A typical UWSN consists of a set of sensors deployed randomly underwater which communicate with each other using acoustic links; RF communication does not work underwater, and GPS is likewise unavailable. Additionally, Autonomous Underwater Vehicles (AUVs) are deployed to collect data from special nodes called Cluster Heads (CHs). These CHs aggregate data from their neighboring nodes and forward them to the AUVs over optical links when an AUV is in range, which reduces the number of hops covered by data packets and conserves energy. We consider a three-dimensional model of the UWSN. Nodes are initially deployed randomly underwater; each attaches itself to the surface with a rod and can only move upwards or downwards using a pump-and-bladder mechanism. We use graph theory concepts to maximize the coverage volume while ensuring that every node maintains connectivity with at least one surface node. We treat the surface nodes as landmarks, and each node finds its hop distance from every surface node. We treat these hop distances as coordinates and use them for AUV navigation: an AUV intending to move closer to a node with given coordinates moves hop by hop through the nodes that are closest to it in terms of these coordinates. In the absence of GPS, multiple approaches have been proposed, such as Inertial Navigation Systems (INS), Doppler Velocity Logs (DVL), and computer vision-based navigation. Each has its own drawbacks: INS accumulates error with time, and vision techniques require prior information about the environment. We propose a method that uses the earth's magnetic field values for navigation and combines it with other methods that simultaneously increase the coverage volume of the UWSN.
The AUVs are fitted with magnetometers that measure the magnetic intensity (I), horizontal inclination (H), and declination (D). The International Geomagnetic Reference Field (IGRF) is a mathematical model of the earth's magnetic field, which provides the field values for geographical coordinates on earth. Researchers have developed an inverse deep learning model that takes the magnetic field values and predicts the location coordinates, and we make use of this model within our work. We combine it with the hop-by-hop movement described earlier so that the AUVs move in a sequence that trains the deep learning predictor as quickly and precisely as possible. We run simulations in MATLAB to demonstrate the effectiveness of our model with respect to other methods described in the literature.
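The hop-distance coordinate scheme described above can be sketched with breadth-first search. The toy topology below is invented for illustration: each node's coordinate vector is its hop count to every surface landmark, and an AUV greedily steps to the neighbour whose vector is closest (in Manhattan distance) to the target's.

```python
# Sketch of hop-distance "coordinates" for AUV navigation, assuming the
# UWSN is given as an adjacency list. Topology is invented for illustration.

from collections import deque

def hop_distances(graph, source):
    """BFS hop counts from `source` to every reachable node."""
    dist = {source: 0}
    q = deque([source])
    while q:
        u = q.popleft()
        for v in graph[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def coordinates(graph, landmarks):
    """Each node's coordinate: tuple of hop counts to the surface landmarks."""
    tables = [hop_distances(graph, s) for s in landmarks]
    return {n: tuple(t[n] for t in tables) for n in graph}

def next_hop(graph, coords, current, target):
    """Greedy hop-by-hop step toward the target's coordinate vector."""
    def manhattan(n):
        return sum(abs(a - b) for a, b in zip(coords[n], coords[target]))
    return min(graph[current], key=manhattan)

# Illustrative chain S1 - A - B - C - S2, with S1 and S2 as surface nodes.
graph = {
    "S1": ["A"], "S2": ["C"],
    "A": ["S1", "B"], "B": ["A", "C"], "C": ["B", "S2"],
}
coords = coordinates(graph, ["S1", "S2"])
print(coords["B"])                       # B's hop distances to (S1, S2)
print(next_hop(graph, coords, "A", "C")) # greedy step from A toward C
```

The full method in the abstract would refine these coarse hop coordinates with magnetometer readings and the IGRF-based deep learning predictor; the sketch only shows the graph-theoretic backbone.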

Keywords: clustering, deep learning, network backbone, parallel computing

Procedia PDF Downloads 97
12329 A Conceptual Model of Preparing School Counseling Students as Related Service Providers in the Transition Process

Authors: LaRon A. Scott, Donna M. Gibson

Abstract:

Data indicate that counselor education programs in the United States do not adequately prepare their students to serve students with disabilities or to provide counseling as a related service. There is a need to train more school counselors to provide related services to students with disabilities; in particular, school counselors participate in Individualized Education Program (IEP) and transition planning meetings for students with disabilities, where important academic, mental health, and post-secondary education decisions are made. While school counselors' input is perceived as very important to the process, they may not have the knowledge or training in this area to feel confident offering the required input in these meetings. Using a conceptual research design, a model for preparing school counseling students as related service providers and effective supports for addressing transition for students with disabilities was developed as a component of this research. The authors developed the Collaborative Model of Preparing School Counseling Students as Related Service Providers to Students with Disabilities, based on a conceptual framework that integrates Social Cognitive Career Theory (SCCT) and evidence-based practices grounded in Self-Determination Theory (SDT) for providing related and transition services and planning with students with disabilities. The authors conclude that with five overarching competencies, (1) knowledge and understanding of disabilities, (2) knowledge and expertise in group counseling of students with disabilities, (3) knowledge and experience in specific related service components, (4) knowledge and experience in evidence-based counseling interventions, and (5) knowledge and experience in evidence-based transition and career planning services, school counselors can enter the field with the expertise needed to adequately serve all students.
Other examples and strategies are suggested, and recommendations are included for preparation programs seeking to integrate a model that prepares school counselors to implement evidence-based transition strategies in supporting students with disabilities.

Keywords: transition education, social cognitive career theory, self-determination, counseling

Procedia PDF Downloads 242
12328 Communication and Management of Incidental Pathology in a Cohort of 1,214 Consecutive Appendicectomies

Authors: Matheesha Herath, Ned Kinnear, Bridget Heijkoop, Eliza Bramwell, Alannah Frazetto, Amy Noll, Prajay Patel, Derek Hennessey, Greg Otto, Christopher Dobbins, Tarik Sammour, James Moore

Abstract:

Background: Important incidental pathology requiring further action is commonly found during appendicectomy, both macro- and microscopically. It is unknown whether the acute surgical unit (ASU) model affects the management and disclosure of these findings. Methods: An ASU model was introduced at our institution on 01/08/2012. In this retrospective cohort study, all patients undergoing appendicectomy 2.5 years before (traditional group) or after (ASU group) this date were compared. The primary outcomes were rates of appropriate management of the incidental findings and communication of the findings to the patient and to their general practitioner (GP). Results: 1,214 patients underwent emergency appendicectomy; 465 in the traditional group and 749 in the ASU group. 80 (6.6%) patients (25 and 55 in each respective period) had important incidental findings. There were 24 patients with benign polyps, 15 with neuro-endocrine tumour, 11 with endometriosis, 8 with pelvic inflammatory disease, 8 with Enterobius vermicularis infection, 7 with low grade mucinous cystadenoma, 3 with inflammatory bowel disease, 2 with diverticulitis, 2 with tubo-ovarian mass, 1 with secondary appendiceal malignancy and none with primary appendiceal adenocarcinoma. One patient had dual pathologies. There was no difference between the traditional and ASU groups with regard to communication of the findings to the patient (p=0.44) or their GP (p=0.27), and there was no difference in the rates of appropriate management (p=0.21). Conclusions: The introduction of an ASU model did not change rates of surgeon-to-patient and surgeon-to-GP communication, nor did it affect rates of appropriate management of important incidental pathology found during appendicectomy.

Keywords: acute care surgery, appendicitis, appendicectomy, incidental

Procedia PDF Downloads 142
12327 Multi-Scale Modelling of the Cerebral Lymphatic System and Its Failure

Authors: Alexandra K. Diem, Giles Richardson, Roxana O. Carare, Neil W. Bressloff

Abstract:

Alzheimer's disease (AD) is the most common form of dementia and, although it has been researched for over 100 years, there is still no cure or preventive medication. Its onset and progression are closely related to the accumulation of the neuronal metabolite Aβ. This raises the question of how metabolites and waste products are eliminated from the brain, as the brain does not have a traditional lymphatic system. In recent years the rapid uptake of Aβ into cerebral artery walls and its clearance along those arteries towards the lymph nodes in the neck has been suggested and confirmed in mouse studies, which has led to the hypothesis that interstitial fluid (ISF), in the basement membranes in the walls of cerebral arteries, provides the pathways for the lymphatic drainage of Aβ. This mechanism, however, requires a net reverse flow of ISF inside the blood vessel wall relative to the blood flow, and the driving forces for such a mechanism remain unknown. While possible driving mechanisms have been studied using mathematical models in the past, a mechanism for net reverse flow has not yet been discovered. Here, we aim to address the question of the driving force of this reverse lymphatic drainage of Aβ (also called perivascular drainage) using multi-scale numerical and analytical modelling. The numerical simulation software COMSOL Multiphysics 4.4 is used to develop a fluid-structure interaction model of a cerebral artery, which models blood flow and displacements in the artery wall due to blood pressure changes. An analytical model of a layer of basement membrane inside the wall governs the flow of ISF and, therefore, solute drainage, based on the pressure changes and wall displacements obtained from the cerebral artery model. The findings suggest that the components of the basement membrane play an active role in facilitating a reverse flow, and that stiffening of the artery wall with age is a major risk factor for the impairment of brain lymphatics.
Additionally, our model supports the hypothesis of a close association between cerebrovascular diseases and the failure of perivascular drainage.

Keywords: Alzheimer's disease, artery wall mechanics, cerebral blood flow, cerebral lymphatics

Procedia PDF Downloads 523
12326 Unsteady Flow Simulations for Microchannel Design and Its Fabrication for Nanoparticle Synthesis

Authors: Mrinalini Amritkar, Disha Patil, Swapna Kulkarni, Sukratu Barve, Suresh Gosavi

Abstract:

Micro-mixers play an important role in lab-on-a-chip applications and micro total analysis systems, where the correct level of mixing must be achieved for any given process. The mixing process can be classified as active or passive according to the use of external energy. The microfluidics literature reports that most work has been done on models of steady laminar flow; the study of unsteady laminar flow, however, is an active area of research at present. Among its wide applications, we consider nanoparticle synthesis in micro-mixers. In this work, we have developed an unsteady flow model to study the mixing performance of a passive micro-mixer for the reactants used in such synthesis. The model is developed in the Finite Volume Method (FVM)-based software OpenFOAM and tested by carrying out simulations at a Reynolds number (Re) of 0.5. Mixing performance of the micro-mixer is investigated using simulated concentration values of the mixed species across the width of the micro-mixer and calculating the variance across a line profile. Experimental validation is done by passing dyes through a Y-shaped micro-mixer fabricated from polydimethylsiloxane (PDMS) and comparing the measured variances with the simulated ones. Gold nanoparticles are later synthesized in the micro-mixer and collected at two different times, yielding significantly different size distributions. These times match the time scales over which reactant concentrations vary, as obtained from the simulations. Our simulations could thus be used to create design aids for passive micro-mixers used in nanoparticle synthesis.
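The mixing metric described above, variance of concentration across a line profile, is straightforward to compute once the profile is sampled. The profiles below are invented for illustration: perfect mixing gives zero variance, while fully segregated Y-inlet streams give the maximum.

```python
# Sketch of the mixing metric: variance of the normalised concentration
# sampled across the channel width at a line profile. Profiles are invented.

def mixing_variance(profile):
    """Population variance of a sampled concentration profile."""
    n = len(profile)
    mean = sum(profile) / n
    return sum((c - mean) ** 2 for c in profile) / n

# Normalised concentration (0..1) sampled at six points across the channel.
segregated = [1.0, 1.0, 1.0, 0.0, 0.0, 0.0]   # unmixed Y-inlet streams
partial    = [0.9, 0.7, 0.6, 0.4, 0.3, 0.1]   # some diffusive mixing
mixed      = [0.5, 0.5, 0.5, 0.5, 0.5, 0.5]   # fully mixed

for name, p in [("segregated", segregated), ("partial", partial), ("mixed", mixed)]:
    print(name, round(mixing_variance(p), 4))
```

Comparing such variances between simulated profiles and dye measurements is one simple way to carry out the experimental validation mentioned in the abstract.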

Keywords: Lab-on-chip, LOC, micro-mixer, OpenFOAM, PDMS

Procedia PDF Downloads 159
12325 Efficient DNN Training on Heterogeneous Clusters with Pipeline Parallelism

Authors: Lizhi Ma, Dan Liu

Abstract:

Pipeline parallelism has been widely used to accelerate distributed deep learning, alleviating GPU memory bottlenecks and ensuring that models can be trained and deployed smoothly under limited graphics memory. However, in highly heterogeneous distributed clusters, traditional model partitioning methods cannot achieve load balancing, and overlapping communication with computation is also a big challenge. In this paper, HePipe is proposed, an efficient pipeline parallel training method for highly heterogeneous clusters. Based on the characteristics of neural network pipeline training tasks and oriented to a 2-level heterogeneous cluster computing topology, a training method using 2-level stage division and partitioning of the neural network model is designed to improve parallelism. Additionally, a multi-forward 1F1B scheduling strategy is designed to accelerate the training time of each stage by executing computation units in advance, maximizing the overlap between forward propagation communication and backward propagation computation. Finally, a dynamic recomputation strategy based on predicting each task's memory requirement is proposed to improve the fit between tasks and memory, which increases cluster throughput and addresses the memory shortfalls caused by memory differences in heterogeneous clusters. The empirical results show that HePipe improves training speed by 1.6×–2.2× over existing asynchronous pipeline baselines.

Keywords: pipeline parallelism, heterogeneous cluster, model training, 2-level stage partitioning

Procedia PDF Downloads 16
12324 Portfolio Optimization with Reward-Risk Ratio Measure Based on the Mean Absolute Deviation

Authors: Wlodzimierz Ogryczak, Michal Przyluski, Tomasz Sliwinski

Abstract:

In portfolio selection problems, the reward-risk ratio criterion is optimized to search for a risky portfolio offering the maximum increase of mean return per unit increase of the risk measure, when compared to risk-free investments. In the classical model, following Markowitz, the risk is measured by the variance, thus representing Sharpe ratio optimization and leading to quadratic optimization problems. Several Linear Programming (LP) computable risk measures have been introduced and applied in portfolio optimization. In particular, the Mean Absolute Deviation (MAD) measure has been widely recognized. Reward-risk ratio optimization with the MAD measure can be transformed into an LP formulation with the number of constraints proportional to the number of scenarios and the number of variables proportional to the total of the number of scenarios and the number of instruments. This may lead to LP models with a huge number of variables and constraints for real-life financial decisions based on several thousands of scenarios, decreasing their computational efficiency and making them hardly solvable by general LP tools. We show that the computational efficiency can then be dramatically improved by an alternative model based on minimizing the inverse risk-reward ratio and by taking advantage of LP duality. In the introduced LP model the number of structural constraints is proportional to the number of instruments, so the efficiency of the simplex method is not seriously affected by the number of scenarios, thereby guaranteeing easy solvability. Moreover, we show that under a natural restriction on the target value, the MAD risk-reward ratio optimization is consistent with second order stochastic dominance rules.
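The quantity being optimized can be made concrete by evaluating it for a fixed portfolio. The sketch below computes the MAD-based reward-risk ratio under equally likely scenarios; the paper optimizes this ratio via LP, whereas here we only evaluate it, and the return data and risk-free rate are invented.

```python
# Sketch: MAD reward-risk ratio of a fixed portfolio under equally likely
# scenarios. The paper optimises this ratio via LP; here we only evaluate it.
# Return data and the risk-free rate r0 are invented.

def portfolio_returns(weights, scenarios):
    """Portfolio return realised in each scenario."""
    return [sum(w * r for w, r in zip(weights, s)) for s in scenarios]

def mad(returns):
    """Mean absolute deviation of returns around their mean."""
    mean = sum(returns) / len(returns)
    return sum(abs(r - mean) for r in returns) / len(returns)

def reward_risk_ratio(weights, scenarios, r0=0.0):
    """(mean excess return over r0) / MAD."""
    rets = portfolio_returns(weights, scenarios)
    mean = sum(rets) / len(rets)
    return (mean - r0) / mad(rets)

# Two instruments, four equally likely return scenarios.
scenarios = [(0.10, 0.02), (0.04, 0.03), (-0.02, 0.02), (0.08, 0.01)]
w = (0.5, 0.5)
print(round(reward_risk_ratio(w, scenarios, r0=0.01), 4))
```

Maximizing this ratio over the weights, subject to budget and no-short-selling constraints, is what the LP formulations discussed above accomplish; the dual model keeps the structural constraint count proportional to the number of instruments rather than scenarios.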

Keywords: portfolio optimization, reward-risk ratio, mean absolute deviation, linear programming

Procedia PDF Downloads 405
12323 Coarse-Graining in Micromagnetic Simulations of Magnetic Hyperthermia

Authors: Razyeh Behbahani, Martin L. Plumer, Ivan Saika-Voivod

Abstract:

Micromagnetic simulations based on the stochastic Landau-Lifshitz-Gilbert equation are used to calculate dynamic magnetic hysteresis loops relevant to magnetic hyperthermia applications. With the goal of efficiently simulating room-temperature loops for large iron-oxide based systems at relatively slow sweep rates, on the order of 1 Oe/ns or less, a coarse-graining scheme is proposed and tested. The scheme is derived from a previously developed renormalization-group approach. Loops associated with nanorods, used as building blocks for the larger nanoparticles employed in preclinical trials (Dennis et al., 2009 Nanotechnology 20 395103), serve as the model test system. The scaling algorithm is shown to produce nearly identical loops over several decades in model grain size. Sweep-rate scaling involving the damping constant alpha is also demonstrated.

Keywords: coarse-graining, hyperthermia, hysteresis loops, micromagnetic simulations

Procedia PDF Downloads 146
12322 Experimental Chip/Tool Temperature FEM Model Calibration by Infrared Thermography: A Case Study

Authors: Riccardo Angiuli, Michele Giannuzzi, Rodolfo Franchi, Gabriele Papadia

Abstract:

Knowledge of temperature in machining is fundamental to improving the numerical and FEM models used to study critical process aspects such as the behavior of the worked material and tool. The extreme conditions in which they operate make it impossible to use traditional measuring instruments; infrared thermography, however, is a valid instrument for temperature measurement during metal cutting. In this study, a large experimental program on the cutting of superduplex steel (ASTM A995 gr. 5A) was carried out, and the relevant cutting temperatures were measured by infrared thermography as the cutting parameters were varied from traditional to extreme values. The values identified were used to calibrate a FEM model for predicting the residual life of the tools. The problems related to detecting cutting temperatures by infrared thermography were analyzed, and a dedicated procedure was developed that can be used during similar processing.

Keywords: machining, infrared thermography, FEM, temperature measurement

Procedia PDF Downloads 183
12321 Assessment of Training, Job Attitudes and Motivation: A Mediation Model in Banking Sector of Pakistan

Authors: Abdul Rauf, Xiaoxing Liu, Rizwan Qaisar Danish, Waqas Amin

Abstract:

The core intention of this study is to analyze the linkage of training, job attitudes and motivation through a mediation model in the banking sector of Pakistan. Moreover, the study answers a range of queries regarding employees' views of training, job satisfaction, motivation and organizational commitment. Hence, the associations of training with job satisfaction, job satisfaction with motivation, organizational commitment with job satisfaction, organizational commitment directly with motivation, and training directly with motivation are determined in the course of this study. A questionnaire was crafted to measure the four variables of interest: training, job satisfaction, motivation and organizational commitment. A sample of 450 employees from seventeen (17) private banks and two (2) public banks was taken on the basis of convenience sampling in Pakistan, of which 357 completely filled questionnaires were received back. AMOS was used for confirmatory factor analysis (CFA), and descriptive statistics, regression analysis and correlation analysis were applied to the collected data. The empirical findings revealed that training and organizational commitment have a significant and positive impact on job satisfaction and motivation, both directly and through the mediator (job satisfaction), among employees of Pakistani banks. Since this study examines the banking sector, the findings cannot be generalized to other sectors such as manufacturing, textiles, telecom, and medicine; the modest sample size is a further limitation.
On the basis of these results, management is encouraged to revise its employee training strategies, as training enhances motivation and job satisfaction on a regular basis.

Keywords: job satisfaction, motivation, organizational commitment, Pakistan, training

Procedia PDF Downloads 253