Search results for: integrated definition for process description capture (IDEF3) method
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 33866

31286 A Clustering Algorithm for Massive Texts

Authors: Ming Liu, Chong Wu, Bingquan Liu, Lei Chen

Abstract:

Internet users face a massive amount of textual data every day. Organizing texts into categories can help users extract useful information from large-scale text collections. Clustering is one of the most promising tools for categorizing texts because of its unsupervised nature. Unfortunately, most traditional clustering algorithms lose their quality on large-scale text collections, mainly because of the high-dimensional vectors generated from texts. To cluster large-scale text collections effectively and efficiently, this paper proposes a vector-reconstruction-based clustering algorithm, in which only the features that can represent a cluster are preserved in the cluster's representative vector. The algorithm alternates between two sub-processes until it converges. The first is a partial tuning sub-process, in which feature weights are fine-tuned iteratively; to accelerate clustering, an intersection-based similarity measurement and a corresponding neuron adjustment function are proposed and implemented in this sub-process. The second is an overall tuning sub-process, in which features are reallocated among clusters and features that are useless for representing a cluster are removed from its representative vector. Experimental results on three text collections (two small-scale and one large-scale) demonstrate that the algorithm achieves high quality on both small-scale and large-scale text collections.
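The intersection-based similarity measurement is not specified in the abstract; a minimal sketch of how such a measure might work on sparse feature-weight vectors (the dictionary representation and function names are assumptions, not the authors' code) is:

```python
def intersection_similarity(doc, rep):
    # Only features present in BOTH the document vector and the cluster's
    # representative vector contribute, so sparse high-dimensional text
    # vectors are compared cheaply.
    shared = doc.keys() & rep.keys()
    return sum(doc[f] * rep[f] for f in shared)

def assign(doc, reps):
    # Assign the document to the cluster whose representative vector is
    # most similar under the intersection measure.
    return max(range(len(reps)),
               key=lambda k: intersection_similarity(doc, reps[k]))
```

Because features dropped by the overall tuning sub-process simply vanish from the representative vector, they are skipped by the intersection automatically.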

Keywords: vector reconstruction, large-scale text clustering, partial tuning sub-process, overall tuning sub-process

Procedia PDF Downloads 435
31285 Critical Parameters of a Square-Well Fluid

Authors: Hamza Javar Magnier, Leslie V. Woodcock

Abstract:

We report extensive molecular dynamics (MD) computational investigations into the thermodynamic description of supercritical properties for a model fluid that is the simplest realistic representation of atoms or molecules. The pair potential is a hard-sphere repulsion of diameter σ with a very short-ranged attraction of width λσ. When λ = 1.005 the range is so short that the model atoms are referred to as “adhesive spheres”. Molecular dimers, trimers, etc., up to large clusters or droplets of many adhesive-sphere atoms, are unambiguously defined. This in turn defines percolation transitions at the molecular level that bound the existence of gas and liquid phases at supercritical temperatures and define the existence of a supercritical mesophase. Both the liquid and gas phases are seen to terminate at the loci of the percolation transitions and, below a second characteristic temperature (Tc2), are separated by the supercritical mesophase. An analysis of the distribution of clusters in the gas, meso- and liquid phases confirms the colloidal nature of this mesophase. The general phase behaviour is compared both with the experimental properties of the water-steam supercritical region and with the formally exact cluster theory of Mayer and Mayer. Both are found to be consistent with the present findings that in this system the supercritical mesophase narrows in density with increasing T > Tc and terminates at a higher Tc2 at a confluence of the primary percolation loci. An expanded plot of the MD data points in the mesophase for seven critical and supercritical isotherms highlights this narrowing in density of the linear-slope region of the mesophase as the temperature is increased above the critical. This linearity in the mesophase implies the existence of a linear combination rule between gas and liquid, an extension of the lever rule in the subcritical region, which can be used to obtain critical parameters without resorting to experimental data in the two-phase region.
Using this combination rule, the calculated critical parameters Tc = 0.2007 and Pc = 0.0278 are found to agree with the values reported by Largo and coworkers. The properties of this supercritical mesophase are shown to be consistent with an alternative description of the phenomenon of critical opalescence seen in the supercritical region of both molecular and colloidal-protein supercritical fluids.
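The linear combination (extended lever) rule can be sketched as follows; the function form is a plausible reading of the abstract, with hypothetical symbol names, not the authors' actual procedure:

```python
def lever_combination(rho, rho_gas, rho_liq, p_gas, p_liq):
    # Liquid fraction x from the lever rule along the density axis; a
    # property of the mesophase state is then the linear combination of
    # the gas-branch and liquid-branch values at the percolation bounds.
    x = (rho - rho_gas) / (rho_liq - rho_gas)
    return (1.0 - x) * p_gas + x * p_liq
```

At the bounds (x = 0 or x = 1) the rule reduces to the pure gas or liquid branch value, which is what makes the linear-slope region of the isotherms usable for locating the critical parameters.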

Keywords: critical opalescence, supercritical, square-well, percolation transition, critical parameters

Procedia PDF Downloads 521
31284 A Hybrid Fuzzy Clustering Approach for Fertile and Unfertile Analysis

Authors: Shima Soltanzadeh, Mohammad Hosain Fazel Zarandi, Mojtaba Barzegar Astanjin

Abstract:

Diagnosis of male infertility by laboratory tests is expensive and sometimes intolerable for patients. Filling out a questionnaire and then applying a classification method can be the first step in the decision-making process, so that laboratory tests are used only in cases with a high probability of infertility. In this paper, we evaluated the performance of four classification methods, namely naive Bayesian, neural network, logistic regression, and fuzzy c-means clustering used as a classifier, in the diagnosis of male infertility due to environmental factors. Since the data are unbalanced, ROC curves are the most suitable method for the comparison. We also selected the more important features using a filtering method and examined the impact of this feature reduction on the performance of each method; in general, most of the methods performed better after applying the filter. We show that fuzzy c-means clustering used as a classifier performs well according to the ROC curves, and that its performance is comparable to other classification methods such as logistic regression.
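For unbalanced data the comparison rests on ROC curves; the area under the curve can be computed directly from classifier scores with a small rank-based sketch (illustrative only, not the paper's implementation):

```python
def roc_auc(labels, scores):
    # Rank-based AUC: the probability that a randomly chosen positive
    # case is scored above a randomly chosen negative case
    # (ties count one half).
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

A fuzzy c-means classifier can feed this directly: the membership degree of the "infertile" cluster serves as the score for each patient.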

Keywords: classification, fuzzy c-means, logistic regression, Naive Bayesian, neural network, ROC curve

Procedia PDF Downloads 337
31283 Progressive Watershed Management Approaches in Iran

Authors: S. H. R. Sadeghi, A. Sadoddin, A. Najafinejad

Abstract:

Expansionism and an ever-increasing population threaten natural resources worldwide. The issue is especially critical in developing countries such as Iran, where new technologies are rapidly adopted and applied with little oversight, resulting in unexpected outcomes. Comprehensive approaches have therefore been introduced to take all the different aspects involved into consideration. In the last decade, approaches such as community-based, stakeholder-oriented, adaptive, co-management, best-management and, ultimately, integrated management have emerged and are being developed for the efficient, economic, and sustainable development and management of watershed resources in Iran. In the present paper, an attempt has been made to review state-of-the-art approaches to the management of watershed resources applied in Iran. The study is supported by reports of case studies conducted throughout the country involving the aforementioned approaches. Scrutinizing the results of these studies verified a progressive tendency in the managerial approaches of watershed management strategies, leading toward a generally balanced situation. The approaches are firmly rooted in the educational, research, executive, legal, and policy-making sectors, leading to some recovery at different levels. However, there is still a long way to go to neutralize the detrimental effects of unscientific, illegal, and excessive exploitation of watershed resources in Iran.

Keywords: comprehensive management, ecosystem balance, integrated watershed management, land resources optimization

Procedia PDF Downloads 370
31282 A Stochastic Model to Predict Earthquake Ground Motion Duration Recorded in Soft Soils Based on Nonlinear Regression

Authors: Issam Aouari, Abdelmalek Abdelhamid

Abstract:

For seismologists, the characterization of seismic demand should include both the amplitude and the duration of strong shaking. The duration of ground shaking is one of the key parameters in the earthquake-resistant design of structures. This paper proposes a nonlinear statistical model to estimate earthquake ground motion duration in soft soils using multiple seismicity indicators. Three definitions of ground motion duration proposed in the literature were applied, and a comparative study was used to select the most significant definition for predicting the duration. A stochastic model is presented for the McCann and Shah method using nonlinear regression analysis based on a data set of moment magnitude, source-to-site distance, and site conditions. The data set is taken from the PEER strong-motion database and contains shallow earthquakes from different regions of the world, including America, Turkey, the United Kingdom, China, Italy, Chile, and Mexico. Main emphasis is placed on soft site conditions. The predictive relationship was developed based on 600 records and three input indicators. Results were compared with other published models. It was found that the proposed model can predict earthquake ground motion duration in soft soils for different regions and site conditions.
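The paper's exact functional form is not given in the abstract; a common choice for such duration models is a log-linear relation, ln D = a + b·M + c·ln R, fitted by least squares over magnitude M and source-to-site distance R. The form and coefficient names below are assumptions for illustration:

```python
import math

def fit_duration_model(mags, dists, durs):
    # Hypothetical log-linear form: ln D = a + b*M + c*ln R, fitted by
    # solving the normal equations (X^T X) beta = X^T y directly.
    rows = [(1.0, m, math.log(r)) for m, r in zip(mags, dists)]
    ys = [math.log(d) for d in durs]
    A = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
    b = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(3)]
    for i in range(3):                       # forward elimination
        for k in range(i + 1, 3):
            f = A[k][i] / A[i][i]
            for j in range(3):
                A[k][j] -= f * A[i][j]
            b[k] -= f * b[i]
    beta = [0.0] * 3
    for i in (2, 1, 0):                      # back substitution
        beta[i] = (b[i] - sum(A[i][j] * beta[j]
                              for j in range(i + 1, 3))) / A[i][i]
    return beta

def predict_duration(beta, m, r):
    a, bm, cr = beta
    return math.exp(a + bm * m + cr * math.log(r))
```

A real fit over 600 records would add the site-condition indicator as a third regressor in exactly the same way.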

Keywords: duration, earthquake, prediction, regression, soft soil

Procedia PDF Downloads 153
31281 A Low Cost Gain-Coupled Distributed Feedback Laser Based on Periodic Surface p-Contacts

Authors: Yongyi Chen, Li Qin, Peng Jia, Yongqiang Ning, Yun Liu, Lijun Wang

Abstract:

Distributed feedback (DFB) lasers are indispensable in optical phased arrays (OPA) used for light detection and ranging (LIDAR), laser communication systems and integrated optics, thanks to their stable single longitudinal mode and narrow linewidth. Traditional index-coupled (IC) DFB lasers with uniform gratings have an inherent problem of lasing in two degenerate modes. Phase shifts are usually required to eliminate the mode degeneration, making the grating structure complex and expensive. High-quality antireflection (AR) coatings on both lasing facets are also essential owing to the random facet phases introduced by the chip cleavage process, which means half of the lasing energy is wasted. Gain-coupled DFB (GC-DFB) lasers based on periodic gain (or loss) are reported to lase in a single longitudinal mode and to permit unsymmetrical coating, which increases lasing power and efficiency thanks to their immunity to facet phases. However, expensive and time-consuming technologies such as epitaxial regrowth and nanoscale grating processing are still required, just as for IC-DFB lasers, keeping them from practical applications and commercial markets. In this research, we propose a low-cost, single-mode, regrowth-free GC-DFB laser based on periodic surface p-contacts. The gain-coupling effect is achieved simply by the periodic current distribution in the quantum well caused by the periodic surface p-contacts, which introduces a negligible index-coupling effect. The laser is prepared by i-line lithography, without nanoscale grating fabrication or secondary epitaxy. Thanks to these easy fabrication techniques, it provides a method to fabricate practical low-cost GC-DFB lasers for widespread applications.

Keywords: DFB laser, gain-coupled, low cost, periodic p-contacts

Procedia PDF Downloads 128
31280 Formulation of Corrector Methods from 3-Step Hybrid Adams Type Methods for the Solution of First Order Ordinary Differential Equations

Authors: Y. A. Yahaya, Ahmad Tijjani Asabe

Abstract:

This paper focuses on the formulation of a 3-step hybrid Adams-type method for the solution of first order ordinary differential equations (ODEs). The methods were derived on both grid and off-grid points using multistep collocation schemes and were evaluated at selected points to produce a block Adams-type method and an Adams-Moulton method, respectively. The method with the highest order was selected to serve as the corrector. The methods were shown to be convergent and efficient. Numerical experiments were carried out and reveal that the hybrid Adams-type methods perform better than the conventional Adams-Moulton method.
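A conventional (non-hybrid) Adams predictor-corrector illustrates the corrector idea: a 3-step Adams-Bashforth formula predicts, an Adams-Moulton formula corrects (PECE). The paper's hybrid off-grid points are not reproduced in this sketch:

```python
def adams_pece(f, t0, y0, h, n):
    # Classical PECE scheme: 3-step Adams-Bashforth predictor followed
    # by an Adams-Moulton corrector, bootstrapped with RK4 starter steps.
    ts, ys = [t0], [y0]
    for _ in range(3):                        # RK4 starter values
        t, y = ts[-1], ys[-1]
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        ys.append(y + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6)
        ts.append(t + h)
    while len(ys) <= n:
        f2, f1, f0 = (f(ts[-3], ys[-3]), f(ts[-2], ys[-2]), f(ts[-1], ys[-1]))
        yp = ys[-1] + h * (23 * f0 - 16 * f1 + 5 * f2) / 12    # predict (AB3)
        fp = f(ts[-1] + h, yp)                                  # evaluate
        yc = ys[-1] + h * (9 * fp + 19 * f0 - 5 * f1 + f2) / 24  # correct (AM)
        ys.append(yc)
        ts.append(ts[-1] + h)
    return ts, ys
```

Applied to y' = -y with y(0) = 1, ten steps of h = 0.1 reproduce e⁻¹ to a few decimal places; the hybrid methods of the paper aim to raise this order further via off-grid collocation.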

Keywords: Adams-Moulton type (AMT), corrector method, off-grid, block method, convergence analysis

Procedia PDF Downloads 626
31279 Integrated Risk Assessment of Storm Surge and Climate Change for the Coastal Infrastructure

Authors: Sergey V. Vinogradov

Abstract:

Coastal communities are presently facing increased vulnerabilities due to rising sea levels and shifts in global climate patterns, a trend expected to escalate in the long run. To address the needs of government entities, the public sector, and private enterprises, there is an urgent need to thoroughly investigate, assess, and manage the present and projected risks associated with coastal flooding, including storm surges, sea level rise, and nuisance flooding. In response to these challenges, a practical approach to evaluating storm surge inundation risks has been developed. This methodology offers an integrated assessment of potential flood risk in targeted coastal areas. The physical modeling framework involves simulating synthetic storms and utilizing hydrodynamic models that align with projected future climate and ocean conditions. Both publicly available and site-specific data form the basis for a risk assessment methodology designed to translate inundation model outputs into statistically significant projections of expected financial and operational consequences. This integrated approach produces measurable indicators of impacts stemming from floods, encompassing economic and other dimensions. By establishing connections between the frequency of modeled flood events and their consequences across a spectrum of potential future climate conditions, our methodology generates probabilistic risk assessments. These assessments not only account for future uncertainty but also yield comparable metrics, such as expected annual losses for each inundation event. These metrics furnish stakeholders with a dependable dataset to guide strategic planning and inform investments in mitigation. Importantly, the model's adaptability ensures its relevance across diverse coastal environments, even in instances where site-specific data for analysis may be limited.
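The expected-annual-loss metric mentioned above can be illustrated with a minimal sketch (the scenario representation is an assumption for illustration, not the study's model):

```python
def expected_annual_loss(scenarios):
    # scenarios: (return period in years, loss if the flood event
    # occurs). The annual frequency is 1 / return period, and the EAL
    # is the frequency-weighted sum of consequences.
    return sum(loss / return_period for return_period, loss in scenarios)
```

A full probabilistic assessment integrates over the continuous loss-exceedance curve derived from the synthetic storm ensemble; the discrete sum above is the simplest consistent estimator.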

Keywords: climate, coastal, surge, risk

Procedia PDF Downloads 56
31278 Estimation of Train Operation Using an Exponential Smoothing Method

Authors: Taiyo Matsumura, Kuninori Takahashi, Takashi Ono

Abstract:

The purpose of this research is to improve the convenience of waiting for trains at level crossings and stations, and to prevent accidents caused by forcible entry into level crossings, by providing level crossing users and passengers with information on when the next train will pass through or arrive. We propose methods for estimating train operation by means of an average value method, a variable response smoothing method, and an exponential smoothing method, on the basis of open data, which has low accuracy but distributes operation schedules in real time. We then examined the accuracy of the estimations. The results showed that the application of an exponential smoothing method is valid.
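A minimal sketch of the exponential smoothing estimator (illustrative; the paper's exact smoothing parameter is not given in the abstract):

```python
def exponential_smoothing(observations, alpha=0.3):
    # s_t = alpha * x_t + (1 - alpha) * s_{t-1}; the final smoothed
    # value serves as the one-step-ahead estimate, e.g. of the running
    # time to the next level crossing.
    s = observations[0]
    for x in observations[1:]:
        s = alpha * x + (1 - alpha) * s
    return s
```

A larger alpha weights the most recent (real-time but low-accuracy) observations more heavily; a smaller alpha damps their noise, which is the trade-off the paper evaluates against the average value method.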

Keywords: exponential smoothing method, open data, operation estimation, train schedule

Procedia PDF Downloads 388
31277 Analysis of Performance Improvement Factors in Supply Chain Manufacturing Using Analytic Network Process and Kaizen

Authors: Juliza Hidayati, Yesie M. Sinuhaji, Sawarni Hasibuan

Abstract:

A company producing drinking water faces many incompatibility issues that affect its supply chain performance. This study was conducted to determine the factors that affect supply chain performance and to improve them. The Analytic Network Process (ANP) was used to obtain the dominant factors affecting supply chain performance, while Kaizen was used to improve performance. The factors affecting supply chain performance serve as a reference for identifying the causes of non-conformance. The ANP weighting results indicate that the dominant factors affecting performance are the precision of the number of shipments (15%), the ability to fulfil the ordered amount (12%), and the number of products rejected on receipt (12%). The incompatibilities in these factors were identified in order to find the most dominant root causes of the problem. Based on the Risk Priority Number (RPN) weights, the most dominant root causes were found to be poorly maintained machines, machines working three shifts, and machine spare parts not stocked in the plant. Improvements were then made using the Kaizen method in a systematic and sustainable manner.
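ANP weights derive from pairwise-comparison matrices; the local priority vector is the principal eigenvector of such a matrix, which a short power-iteration sketch can compute (illustrative, not the authors' tooling):

```python
def priority_vector(M, iters=100):
    # Principal eigenvector of a pairwise-comparison matrix M via power
    # iteration, normalised to sum to 1 (the local ANP priorities).
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    return w
```

In a full ANP, these local vectors populate the supermatrix, whose limit powers yield global factor weights such as the 15% and 12% values reported above.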

Keywords: analytic network process, booking amount, risk priority number, supply chain performance

Procedia PDF Downloads 294
31276 A Study for Effective CO2 Sequestration of Hydrated Cement by Direct Aqueous Carbonation

Authors: Hyomin Lee, Jinhyun Lee, Jinyeon Hwang, Younghoon Choi, Byeongseo Son

Abstract:

Global warming is a worldwide issue, and various carbon capture and storage (CCS) technologies for reducing the CO2 concentration in the atmosphere have been increasingly studied. Mineral carbonation is one of the promising methods for CO2 sequestration. Waste cement generated in the aggregate recycling of waste concrete is potentially a good raw material containing reactive components for mineral carbonation. The major goal of our long-term project is to develop effective methods for CO2 sequestration using waste cement. In the present study, the carbonation characteristics of hydrated cement were examined by conducting two different direct aqueous carbonation experiments, and the influence of NaCl and MgCl2 as additives for increasing the mineral carbonation efficiency of hydrated cement was evaluated. Cement paste was made with W:C = 6:4 and cured for 28 days in a water bath. The prepared cement paste was pulverized to a size of less than 0.15 mm. 15 g of pulverized cement paste and 200 ml of solution containing the additives were reacted at ambient temperature and pressure. 1 M NaCl and 0.25 M MgCl2 were selected as additives after leaching tests. Two different sources of CO2 were applied in the direct aqueous carbonation experiments: in method 1, 0.64 M NaHCO3 was used as the CO2 donor, while in method 2, pure CO2 gas (99.9%) was bubbled into the reacting solution at a flow rate of 20 ml/min. The pH and Ca ion concentration were continuously measured with a pH/ISE multiparameter meter to observe the carbonation behaviour. The reacted solids were characterized by TGA, XRD, and SEM/EDS analyses. The carbonation characteristics of hydrated cement differed significantly with the additives. Calcite was the dominant calcium carbonate mineral after the two carbonation experiments with no additive and with the NaCl additive, whereas significant amounts of aragonite and vaterite, as well as very fine calcite of poorer crystallinity, formed with the MgCl2 additive.
The CSH (calcium silicate hydrate) in hydrated cement was transformed to MSH (magnesium silicate hydrate), and this transformation contributed to the high carbonation efficiency. The experiments with method 1 revealed that the carbonation of hydrated cement took a relatively long time in MgCl2 solution compared to NaCl solution, and that the contents of aragonite and vaterite increased with increasing reaction time. To maximize the carbonation efficiency in direct aqueous carbonation with CO2 gas injection (method 2), control of the solution pH is important, since the pH decreases as CO2 gas is injected. The carbonation efficiency in direct aqueous carbonation was therefore closely related to the stability of the calcium carbonate minerals with changing pH. With no additive and with the NaCl additive, the maximum carbonation was achieved when the solution pH was greater than 11; calcium carbonate formed by mineral carbonation appeared to re-dissolve as the pH decreased below 11 under continuous CO2 gas injection. The type of calcium carbonate mineral formed during carbonation in MgCl2 solution was also closely related to the pH variation caused by CO2 gas injection: the amount of aragonite increased significantly with decreasing solution pH, whereas the amount of calcite decreased.

Keywords: CO2 sequestration, mineral carbonation, cement and concrete, MgCl2 and NaCl additives

Procedia PDF Downloads 379
31275 Decision Support System for Hospital Selection in Emergency Medical Services: A Discrete Event Simulation Approach

Authors: D. Tedesco, G. Feletti, P. Trucco

Abstract:

The present study aims to develop a Decision Support System (DSS) to support the operational decision of the Emergency Medical Service (EMS) regarding the assignment of medical emergency requests to Emergency Departments (ED). In the literature, this problem is also known as "hospital selection" and concerns the definition of policies for selecting the ED to which patients who require further treatment are transported by ambulance. The research methodology began with a review of the technical-scientific literature on DSSs supporting EMS management and, in particular, the hospital selection decision. The literature analysis showed that current studies focus mainly on the EMS phases related to the ambulance service and consider a process that ends when the ambulance becomes available after completing a request. All ED-related issues are therefore excluded and treated as part of a separate process. Indeed, the most studied hospital selection policy turned out to be proximity, which minimizes the transport time and releases the ambulance in the shortest possible time. The purpose of the present study is to develop an optimization model for assigning medical emergency requests to EDs that considers information on the subsequent phases of the process, such as the case-mix, the expected service throughput times, and the operational capacity of the different EDs. To this end, a Discrete Event Simulation (DES) model was created to evaluate different hospital selection policies. The next steps of the research thus consisted of developing a general simulation architecture, implementing it in the AnyLogic software, and validating it on a realistic dataset.
The hospital selection policy that produced the best results was the minimization of the Time To Provider (TTP), defined as the time from the start of the ambulance journey to the ED until the beginning of the clinical evaluation by a doctor. Finally, two approaches were compared: a static approach, based on a retrospective estimate of the TTP, and a dynamic approach, based on a predictive estimate of the TTP determined with a constantly updated Winters model. Findings reveal that adopting the minimization of TTP as a hospital selection policy brings several benefits. It significantly reduces service throughput times in the ED with only a minimal increase in travel time. Furthermore, it provides an immediate view of the saturation state of the ED and accounts for the case-mix present in the ED structures (i.e., the different triage codes), as different severity codes correspond to different service throughput times. The predictive approach is certainly more reliable than the retrospective one in terms of TTP estimation, but is more difficult to apply. These considerations can support decision-makers in introducing different hospital selection policies to enhance EMS performance.
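The "constantly updated Winters model" suggests exponential-smoothing forecasts of the TTP; a minimal Holt (level + trend) sketch conveys the updating idea, with the seasonal component of the full Winters model omitted and all parameter values illustrative:

```python
def holt_forecast(series, alpha=0.5, beta=0.3):
    # Holt's double exponential smoothing: the level and trend are
    # updated on every new TTP observation, and the returned value is
    # the one-step-ahead forecast. The full Winters model adds a
    # seasonal term (e.g. time-of-day ED workload patterns) on top.
    level, trend = series[0], series[1] - series[0]
    for x in series[1:]:
        last = level
        level = alpha * x + (1 - alpha) * (level + trend)
        trend = beta * (level - last) + (1 - beta) * trend
    return level + trend
```

On a perfectly linear TTP history the forecast continues the trend exactly; on noisy data, alpha and beta trade responsiveness against stability, which is the practical difficulty of the dynamic approach.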

Keywords: discrete event simulation, emergency medical services, forecast model, hospital selection

Procedia PDF Downloads 90
31274 Learners’ Perceptions of Tertiary Level Teachers’ Code Switching: A Vietnamese Perspective

Authors: Hoa Pham

Abstract:

The literature on language teaching and second language acquisition has been largely driven by monolingual ideology, with a common assumption that a second language (L2) is best taught and learned in the L2 only. The current study challenges this assumption by reporting learners' positive perceptions of tertiary level teachers' code switching practices in Vietnam. The findings of this study contribute to our understanding of code switching practices in language classrooms from a learners' perspective. Data were collected, through focus group interviews, from student participants who were working towards a Bachelor's degree in English within the English for Business Communication stream. The literature has documented that this method of interviewing has a number of distinct advantages over individual student interviews. For instance, the group interactions generated by focus groups create a more natural environment than that of an individual interview because they include a range of communicative processes in which each individual may influence or be influenced by others, as they do in real life. The process of interaction provides the opportunity to obtain meanings and answers to a problem that are "socially constructed rather than individually created", leading to the capture of real-life data. This distinct feature of group interaction makes the technique a powerful means of obtaining deeper and richer data than individual interviews. The data generated through this study were analysed using a constant comparative approach. Overall, the students expressed positive views of this practice, indicating that it is a useful teaching strategy. Teacher code switching was seen as a learning resource and a source supporting language output. This practice was perceived to promote student comprehension and to aid the learning of content and target language knowledge.
This practice was also believed to scaffold the students' language production in different contexts. However, the students indicated their preference for teacher code switching to be constrained, as extensive use was believed to negatively impact their L2 learning and trigger cognitive reliance on the L1 for L2 learning. The students also perceived that when the L1 was used to a great extent, their ability to develop as autonomous learners was negatively impacted. This study found that teacher code switching was supported in certain contexts by learners, suggesting that the widespread assumption about the monolingual teaching approach needs to be reconsidered.

Keywords: codeswitching, L1 use, L2 teaching, learners’ perception

Procedia PDF Downloads 324
31273 Application Potential of Forward Osmosis-Nanofiltration Hybrid Process for the Treatment of Mining Waste Water

Authors: Ketan Mahawer, Abeer Mutto, S. K. Gupta

Abstract:

Mining wastewater contains inorganic metal salts, which make it saline and contribute to the contamination of surface and groundwater reserves near mineral processing industries. Treatment of the wastewater and water recovery are therefore obligatory, by any available technology, before disposal into the environment. Currently, reverse osmosis (RO) is the commercially accepted conventional membrane process for saline wastewater treatment, but it consumes an enormous amount of energy, which makes the process expensive. To solve this industrial problem with minimum energy consumption, we tested the feasibility of a forward osmosis-nanofiltration (FO-NF) hybrid process for mining wastewater treatment. Experimental results for the FO-NF process, with 0.029 M saline wastewater treated by a 0.42 M sodium-sulfate-based draw solution, show that the specific energy consumption of the FO-NF process was only slightly higher (by 0.5-1 kWh/m³) than that of standalone NF, whereas the average freshwater recovery was 30% higher than that of standalone NF under the same feed and operating conditions. Hence, the FO-NF process in place of RO/NF offers great potential for treating mining industry wastewater and concentrating the metals as by-products without excessive energy consumption; in addition, it mitigates fouling over long treatment periods, which decreases the maintenance and replacement costs of the separation process.

Keywords: forward osmosis, nanofiltration, mining, draw solution, divalent solute

Procedia PDF Downloads 118
31272 Investigation of Efficient Production of ¹³⁵La for the Auger Therapy Using Medical Cyclotron in Poland

Authors: N. Zandi, M. Sitarz, J. Jastrzebski, M. Vagheian, J. Choinski, A. Stolarz, A. Trzcinska

Abstract:

¹³⁵La, with a half-life of 19.5 h, can be considered a good candidate for Auger therapy. ¹³⁵La decays almost 100% by electron capture to stable ¹³⁵Ba. In this study, all important reactions leading to ¹³⁵La production are investigated in detail, and the corresponding theoretical yield for each reaction, computed with the Monte Carlo method (MCNPX code), is presented. Among them, the best reaction in terms of cost-effectiveness and production yield for the Polish facilities equipped with a medical cyclotron has been selected. ¹³⁵La is produced using the 16.5 MeV proton beam of a General Electric PETtrace cyclotron through the ¹³⁵Ba(p,n)¹³⁵La reaction. Moreover, to facilitate a consistent comparison between the theoretical calculations and the experimental measurements, the beam current and the proton beam energy were measured experimentally, and the measured proton energy was used as the entrance energy in the theoretical calculations. The production yield was finally measured and compared with the results obtained using the MCNPX code. The results show that the experimental measurements and the theoretical calculations are in good agreement.
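The yield of such an activation reaction follows the standard saturation law, A(t) = R(1 − e^(−λt)). A short sketch, where the production rate R bundles beam current, target thickness and cross-section (quantities the MCNPX model supplies, not given in the abstract):

```python
import math

def activity_at_eob(rate, half_life_h, t_irr_h):
    # Activity at end of bombardment: A = R * (1 - exp(-lambda * t)),
    # with lambda = ln(2) / half-life; 'rate' is the production rate R
    # in the same activity units as the result.
    lam = math.log(2.0) / half_life_h
    return rate * (1.0 - math.exp(-lam * t_irr_h))
```

Irradiating ¹³⁵Ba for one half-life (19.5 h) yields exactly half of the saturation activity, which is why extending an irradiation much beyond a few half-lives gives diminishing returns.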

Keywords: efficient ¹³⁵La production, proton cyclotron energy measurement, MCNPX code, theoretical and experimental production yield

Procedia PDF Downloads 142
31271 A Case for Q-Methodology: Teachers as Policymakers

Authors: Thiru Vandeyar

Abstract:

The present study set out to determine how Q methodology may be used in an inclusive education policy development process. Utilising Q methodology as the strategy of inquiry, this qualitative instrumental case study explored how teachers, a crucial but often neglected human resource, may be included in developing policy. A social constructivist lens and the theoretical moorings of Proudford's emancipatory approach to educational change, anchored in teachers' 'writerly' interpretation of policy text, were employed. Findings suggest, first, that Q method is a unique research approach for including teachers' voices in policy development and, second, that the beliefs, attitudes, and professionalism of teachers to improve teaching and learning using ICT are integral to policy formulation. The study indicates that teachers have unique beliefs about which statements should constitute a school's information and communication technology (ICT) policy. Teachers' experiences are an extremely valuable resource and should not be ignored in the policy formulation process.

Keywords: teachers, q-methodology, education policy, ICT

Procedia PDF Downloads 85
31270 The Lethal Autonomy and Military Targeting Process

Authors: Serdal Akyüz, Halit Turan, Mehmet Öztürk

Abstract:

The future security environment will have new battlefields and new enemies; the boundaries of the battlefield and the identity of enemies will not be easily discerned. Politicians may not want to lose their soldiers in very risky operations, and this approach will pave the way for smart machines such as war robots and new drones. These machines will have decision-making ability and will act autonomously. This ability can change the military targeting process (MTP), which draws on a wide scope of lethal and non-lethal weapons to reach an intended end-state. The process is now managed by people, but in the future smart machines may carry it out by themselves. At first sight, this development seems useful for humanity, as it may decrease casualties in war. However, using robots that can decide, detect, deliver, and assess without human support for homeland security and against terrorists carries crucial risks and threats: it may reduce the havoc of war but also increase collateral damage. This paper examines the current use of smart war machines and the military targeting process, and presents a new approach to the MTP from the point of view of the lethal autonomy concept.

Keywords: the autonomous weapon systems, the lethal autonomy, military targeting process (MTP)

Procedia PDF Downloads 428
31269 A Review on Higher-Order Spline Techniques for Solving Burgers Equation Using B-Spline Methods and Variation of B-Spline Techniques

Authors: Maryam Khazaei Pool, Lori Lewis

Abstract:

This is a summary of articles based on higher-order B-spline methods and variations of B-spline methods such as the Quadratic B-spline Finite Element Method, the Exponential Cubic B-Spline Method, the Septic B-spline Technique, the Quintic B-spline Galerkin Method, and B-spline Galerkin methods based on the Quadratic B-spline Galerkin Method (QBGM) and the Cubic B-spline Galerkin Method (CBGM). In this paper, we study B-spline methods and variations of B-spline techniques for finding a numerical solution to Burgers’ equation. A set of fundamental definitions, including Burgers’ equation, spline functions, and B-spline functions, is provided. For each method, the main technique is discussed, as well as the discretization and stability analysis. A summary of the numerical results is provided, and the efficiency of each method presented is discussed. A general conclusion compares the computational results of all the presented schemes, and we describe the effectiveness and advantages of these methods.
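As a minimal illustration of the basis underlying all of these schemes, the cubic B-spline basis functions can be evaluated with the Cox-de Boor recursion; the knot vector and evaluation point below are arbitrary choices for the sketch, not taken from any of the reviewed papers:

```python
def bspline_basis(i, p, t, x):
    """Cox-de Boor recursion: value at x of the i-th degree-p B-spline
    basis function over knot vector t."""
    if p == 0:
        return 1.0 if t[i] <= x < t[i + 1] else 0.0
    left = 0.0
    if t[i + p] != t[i]:
        left = (x - t[i]) / (t[i + p] - t[i]) * bspline_basis(i, p - 1, t, x)
    right = 0.0
    if t[i + p + 1] != t[i + 1]:
        right = (t[i + p + 1] - x) / (t[i + p + 1] - t[i + 1]) * bspline_basis(i + 1, p - 1, t, x)
    return left + right

# Clamped cubic knot vector on [0, 1] with four interior spans
p = 3
t = [0, 0, 0, 0, 0.25, 0.5, 0.75, 1, 1, 1, 1]
n_basis = len(t) - p - 1  # 7 cubic basis functions

x = 0.4
values = [bspline_basis(i, p, t, x) for i in range(n_basis)]
print(sum(values))  # partition of unity: the basis functions sum to 1
```

On a clamped knot vector the basis functions form a partition of unity, the property that Galerkin formulations of Burgers’ equation exploit when expanding the trial solution in B-splines.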

Keywords: Burgers’ equation, Septic B-spline, modified cubic B-spline differential quadrature method, exponential cubic B-spline technique, B-spline Galerkin method, quintic B-spline Galerkin method

Procedia PDF Downloads 126
31268 Readout Development of a LGAD-based Hybrid Detector for Microdosimetry (HDM)

Authors: Pierobon Enrico, Missiaggia Marta, Castelluzzo Michele, Tommasino Francesco, Ricci Leonardo, Scifoni Emanuele, Vincezo Monaco, Boscardin Maurizio, La Tessa Chiara

Abstract:

Clinical outcomes collected over the past three decades suggest that ion therapy has the potential to be a treatment modality superior to conventional radiation for several types of cancer, including recurrences, as well as for other diseases. Although the results have been encouraging, numerous treatment uncertainties remain a major obstacle to the full exploitation of particle radiotherapy. To overcome these uncertainties and optimize treatment outcome, the best possible description of radiation quality, linking the physical dose to the biological effects, is of paramount importance. Microdosimetry was developed as a tool to improve the description of radiation quality. By recording the energy deposition at the micrometric scale (the typical size of a cell nucleus), this approach takes into account the non-deterministic nature of atomic and nuclear processes and creates a direct link between the dose deposited by radiation and the biological effect induced. Microdosimeters measure the spectrum of lineal energy y, defined as the energy deposited in the detector divided by the most probable track length travelled by the radiation. The latter is provided by the so-called “Mean Chord Length” (MCL) approximation and is related to the detector geometry. To improve the characterization of radiation field quality, we define a new quantity that replaces the MCL with the actual particle track length inside the microdosimeter. To measure this new quantity, we propose a two-stage detector consisting of a commercial Tissue Equivalent Proportional Counter (TEPC) and 4 layers of Low Gain Avalanche Detector (LGAD) strips. The TEPC records the energy deposition in a region equivalent to 2 um of tissue, while the LGADs are well suited for particle tracking because they can be thinned down to tens of micrometers and respond quickly to ionizing radiation. The concept of HDM has been investigated and validated with Monte Carlo simulations.
Currently, a dedicated readout is under development. The two-stage detector requires two different systems whose complementary information must be joined for each event: the energy deposition in the TEPC and the corresponding track length recorded by the LGAD tracker. This challenge is being addressed by implementing System on Chip (SoC) technology, relying on Field Programmable Gate Arrays (FPGAs) based on the Zynq architecture. The TEPC readout consists of three signal amplification legs and is carried out by 3 ADCs mounted on an FPGA board. The LGAD strip signals are processed by dedicated chips, and the activated strips are finally stored, again relying on FPGA-based solutions. In this work, we provide a detailed description of the HDM geometry and the SoC solutions that we are implementing for the readout.
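The quantity HDM targets can be sketched numerically. Lineal energy y is the energy deposited in the site divided by a characteristic length: the MCL approximation uses the Cauchy mean chord 4V/S of the convex site (2d/3 for a sphere of diameter d), while HDM replaces it with the measured track length. The energy deposit and track length below are illustrative assumptions, not measurements:

```python
import math

d = 2.0                       # site diameter, um (2 um of tissue)
volume = math.pi * d**3 / 6.0
surface = math.pi * d**2
mcl = 4.0 * volume / surface  # Cauchy mean chord length = 2d/3

energy = 5.0                  # hypothetical energy deposit, keV
y_mcl = energy / mcl          # lineal energy with the MCL approximation
track = 1.2                   # hypothetical measured track length, um
y_track = energy / track      # lineal energy with the real track length
print(f"y(MCL) = {y_mcl:.2f} keV/um, y(track) = {y_track:.2f} keV/um")
```

The spread between the two values for a single event is exactly the ambiguity that the LGAD tracker is intended to remove.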

Keywords: particle tracking, ion therapy, low gain avalanche diode, tissue equivalent proportional counter, microdosimetry

Procedia PDF Downloads 175
31267 Enhancing Wire Electric Discharge Machining Efficiency through ANOVA-Based Process Optimization

Authors: Rahul R. Gurpude, Pallvita Yadav, Amrut Mulay

Abstract:

In recent years, there has been a growing focus on advanced manufacturing processes, and one such emerging process is wire electric discharge machining (WEDM). WEDM is a precision machining process specifically designed for cutting electrically conductive materials with exceptional accuracy. It removes material from the workpiece through spark erosion facilitated by electricity. Initially developed as a method for precision machining of hard materials, WEDM has witnessed significant advancements in recent times, with numerous studies and techniques based on electrical discharge phenomena being proposed. These research efforts and methods in the field of EDM encompass a wide range of applications, including mirror-like finish machining, surface modification of mold dies, machining of insulating materials, and manufacturing of micro products. WEDM has found particularly extensive usage in the high-precision machining of complex workpieces of varying hardness and intricate shape. During the cutting process, a wire with a diameter of 0.18 mm is employed. The evaluation of WEDM performance typically revolves around two critical factors: material removal rate (MRR) and surface roughness (SR). To comprehensively assess the impact of the machining parameters on these quality characteristics, an Analysis of Variance (ANOVA) was conducted. This statistical analysis determined the significance of the various machining parameters and their relative contributions in controlling the response of the process. From this analysis, optimal levels of the machining parameters were identified to achieve desirable material removal rates and surface roughness.
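The contribution analysis described above can be sketched with a hand-rolled one-way ANOVA; the pulse-on-time levels and MRR values below are hypothetical, not data from the study:

```python
import numpy as np

# Hypothetical MRR measurements (mm^3/min) at three pulse-on-time levels
groups = {
    "Ton=8us":  np.array([4.1, 4.3, 4.0, 4.2]),
    "Ton=16us": np.array([5.0, 5.2, 5.1, 4.9]),
    "Ton=24us": np.array([5.8, 6.1, 5.9, 6.0]),
}

data = np.concatenate(list(groups.values()))
grand_mean = data.mean()
k, n = len(groups), data.size

# Between-group (treatment) and within-group (error) sums of squares
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups.values())
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups.values())
ss_total = ss_between + ss_within

# F-statistic and the percent contribution reported in ANOVA tables
f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
contribution = 100.0 * ss_between / ss_total
print(f"F = {f_stat:.1f}, contribution = {contribution:.1f}%")
```

A large F relative to the critical value at the chosen significance level marks the parameter as significant, and the percent contribution ranks it against the other machining parameters.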

Keywords: WEDM, MRR, optimization, surface roughness

Procedia PDF Downloads 75
31266 Enhanced Calibration Map for a Four-Hole Probe for Measuring High Flow Angles

Authors: Jafar Mortadha, Imran Qureshi

Abstract:

This research explains and compares modern techniques for measuring the flow angles of a flowing fluid with the traditional technique of using multi-hole pressure probes. In particular, the focus of the study is on four-hole probes, which offer great reliability and benefits in several applications where the use of modern measurement techniques is either inconvenient or impractical. Due to modern advancements in manufacturing, small multi-hole pressure probes can be made with high precision, which eliminates the need to calibrate every manufactured probe. This study aims to improve the range of calibration maps for a four-hole probe so that high flow angles can be measured accurately. The research methodology comprises a literature review of calibration definitions that have been implemented successfully on five-hole probes. These definitions are then adapted and applied to a four-hole probe using a set of raw pressure data. A comparison of the different definitions is carried out in MATLAB, and the results are analyzed to determine the best calibration definition. Taking into account simplicity of implementation as well as reliability of flow angle estimation, a technique adapted from a research paper from 2002 offered the most promising outcome. Consequently, the method is seen as a good enhancement for four-hole probes, and it can substitute for existing calibration definitions that offer less accuracy.
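One common family of non-dimensional calibration coefficients, adapted here from five-hole-probe practice, can be sketched as follows; the port numbering, coefficient definitions, and pressure values are illustrative assumptions and may differ from the 2002 technique the study adopts:

```python
def calibration_coefficients(p1, p2, p3, p4):
    """Pitch and yaw coefficients for a four-hole probe with a center
    port p1 and three peripheral ports p2-p4 (one common convention)."""
    p_bar = (p2 + p3 + p4) / 3.0   # mean of the peripheral ports
    denom = p1 - p_bar             # pseudo-dynamic pressure
    c_pitch = (p2 - (p3 + p4) / 2.0) / denom
    c_yaw = (p3 - p4) / denom
    return c_pitch, c_yaw

# Example port pressures in Pa (illustrative values)
cp, cy = calibration_coefficients(101500.0, 101300.0, 101250.0, 101200.0)
print(f"C_pitch = {cp:.3f}, C_yaw = {cy:.3f}")
```

A calibration map is then built by tabulating these coefficients over a grid of known pitch and yaw angles and inverting the map when the probe is used in an unknown flow; the usable angle range is set by where the denominator stays well-conditioned.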

Keywords: calibration definitions, calibration maps, flow measurement techniques, four-hole probes, multi-hole pressure probes

Procedia PDF Downloads 295
31265 Eliciting and Confirming Data, Information, Knowledge and Wisdom in a Specialist Health Care Setting - The Wicked Method

Authors: Sinead Impey, Damon Berry, Selma Furtado, Miriam Galvin, Loretto Grogan, Orla Hardiman, Lucy Hederman, Mark Heverin, Vincent Wade, Linda Douris, Declan O'Sullivan, Gaye Stephens

Abstract:

Healthcare is a knowledge-rich environment. This knowledge, while valuable, is not always accessible outside the borders of individual clinics. This research aims to address part of this problem (at a study site) by constructing a maximal data set (knowledge artefact) for motor neurone disease (MND). This data set is proposed as an initial knowledge base for a concurrent project to develop an MND patient data platform. It represents the domain knowledge at the study site for the duration of the research (12 months). A knowledge elicitation method was also developed from the lessons learned during this process - the WICKED method. WICKED is formed from the initials of the words eliciting and confirming data, information, knowledge, and wisdom, but it is also a reference to the concept of wicked problems, which are complex and challenging, as is eliciting expert knowledge. The method was evaluated at a second site, and benefits and limitations were noted. Benefits include that the method provides a systematic way to manage data, information, knowledge and wisdom (DIKW) from various sources, including healthcare specialists and existing data sets. Limitations concern the time required and the fact that the data set produced represents only the DIKW known during the research period. Future work is underway to address these limitations.

Keywords: healthcare, knowledge acquisition, maximal data sets, action design science

Procedia PDF Downloads 360
31264 Toward the Understanding of Shadow Port's Growth: The Level of Shadow Port

Authors: Chayakarn Bamrungbutr, James Sillitoe

Abstract:

The term ‘shadow port’ describes a port whose markets are dominated by an adjacent port with a more competitive capability. Recently, researchers have put effort into studying the mechanisms by which a regional port, in the shadow of a nearby predominant capital-city port, can compete and grow. However, these mechanisms are still unclear. This study therefore focuses on understanding the growth and type of shadow port, using the two capital-city ports of Thailand as the case study: Bangkok Port (the former main port) and Laem Chabang Port (the current main port). Developing an understanding of these mechanisms could ultimately lead to an increase in the shadow port’s competitiveness. In this study, the framework of opportunity capture introduced by Magala (2004) will be used to create a framework for studying the growth of the selected shadow port. In building this framework, five groups of port development experts - from government, council, academia, logistics providers and industry - will be interviewed. To facilitate this work, the Noticing, Collecting and Thinking model developed by Seidel (1998) will be used to analyse the dataset. The resulting analysis will be used to classify the type of shadow port. The type of these ports will be a significant factor in developing a feasible strategic guideline for the future management planning of ports, particularly shadow ports, which can increase the competitiveness of a nation’s maritime transport industry and eventually boost the national economy.

Keywords: shadow port, Bangkok Port, Laem Chabang Port, port growth

Procedia PDF Downloads 177
31263 Experimental Studies of the Reverse Load-Unloading Effect on the Mechanical, Linear and Nonlinear Elastic Properties of n-AMg6/C60 Nanocomposite

Authors: Aleksandr I. Korobov, Natalia V. Shirgina, Aleksey I. Kokshaiskiy, Vyacheslav M. Prokhorov

Abstract:

The paper presents the results of an experimental study of the effect of reverse mechanical load-unloading on the mechanical, linear, and nonlinear elastic properties of the n-AMg6/C60 nanocomposite. Samples of the n-AMg6/C60 nanocomposite were obtained by grinding AMg6 polycrystalline alloy with 0.3 wt% of C60 fullerite in a planetary mill in an argon atmosphere. The resulting product consisted of 200-500 micron agglomerates of nanoparticles. The X-ray coherent scattering (CSL) method showed that the average nanoparticle size is 40-60 nm. The resulting preform was extruded at high temperature. The C60 fullerite additive interferes with the process of recrystallization at grain boundaries. For samples of the n-AMg6/C60 nanocomposite, the load curve was measured: the dependence of the mechanical stress σ on the strain ε of the sample under a multi-cycle load-unloading process continued until destruction. A hysteresis dependence σ = σ(ε) was observed, and an insignificant residual strain ε < 0.005 was recorded. At σ ≈ 500 MPa and ε ≈ 0.025, the sample was destroyed; the fracture was brittle. Microhardness was measured before and after the destruction of the sample, and it was found that the load-unloading process led to an increase in microhardness. The effect of reversible mechanical stress on the linear and nonlinear elastic properties of the n-AMg6/C60 nanocomposite was studied experimentally by an ultrasonic method on the automated Ritec RAM-5000 SNAP SYSTEM. The velocities of the longitudinal and shear bulk waves were measured with the pulse method, and all the second-order elastic coefficients and their dependence on the reversible mechanical stress applied to the sample were calculated. The nonlinear elastic properties of the n-AMg6/C60 nanocomposite under reversible load-unloading were studied with the spectral method.
At arbitrary strains of the sample (up to its breakage), the dependence of the amplitude of the second longitudinal acoustic harmonic at a frequency of 2f = 10 MHz on the amplitude of the first harmonic at a frequency of f = 5 MHz was measured. Based on these measurements, the values of the nonlinear acoustic parameter in the n-AMg6/C60 nanocomposite sample at different mechanical stresses were determined. The results can be used in solid-state physics and materials science and for the development of new techniques for the nondestructive testing of structural materials based on nonlinear acoustic diagnostics. This study was supported by the Russian Science Foundation (project №14-22-00042).
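The extraction of the nonlinear parameter from the second-harmonic growth can be sketched as follows; the lossless quadratic-medium relation A2 = (beta k^2 x / 8) A1^2, the assumed velocity, the propagation distance, and the amplitude values are all illustrative assumptions, not the study's measurements:

```python
import numpy as np

f = 5e6                 # fundamental frequency, Hz (as in the experiment)
c = 6000.0              # assumed longitudinal velocity, m/s
x = 0.02                # assumed propagation distance, m
k = 2 * np.pi * f / c   # wavenumber of the fundamental

# Synthetic data: second-harmonic amplitude grows with A1 squared
beta_true = 8.0
A1 = np.linspace(1e-9, 10e-9, 10)          # fundamental amplitudes, m
A2 = beta_true * k**2 * x / 8.0 * A1**2    # second-harmonic amplitudes

# A linear fit of A2 against A1^2 recovers beta from the slope
slope = np.polyfit(A1**2, A2, 1)[0]
beta_est = 8.0 * slope / (k**2 * x)
print(f"beta = {beta_est:.2f}")
```

Tracking this slope-derived parameter as a function of the applied static stress is what reveals the load-unloading sensitivity of the nonlinear elastic response.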

Keywords: nanocomposite, generation of acoustic harmonics, nonlinear acoustic parameter, hysteresis

Procedia PDF Downloads 151
31262 Mechanical Characterization of Banana by Inverse Analysis Method Combined with Indentation Test

Authors: Juan F. P. Ramírez, Jésica A. L. Isaza, Benjamín A. Rojano

Abstract:

This study proposes a novel use of a method to determine the mechanical properties of fruits by means of indentation tests. The method combines experimental results with a numerical finite element model. The results presented correspond to a simplified numerical model of a banana, assumed to be a one-layer material with isotropic linear elastic mechanical behavior; the Young’s modulus found is 0.3 MPa. The method will be extended to multilayer models in further studies.
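The inverse-analysis idea can be sketched with a closed-form least-squares fit, using a Hertzian contact law as a stand-in for the paper's finite-element forward model; the indenter radius and the synthetic "measurements" below are assumptions for illustration:

```python
import numpy as np

R = 0.004       # assumed spherical indenter radius, m
E_true = 0.3e6  # 0.3 MPa, the modulus reported for banana, Pa

# Hertzian forward model: F = (4/3) * E * sqrt(R) * depth^1.5 = g * E,
# so the force is linear in the unknown modulus E
depth = np.linspace(0.1e-3, 1.0e-3, 10)          # indentation depth, m
g = (4.0 / 3.0) * np.sqrt(R) * depth ** 1.5

# Synthetic "experimental" forces with 2% multiplicative noise
rng = np.random.default_rng(0)
force = E_true * g * (1 + 0.02 * rng.standard_normal(depth.size))

# Because F is linear in E, the least-squares inverse is closed-form
E_est = (force @ g) / (g @ g)
print(f"estimated E = {E_est / 1e6:.3f} MPa")
```

In the paper's full method the analytic forward model is replaced by a finite element simulation, and the fit becomes an iterative minimization of the mismatch between simulated and measured force-displacement curves.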

Keywords: finite element method, fruits, inverse analysis, mechanical properties

Procedia PDF Downloads 358
31261 Integrated Approach of Quality Function Deployment, Sensitivity Analysis and Multi-Objective Linear Programming for Business and Supply Chain Programs Selection

Authors: T. T. Tham

Abstract:

The aim of this study is to propose an integrated approach for determining the most suitable programs, based on Quality Function Deployment (QFD), Sensitivity Analysis (SA) and a Multi-Objective Linear Programming (MOLP) model. First, QFD is used to determine business requirements and transform them into business and supply chain programs; from the QFD, technical scores for all programs are obtained. All programs are then evaluated against five criteria (productivity, quality, cost, technical score, and feasibility). Sets of weights for these criteria are built using Sensitivity Analysis. The Multi-Objective Linear Programming model is applied to select suitable programs according to multiple conflicting objectives under a budget constraint. A case study from the Sai Gon-Mien Tay Beer Company illustrates the proposed methodology. The outcome of the study provides a comprehensive picture that helps companies select suitable programs and obtain the optimal solution according to their preferences.
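The selection step can be sketched on a toy instance; the program data, criteria weights, and budget below are hypothetical, and brute-force enumeration stands in for the MOLP solver that would be required at realistic problem sizes:

```python
from itertools import chain, combinations

# Hypothetical programs: name -> (cost, productivity, quality,
# technical_score, feasibility); all values are illustrative
programs = {
    "P1": (30, 7, 8, 6, 9),
    "P2": (50, 9, 6, 8, 7),
    "P3": (20, 5, 7, 7, 8),
    "P4": (40, 8, 9, 5, 6),
}
weights = (0.3, 0.3, 0.2, 0.2)  # one weight set from the sensitivity analysis
budget = 90

def score(sel):
    """Weighted-sum score of a selection (one scalarization of the MOLP)."""
    return sum(sum(w * v for w, v in zip(weights, programs[p][1:])) for p in sel)

def cost(sel):
    return sum(programs[p][0] for p in sel)

# Small instance: enumerate every feasible subset under the budget
subsets = chain.from_iterable(combinations(programs, r) for r in range(len(programs) + 1))
best = max((s for s in subsets if cost(s) <= budget), key=score)
print(best, round(score(best), 2))
```

Re-running the selection over the different weight sets produced by the sensitivity analysis shows how robust the chosen program portfolio is to the decision-maker's preferences.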

Keywords: business program, multi-objective linear programming model, quality function deployment, sensitivity analysis, supply chain management

Procedia PDF Downloads 123
31260 A Collaborative Action Research on the Teaching of Music Learning Center in Taiwan's Preschool

Authors: Mei-Ying Liao, Lee-Ching Wei, Jung-Hsiang Tseng

Abstract:

The main purpose of this study was to explore the process of planning and executing a music learning center in a preschool. The study was conducted through a collaborative action research method. The research members included a university music professor, a teaching guide, a preschool director, and a preschool teacher, leading a class of 5-6-year-old children to participate in the study. Five teaching cycles were performed on the theme of birds. Throughout the whole process, which lasted three months, the research members repeatedly maintained conversation, reflection, and revision. A triangulation method was used to collect data - including archives, interviews, seminars, observations, journals, and learning evaluations - to improve the validity and reliability of the research. It was found that a successful music learning center requires comprehensive planning and execution. It is also important to develop good listening, singing, respect, and homing habits at the beginning of running the music learning center. By providing diverse musical instruments, learning materials, and activities in a timely manner according to the teaching goals, children’s desire to learn was highly stimulated. Besides, peer interactions improved their ensemble and problem-solving abilities. The collaborative action research enhanced the preschool teacher’s confidence and promoted the professional growth of the research members.

Keywords: collaborative action research, case study, music learning center, music development

Procedia PDF Downloads 372
31259 Measuring Greenhouse Gas Exchange from Paddy Field Using Eddy Covariance Method in Mekong Delta, Vietnam

Authors: Vu H. N. Khue, Marian Pavelka, Georg Jocher, Jiří Dušek, Le T. Son, Bui T. An, Ho Q. Bang, Pham Q. Huong

Abstract:

Agriculture is an important economic sector of Vietnam, and its most widespread activity is wet rice cultivation. These activities are also known as the main contributor to national greenhouse gas emissions. To understand more about greenhouse gas exchange in these activities and to investigate the factors influencing carbon cycling and sequestration in this type of ecosystem, the first eddy covariance station was installed in a paddy field in Long An province, Mekong Delta, in 2019. The station is equipped with state-of-the-art instruments for CO₂ and CH₄ gas exchange and micrometeorological measurements. In this study, data from the station were processed following the ICOS (Integrated Carbon Observation System) recommendations for CO₂, while CH₄ was processed manually and gap-filled using a random forest model from methane-gapfill-ml, a machine learning package, as there is no standard method for CH₄ flux gap-filling yet. Finally, the carbon-equivalent (Cₑ) balance based on the CO₂ and CH₄ fluxes was estimated. The results show that in 2020, even though a new water management practice - alternate wetting and drying - was applied to reduce methane emissions, the paddy field released 928 g Cₑ.m⁻².yr⁻¹; in 2021, this was reduced to 707 g Cₑ.m⁻².yr⁻¹. At the provincial level, rice cultivation in Long An, with a total area of 498,293 ha, released 4.6 million tons of Cₑ in 2020 and 3.5 million tons of Cₑ in 2021.
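The carbon-equivalent aggregation can be sketched as follows; the flux values, and the choice of a 100-year GWP of 27 for non-fossil CH₄, are assumptions for the sketch rather than the study's exact inputs or conversion convention:

```python
GWP_CH4 = 27.0           # assumed 100-yr global warming potential of CH4
C_PER_CO2 = 12.0 / 44.0  # g of carbon per g of CO2

f_co2 = 2500.0  # hypothetical net CO2 flux, g CO2 m-2 yr-1
f_ch4 = 40.0    # hypothetical net CH4 flux, g CH4 m-2 yr-1

# Convert CH4 to CO2-equivalents, sum, then express as carbon
co2_equivalent = f_co2 + GWP_CH4 * f_ch4  # g CO2-eq m-2 yr-1
ce = C_PER_CO2 * co2_equivalent           # g Ce m-2 yr-1
print(f"Ce balance = {ce:.0f} g Ce m-2 yr-1")
```

Scaling such an areal Cₑ balance by the cultivated area (498,293 ha in Long An) is what yields the provincial totals reported above.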

Keywords: eddy covariance, greenhouse gas, methane, rice cultivation, Mekong Delta

Procedia PDF Downloads 142
31258 Design of Identification Based Adaptive Control for Fermentation Process in Bioreactor

Authors: J. Ritonja

Abstract:

Biochemical technology has been developing extremely fast since the middle of the last century, driven by the demand for large-scale production of high-quality biologically manufactured products such as pharmaceuticals, foods, and beverages. The impact of the biochemical industry on the world economy is enormous, and its importance has spurred intensive development in the scientific disciplines relevant to biochemical technology. In addition to developments in biology and chemistry, which make it possible to understand complex biochemical processes, development in control theory and its applications is also very important. In this paper, control of a biochemical reactor for milk fermentation was studied. During the fermentation process, the biophysical quantities must be precisely controlled to obtain a high-quality product; to control these quantities, the bioreactor’s stirring drive and/or heating system can be used. Available commercial biochemical reactors are equipped with open-loop or conventional linear closed-loop control systems. Due to the substantial parameter variations and partial nonlinearity of the biochemical process, the results obtained with these control systems are not satisfactory. To improve the fermentation process, a self-tuning adaptive control system is proposed. Self-tuning adaptive control is suggested because the parameter variations of the studied biochemical process are in most cases very slow. To determine the linearized mathematical model of the fermentation process, the recursive least squares identification method was used. Based on the obtained mathematical model, a linear quadratic regulator was tuned. The parameter identification and controller synthesis are executed on-line and adapt the controller’s parameters to the dynamics of the fermentation process during operation.
This combination represents an original solution for the control of the milk fermentation process. The purpose of the paper is to contribute to the progress of control systems for biochemical reactors. The proposed adaptive control system was tested thoroughly; the results show that it ensures much better tracking of the reference signal than a conventional linear control system with fixed control parameters.
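The identification step can be sketched with a textbook recursive least squares update on a first-order ARX model; the model order, true parameters, noise level, and forgetting factor below are illustrative assumptions, not the paper's actual design:

```python
import numpy as np

# Simulate a stand-in plant y[k+1] = a*y[k] + b*u[k] + noise
a_true, b_true = 0.95, 0.4
rng = np.random.default_rng(1)
u = rng.standard_normal(500)  # excitation input
y = np.zeros(501)
for k in range(500):
    y[k + 1] = a_true * y[k] + b_true * u[k] + 0.01 * rng.standard_normal()

# Recursive least squares with a forgetting factor, so the estimate
# can track the slow parameter drift of the fermentation process
theta = np.zeros(2)    # parameter estimate [a, b]
P = 1000.0 * np.eye(2)  # estimate covariance, large = uninformed start
lam = 0.99              # forgetting factor
for k in range(500):
    phi = np.array([y[k], u[k]])                 # regressor
    gain = P @ phi / (lam + phi @ P @ phi)       # Kalman-style gain
    theta = theta + gain * (y[k + 1] - phi @ theta)  # innovation update
    P = (P - np.outer(gain, phi @ P)) / lam

print(theta)  # should approach [a_true, b_true]
```

In the paper's scheme, the model identified this way is then used at each step to retune the linear quadratic regulator on-line.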

Keywords: adaptive control, biochemical reactor, linear quadratic regulator, recursive least square identification

Procedia PDF Downloads 124
31257 The Romero-System Clarinet: A Milestone in the 19th Century Clarinet Manufacture

Authors: Pedro Rubio

Abstract:

Antonio Romero y Andía was one of the most active and interesting figures in 19th-century Spanish music. Not only was he an exceptional clarinetist; he was also a publisher, a brilliant oboist, and a music critic, and he revitalized Madrid’s musical scene by promoting orchestras and a national opera. In 1849, Romero was appointed Professor of Clarinet at the Conservatory of Madrid. Shortly afterwards, Romero introduced to Spain the Boehm-system clarinet, which had recently appeared in France. However, when initial interest in that system waned, he conceived his own system in 1853. The clarinet was manufactured in Paris by Lefêvre, who registered its first patent in 1862. A second version was patented in 1867, and a year earlier, in 1866, the Romero clarinet had been adopted as the official instrument for teaching the clarinet at the Conservatory of Madrid. The Romero-system clarinet mechanism incorporated numerous additional devices and several extra keys; their skillful combination in a single instrument represents not only one of the pinnacles of 19th-century musical instrument manufacture but also an authentic synthesis of knowledge and practice in an era in which woodwind instruments took the shape we know today. Through the description and analysis of data from this historical period, this lecture will show a crucial time in the history of all woodwind instruments, a period of technological effervescence in which the Romero-system clarinet emerged. The different stages of the clarinet’s conception will be described, as well as its manufacturing and marketing process. Romero played his clarinet system for over twenty-five years. The research has identified the repertoire associated with this instrument, and its conclusions will be presented at the Congress.

Keywords: Antonio Romero, clarinet, keywork, 19th century

Procedia PDF Downloads 126