1045 The Femoral Eversion Endarterectomy Technique with Transection: Safety and Efficacy
Authors: Hansraj Riteesh Bookun, Emily Maree Stevens, Jarryd Leigh Solomon, Anthony Chan
Abstract:
Objective: This was a retrospective cross-sectional study evaluating the safety and efficacy of femoral endarterectomy using the eversion technique with transection, as opposed to the conventional endarterectomy technique with either vein or synthetic patch arterioplasty. Methods: Between 2010 and mid-2017, 19 patients with a mean age of 75.4 years underwent eversion femoral endarterectomy with transection by a single surgeon. There were 13 males (68.4%), and the comorbid burden was as follows: ischaemic heart disease (53.3%), diabetes (43.8%), stage 4 kidney impairment (13.3%) and current or ex-smoking (73.3%). The indications were claudication (45.5%), rest pain (18.2%) and tissue loss (36.3%). Results: The technical success rate was 100%. One patient required a blood transfusion following intraoperative blood loss. Two patients required blood transfusions for low postoperative haemoglobin concentrations, one of them in the context of myelodysplastic syndrome. There were no unexpected returns to theatre. The mean length of stay was 11.5 days, with two patients having inpatient stays of 36 and 50 days respectively due to the need for rehabilitation. There was one death unrelated to the operation. Conclusion: The eversion technique with transection is safe and effective, with low complication rates and an expected length of stay. It has the advantage of not requiring a synthetic patch, and it involves minimal extraneous dissection as there is no need to harvest vein for a patch. Additionally, future endovascular interventions can be performed by puncturing the native vessel, and the femoral bifurcation anatomy is unchanged after this technique. We posit that this is a useful adjunct to the surgeon's panoply of vascular surgical techniques.
Keywords: endarterectomy, eversion, femoral, vascular
Procedia PDF Downloads 199
1044 Exploring Polar Syntactic Effects of Verbal Extensions in Basà Language
Authors: Imoh Philip
Abstract:
This work investigates four verbal extensions, two in each set, with opposite effects on the valency of verbs in the Basà language. Basà is an indigenous language spoken in Kogi, Nasarawa, Benue and Niger states and all the Federal Capital Territory (FCT) councils. Crozier & Blench (1992) and Blench & Williamson (1988) classify Basà as belonging to Proto-Kru, under the sub-phylum Western Kru. The study examines the effects of these morphosyntactic operations in Basà, with special focus on reflexives and reciprocals versus causativization and applicativization. The two sets are characterized by polar syntactic processes that either decrease or increase the verb's valency by one argument relative to the basic number of arguments, yet by similar morphological processes. In addition to the author's intuitions as a native speaker of Basà, the data elicited for this work include discourse observation and staged and elicited spoken data from fluent native speakers. The paper argues that affixes attached to the verb root either derive an intransitive verb from a transitive one (or a transitive verb from a bi/ditransitive verb), or increase the verb's valency, deriving a bitransitive verb from a transitive verb or a transitive verb from an intransitive one. Where the operation increases the verb's valency, it triggers a transformation of arguments in the derived structure: the applied arguments displace the inherent ones. This investigation can stimulate further study of other syntactic or morphosyntactic transformations in Basà and can also be replicated in other African and non-African languages.
Keywords: verbal extension, valency, reflexive, reciprocal, causativization, applicativization, Basà
Procedia PDF Downloads 201
1043 Scheduling Method for Electric Heater in HEMS considering User’s Comfort
Authors: Yong-Sung Kim, Je-Seok Shin, Ho-Jun Jo, Jin-O Kim
Abstract:
Home Energy Management Systems (HEMS), which enable residential consumers to contribute to demand response, have been attracting attention in recent years. An aim of HEMS is to minimize the consumer's electricity cost by controlling the use of appliances according to the electricity price. The use of appliances in HEMS may be affected by conditions such as the external temperature and the electricity price. Therefore, the user's usage pattern for each appliance should be modeled according to the external conditions; the resultant usage pattern relates to the user's comfort in using that appliance. This paper proposes a methodology to model the usage pattern from historical data with a copula function. Through the copula function, a usage range for each appliance can be obtained that satisfies the user's comfort under the external conditions expected for the next day. Within the usage range, an optimal schedule for the appliances is computed so as to minimize electricity cost while considering user comfort. Among home appliances, the electric heater (EH) is representative of those affected by the external temperature. In this paper, an optimal scheduling algorithm for an EH is developed based on the branch and bound method. As a result, scenarios for EH usage are obtained according to the user's comfort levels, from which the residential consumer selects the best scenario. The case study shows the effect of the proposed algorithm compared with traditional EH operation, and it also shows the impact of the comfort level on the scheduling result.
Keywords: load scheduling, usage pattern, user's comfort, copula function, branch and bound, electric heater
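As a simplified illustration of the cost-minimizing step (the paper itself uses branch and bound over copula-derived usage ranges; the prices, period count, and minimum-hours figure below are hypothetical, not from the study), the cheapest-hours selection can be sketched as:

```python
def schedule_heater(prices, min_on_hours):
    """Pick the cheapest hours to run the electric heater while meeting
    a comfort-derived minimum number of heating hours.

    prices       : electricity price for each period
    min_on_hours : lower bound of the comfort-derived usage range
    """
    # periods sorted by price, cheapest first
    cheapest = sorted(range(len(prices)), key=lambda h: prices[h])
    on_hours = sorted(cheapest[:min_on_hours])
    total_cost = sum(prices[h] for h in on_hours)
    return on_hours, total_cost

# hypothetical day-ahead prices for 8 periods (not from the paper)
prices = [30, 25, 20, 22, 35, 40, 38, 28]
on_hours, cost = schedule_heater(prices, min_on_hours=3)
```

The full problem adds thermal coupling between periods, which is why the paper resorts to branch and bound rather than a greedy selection.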
Procedia PDF Downloads 585
1042 Solving the Economic Load Dispatch Problem Using Differential Evolution
Authors: Alaa Sheta
Abstract:
Economic Load Dispatch (ELD) is one of the vital optimization problems in power system planning. Solving the ELD problem means finding the mixture of power unit outputs, across all members of the power system network, that minimizes the total fuel cost while keeping operational constraints satisfied across all dispatch phases. Many optimization techniques have been proposed to solve this problem. A famous one is Quadratic Programming (QP). QP is simple and fast, but like other gradient-based methods it can become trapped at local minima and cannot handle complex nonlinear functions. A number of metaheuristic algorithms have also been applied, such as Genetic Algorithms (GAs) and Particle Swarm Optimization (PSO). In this paper, another metaheuristic search algorithm, Differential Evolution (DE), is used to solve the ELD problem in power system planning. The practicality of the proposed DE-based algorithm is verified on three- and six-generator test cases. The results are compared with existing results based on QP, GAs and PSO, and show that differential evolution is superior in obtaining a combination of power loads that fulfills the problem constraints and minimizes the total fuel cost. DE was found to converge quickly to the optimal power generation loads and to handle the nonlinearity of the ELD problem. The proposed DE solution minimizes the cost of generated power, minimizes the total transmission power loss and maximizes the reliability of the power provided to customers.
Keywords: economic load dispatch, power systems, optimization, differential evolution
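As an illustration of the DE approach described above, the sketch below minimizes a quadratic fuel-cost model under a power-balance penalty. The generator coefficients, demand, and DE settings are hypothetical, not the paper's three- or six-unit test data:

```python
import random

# Hypothetical fuel-cost data for a 3-generator system (illustrative only):
# cost_i(P) = a + b*P + c*P^2 ($/h), with P in MW.
UNITS = [  # (a, b, c, Pmin, Pmax)
    (500.0, 5.3, 0.004, 100.0, 450.0),
    (400.0, 5.5, 0.006,  50.0, 350.0),
    (200.0, 5.8, 0.009,  50.0, 225.0),
]
DEMAND = 800.0  # MW; transmission losses neglected in this sketch

def total_cost(p):
    """Fuel cost plus a quadratic penalty enforcing the power balance."""
    fuel = sum(a + b * x + c * x * x for (a, b, c, _, _), x in zip(UNITS, p))
    return fuel + 1e4 * (sum(p) - DEMAND) ** 2

def differential_evolution(pop_size=30, gens=300, F=0.7, CR=0.9, seed=1):
    rng = random.Random(seed)
    lo = [u[3] for u in UNITS]
    hi = [u[4] for u in UNITS]
    pop = [[rng.uniform(l, h) for l, h in zip(lo, hi)] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            # DE/rand/1 mutation with binomial crossover, clipped to bounds
            r1, r2, r3 = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            trial = [
                min(hi[d], max(lo[d], r1[d] + F * (r2[d] - r3[d])))
                if rng.random() < CR else pop[i][d]
                for d in range(len(UNITS))
            ]
            if total_cost(trial) < total_cost(pop[i]):
                pop[i] = trial
    return min(pop, key=total_cost)

best = differential_evolution()
```

The penalty term is one common way to handle the equality constraint in population-based methods; the paper's own constraint handling may differ.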
Procedia PDF Downloads 282
1041 Performance Evaluation and Comparison between the Empirical Mode Decomposition, Wavelet Analysis, and Singular Spectrum Analysis Applied to the Time Series Analysis in Atmospheric Science
Authors: Olivier Delage, Hassan Bencherif, Alain Bourdier
Abstract:
Signal decomposition approaches represent an important step in time series analysis, providing useful knowledge and insight into the data and the underlying dynamics while also facilitating tasks such as noise removal and feature extraction. As most observational time series are nonlinear and nonstationary, resulting from the interaction of several physical processes at different time scales, experimental time series exhibit fluctuations at all time scales and require specific signal decomposition techniques. The most commonly used techniques are data-driven, yielding well-behaved signal components without prior assumptions about the input data. Among the most popular time series decomposition techniques cited in the literature are the empirical mode decomposition and its variants, the empirical wavelet transform and singular spectrum analysis. With the increasing popularity and utility of these methods in wide-ranging applications, it is imperative to gain a good understanding of and insight into the operation of these algorithms. In this work, we describe the techniques mentioned above, as well as their ability to denoise signals, capture trends, identify components corresponding to the physical processes involved in the evolution of the observed system and deduce the dimensionality of the underlying dynamics. Results obtained with all of these methods on experimental total ozone column and rainfall time series are discussed and compared.
Keywords: denoising, empirical mode decomposition, singular spectrum analysis, time series, underlying dynamics, wavelet analysis
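A minimal sketch of one of the techniques discussed, singular spectrum analysis, can illustrate the denoising step. The embedding, SVD, and diagonal-averaging below follow the standard basic SSA recipe; the synthetic series and parameters are illustrative, not the ozone or rainfall data:

```python
import numpy as np

def ssa_reconstruct(x, window, rank):
    """Basic SSA: embed the series in a trajectory matrix, truncate its
    SVD, and map the low-rank matrix back to a series by anti-diagonal
    averaging."""
    n = len(x)
    k = n - window + 1
    # trajectory (Hankel) matrix, shape (window, k)
    X = np.column_stack([x[i:i + window] for i in range(k)])
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    Xr = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # leading components only
    rec = np.zeros(n)
    counts = np.zeros(n)
    for i in range(window):
        for j in range(k):
            rec[i + j] += Xr[i, j]
            counts[i + j] += 1
    return rec / counts

# denoise a synthetic oscillation (a sine needs two SSA components)
rng = np.random.default_rng(0)
t = np.arange(400)
clean = np.sin(2 * np.pi * t / 50)
noisy = clean + 0.5 * rng.standard_normal(400)
denoised = ssa_reconstruct(noisy, window=100, rank=2)
```

In practice the window length and the number of retained components control the trade-off between noise rejection and signal distortion, which is part of what the paper's comparison examines.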
Procedia PDF Downloads 117
1040 Study of Pipes Scaling of Purified Wastewater Intended for the Irrigation of Agadir Golf Grass
Authors: A. Driouiche, S. Mohareb, A. Hadfi
Abstract:
In Morocco’s Agadir region, the reuse of treated wastewater for the irrigation of green spaces has faced the problem of scaling of the pipes carrying these waters. This paper studies the scaling caused by the treated wastewater from the Mzar sewage treatment plant. These waters are used to irrigate the golf turf of the Ocean Golf Resort. Ocean Golf, located about 10 km from the center of the city of Agadir, is one of the most important recreation centers in Morocco; the course is a Belt Collins design with 27 holes and is quite open, with deep challenging bunkers. The formation of solid deposits in the irrigation systems has shortened their lifetime and, consequently, caused head losses and reduced performance. Thus, the sprinklers used for golf turf irrigation become clogged within the first weeks of operation. To study this phenomenon, the wastewater used for irrigation was sampled and analyzed at various points, and samples of the scale formed in the water circuits were characterized. The scale was characterized by X-ray fluorescence spectrometry, X-ray diffraction (XRD), thermogravimetric analysis (TGA), differential thermal analysis (DTA), and scanning electron microscopy (SEM). The physicochemical analysis shows that the waters are rich in bicarbonate (653 mg/L), chloride (478 mg/L), nitrate (412 mg/L), sodium (425 mg/L) and calcium (199 mg/L), and that their pH is slightly alkaline. The analysis of the scale reveals that it is rich in calcium and phosphorus. It is formed of calcium carbonate (CaCO₃), silica (SiO₂), calcium silicate (Ca₂SiO₄), hydroxylapatite (Ca₁₀P₆O₂₆), calcium carbonate phosphate (Ca₁₀(PO₄)₆CO₃) and calcium magnesium silicate (Ca₅MgSi₃O₁₂).
Keywords: Agadir, irrigation, scaling water, wastewater
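Converting the reported concentrations from mg/L to mmol/L makes the carbonate-rich character of the water, which favours CaCO₃ scaling, explicit. The molar masses below are standard reference values, not from the paper:

```python
# reported concentrations (mg/L) from the analysis above,
# paired with approximate molar masses (g/mol)
water = {  # ion: (mg_per_L, g_per_mol)
    "HCO3-": (653, 61.02),
    "Cl-":   (478, 35.45),
    "NO3-":  (412, 62.00),
    "Na+":   (425, 22.99),
    "Ca2+":  (199, 40.08),
}
# mg/L divided by g/mol gives mmol/L directly
mmol = {ion: mg / M for ion, (mg, M) in water.items()}
```

On this basis bicarbonate (about 10.7 mmol/L) exceeds calcium (about 5.0 mmol/L) roughly two to one, consistent with the calcium carbonate found in the deposits.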
Procedia PDF Downloads 120
1039 Design and Development of Power Sources for Plasma Actuators to Control Flow Separation
Authors: Himanshu J. Bahirat, Apoorva S. Janawlekar
Abstract:
Plasma actuators are attractive for aerodynamic flow separation control due to their lack of mechanical parts, light weight, and high response frequency, with numerous applications in hypersonic and supersonic aircraft. These actuators work by forming a low-temperature plasma between a pair of parallel electrodes through the application of a high-voltage AC signal across the electrodes, which ionizes the surrounding air molecules and accelerates them through the electric field. High-frequency operation is required in dielectric barrier discharges to ensure plasma stability. To enable flow separation control in a hypersonic flow, this paper presents the design and construction of a power supply for generating dielectric barrier discharges, together with a simplified circuit topology that emulates the dielectric barrier discharge and allows its frequency response to be studied. The power supply can generate high-voltage pulses up to 20 kV at repetition frequencies of 20-50 kHz with an input power of 500 W. It has been designed to be short-circuit proof and to endure variable plasma load conditions. Its general outline is to charge a capacitor through a half-bridge converter and then discharge it through a step-up transformer at high frequency in order to generate the high-voltage pulses. After simulating the circuit, the PCB design and, eventually, lab tests are carried out to study its effectiveness in controlling flow separation.
Keywords: aircraft propulsion, dielectric barrier discharge, flow separation control, power source
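A back-of-the-envelope sizing of the energy-storage capacitor follows from the figures above. The 400 V capacitor charge voltage is an assumption (it is not stated in the abstract), and conversion losses are ignored:

```python
# Figures from the abstract
P_in = 500.0      # W, supply input power
f_rep = 20e3      # Hz, lower end of the 20-50 kHz repetition range
V_pulse = 20e3    # V, target output pulse amplitude

# Assumed value (not in the source): capacitor charge voltage
V_dc = 400.0      # V

E_pulse = P_in / f_rep        # J, energy available per pulse at full power
C = 2 * E_pulse / V_dc**2     # F, capacitance storing E = C*V^2/2
n_ratio = V_pulse / V_dc      # step-up transformer turns ratio
```

Under these assumptions each pulse carries 25 mJ, requiring roughly 312 nF charged to 400 V and a 1:50 step-up transformer; at 50 kHz the per-pulse energy, and hence the required capacitance, drops proportionally.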
Procedia PDF Downloads 1261038 Cross Professional Team-Assisted Teaching Effectiveness
Authors: Shan-Yu Hsu, Hsin-Shu Huang
Abstract:
The main purpose of this teaching research is to design an interdisciplinary team-assisted teaching method for trainees and interns and to review its effectiveness on trainees' understanding of peritoneal dialysis. The participants were fifth- and sixth-year trainees at a medical center's medical school. The teaching methods included media teaching, demonstration of technical operation, face-to-face communication with patients, case discussions, and field visits to the peritoneal dialysis room. Learning effectiveness was evaluated before and after the intervention, together with verbal feedback. Statistical analysis was performed using the SPSS paired-sample t-test to determine whether professional cognition of peritoneal dialysis differed before and after the teaching intervention. Descriptive statistics show a pre-test average of 74.44 (standard deviation 9.34) and a post-test average of 95.56 (standard deviation 5.06). The paired-sample t-test yielded p = 0.006, indicating a significant difference in peritoneal dialysis professional cognition before and after the intervention. The interdisciplinary team-assisted teaching method thus helps trainees and interns improve their professional awareness of peritoneal dialysis, and the trainee physicians gave positive feedback on the method. This research suggests that cross-professional team-assisted teaching can support the clinical competence development of trainees and interns.
Keywords: monitor quality, patient safety, health promotion objective, cross-professional team-assisted teaching methods
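The paired-sample t-test used here can be sketched in a few lines. The pre/post scores below are hypothetical stand-ins, not the study's data (which reported means of 74.44 and 95.56 with p = 0.006):

```python
import math

def paired_t(pre, post):
    """Paired-sample t statistic and degrees of freedom for the
    post-minus-pre differences."""
    d = [b - a for a, b in zip(pre, post)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance
    t = mean / math.sqrt(var / n)
    return t, n - 1

# hypothetical pre/post quiz scores for nine trainees (not the study data)
pre  = [70, 80, 75, 65, 85, 70, 75, 80, 70]
post = [95, 100, 95, 90, 100, 95, 90, 100, 95]
t_stat, df = paired_t(pre, post)
```

Because the same trainees sit both tests, the paired design tests the per-person score differences rather than comparing two independent groups, which is why it is the appropriate choice here.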
Procedia PDF Downloads 1431037 Recommendations for Environmental Impact Assessment of Geothermal Projects on Mature Oil Fields
Authors: Daria Karasalihovic Sedlar, Lucija Jukic, Ivan Smajla, Marija Macenic
Abstract:
This paper analyses possible geothermal energy production from a mature oil reservoir, based on exploiting the thermal energy of the underlying aquifer to heat public buildings. The research was conducted as a case study of the energy demand of public buildings in the City of Ivanic-Grad and the Ivanic oil field situated in the same area. Since Ivanic-Grad is one of the few cities in the EU where hydrocarbon exploitation has taken place for decades almost entirely in an urban area, decommissioning of oil wells is inevitable; the research goal was therefore to investigate how to extend the lifetime of the reservoir by exploiting the geothermal brine beneath the oil reservoir in an environmentally friendly manner. Such a project is extremely complex in all segments, from documentation preparation and implementation of technological solutions to providing ecological measures for environmentally acceptable geothermal energy production and utilization. New mining activities needed to develop the geothermal project at the Hydrocarbon Exploitation Field Ivanic will be carried out to prepare wells for increased geothermal brine production. These operations involve the conversion of existing wells (well completion to convert observation wells to production wells), along with workover activities, installation of new heat exchangers, and pipelines. Since the wells are in a densely populated urban area of the City of Ivanic-Grad, the inhabitants will be exposed to various environmental impacts during the preparation phase of the project. To perform the workovers, it will be necessary to secure access to the wellheads of existing wells.
This paper gives guidelines for describing the potential impacts on environmental components that could occur during the preparation of geothermal production on an existing mature oil field, recommends protection measures to mitigate these impacts, and gives recommendations for environmental monitoring.
Keywords: geothermal energy production, mature oil field, environmental impact assessment, underlying aquifer thermal energy
Procedia PDF Downloads 1491036 Optimal Image Representation for Linear Canonical Transform Multiplexing
Authors: Navdeep Goel, Salvador Gabarda
Abstract:
Digital images are widely used in computer applications. Storing or transmitting uncompressed images requires considerable storage capacity and transmission bandwidth. Image compression is a means of performing transmission or storage of visual data in the most economical way. This paper explains how images can be encoded to be transmitted in a multiplexing time-frequency domain channel. Multiplexing involves packing together signals whose representations are compact in the working domain. In order to optimize transmission resources, each 4×4 pixel block of the image is transformed, by a suitable polynomial approximation, into a minimal number of coefficients. Using fewer than 4×4 coefficients per block saves a significant amount of transmitted information, but some information is lost. Different approximations for the image transformation have been evaluated: polynomial representation (Vandermonde matrix), least squares with gradient descent, 1-D Chebyshev polynomials, 2-D Chebyshev polynomials and singular value decomposition (SVD). Results have been compared in terms of nominal compression rate (NCR), compression ratio (CR) and peak signal-to-noise ratio (PSNR), in order to minimize the error function defined as the difference between the original pixel gray levels and the approximated polynomial output. The polynomial coefficients are later encoded and used to generate chirps, at a target rate of about two chirps per 4×4 pixel block, and then submitted to a transmission multiplexing operation in the time-frequency domain.
Keywords: chirp signals, image multiplexing, image transformation, linear canonical transform, polynomial approximation
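As a sketch of one of the evaluated approximations, the SVD variant can be illustrated on a single 4×4 block. The block values below are hypothetical; a smooth gradient block is used because it is low-rank and therefore compresses well:

```python
import numpy as np

def block_svd_approx(block, rank):
    """Approximate a pixel block by truncating its SVD to `rank` terms."""
    U, s, Vt = np.linalg.svd(block, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

def psnr(orig, approx, peak=255.0):
    """Peak signal-to-noise ratio in dB between a block and its approximation."""
    mse = np.mean((orig - approx) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(peak**2 / mse)

# a hypothetical smooth 4x4 gray-level block: value = 10 + 10*row + 10*col,
# which has exact rank 2, so two SVD terms reconstruct it perfectly
block = np.array([[10., 20., 30., 40.],
                  [20., 30., 40., 50.],
                  [30., 40., 50., 60.],
                  [40., 50., 60., 70.]])
approx = block_svd_approx(block, rank=2)
```

Note that for 4×4 blocks only a rank-1 truncation (4 + 4 + 1 = 9 stored values versus 16) actually reduces the coefficient count; the rank-2 case above merely shows that a smooth block is reproduced exactly by its leading components, which is what makes truncation viable.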
Procedia PDF Downloads 4121035 Toxicological Validation during the Development of New Catalytic Systems Using Air/Liquid Interface Cell Exposure
Authors: M. Al Zallouha, Y. Landkocz, J. Brunet, R. Cousin, J. M. Halket, E. Genty, P. J. Martin, A. Verdin, D. Courcot, S. Siffert, P. Shirali, S. Billet
Abstract:
Toluene is one of the most used Volatile Organic Compounds (VOCs) in industry. Amongst VOCs, Benzene, Toluene, Ethylbenzene and Xylenes (BTEX) emitted into the atmosphere have a major and direct impact on human health, so it is necessary to minimize emissions directly at the source. Catalytic oxidation is an industrial technique that provides efficient remediation in the treatment of these organic compounds. However, during operation, the catalysts can release compounds, called byproducts, that are more toxic than the original VOCs. The catalytic oxidation of a gas stream containing 1000 ppm of toluene on Pd/α-Al2O3 can release a few ppm of benzene, depending on the operating temperature of the catalyst. The development of new catalysts must therefore include chemical and toxicological validation phases. In this project, A549 human lung cells were exposed at the air/liquid interface (Vitrocell®) to gas mixtures derived from the oxidation of toluene with a Pd/α-Al2O3 catalyst. Both exposure concentrations (i.e. 10 and 100% of the catalytic emission) resulted in increased gene expression of Xenobiotic-Metabolising Enzymes (XMEs) (CYP2E1, CYP2S1, CYP1A1, CYP1B1, EPHX1, and NQO1). Some of these XMEs are known to be induced by polycyclic organic compounds that are not conventionally screened for during the development of catalysts for VOC degradation. The increase in gene expression suggests the presence of undetected compounds whose toxicity must be assessed before a new catalyst is adopted. This reinforces the relevance of toxicological validation of such systems before scale-up and marketing.
Keywords: BTEX toxicity, air/liquid interface cell exposure, Vitrocell®, catalytic oxidation
Procedia PDF Downloads 4111034 Predictive Analytics in Oil and Gas Industry
Authors: Suchitra Chnadrashekhar
Abstract:
Once viewed as a support function, information technology has become a critical utility for organizations to manage their daily operations. Organizations now process amounts of data that were unimaginable a few decades ago. This has opened the opportunity for the IT sector to help industries across domains handle data in the most intelligent manner. IT has given the oil and gas industry the leverage to store, manage and process data in the most efficient way possible, thus deriving economic value in day-to-day operations. Proper synchronization between operational data systems and information technology systems is the need of the hour. Predictive analytics supports oil and gas companies by addressing the challenges of critical equipment performance, life cycle, integrity and security, and by increasing equipment utilization. Predictive analytics goes beyond early warning by providing insights into the roots of problems. To reach their full potential, oil and gas companies need to take a holistic, systems approach towards asset optimization and thus have functional information at all levels of the organization in order to make the right decisions. This paper discusses how the use of predictive analytics in the oil and gas industry is redefining the dynamics of the sector. The paper is supported by real-time data and the evaluation of data for a given oil production asset in an application tool, SAS. SAS was chosen for this analysis because it provides an analytics-based framework to improve the uptime, performance and availability of crucial assets while reducing the amount of unscheduled maintenance, thus minimizing maintenance-related costs and operational disruptions.
With state-of-the-art analytics and reporting, maintenance problems can be predicted before they happen, and root causes can be determined in order to update processes for future prevention.
Keywords: hydrocarbon, information technology, SAS, predictive analytics
Procedia PDF Downloads 3601033 Triple Intercell Bar for Electrometallurgical Processes: A Design to Increase PV Energy Utilization
Authors: Eduardo P. Wiechmann, Jorge A. Henríquez, Pablo E. Aqueveque, Luis G. Muñoz
Abstract:
PV energy prices are declining rapidly. To take advantage of those prices and lower the carbon footprint, operational practices must be modified. This challenges the electrowinning practice of operating at constant current throughout the day. This work presents a technology that provides modulation capacity to the electrode current distribution system, raising the DC current during the daytime and lowering it at night. The system is a triple intercell bar that operates in current-source mode. The design is a capping-board-free, dogbone-type bar that ensures operation free of short circuits, hot-swappable repairs and improved current balance. This current-source system eliminates the resetting currents circulating in equipotential bars. Twin auxiliary connectors are added to the main connectors, providing secure current paths that bypass faulty or impaired contacts. All conductive elements of the system are positioned over a baseboard, offering a large heat-sink area to the facility's ventilation, so the system operates at a lower temperature than a conventional busbar. Of these attributes, the cathode current balance stands out as paramount for day/night modulation and the use of photovoltaic energy. A design based on a 3D finite element model predicting electric and thermal performance under various industrial scenarios is presented, together with preliminary results obtained with industrial prototypes in an electrowinning facility.
Keywords: electrowinning, intercell bars, PV energy, current modulation
Procedia PDF Downloads 1541032 DWDM Network Implementation in the Honduran Telecommunications Company "Hondutel"
Authors: Tannia Vindel, Carlos Mejia, Damaris Araujo, Carlos Velasquez, Darlin Trejo
Abstract:
Dense Wavelength Division Multiplexing (DWDM) is in constant growth around the world, driven by consumer demand. From this demand arises the need for a system that enables the communications of an entire nation to expand and that improves the computing trends of its society according to its customs and geographical location. The Honduran Company of Telecommunications (HONDUTEL) provides internet services and data transport with PDH and SDH technology, which in the Republic of Honduras, C.A., represents the viable option for the consumer in terms of purchase price and ease of acquisition, but lacks efficiency in terms of technological advancement; this is an obstacle that limits long-term socio-economic development in comparison with other countries in the region and prevents competition among the telecommunications companies engaged in this sector. For that reason, we propose to adopt a technology already implemented in Europe and apply it in our country to provide broadband data transfer: DWDM. In this way, we will have a stable, high-quality service that allows us to compete in a globalized world. Once implemented, DWDM builds upon existing resources, such as the equipment already in use, and opens a new stage, providing a business image for the Republic of Honduras, C.A., as a nation that can guarantee data transport and broadband internet in a meaningful way. This benefits, in the first instance, existing customers, as well as all the public and private institutions that require such services.
Keywords: demultiplexers, light detectors, multiplexers, optical amplifiers, optical fibers, PDH, SDH
Procedia PDF Downloads 2631031 Application of Groundwater Level Data Mining in Aquifer Identification
Authors: Liang Cheng Chang, Wei Ju Huang, You Cheng Chen
Abstract:
Investigation and research are key to the conjunctive use of surface water and groundwater resources. The hydrogeological structure is an important basis for groundwater analysis and simulation. Traditionally, the hydrogeological structure is determined manually from geological drill logs, the structure of wells, groundwater levels, and so on. In Taiwan, a groundwater observation network has been built, and a large amount of groundwater-level observation data is available. The groundwater level is the state variable of the groundwater system, reflecting the system response that combines the hydrogeological structure with groundwater injection and extraction. This study applies analytical tools to the observation database to develop a methodology for identifying confined and unconfined aquifers. These tools include frequency analysis, cross-correlation analysis between rainfall and groundwater level, groundwater regression curve analysis, and a decision tree. The developed methodology is then applied to groundwater layer identification in two groundwater systems: the Zhuoshui River alluvial fan and the Pingtung Plain. The frequency analysis applies the Fourier transform to time series of groundwater-level observations, analyzing the amplitude of the daily frequency component caused by artificial groundwater extraction. The cross-correlation analysis between rainfall and groundwater level is used to obtain the groundwater replenishment time between infiltration and the peak groundwater level during wet seasons. The groundwater regression curve, the average rate of groundwater regression, is used to analyze the internal flux in the groundwater system and the flux caused by artificial behaviors. The decision tree uses the information obtained from the abovementioned analytical tools and optimizes the estimation of the hydrogeological structure.
The developed method reaches a training accuracy of 92.31% and a verification accuracy of 93.75% on the Zhuoshui River alluvial fan, and a training accuracy of 95.55% and a verification accuracy of 100% on the Pingtung Plain. This accuracy indicates that the developed methodology is a valuable tool for identifying hydrogeological structures.
Keywords: aquifer identification, decision tree, groundwater, Fourier transform
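The cross-correlation step, estimating the lag between rainfall and the groundwater-level response, can be sketched on synthetic data. The rainfall series, noise level, and 12-day lag below are illustrative, not the Taiwanese observation data:

```python
import numpy as np

def recharge_lag(rain, level):
    """Estimate the rainfall-to-groundwater-level lag (in samples) as the
    shift that maximizes the cross-correlation of the two standardized
    series. A positive result means the level responds after the rain."""
    r = (rain - rain.mean()) / rain.std()
    g = (level - level.mean()) / level.std()
    xc = np.correlate(g, r, mode="full")     # lags -(n-1) .. (n-1)
    return int(np.argmax(xc)) - (len(rain) - 1)

# synthetic daily rainfall and a groundwater level that echoes it
# 12 days later with some measurement noise
rng = np.random.default_rng(3)
rain = rng.exponential(1.0, 365)
lag_true = 12
level = np.roll(rain, lag_true) + 0.1 * rng.standard_normal(365)
lag_est = recharge_lag(rain, level)
```

In the real data the response is smoothed by infiltration rather than a clean shift, so the peak of the cross-correlation is broader, but the lag of that peak is what the study reads off as the replenishment time.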
Procedia PDF Downloads 157
1030 Application of Groundwater Level Data Mining in Aquifer Identification
Authors: Zhilin Wang, Zhihao Zheng, Linxin Liu
Abstract:
This study proposes a low-code model-building approach that aims to simplify the development and deployment of artificial intelligence (AI) models. With an intuitive drag-and-drop interface for connecting components, users can easily build complex models and integrate multiple algorithms for training. After training is completed, the system automatically generates a callable model service API. This method not only lowers the technical barrier to AI development and improves development efficiency but also enhances the flexibility of algorithm integration and simplifies model deployment. The core strength of this method lies in its ease of use and efficiency: users do not need a deep programming background and can complete the design and implementation of complex models with simple drag-and-drop operations. This feature greatly expands the reach of AI technology, allowing more non-technical people to participate in the development of AI models. At the same time, the method performs well in algorithm integration, supporting many different types of algorithms working together, which further improves the performance and applicability of the models. In the experimental part, we performed several performance tests on the method. The results show that, compared with traditional model construction methods, this method uses computing resources more efficiently and greatly shortens model training time. In addition, the system-generated model service interface has been optimized for high availability and scalability and can adapt to the needs of different application scenarios.
Keywords: low-code, model building, artificial intelligence, algorithm integration, model deployment
Procedia PDF Downloads 29
1029 Anti-Corruption, an Important Challenge for the Construction Industry!
Authors: Ahmed Stifi, Sascha Gentes, Fritz Gehbauer
Abstract:
The construction industry is perhaps one of the oldest industries in the world. Ancient monuments like the Egyptian pyramids, Greek and Roman temples such as the Parthenon and Pantheon, robust bridges, old Roman theatres, citadels and many more are the best testament to that. The industry also has a symbiotic relationship with others: heavy engineering provides construction machinery, the chemical industry develops innovative construction materials, the finance sector provides funding solutions for complex construction projects, and many more. The construction industry is not only mammoth but also very complex in nature, and because of this complexity it is prone to various tribulations which may hamper its growth. Comparative studies of this industry with others show that it is associated with a state of tardiness and delay, especially with respect to managerial aspects and the triple constraint (time, cost and scope). While some institutes cite the complexity associated with it as a major reason, others, such as the lean construction community, point to the waste produced across the construction process as the prime cause. This paper introduces corruption as one of the prime factors behind such delays. To support this, many international reports and studies show that the construction industry is one of the most corrupt sectors worldwide, and that corruption can take place throughout the project cycle, comprising project selection, planning, design, funding, pre-qualification, tendering, execution, operation and maintenance, and even the reconstruction phase. It also takes many forms, such as bribery, fraud, extortion, collusion, embezzlement and conflict of interest.
As a solution to cope with corruption in the construction industry, the paper introduces integrity as a key factor and builds a new integrity framework to develop and implement an integrity management system for construction companies and construction projects. Keywords: corruption, construction industry, integrity, lean construction
Procedia PDF Downloads 377
1028 Outcome Analysis of Surgical and Nonsurgical Treatment on Indicated Operative Chronic Subdural Hematoma: Serial Case in Cipto Mangunkusumo Hospital Indonesia
Authors: Novie Nuraini, Sari Hanifa, Yetty Ramli
Abstract:
Chronic subdural hematoma (cSDH) is a common condition after head trauma. Although the thickness of the cSDH plays an important role in the decision to perform surgery, the thickness limit is not absolute. In this serial case report, we evaluate three cases of cSDH indicated for surgery because of neurological deficits and neuroimaging findings of subfalcine herniation of more than 0.5 cm and hematoma thickness of more than 1 cm. In the first case, the patient underwent hematoma evacuation; in the second and third cases, nonsurgical treatment was given because the patient and family refused the operation. Conservative treatment consisted of bed rest and mannitol. Serial radiologic evaluation was performed whenever a worsening condition was found, and radiologic examination was repeated two weeks after treatment. In this serial case report, the first and second cases had a good outcome. In the third case there was a worsening condition; this patient had comorbid type 2 diabetes mellitus, pneumonia and chronic kidney disease. Conservative treatments such as bed rest, corticosteroids, mannitol or other hyperosmolar agents have a good outcome in patients without neurological deficits, with small hematomas, and/or without comorbid disease. Hematoma evacuation is the best choice for cSDH with neurological deficits. However, there are conditions in which the surgical procedure cannot be performed. Serial radiologic examination is needed after two weeks to evaluate the treatment, or sooner if the condition worsens. Keywords: chronic subdural hematoma, traumatic brain injury, surgical treatment, nonsurgical treatment, outcome
Procedia PDF Downloads 332
1027 Excavation of Phylogenetically Diverse Bioactive Actinobacteria from Unexplored Regions of Sundarbans Mangrove Ecosystem for Mining of Economically Important Antimicrobial Compounds
Authors: Sohan Sengupta, Arnab Pramanik, Abhrajyoti Ghosh, Maitree Bhattacharyya
Abstract:
Newly emerged phytopathogens and multi-drug resistance have been threatening the world for the last few decades. Actinomycetes, the most endowed group of microorganisms, isolated from unexplored regions of the world, may be the ultimate solution to these problems. Thus, the aim of this study was to isolate bioactive actinomycete strains capable of producing antimicrobial secondary metabolites from the Sundarbans, the only mangrove tiger land in the world. Fifty-four actinomycetes were isolated and analyzed for antimicrobial activity against fifteen test organisms, including three phytopathogens. Nine morphologically distinct and biologically active isolates were subjected to a polyphasic identification study. 16S rDNA sequencing indicated that eight isolates showed maximum similarity to the genus Streptomyces, whereas one isolate showed only 93.57% similarity to Streptomyces albogriseolus NRRL B-1305T. Utilization assays of seventy-one carbon sources and twenty-three chemical sources revealed their metabolic relatedness. Among these nine isolates, three strains were found to have a notably higher degree of antimicrobial potential, effective over a broader range including phytopathogenic fungi. A PCR-based whole-genome screen for PKS and NRPS genes confirmed the occurrence of biosynthetic gene clusters for novel antibiotic production in some of the isolates. Finally, the strain SMS_SU21, which showed antimicrobial activity with a MIC value of 0.05 mg ml-1 and antioxidant activity with an IC50 value of 0.242±0.33 mg ml-1, was found to be the most potent. The true prospects of this strain were evaluated using GC-MS, and the bioactive compound responsible for the antimicrobial activity was purified and characterized. Rare bioactive actinomycetes were thus isolated from an unexplored heritage site, and the diversity of the biosynthetic gene clusters for antimicrobial compound production was evaluated. The antimicrobial compound SU21-C, active against a broad range of pathogens, was identified and purified. Keywords: actinomycetes, sundarbans, antimicrobial, pks nrps, phyto-pathogens, GC-MS
Procedia PDF Downloads 505
1026 Modeling of Virtual Power Plant
Authors: Muhammad Fanseem E. M., Rama Satya Satish Kumar, Indrajeet Bhausaheb Bhavar, Deepak M.
Abstract:
Keeping the right balance of electricity between the supply and demand sides of the grid is one of the most important objectives of electrical grid operation. Power generation and demand forecasting are the core of power management and generation scheduling. Conventional power systems were built around large, centralized generating units, and a certain level of balance was possible since generation kept up with power demand. However, integrating renewable energy sources into power networks has proven to be a difficult challenge due to their intermittent nature. The power imbalance caused by rising demands and peak loads is negatively affecting power quality and dependability. Demand side management and demand response were among the proposed solutions, keeping generation the same while altering, rescheduling, or shedding the load. However, shedding or rescheduling the load is not an efficient approach. Hence the significance of virtual power plants. The virtual power plant organically integrates distributed generation, dispatchable load, and distributed energy storage using complementary control approaches and communication technologies. This would eventually increase the utilization rate and financial advantages of distributed energy resources. Most of the writing on virtual power plant models ignores technical limitations, with modeling done from a financial or commercial viewpoint. Therefore, this paper aims to address the modeling intricacies of VPPs and their technical limitations, shedding light on a holistic understanding of this innovative power management approach. Keywords: cost optimization, distributed energy resources, dynamic modeling, model quality tests, power system modeling
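To make the aggregation idea concrete, here is a hedged sketch (not the paper's model) of a VPP dispatching its distributed resources in merit order to cover demand; the resource names, capacities, and costs are invented.

```python
# Illustrative sketch of a virtual power plant aggregating distributed
# energy resources and dispatching them in merit (cost) order to meet
# demand. Names, capacities, and marginal costs are invented.

def vpp_dispatch(demand_kw, resources):
    """Greedy merit-order dispatch; returns output per resource in kW."""
    schedule, remaining = {}, demand_kw
    for name, capacity_kw, cost in sorted(resources, key=lambda r: r[2]):
        output = min(capacity_kw, max(remaining, 0.0))
        schedule[name] = output
        remaining -= output
    if remaining > 1e-9:
        raise ValueError("aggregated capacity cannot cover demand")
    return schedule

resources = [
    ("pv", 40.0, 0.00),       # (name, capacity kW, marginal cost $/kWh)
    ("battery", 30.0, 0.05),
    ("diesel", 50.0, 0.30),
]
print(vpp_dispatch(60.0, resources))   # pv first, then battery; diesel idle
```

A full VPP model would add the technical limits the abstract highlights (ramp rates, state of charge, network constraints) on top of this economic ordering.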
Procedia PDF Downloads 63
1025 A Dataset of Program Educational Objectives Mapped to ABET Outcomes: Data Cleansing, Exploratory Data Analysis and Modeling
Authors: Addin Osman, Anwar Ali Yahya, Mohammed Basit Kamal
Abstract:
Datasets or collections are becoming important assets in themselves, and they can now be accepted as a primary intellectual output of research. The quality and usage of a dataset depend mainly on the context in which it has been collected, processed, analyzed, validated, and interpreted. This paper presents a collection of program educational objectives mapped to student outcomes, collected from self-study reports prepared by 32 engineering programs accredited by ABET. The manual mapping (classification) of such data is a notoriously tedious, time-consuming process; in addition, it requires experts in the area, who are mostly not available. The operational settings under which the collection was produced are described. The collection has been cleansed and preprocessed, some features have been selected, and preliminary exploratory data analysis has been performed to illustrate the properties and usefulness of the collection. Finally, the collection has been benchmarked using nine of the most widely used supervised multi-label classification techniques (Binary Relevance, Label Powerset, Classifier Chains, Pruned Sets, Random k-label sets, Ensemble of Classifier Chains, Ensemble of Pruned Sets, Multi-Label k-Nearest Neighbors and Back-Propagation Multi-Label Learning). The techniques have been compared using four well-known measures (Accuracy, Hamming Loss, Micro-F, and Macro-F). The Ensemble of Classifier Chains and Ensemble of Pruned Sets achieved encouraging performance compared to the other multi-label classification methods, while the Classifier Chains method showed the worst performance. To recap, the benchmark achieved promising results, aided by the preliminary exploratory data analysis performed on the collection, proposing new directions for research and providing a baseline for future studies. Keywords: ABET, accreditation, benchmark collection, machine learning, program educational objectives, student outcomes, supervised multi-label classification, text mining
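As an illustration of the simplest benchmarked technique, Binary Relevance — one independent binary decision per label — together with the Hamming Loss measure, consider the toy sketch below; the objective texts and keyword "classifiers" are invented stand-ins for real learned models.

```python
# Toy illustration of Binary Relevance (one independent binary decision
# per student-outcome label) and Hamming Loss. The texts and keyword
# "classifiers" are invented stand-ins for real learned models.

def binary_relevance_predict(text, label_keywords):
    """A label is predicted iff any of its keywords occurs in the text."""
    return {label for label, kws in label_keywords.items()
            if any(kw in text.lower() for kw in kws)}

def hamming_loss(true_sets, pred_sets, labels):
    """Fraction of (instance, label) decisions that are wrong."""
    wrong = sum((label in t) != (label in p)
                for t, p in zip(true_sets, pred_sets)
                for label in labels)
    return wrong / (len(true_sets) * len(labels))

label_keywords = {
    "design":        ["design", "develop"],
    "communication": ["communicate", "present"],
    "ethics":        ["ethical", "professional"],
}
texts = ["Graduates design and present engineering solutions",
         "Graduates act in an ethical manner"]
true_labels = [{"design", "communication"}, {"ethics"}]
predicted = [binary_relevance_predict(t, label_keywords) for t in texts]
print(predicted, hamming_loss(true_labels, predicted, label_keywords))
```

Methods like Classifier Chains differ from this by feeding earlier label decisions into later ones, capturing label dependencies that Binary Relevance ignores.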
Procedia PDF Downloads 172
1024 Optimum Performance of the Gas Turbine Power Plant Using Adaptive Neuro-Fuzzy Inference System and Statistical Analysis
Authors: Thamir K. Ibrahim, M. M. Rahman, Marwah Noori Mohammed
Abstract:
This study deals with modeling and performance enhancement of a gas-turbine combined cycle power plant. Clean and safe energy is among the greatest challenges in meeting the requirements of a green environment. These requirements have challenged the long-standing dominance of the steam turbine (ST) in world power generation, which the gas turbine (GT) is set to replace. Therefore, it is necessary to predict the characteristics of the GT system and optimize its operating strategy by developing a simulation system. An integrated model and simulation code for exploiting the performance of gas turbine power plants is developed in MATLAB. The performance codes for heavy-duty GT and CCGT power plants were validated against the real Baiji GT and MARAFIQ CCGT plants, and the results have been satisfactory. A new correlation approach was applied to all types of simulation data, with a coefficient of determination (R2) calculated as 0.9825. Some recently published correlations were also checked on the Baiji GT plant, and error analysis was applied. The GT performance was judged by particular parameters taken from the simulation model, and the Adaptive Neuro-Fuzzy Inference System (ANFIS), an advanced optimization technology, was also utilized. The best thermal efficiency and power output attained were about 56% and 345 MW, respectively. Thus, the operating conditions and ambient temperature strongly influence the overall performance of the GT, and the optimum efficiency and power are found at higher turbine inlet temperatures. It can be concluded that the developed models are powerful tools for estimating the overall performance of GT plants. Keywords: gas turbine, optimization, ANFIS, performance, operating conditions
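The coefficient-of-determination check used to validate the correlations is straightforward to reproduce; the sketch below computes R² from measured and predicted values (the data points are illustrative, not the plant data behind the reported 0.9825).

```python
# Computing the coefficient of determination (R2) used to judge the
# correlations. The measured/predicted points below are illustrative,
# not the plant data behind the reported R2 of 0.9825.

def r_squared(measured, predicted):
    mean = sum(measured) / len(measured)
    ss_res = sum((m - p) ** 2 for m, p in zip(measured, predicted))
    ss_tot = sum((m - mean) ** 2 for m in measured)
    return 1.0 - ss_res / ss_tot

measured = [330.0, 338.0, 345.0, 341.0]    # e.g. power output, MW
predicted = [331.0, 337.0, 344.0, 342.0]
print(f"R2 = {r_squared(measured, predicted):.4f}")
```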
Procedia PDF Downloads 425
1023 Heritage Impact Assessment Policy within Western Balkans, Albania
Authors: Anisa Duraj
Abstract:
As is usually acknowledged, cultural heritage is the weakest component in EIA studies. The role of heritage impact assessment (HIA) in development projects is not often accounted for, and where it is, HIA is treated as a reactive response rather than a solutions provider. Because of continuous development projects, heritage is in most cases disregarded and often put under threat. Cultural protection and development challenges call for prudent legal regulation and appropriate policy implementation. The challenges become even more peculiar in underdeveloped countries or endangered areas, which are generally characterized by numerous legal constraints. Therefore, the need for strategic proposals for HIA is of high importance. In order to trigger HIA as a proactive operation in the IA process and make sure cultural heritage is covered across the whole EIA framework, an appropriate system for the evaluation of impacts should be provided. To obtain the required results, HIA must be part of a regional policy that addresses and guides development projects toward a proper evaluation of their impacts on heritage. In order to get a clearer picture of existing gaps as well as new possibilities for HIA, this paper focuses on the Western Balkans region and the ongoing changes it faces. Concerning continuous development pressure in the region, and within the aspiration of the Western Balkans countries to join the European Union (EU) as member states, attention should be paid to new development policies under the EU directives for conducting EIAs, and concrete support is required for restructuring existing policies as well as for implementing the UN Agenda for the SDGs. In the framework of these newly emerging needs, if HIA is taken into account, the outcome would be an inclusive regional program that would help overcome the marginality of spaces and people. Keywords: cultural heritage, impact assessment, SDGs, urban development, western Balkans, regional policy, HIA, EIA
Procedia PDF Downloads 114
1022 Geological Structure Identification in Semilir Formation: An Correlated Geological and Geophysical (Very Low Frequency) Data for Zonation Disaster with Current Density Parameters and Geological Surface Information
Authors: E. M. Rifqi Wilda Pradana, Bagus Bayu Prabowo, Meida Riski Pujiyati, Efraim Maykhel Hagana Ginting, Virgiawan Arya Hangga Reksa
Abstract:
The VLF (Very Low Frequency) method is an electromagnetic method that uses low frequencies between 10-30 kHz, which results in fairly deep penetration. In this study, the VLF method was used for zonation of disaster-prone areas by identifying geological structures in the form of faults. Data acquisition was carried out in the Trimulyo region, Jetis District, Bantul Regency, Special Region of Yogyakarta, Indonesia, along 8 measurement paths. The study uses wave transmitters from Japan and Australia to obtain Tilt and Elipt (ellipticity) values, which are used to create RAE (Rapat Arus Ekuivalen, or equivalent current density) sections identifying areas that are easily crossed by electric current. These sections indicate the existence of geological structures in the form of faults in the study area, characterized by high RAE values. Processing of the VLF data, performed in Matlab, yielded Tilt vs. Elipt graphs and Moving Average (MA) Tilt vs. MA Elipt graphs for each path, which show a fluctuating pattern and no intersections. Areas with low RAE values of 0%-6% indicate media with low conductivity and high resistivity, interpreted as sandstone, claystone, and tuff lithologies belonging to the Semilir Formation, whereas high RAE values of 10%-16%, indicating media with high conductivity and low resistivity, can be interpreted as a fault zone filled with fluid. The existence of the fault zone is corroborated by the discovery of a normal fault on the surface with strike N55°W and dip 63°E at coordinates X=433256 and Y=9127722, so that residents' activities in the zone, such as housing and mining, can be kept away to reduce the risk of natural disasters. Keywords: current density, faults, very low frequency, zonation
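The moving-average smoothing applied to the Tilt and Elipt curves can be sketched as follows; the window size and raw readings are illustrative, and a centered window that shrinks at the edges is one common choice among several.

```python
# Sketch of the moving-average (MA) smoothing applied to the Tilt and
# Elipt curves before interpretation; window size and readings are
# illustrative, not the survey data.

def moving_average(values, window=3):
    """Centered moving average; the window shrinks at the edges."""
    half = window // 2
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - half), min(len(values), i + half + 1)
        out.append(sum(values[lo:hi]) / (hi - lo))
    return out

tilt = [2.0, 8.0, 2.0, 8.0, 2.0]   # raw tilt (%) along one measurement path
print(moving_average(tilt))        # -> [5.0, 4.0, 6.0, 4.0, 5.0]
```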
Procedia PDF Downloads 175
1021 Philippine Foreign Policy in the West Philippine Sea after the 2012 Scarborough Standoff: Implications for National Security
Authors: Rhisan Mae Enriquez-Morales
Abstract:
The primary concern of this study is to answer the question: how does the Philippine government formulate its foreign policy with respect to its territorial claims over areas in the West Philippine Sea after the Scarborough standoff of April 2012? Specifically, the study seeks to provide an understanding of the political process in the formulation of foreign policy relating to the Philippine claims in the West Philippine Sea after the 2012 Scarborough standoff, by looking into the relationships of bureaucracies and how they influence the decision-making process. Secondly, this study aims to determine the long- and short-term foreign policies of the Philippines with respect to its territorial claims over the West Philippine Sea. Lastly, this study seeks to determine the implications of Philippine foreign policy on settling the West Philippine Sea dispute for the country's national security. The Bureaucratic Politics Model (BPM) in Foreign Policy Analysis (FPA) is the framework utilized in this study; it focuses primarily on the relationships of bureaucracies in the formulation of foreign policy and how these agencies influence the process of foreign policy formulation. The findings of this study reveal that, first, Philippine foreign policy in the West Philippine Sea continues to develop to address current developments in the WPS. Second, as the government requires demilitarization, there is a shift from a traditional to a non-traditional security approach. This shift caused discomfort in the defense sector, particularly the Navy, which felt deprived of its traditional roles. Lastly, the Philippine government's greater emphasis on internal security operations implies the need to reassess its security concerns and look into territorial security. Keywords: bureaucratic politics model, foreign policy analysis, security, West Philippine sea
Procedia PDF Downloads 393
1020 AgriInnoConnect Pro System Using Iot and Firebase Console
Authors: Amit Barde, Dipali Khatave, Vaishali Savale, Atharva Chavan, Sapna Wagaj, Aditya Jilla
Abstract:
AgriInnoConnect Pro is an advanced agricultural automation system designed to enhance irrigation efficiency and overall farm management through IoT technology. Using MIT App Inventor, Telegram, Arduino IDE, and Firebase Console, it provides a user-friendly interface for farmers. Key hardware includes soil moisture sensors, DHT11 sensors, a 12V motor, a solenoid valve, a step-down transformer, smart fencing, and AC switches. The system operates in automatic and manual modes. In automatic mode, the ESP32 microcontroller monitors soil moisture and autonomously controls irrigation to optimize water usage. In manual mode, users can control the irrigation motor via a mobile app. Telegram bots enable remote operation of the solenoid valve and electric fencing, enhancing farm security. Additionally, the system upgrades conventional devices to smart ones using AC switches, broadening automation capabilities. AgriInnoConnect Pro aims to improve farm productivity and resource management, addressing the critical need for sustainable water conservation and providing a comprehensive solution for modern farm management. The integration of smart technologies in AgriInnoConnect Pro enables precision farming practices, promoting efficient resource allocation and sustainable agricultural development. Keywords: agricultural automation, IoT, soil moisture sensor, ESP32, MIT app inventor, telegram bot, smart farming, remote control, firebase console
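The automatic-mode decision the ESP32 makes can be sketched as threshold logic with hysteresis; the thresholds and percent scale below are assumptions for illustration, not the project's calibration.

```python
# Illustrative sketch of the automatic-mode logic the ESP32 would run:
# start the irrigation motor below a dry threshold, stop it above a wet
# threshold, hold state in between (hysteresis avoids rapid toggling).
# Thresholds and the percent scale are assumptions, not the project's
# calibration.

DRY_THRESHOLD = 30   # % soil moisture: below this, start irrigating
WET_THRESHOLD = 60   # % soil moisture: above this, stop irrigating

def motor_command(moisture_pct, motor_on):
    """Next motor state given the current reading and current state."""
    if moisture_pct < DRY_THRESHOLD:
        return True
    if moisture_pct > WET_THRESHOLD:
        return False
    return motor_on   # inside the band: keep the current state

state = False
for reading in [25, 40, 55, 65, 50]:   # simulated sensor readings
    state = motor_command(reading, state)
    print(reading, "->", "ON" if state else "OFF")
```

On the device, the same function would sit in the main loop between the ADC read of the soil sensor and the GPIO write that drives the motor relay, with the state mirrored to Firebase for the mobile app.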
Procedia PDF Downloads 43
1019 Estimation of Constant Coefficients of Bourgoyne and Young Drilling Rate Model for Drill Bit Wear Prediction
Authors: Ahmed Z. Mazen, Nejat Rahmanian, Iqbal Mujtaba, Ali Hassanpour
Abstract:
In oil and gas well drilling, the drill bit is an important part of the Bottom Hole Assembly (BHA), which is installed and designed to drill and produce a hole by several mechanisms. The efficiency of the bit depends on many drilling parameters such as weight on bit, rotary speed, and mud properties. When the bit is pulled out of the hole, the evaluation of the bit damage must be recorded very carefully to guide engineers in selecting bits for further planned wells. Drilling with a worn bit may cause severe damage to the bit, leading to cutter or cone losses at the bottom of the hole, where a fishing job will have to take place; all of this increases the operating cost. The main way to reduce the cost of the drilling operation is to maximize the rate of penetration by analyzing real-time data to predict drill bit wear while drilling. There are numerous models in the literature for predicting the rate of penetration from drilling parameters, mostly based on empirical approaches. One of the most commonly used is the Bourgoyne and Young model, in which the rate of penetration is estimated from the drilling parameters and a wear index using an empirical correlation, provided all the constants and coefficients are accurately determined. This paper introduces a new methodology to estimate the eight coefficients of the Bourgoyne and Young model using the gPROMS parameter estimation tool GPE (Version 4.2.0). Real data collected from similar formations (12 ¼" sections) in two different fields in Libya are used to estimate the coefficients. The estimated coefficients are then used in the equations and applied to nearby wells in the same fields to predict bit wear. Keywords: Bourgoyne and Young model, bit wear, gPROMS, rate of penetration
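Because the Bourgoyne and Young model is log-linear in its coefficients, a noise-free reduced version can be recovered by ordinary least squares, which gives a feel for the estimation task; the sketch below fits a three-term stand-in on synthetic data, whereas the paper estimates all eight coefficients with gPROMS GPE on field data.

```python
# The Bourgoyne and Young ROP model is log-linear in its coefficients,
# ln(ROP) = a1 + a2*x2 + ... + a8*x8, so on noise-free data they can be
# recovered by ordinary least squares. This fits a reduced three-term
# stand-in on synthetic data; the paper estimates all eight coefficients
# with gPROMS GPE on Libyan field data.

def least_squares(X, y):
    """Solve the normal equations X^T X a = X^T y by Gaussian elimination."""
    n = len(X[0])
    A = [[sum(row[i] * row[j] for row in X) for j in range(n)] for i in range(n)]
    b = [sum(row[i] * yi for row, yi in zip(X, y)) for i in range(n)]
    for col in range(n):                         # forward elimination
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    a = [0.0] * n
    for r in reversed(range(n)):                 # back substitution
        a[r] = (b[r] - sum(A[r][c] * a[c] for c in range(r + 1, n))) / A[r][r]
    return a

true_coeffs = [1.5, 0.04, 0.9]                   # a1, WOB term, RPM term (invented)
points = [(w, r) for w in (5.0, 10.0, 15.0) for r in (0.5, 1.0)]
X = [[1.0, w, r] for w, r in points]
y = [true_coeffs[0] + true_coeffs[1] * w + true_coeffs[2] * r for w, r in points]
estimated = least_squares(X, y)
print(estimated)
```

With real field data the fit is noisy and the wear term is itself unknown, which is why a dedicated parameter-estimation tool is used rather than a plain regression.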
Procedia PDF Downloads 154
1018 Study on the Process of Detumbling Space Target by Laser
Authors: Zhang Pinliang, Chen Chuan, Song Guangming, Wu Qiang, Gong Zizheng, Li Ming
Abstract:
The active removal of space debris and asteroid defense are important issues in human space activities. Both require a detumbling process, since almost all space debris and asteroids are in a rotating state, and it is hard and dangerous to capture or remove a target with a relatively high tumbling rate, so it is necessary to first reduce the angular rate. The laser ablation method is an efficient way to tackle this detumbling problem, as it is a contactless technique that can work at a safe distance. In existing research, a laser rotational control strategy based on estimating the instantaneous angular velocity of the target has been presented. However, the calculation of the control torque produced by the laser, which is very important in a detumbling operation, is not accurate enough, as the method used is only suitable for planar or regularly shaped targets and does not consider the influence of irregular shape and spot size. In this paper, based on a triangulated reconstruction of the target surface, we propose a new method to calculate the impulse on an irregularly shaped target under both covered irradiation and spot irradiation of the laser, and we verify its accuracy by theoretical calculation and impulse measurement experiments. We then use it to study the process of detumbling a cylinder and an asteroid by laser. The results show that the new method is universally practical and has high precision; it would take more than 13.9 hours to stop the rotation of Bennu with 1E+05 kJ of laser pulse energy; and the speed of the detumbling process depends on the distance between the spot and the centroid of the target, for which an optimal value can be found in each particular case. Keywords: detumbling, laser ablation drive, space target, space debris removal
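The per-triangle impulse and torque bookkeeping over a triangulated surface can be sketched as follows; the coupling coefficient, the geometry, and the assumption of a uniform inward impulse per unit area are illustrative simplifications, not the paper's formulation.

```python
# Sketch of per-triangle impulse/torque bookkeeping over a triangulated
# target surface. The coupling coefficient (impulse per unit illuminated
# area) and the geometry are invented; a real model would weight each
# facet by laser fluence and incidence angle.

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def triangle_torque(v0, v1, v2, centroid, coupling):
    """Torque about the centroid from one illuminated triangle."""
    e1 = tuple(b - a for a, b in zip(v0, v1))
    e2 = tuple(b - a for a, b in zip(v0, v2))
    n = cross(e1, e2)                       # normal vector, length 2*area
    norm = sum(c * c for c in n) ** 0.5
    area = 0.5 * norm
    impulse = tuple(-coupling * area * c / norm for c in n)  # inward push
    center = tuple((a + b + c) / 3 for a, b, c in zip(v0, v1, v2))
    lever = tuple(c - g for c, g in zip(center, centroid))
    return cross(lever, impulse)            # r x J

# One unit right triangle on the plane x = 1, body centroid at origin:
torque = triangle_torque((1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (1.0, 0.0, 1.0),
                         centroid=(0.0, 0.0, 0.0), coupling=1.0)
print(torque)
```

Summing such contributions over the illuminated facets, pulse after pulse, yields the decelerating torque from which a detumbling time can be estimated.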
Procedia PDF Downloads 84
1017 The Production of Collagen and Collagen Peptides from Nile Tilapia Skin Using Membrane Technology
Authors: M. Thuanthong, W. Youravong, N. Sirinupong
Abstract:
Nile tilapia (Oreochromis niloticus) is one of the fish species cultured in Thailand with a high production volume, and a lot of skin is generated during fish processing. Many studies have reported that fish skin contains abundant collagen, so using Nile tilapia skin as a collagen source can increase the value of this industrial by-product. In this study, acid-soluble collagen (ASC) was extracted at 5, 15 or 25 ˚C with 0.5 M acetic acid; the acid was then removed and the collagen concentrated by ultrafiltration-diafiltration (UFDF). The triple-helix collagen from the UFDF process was used as substrate to produce collagen peptides by alcalase hydrolysis in an enzymatic membrane reactor (EMR) coupled with a 1 kDa molecular weight cut-off (MWCO) polysulfone hollow fiber membrane. The results showed that ASC extracted at the higher temperature (25 ˚C) with 0.5 M acetic acid for 5 h still preserved the triple-helix structure. In the UFDF process, acid removal was higher than 90% without any effect on ASC properties, particularly the triple-helix structure, as indicated by the circular dichroism spectrum. Collagen from UFDF was then used to produce collagen peptides in the EMR, where it was pre-hydrolyzed by alcalase for 60 min before being introduced to membrane separation. The EMR was operated for 10 h and provided good stability of protein conversion. The results suggest that UF can be successfully applied for acid removal to produce ASC with desirable preservation of its quality, and that the EMR is an effective process for producing low-molecular-weight peptides with ACE-inhibitory activity. Keywords: acid soluble collagen, ultrafiltration-diafiltration, enzymatic membrane reactor, ACE-inhibitory activity
Procedia PDF Downloads 477
1016 Microgrid: An Alternative of Electricity Supply to an Island in Thailand
Authors: Pawitchaya Srijaiwong, Surin Khomfoi
Abstract:
There are several options for supplying electricity to an island in Thailand, such as diesel generation, a submarine power cable, and renewable energy power generation. However, each alternative has its own limitations: the fuel cost and pollution of diesel generation, the cable losses and investment cost that grow with submarine cable length, and the renewable energy potential of the local area. This paper presents a microgrid system, a new alternative for supplying power to an island, which integrates local power plants from renewable energy, an energy storage system, and a microgrid controller. The suitable renewable energy power generation for an island is selected from its geographic location and a potential evaluation; thus, a photovoltaic system and a hydro power plant are taken into account. The capacity of the energy storage system is estimated by a transient stability study so that the electricity demand can be supplied sufficiently under normal conditions. The microgrid controller plays an important role in conducting, communicating, and operating both the sources and the loads on the island, so its functions are discussed in this study. The conceptual design of microgrid operation is investigated in order to analyze reliability and power quality. The results of this study show that the microgrid is able to operate both in parallel with the main grid and in islanded mode, making it applicable for supplying electricity to an island or a remote area. The advantages of operating a microgrid on an island include technical aspects, such as improving the reliability and quality of the power system, and social aspects, such as outage cost savings and CO₂ reduction. Keywords: energy storage, islanding, microgrid, renewable energy
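A first-cut check on storage sizing can be explored with a simple energy-balance walk through PV output and demand; the profiles and capacity below are invented, not the island's data, and a real study would use the transient stability analysis the abstract describes.

```python
# Illustrative energy-balance sketch for island storage sizing: step
# through per-interval PV output and demand, charging the battery with
# surplus and discharging on deficit. Profiles and capacity are invented.

def simulate(pv_kwh, load_kwh, capacity_kwh, soc_kwh=0.0):
    """Return (final state of charge, unserved energy) over the horizon."""
    unserved = 0.0
    for pv, load in zip(pv_kwh, load_kwh):
        soc_kwh += pv - load
        if soc_kwh > capacity_kwh:        # surplus beyond storage is spilled
            soc_kwh = capacity_kwh
        elif soc_kwh < 0.0:               # deficit the battery cannot cover
            unserved += -soc_kwh
            soc_kwh = 0.0
    return soc_kwh, unserved

pv = [0, 2, 6, 8, 6, 2, 0, 0]             # kWh per interval
load = [1, 1, 2, 2, 3, 3, 4, 4]
print(simulate(pv, load, capacity_kwh=10.0, soc_kwh=5.0))
```

Zero unserved energy over a representative horizon suggests the capacity is sufficient for normal conditions; nonzero unserved energy flags the need for more storage or backup generation.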
Procedia PDF Downloads 328