Search results for: aerodynamics-strength coupled optimization
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4554

894 Governance Models of Higher Education Institutions

Authors: Zoran Barac, Maja Martinovic

Abstract:

Higher Education Institutions (HEIs) are a special kind of organization, with a unique purpose and combination of actors. From the societal point of view, they are central institutions in society, involved in the activities of education, research, and innovation. At the same time, their societal function gives rise to complex relationships between the involved actors, ranging from students, faculty and administration, the business community and corporate partners, and government agencies, to the general public. HEIs are also particularly interesting as objects of governance research because of their unique public purpose and combination of stakeholders. Furthermore, they are a special type of institution from an organizational viewpoint. HEIs are often described as “loosely coupled systems” or “organized anarchies”, which implies the challenging nature of their governance models. Governance models of HEIs describe the roles, constellations, and modes of interaction of the involved actors in the process of strategic direction and holistic control of institutions, taking each particular context into account. Many governance models of HEIs are primarily based on the balance of power among the involved actors. Besides the actors’ power and influence, leadership style and environmental contingency can affect the governance model of an HEI. Analyzed through the frameworks of institutional and contingency theories, HEI governance models originate as outcomes of institutional and contingency adaptation. HEIs tend to fit the institutional context, comprised of formal and informal institutional rules. By fitting the institutional context, HEIs converge toward each other in terms of their structures, policies, and practices. On the other hand, the contingency framework implies that there is no governance model suitable for all situations. Consequently, the contingency approach begins with identifying contingency variables that might impact a particular governance model. To be effective, the governance model should fit the contingency variables. While the institutional context creates converging forces on HEI governance actors and approaches, contingency variables cause divergence of actors’ behavior and governance models. Finally, an HEI governance model is a balanced adaptation of the HEI to the institutional context and contingency variables. It also encompasses the roles, constellations, and modes of interaction of the involved actors, influenced by institutional and contingency pressures. Actors’ adaptation to the institutional context brings the benefits of legitimacy and resources. On the other hand, the actors’ adaptation to the contingency variables brings high performance and effectiveness. The HEI governance models outlined and analyzed in this paper are the collegial, bureaucratic, entrepreneurial, network, professional, political, anarchical, cybernetic, trustee, stakeholder, and amalgam models.

Keywords: governance, governance models, higher education institutions, institutional context, situational context

Procedia PDF Downloads 311
893 SAFECARE: Integrated Cyber-Physical Security Solution for Healthcare Critical Infrastructure

Authors: Francesco Lubrano, Fabrizio Bertone, Federico Stirano

Abstract:

Modern societies strongly depend on Critical Infrastructures (CI). Hospitals, power supplies, water supplies, and telecommunications are just a few examples of CIs that provide vital functions to societies. CIs like hospitals are very complex environments, characterized by a huge number of cyber and physical systems that are becoming increasingly integrated. Ensuring a high level of security within such critical infrastructure requires a deep knowledge of vulnerabilities, threats, and potential attacks that may occur, as well as defence, prevention, and mitigation strategies. The possibility to remotely monitor and control almost everything is pushing the adoption of network-connected devices. This implicitly introduces new threats and potential vulnerabilities, posing a risk especially to those devices connected to the Internet. Modern medical devices used in hospitals are no exception and are increasingly being connected to enhance their functionalities and ease their management. Moreover, hospitals are environments with high flows of people who are difficult to monitor and can fairly easily gain access to the same places used by the staff, potentially causing damage. It is therefore clear that physical and cyber threats should be considered, analysed, and treated together as cyber-physical threats. This means that an integrated approach is required. SAFECARE, an integrated cyber-physical security solution, addresses these issues within healthcare infrastructures. The challenge is to bring together the most advanced technologies from the physical and cyber security spheres, to achieve a global optimum for systemic security and for the management of combined cyber and physical threats and incidents and their interconnections. Moreover, potential impacts and cascading effects are evaluated through impact propagation models that rely on modular ontologies and a rule-based engine. Indeed, the SAFECARE architecture foresees i) a cyber security macroblock, where innovative tools are deployed to monitor network traffic, systems, and medical devices; ii) a physical security macroblock, where video management systems are coupled with access control management, building management systems, and innovative AI algorithms to detect behavior anomalies; and iii) an integration system that collects all incoming incidents, simulates their potential cascading effects, and provides alerts and updated information regarding asset availability.
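
To make the impact propagation idea concrete, the following is a minimal sketch of a rule-based cascade over an asset-dependency graph. Everything in it, the asset names, the rule thresholds, and the one-level-per-hop attenuation, is an invented illustration, not the SAFECARE ontology or engine.

```python
from collections import deque

# Hypothetical asset-dependency graph: an edge points from an asset to the
# assets that depend on it, so a failure may cascade along the edge.
DEPENDENCIES = {
    "power_supply": ["hvac", "medical_device_network"],
    "medical_device_network": ["infusion_pumps", "patient_monitors"],
    "hvac": ["operating_theatre"],
}

# Hypothetical rules: minimum severity at which an incident type cascades.
RULES = {"cyber_intrusion": 2, "physical_breach": 3}

def propagate(initial_asset, incident_type, severity):
    """Breadth-first cascade; each hop attenuates severity by one level."""
    threshold = RULES.get(incident_type, 99)
    impacted, seen = [], {initial_asset}
    queue = deque([(initial_asset, severity)])
    while queue:
        asset, sev = queue.popleft()
        impacted.append(asset)
        if sev < threshold:
            continue  # too weak to cascade further
        for child in DEPENDENCIES.get(asset, []):
            if child not in seen:
                seen.add(child)
                queue.append((child, sev - 1))
    return impacted

print(propagate("power_supply", "cyber_intrusion", 3))
# -> ['power_supply', 'hvac', 'medical_device_network', 'operating_theatre',
#     'infusion_pumps', 'patient_monitors']
```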

Keywords: cyber security, defence strategies, impact propagation, integrated security, physical security

Procedia PDF Downloads 143
892 Sustainable Manufacturing Industries and Energy-Water Nexus Approach

Authors: Shahbaz Abbas, Lin Han Chiang Hsieh

Abstract:

The significant population growth and climate change issues have contributed to the depletion of natural resources and threaten their future sustainability. Manufacturing industries have a substantial impact on every country’s economy, but the sustainability of industrial resources is challenging, and policymakers have been developing possible solutions to manage the sustainability of industrial resources such as raw materials, energy, water, and the industrial supply chain. In order to address these challenges, the nexus approach is one of the optimization and modelling techniques used in recent sustainable environmental research. The nexus approach acknowledges that all components are dependent upon each other and interrelated; therefore, their sustainability is also associated with one another. In addition, the nexus concept provides not only resource sustainability: environmental sustainability can also be achieved through the nexus approach, by utilizing industrial waste as a resource for industrial processes. Based on the energy-water nexus, this study has developed a resource-energy-water nexus for the sugar industry to understand the interactions between sugarcane, energy, and water towards a sustainable sugar industry. In particular, the focus of the research is the Taiwanese sugar industry; however, the same approach can be adapted worldwide to optimize the sustainability of sugar industries. It is concluded that there are significant interactions between sugarcane, energy consumption, and water consumption in the sugar industry that can be managed to address the scarcity of resources in the future. The interactions between sugarcane and energy also deliver a mechanism to reuse sugar industrial waste as a source of energy, consequently supporting industrial and environmental sustainability. The desired outcomes from the nexus can be achieved with modifications to the policy and regulations of the Taiwanese industrial sector.

Keywords: energy-water nexus, environmental sustainability, industrial sustainability, natural resource management

Procedia PDF Downloads 97
891 Transformer Fault Diagnostic Predicting Model Using Support Vector Machine with Gradient Descent Optimization

Authors: R. O. Osaseri, A. R. Usiobaifo

Abstract:

The power transformer, which is responsible for voltage transformation, is of great relevance in the power system, and the oil-immersed transformer is widely used all over the world. Prompt and proper maintenance of the transformer is of utmost importance. The dissolved gas content in power transformer oil is of enormous importance in detecting incipient faults of the transformer. There is a need for accurate prediction of incipient faults in transformer oil in order to facilitate prompt maintenance and to reduce cost and error. Fault prediction and diagnosis have been the focus of many researchers, and many previous works have reported on the use of artificial intelligence to predict incipient transformer faults. In this study, a machine learning technique was employed, using gradient descent algorithms and a Support Vector Machine (SVM), to predict incipient transformer faults. The method focuses on creating a system that improves its performance based on previous results and historical data. The system design approach has two phases: training and testing. The gradient descent algorithm is trained with a training dataset, while the learned model is applied to a set of new data. These two datasets are used to prove the accuracy of the proposed model. In this study, a transformer fault diagnostic model based on a Support Vector Machine (SVM) and gradient descent algorithms is presented, with a satisfactory diagnostic capability and a higher percentage of correct predictions of incipient transformer failures than existing diagnostic methods.
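
As a minimal sketch of the model family described above, a linear SVM fitted by (sub)gradient descent on the hinge loss, the snippet below uses synthetic dissolved-gas features; the data, feature choice, and hyperparameters are illustrative assumptions, not the paper's.

```python
import numpy as np

# Toy dissolved-gas features (e.g. scaled H2, CH4, C2H2 concentrations) and
# labels (+1 = incipient fault, -1 = healthy). Values are synthetic.
X = np.array([[0.9, 0.7, 0.8],
              [0.8, 0.9, 0.6],
              [0.1, 0.2, 0.1],
              [0.2, 0.1, 0.2]])
y = np.array([1, 1, -1, -1])

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Linear SVM fitted by subgradient descent on the regularized hinge loss."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1:           # margin violated: hinge subgradient
                w -= lr * (lam * w - yi * xi)
                b += lr * yi
            else:                               # only the regularizer contributes
                w -= lr * lam * w
    return w, b

w, b = train_linear_svm(X, y)
print(np.sign(np.array([0.85, 0.8, 0.7]) @ w + b))   # expected: 1.0 (fault)
```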

Keywords: diagnostic model, gradient descent, machine learning, support vector machine (SVM), transformer fault

Procedia PDF Downloads 293
890 Rapid Soil Classification Using Computer Vision, Electrical Resistivity and Soil Strength

Authors: Eugene Y. J. Aw, J. W. Koh, S. H. Chew, K. E. Chua, Lionel L. J. Ang, Algernon C. S. Hong, Danette S. E. Tan, Grace H. B. Foo, K. Q. Hong, L. M. Cheng, M. L. Leong

Abstract:

This paper presents a novel rapid soil classification technique that combines computer vision with the four-probe soil electrical resistivity method and the cone penetration test (CPT) to improve the accuracy and productivity of on-site classification of excavated soil. In Singapore, excavated soils from local construction projects are transported to Staging Grounds (SGs) to be reused as fill material for land reclamation. Excavated soils are mainly categorized into two groups (“Good Earth” and “Soft Clay”) based on particle size distribution (PSD) and water content (w) from soil investigation reports and on-site visual survey, such that proper treatment and usage can be exercised. However, this process is time-consuming and labour-intensive. Thus, a rapid classification method is needed at the SGs. Computer vision, four-probe soil electrical resistivity, and CPT were combined into an innovative non-destructive and instantaneous classification method for this purpose. The computer vision technique comprises soil image acquisition using an industrial-grade camera; image processing and analysis via calculation of Grey Level Co-occurrence Matrix (GLCM) textural parameters; and decision-making using an Artificial Neural Network (ANN). Complementing the computer vision technique, the apparent electrical resistivity of the soil (ρ) is measured using a set of four probes arranged in Wenner’s array. A previous study found that the ANN model coupled with ρ can classify soils into “Good Earth” and “Soft Clay” in less than a minute, with an accuracy of 85% based on selected representative soil images. To further improve the technique, the soil strength is measured using a modified mini cone penetrometer, and w is measured using a set of time-domain reflectometry (TDR) probes. A laboratory proof-of-concept was conducted through a series of seven tests with three types of soils: “Good Earth”, “Soft Clay”, and an even mix of the two. Validation was performed against the PSD and w of each soil type obtained from conventional laboratory tests. The results show that ρ, w, and CPT measurements can be collectively analyzed to classify soils into “Good Earth” or “Soft Clay”. It is also found that these parameters can be integrated with the computer vision technique on-site to complete the rapid soil classification in less than three minutes.
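
A brief sketch of the GLCM step of the computer vision pipeline, using scikit-image, is shown below; the patch size, pixel offsets, and the four textural properties are generic choices, since the abstract does not specify the exact parameters used.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Toy 8-bit grayscale soil image patch (a real pipeline would crop patches
# from the industrial-camera images described above).
patch = np.random.default_rng(0).integers(0, 256, (64, 64)).astype(np.uint8)

# GLCM at one-pixel offset, four directions, as in typical texture pipelines.
glcm = graycomatrix(patch, distances=[1],
                    angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                    levels=256, symmetric=True, normed=True)

# Textural parameters of the kind commonly fed to an ANN classifier.
features = [graycoprops(glcm, prop).mean()
            for prop in ("contrast", "homogeneity", "energy", "correlation")]
print(features)  # 4-element feature vector for one patch
```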

Keywords: computer vision technique, cone penetration test, electrical resistivity, rapid and non-destructive, soil classification

Procedia PDF Downloads 185
889 Estimation of Effective Radiation Dose Following Computed Tomography Urography at Aminu Kano Teaching Hospital, Kano, Nigeria

Authors: Idris Garba, Aisha Rabiu Abdullahi, Mansur Yahuza, Akintade Dare

Abstract:

Background: CT urography (CTU) is an efficient radiological examination for the evaluation of urinary system disorders. However, patients are exposed to a significant radiation dose, which is associated with increased cancer risk. Objectives: To determine the Computed Tomography Dose Index following CTU, and to evaluate organ equivalent doses. Materials and Methods: A prospective cohort study was carried out at a tertiary institution located in Kano, northwestern Nigeria. Ethical clearance was sought and obtained from the research ethics board of the institution. Demographic data, scan parameters, and CT radiation dose data were obtained from patients who underwent the CTU procedure. Effective dose, organ equivalent doses, and cancer risks were estimated using SPSS statistical software version 16 and CT dose calculator software. Results: A total of 56 patients were included in the study, consisting of 29 males and 27 females. The most common indication for CTU examination was renal cyst, seen commonly among young adults (15-44 yrs). The CT radiation dose values for CTU were a DLP of 2320 mGy·cm, a CTDIw of 9.67 mGy, and an effective dose of 35.04 mSv. The probability of cancer risk was estimated to be 600 per million CTU examinations. Conclusion: In this study, the radiation dose for CTU is considered significantly high, with an increase in cancer risk probability. Wide variations between patient doses suggest that optimization is not yet fulfilled. Patient radiation dose estimates should be taken into consideration when imaging protocols are established for CT urography.
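
For context, effective dose is commonly estimated from the dose-length product as E ≈ k × DLP with a region-specific conversion coefficient. The sketch below uses the widely quoted adult abdomen/pelvis value k = 0.015 mSv·mGy⁻¹·cm⁻¹ as an assumption, since the abstract does not state which coefficient the authors applied.

```python
# Effective dose from the dose-length product (DLP) via a region-specific
# conversion coefficient k (mSv per mGy.cm). k = 0.015 for the adult
# abdomen/pelvis is a widely quoted value, used here as an assumption.
def effective_dose_mSv(dlp_mgy_cm: float, k: float = 0.015) -> float:
    return k * dlp_mgy_cm

print(effective_dose_mSv(2320))  # 34.8 mSv, close to the reported 35.04 mSv
```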

Keywords: CT urography, cancer risks, effective dose, radiation exposure

Procedia PDF Downloads 313
888 Arsenic Contamination in Drinking Water Is Associated with Dyslipidemia in Pregnancy

Authors: Begum Rokeya, Rahelee Zinnat, Fatema Jebunnesa, Israt Ara Hossain, A. Rahman

Abstract:

Background and Aims: Arsenic in drinking water is a global environmental health problem, and exposure may increase mortality from dyslipidemia and cerebrovascular disease, most likely by causing atherosclerosis. However, the mechanism linking lipid metabolism, atherosclerosis formation, and arsenic exposure, and its impact in pregnancy, is still unclear. Recent epidemiological evidence indicates a close association between inorganic arsenic exposure via drinking water and dyslipidemia. However, the exact mechanism of this arsenic-mediated increase in atherosclerosis risk factors remains enigmatic. We explored the association between arsenic exposure and serum lipid profile in pregnant subjects. Methods: A total of 200 pregnant women from an arsenic-exposed area were screened in this study. Our study group included 100 exposed subjects as cases and 100 non-exposed healthy pregnant women as controls, recruited in a cross-sectional study. Clinical and anthropometric measurements were made using standard techniques. Lipidemic status was assessed by the enzymatic endpoint method. Urinary arsenic was measured by inductively coupled plasma-mass spectrometry and adjusted for specific gravity; a urinary arsenic level > 100 μg/L was categorized as arsenic-exposed and < 100 μg/L as non-exposed. Multivariate logistic regression and Student’s t-test were used for statistical analysis. Results: Systolic and diastolic blood pressure were both significantly higher in the arsenic-exposed pregnant subjects compared to the non-exposed group (p<0.001). Arsenic-exposed subjects had about twice the odds of developing hypertensive pregnancy (odds ratio 2.2). In parallel, arsenic-exposed subjects showed significantly higher triglyceride, total cholesterol, and low-density lipoprotein levels when compared to non-exposed pregnant subjects. Significant correlations of urinary arsenic level were also found with SBP, DBP, triglyceride, total cholesterol, and serum LDL-cholesterol. Multivariate logistic regression showed that urinary arsenic had a positive association with DBP, SBP, triglyceride, and LDL-c. Conclusion: In conclusion, arsenic exposure may induce dyslipidemia and atherosclerosis through modifying reverse cholesterol transport in cholesterol metabolism. To decrease atherosclerosis-related mortality associated with arsenic, preventing exposure from environmental sources in early life is an important element.

Keywords: arsenic exposure, dyslipidemia, gestational diabetes mellitus, serum lipid profile

Procedia PDF Downloads 98
887 Impinging Acoustics Induced Combustion: An Alternative Technique to Prevent Thermoacoustic Instabilities

Authors: Sayantan Saha, Sambit Supriya Dash, Vinayak Malhotra

Abstract:

Efficient propulsive systems development is an area of major interest and concern in the aerospace industry. Combustion forms the most reliable and basic form of propulsion for ground and space applications. The generation of a large amount of energy from a small volume relates mostly to flaming combustion. This study deals with instabilities associated with flaming combustion. Combustion is always accompanied by acoustics, be it external or internal. Chemical propulsion oriented rockets and space systems are well known to encounter acoustic instabilities. Acoustics brings in changes in inter-energy conversion and alters the reaction rates. Modified heat fluxes, owing to wall temperature, reaction rates, and non-linear heat transfer, are observed. Thermoacoustic instabilities significantly reduce combustion efficiency, leading to uncontrolled liquid rocket engine performance and serious hazards to systems and associated testing facilities; enormous resources are lost, and every year a substantial amount of money is spent to prevent them. The present work attempts to fundamentally understand the mechanisms governing thermoacoustic combustion in liquid rocket engines using a simplified experimental setup comprising a butane cylinder and an impinging acoustic source. A rocket engine produces sound pressure levels in excess of 153 dB; the RL-10 engine generates noise of 180 dB at its base. Systematic studies are carried out for varying fuel flow rates and acoustic levels, and observations are made on the flames. The work is expected to yield good physical insight into the development of acoustic devices that, when coupled with present propulsive devices, could effectively enhance combustion efficiency, leading to better and safer missions. The results would be utilized to develop impinging acoustic devices that impinge sound on the combustion chambers, leading to stable combustion and thus improving specific fuel consumption and specific impulse, reducing emissions, and enhancing performance and fire safety. The results can be effectively applied to terrestrial and space applications.

Keywords: combustion instability, fire safety, improved performance, liquid rocket engines, thermoacoustics

Procedia PDF Downloads 123
886 Design Optimization of Miniature Mechanical Drive Systems Using Tolerance Analysis Approach

Authors: Eric Mxolisi Mkhondo

Abstract:

Geometrical deviations and the interaction of mechanical parts influence the performance of miniature systems. These deviations tend to cause costly problems during assembly due to imperfections of components, which are invisible to the naked eye. They also tend to cause unsatisfactory performance during operation due to deformation caused by environmental conditions. One of the effective tools to manage the deviations and interaction of parts in a system is tolerance analysis. This is a quantitative tool for predicting the tolerance variations which are defined during the design process. Traditional tolerance analysis assumes that the assembly is static and that the deviations come from manufacturing discrepancies, overlooking the functionality of the whole system and the deformation of parts due to environmental conditions. This paper presents an integrated tolerance analysis approach for a miniature system in operation. In this approach, a computer-aided design (CAD) model is developed from the system’s specification. The CAD model is then used to specify the geometrical and dimensional tolerance limits (upper and lower limits) that vary component geometries and sizes while conforming to functional requirements. Worst-case tolerances are analyzed to determine the influence of dimensional changes due to the effects of operating temperatures. The method is used to evaluate the nominal condition and the worst-case conditions in the maximum and minimum dimensions of the assembled components. These three conditions are evaluated at specific operating temperatures (-40°C, -18°C, 4°C, 26°C, 48°C, and 70°C). A case study on the mechanism of a zoom lens system is used to illustrate the effectiveness of the methodology.
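
The worst-case evaluation described above can be illustrated with a simple arithmetic stack-up that adds a thermal-expansion term at each study temperature; the dimensions, tolerances, and expansion coefficients below are invented placeholders, not values from the zoom lens case study.

```python
# Worst-case (arithmetic) tolerance stack-up with a thermal expansion term.
# Dimensions (mm), tolerances (mm), and CTEs (1/K) are illustrative values.
parts = [
    # (nominal_mm, tol_mm, cte_per_K), e.g. aluminium barrel, steel spacer
    (25.00, 0.02, 23e-6),
    (10.00, 0.01, 12e-6),
    ( 5.00, 0.01, 23e-6),
]

def worst_case_length(parts, t_ref=20.0, t_op=70.0):
    """Min/max assembled length at operating temperature t_op (deg C)."""
    dT = t_op - t_ref
    hi = sum(n + t + n * c * dT for n, t, c in parts)
    lo = sum(n - t + n * c * dT for n, t, c in parts)
    return lo, hi

for t in (-40, -18, 4, 26, 48, 70):   # temperatures from the study
    lo, hi = worst_case_length(parts, t_op=t)
    print(f"{t:>4} degC: {lo:.4f} .. {hi:.4f} mm")
```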

Keywords: geometric dimensioning, tolerance analysis, worst-case analysis, zoom lens mechanism

Procedia PDF Downloads 141
885 Purchasing Decision-Making in Supply Chain Management: A Bibliometric Analysis

Authors: Ahlem Dhahri, Waleed Omri, Audrey Becuwe, Abdelwahed Omri

Abstract:

In industrial processes, decision-making ranges across different scales, from process control to supply chain management. The purchasing decision-making process in the supply chain is presently gaining more attention as a critical contributor to a company's strategic success. Given the scarcity of thorough summaries in prior studies, this bibliometric analysis adopts a meticulous approach to achieve quantitative knowledge on the constantly evolving subject of purchasing decision-making in supply chain management. Through bibliometric analysis, we examine a sample of 358 peer-reviewed articles from the Scopus database. VOSviewer and Gephi software were employed to analyze, combine, and visualize the data. Data analytic techniques, including citation networks, PageRank analysis, co-citation, and publication trends, have been used to identify influential works and outline the discipline's intellectual structure. The outcomes of this descriptive analysis highlight the most prominent articles, authors, journals, and countries based on their citations and publications. The findings illustrate an increase in the number of publications, exhibiting a slightly growing trend in this field. Co-citation analysis, coupled with content analysis of the most cited articles, identified five research themes: (1) integrating sustainability into the supplier selection process; (2) supplier selection under disruption risks, and assessment and mitigation strategies; (3) fuzzy MCDM approaches for supplier evaluation and selection; (4) purchasing decisions in vendor problems; and (5) decision-making techniques in supplier selection and order lot-sizing problems. With the help of a graphic timeline, this exhaustive map of the field provides a visual representation of the evolution of publications, demonstrating a gradual shift from research interest in vendor selection problems to integrating sustainability into the supplier selection process. These clusters offer insights into a wide variety of purchasing methods and conceptual frameworks that have emerged; however, they have not been validated empirically. The findings suggest that future research should offer a greater depth of practical and empirical analysis to enrich the theories. These outcomes provide a powerful road map for further study in this area.

Keywords: bibliometric analysis, citation analysis, co-citation, Gephi, network analysis, purchasing, SCM, VOSviewer

Procedia PDF Downloads 62
884 Encapsulation of Probiotic Bacteria in Complex Coacervates

Authors: L. A. Bosnea, T. Moschakis, C. Biliaderis

Abstract:

Two probiotic strains, Lactobacillus paracasei subsp. paracasei (E6) and Lactobacillus paraplantarum (B1), isolated from traditional Greek dairy products, were microencapsulated by complex coacervation using whey protein isolate (WPI, 3% w/v) and gum arabic (GA, 3% w/v) solutions mixed at different polymer ratios (1:1, 2:1 and 4:1). The effect of total biopolymer concentration on cell viability was assessed using WPI and GA solutions of 1, 3 and 6% w/v at a constant ratio of 2:1. Several parameters were also examined for optimization of the microcapsule formation, such as inoculum concentration and the effect of ionic strength. The viability of the bacterial cells during heat treatment and under simulated gut conditions was also evaluated. Among the different WPI/GA weight ratios tested (1:1, 2:1, and 4:1), the highest survival rate was observed for the coacervate structures made with the 2:1 ratio. The protection efficiency at low pH values is influenced by both the concentration and the ratio of the added biopolymers. Moreover, the inoculum concentration seems to affect the efficiency of the microcapsules to entrap the bacterial cells, since an optimum level was noted at less than 8 log cfu/ml. Generally, entrapment of lactobacilli in the complex coacervate structure enhanced the viability of the microorganisms when exposed to a low pH environment (pH 2.0). Both encapsulated strains retained high viability in simulated gastric juice (>73%), especially in comparison with non-encapsulated (free) cells (<19%). The encapsulated lactobacilli also exhibited enhanced viability after 10–30 min of heat treatment (65°C) as well as at different NaCl concentrations (pH 4.0). Overall, the results of this study suggest that complex coacervation with WPI/GA has the potential to deliver live probiotics in low pH food systems and fermented dairy products; the complexes can dissolve at pH 7.0 (gut environment), releasing the microbial cells.

Keywords: probiotic, complex coacervation, whey, encapsulation

Procedia PDF Downloads 275
883 Bioleaching of Metals Contained in Spent Catalysts by Acidithiobacillus thiooxidans DSM 26636

Authors: Andrea M. Rivas-Castillo, Marlenne Gómez-Ramirez, Isela Rodríguez-Pozos, Norma G. Rojas-Avelizapa

Abstract:

Spent catalysts are considered hazardous residues of major concern, mainly due to the simultaneous presence of several metals in elevated concentrations. Although hydrometallurgical, pyrometallurgical, and chelating agent methods are available to remove and recover some metals contained in spent catalysts, these procedures generate potentially hazardous wastes and the emission of harmful gases. Thus, biotechnological treatments are currently gaining importance to avoid the negative impacts of chemical technologies. To this end, diverse microorganisms have been used to assess the removal of metals from spent catalysts, comprising bacteria, archaea, and fungi, whose resistance and metal uptake capabilities differ depending on the microorganism tested. Acidophilic sulfur-oxidizing bacteria, namely Acidithiobacillus thiooxidans and Acidithiobacillus ferrooxidans, have been used to investigate the biotreatment and extraction of valuable metals from spent catalysts, as they are able to produce leaching agents such as sulfuric acid and sulfur oxidation intermediates. In the present work, the ability of A. thiooxidans DSM 26636 to bioleach the metals contained in five different spent catalysts was assessed by growing the culture in modified Starkey mineral medium (with elemental sulfur at 1%, w/v) and 1% (w/v) pulp density of each residue for up to 21 days at 30 °C and 150 rpm. Sulfur-oxidizing activity was periodically evaluated by determining the sulfate concentration in the supernatants according to the NMX-k-436-1977 method. The production of sulfuric acid was also assessed in the supernatants, by a titration procedure using NaOH 0.5 M with bromothymol blue as acid-base indicator, and by measuring pH using a digital potentiometer. Inductively Coupled Plasma - Optical Emission Spectrometry was used to analyze metal removal from the five different spent catalysts by A. thiooxidans DSM 26636. The results show that, as could be expected, sulfuric acid production is directly related to the decrease of pH and to the highest metal removal efficiencies. It was observed that Al and Fe are consistently removed from refinery spent catalysts regardless of their origin and previous usage, although these removals varied from 9.5 ± 2.2 to 439 ± 3.9 mg/kg for Al, and from 7.13 ± 0.31 to 368.4 ± 47.8 mg/kg for Fe, depending on the spent catalyst tested. Besides, bioleaching of metals like Mg, Ni, and Si was also obtained from automotive spent catalysts, with removals of up to 66 ± 2.2, 6.2 ± 0.07, and 100 ± 2.4, respectively. Hence, the data presented here exhibit the potential of A. thiooxidans DSM 26636 for the simultaneous bioleaching of metals contained in spent catalysts of diverse provenance.

Keywords: bioleaching, metal removal, spent catalysts, Acidithiobacillus thiooxidans

Procedia PDF Downloads 114
882 Application of Value Engineering Approach for Improving the Quality and Productivity of Ready-Mixed Concrete Used in Construction and Hydraulic Projects

Authors: Adel Mohamed El-Baghdady, Walid Sayed Abdulgalil, Ahmad Asran, Ibrahim Nosier

Abstract:

This paper studies the effectiveness of applying value engineering to actual concrete mixtures. The study was conducted in the State of Qatar on a number of strategic construction projects with international engineering specifications for the 2022 World Cup. The study examined the concrete mixtures of the Doha Metro project and the development of KAHRAMAA’s (Qatar Electricity and Water Company) Abu Funtas Strategic Desalination Plant, in order to generally improve the quality and productivity of ready-mixed concrete used in construction and hydraulic projects. The application of value engineering to such concrete mixtures resulted in the following: i) improving the quality of concrete mixtures and increasing the durability of buildings in which they are used; ii) reducing the waste of excess concrete materials, optimizing the use of resources, and enhancing sustainability; iii) reducing the use of cement, thus reducing CO₂ emissions, which ensures the protection of the environment and public health; iv) reducing the actual costs of concrete mixtures and, in turn, the costs of construction projects; and v) increasing the market share and competitiveness of concrete producers. This research shows that applying the methodology of value engineering to ready-mixed concrete is an effective way to save around 5% of the total cost of concrete mixtures supplied to construction and hydraulic projects, improve quality according to the technical requirements and the standards and specifications for ready-mixed concrete, improve the environmental impact, and promote sustainability.

Keywords: value management, cost of concrete, performance, optimization, sustainability, environmental impact

Procedia PDF Downloads 326
881 Protective Role of Autophagy Challenging the Stresses of Type 2 Diabetes and Dyslipidemia

Authors: Tanima Chatterjee, Maitree Bhattacharyya

Abstract:

The global challenge of type 2 diabetes mellitus is a major health concern in this millennium, and researchers are continuously exploring new targets to develop novel therapeutic strategies. Type 2 diabetes mellitus (T2DM) is often coupled with dyslipidemia, increasing the risk of cardiovascular (CVD) complications. Enhanced oxidative and nitrosative stresses appear to be the major risk factors underlying insulin resistance, dyslipidemia, β-cell dysfunction, and T2DM pathogenesis. Autophagy is emerging as a promising defense mechanism against stress-mediated cell damage, regulating tissue homeostasis, cellular quality control, and energy production and promoting cell survival. In this study, we have attempted to explore the pivotal role of autophagy in T2DM subjects with or without dyslipidemia in peripheral blood mononuclear cells and insulin-resistant HepG2 cells, utilizing a flow cytometry platform, confocal microscopy, and molecular biology techniques like western blotting, immunofluorescence, and real-time polymerase chain reaction. In the case of T2DM with dyslipidemia, a higher population of autophagy-positive cells was detected compared to patients with only T2DM, which might result from higher stress. Autophagy was observed to be triggered by both oxidative and nitrosative stress, a novel finding of our research. LC3 puncta were observed in peripheral blood mononuclear cells and at the periphery of HepG2 cells in the diabetic and diabetic-dyslipidemic conditions. Increased expression of ATG5, LC3B, and Beclin supports the autophagic pathway in both PBMC and insulin-resistant HepG2 cells. Upon blocking autophagy with 3-methyladenine (3MA), the apoptotic cell population increased significantly, as observed by caspase-3 cleavage and reduced expression of Bcl2. Autophagy has also been evidenced to control oxidative stress-mediated up-regulation of inflammatory markers like IL-6 and TNF-α. To conclude, this study elucidates that autophagy plays a protective role in the case of diabetes mellitus with dyslipidemia. This study could have a significant impact on the development of new therapeutic strategies for diabetic dyslipidemic subjects based on enhancing autophagic activity.

Keywords: autophagy, apoptosis, dyslipidemia, reactive oxygen species, reactive nitrogen species, type 2 diabetes

Procedia PDF Downloads 107
880 Catalytic Effect on Eco-Friendly Functional Material in Flame Retardancy of Cellulose

Authors: Md. Abdul Hannan

Abstract:

Two organophosphorus compounds, namely diethyloxymethyl-9-oxa-10-phosphaphenanthrene-10-oxide (DOPAC) and diethyl (2,2-diethoxyethyl) phosphonate (DPAC), were applied to cotton cellulose to impart a non-carcinogenic and durable (in alkaline washing) flame retardant property to it. Several acidic catalysts, sodium dihydrogen phosphate (NaH2PO4), ammonium dihydrogen phosphate (NH4H2PO4), and phosphoric acid (H3PO4), were successfully used. The synergistic acidic catalyzing effect of NaH2PO4+H3PO4 and NaH2PO4+NH4H2PO4 was also investigated. An appreciable limiting oxygen index (LOI) value of 23.2% was achieved for the samples treated with the flame retardant (FR) compound DPAC along with the combined acidic catalyzing effect. A distinctive total heat of combustion (THC) of 3.27 kJ/g was revealed during the pyrolysis combustion flow calorimetry (PCFC) test of the treated sample. With respect to thermal degradation, low-temperature dehydration in conjunction with a sufficient amount of char residue (30.5%) was obtained for the DPAC-treated sample. Consistently, the temperature of peak heat release rate (TPHRR) of the DPAC-treated sample (325°C) supported the expected low-temperature pyrolysis in a condensed-phase mechanism. Subsequent thermogravimetric analysis (TGA) also showed encouraging weight retention percentages for the treated samples. Furthermore, for both of the flame retardant compounds, the effects of different catalysts (both individual and combined), the effect of solvents, and the overall optimization of the process parameters were studied in detail.

Keywords: cotton cellulose, organophosphorus flame retardant, acetal linkage, THC, HRR, PHRR, char residue, LOI

Procedia PDF Downloads 239
879 Heuristics for Optimizing Power Consumption in the Smart Grid

Authors: Zaid Jamal Saeed Almahmoud

Abstract:

Our increasing reliance on electricity, with inefficient consumption trends, has resulted in several economic and environmental threats. These threats include wasting billions of dollars, draining limited resources, and elevating the impact of climate change. As a solution, the smart grid is emerging as the future power grid, with smart techniques to optimize power consumption and electricity generation. Minimizing the peak power consumption under a fixed delay requirement is a significant problem in the smart grid. In addition, matching demand to supply is a key requirement for the success of the future electricity grid. In this work, we consider the problem of minimizing the peak demand under appliance constraints by scheduling power jobs with uniform release dates and deadlines. As the problem is known to be NP-hard, we propose two versions of a heuristic algorithm for solving it. Our theoretical analysis and experimental results show that our proposed heuristics outperform existing methods by providing a better approximation to the optimal solution. In addition, we consider dynamic pricing methods to minimize the peak load and match demand to supply in the smart grid. Our contribution is the proposal of generic, as well as customized, pricing heuristics to minimize the peak demand and match demand with supply. In addition, we propose optimal pricing algorithms that can be used when the maximum deadline period of the power jobs is relatively small. Finally, we provide theoretical analysis and conduct several experiments to evaluate the performance of the proposed algorithms.
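
To illustrate the problem setting, here is a minimal greedy heuristic sketch: jobs sharing a release-date/deadline window are placed slot-by-slot into the currently least-loaded slots. This is only an illustration of peak-demand scheduling, not the paper's algorithm; the job data are invented, and jobs are allowed to occupy non-contiguous slots for simplicity.

```python
# Greedy sketch for peak-demand minimization: schedule each power job's
# required time slots into the least-loaded slots of a shared window.
def schedule_jobs(jobs, horizon):
    """jobs: list of (power_kw, duration_slots); horizon: slots in window."""
    load = [0.0] * horizon
    assignment = []
    # Place bigger jobs first (classic LPT-style ordering).
    for power, duration in sorted(jobs, reverse=True):
        slots = sorted(range(horizon), key=lambda t: load[t])[:duration]
        for t in slots:
            load[t] += power
        assignment.append((power, sorted(slots)))
    return assignment, max(load)

jobs = [(3.0, 2), (2.0, 3), (1.5, 1), (1.0, 4)]   # (kW, slots needed)
assignment, peak = schedule_jobs(jobs, horizon=6)
print("peak load:", peak, "kW")                    # 3.0 kW for this instance
for power, slots in assignment:
    print(f"{power} kW job -> slots {slots}")
```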

Keywords: heuristics, optimization, smart grid, peak demand, power supply

Procedia PDF Downloads 66
878 An Efficient Tool for Mitigating Voltage Unbalance with Reactive Power Control of Distributed Grid-Connected Photovoltaic Systems

Authors: Malinwo Estone Ayikpa

Abstract:

With the rapid increase of grid-connected PV systems over the last decades, genuine challenges have arisen for engineers and professionals in the energy field in the planning and operation of existing distribution networks with the integration of new generation sources. However, the conventional distribution network was not designed to receive generation other than from the main power supply. The tools generally used to analyze such networks become inefficient and cannot take into account all the constraints related to the operation of grid-connected PV systems. Some of these constraints are voltage control difficulty, reverse power flow, and especially voltage unbalance, which can be due to the poor distribution of single-phase PV systems in the network. In order to analyze the impact of connecting small and large numbers of PV systems to distribution networks, this paper presents an efficient optimization tool that minimizes voltage unbalance in three-phase distribution networks with active and reactive power injections from the allocation of single-phase and three-phase PV plants. Reactive power can be generated or absorbed using the available capacity and the adjustable power factor of the inverter. A good reduction of voltage unbalance can be achieved by reactive power control of the PV systems. The presented tool is based on the three-phase current injection method, and the PV systems are modeled via an equivalent circuit. The primal-dual interior point method is used to obtain the optimal operating points for the systems.
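
As background to the objective being minimized, voltage unbalance is commonly quantified by the voltage unbalance factor (VUF), the ratio of negative- to positive-sequence voltage obtained from symmetrical components; the phasor values in the sketch below are illustrative.

```python
import cmath, math

# Voltage unbalance factor: VUF = |V2| / |V1|, the ratio of negative- to
# positive-sequence voltage (symmetrical components).
a = cmath.exp(1j * math.radians(120))

def vuf(va, vb, vc):
    v1 = (va + a * vb + a**2 * vc) / 3      # positive sequence
    v2 = (va + a**2 * vb + a * vc) / 3      # negative sequence
    return abs(v2) / abs(v1)

# A slightly unbalanced set: phase b sags to 0.95 pu.
va = cmath.rect(1.00, math.radians(0))
vb = cmath.rect(0.95, math.radians(-120))
vc = cmath.rect(1.00, math.radians(120))
print(f"VUF = {100 * vuf(va, vb, vc):.2f} %")   # ~1.7 %
```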

Keywords: photovoltaic system, primal-dual interior point method, three-phase optimal power flow, voltage unbalance

Procedia PDF Downloads 316
877 Predictive Analytics in Oil and Gas Industry

Authors: Suchitra Chnadrashekhar

Abstract:

Earlier regarded as a support function in an organization, information technology has now become a critical utility for managing daily operations. Organizations are processing huge amounts of data, which was unimaginable a few decades ago. This has opened the opportunity for the IT sector to help industries across domains handle data in the most intelligent manner. The presence of IT has given the oil and gas industry leverage to store, manage, and process data in the most efficient way possible, thus deriving economic value in day-to-day operations. Proper synchronization between operational data systems and information technology systems is the need of the hour. Predictive analytics supports oil and gas companies by addressing the challenges of critical equipment performance, life cycle, integrity, and security, and by increasing equipment utilization. Predictive analytics goes beyond early warning by providing insights into the roots of problems. To reach their full potential, oil and gas companies need to take a holistic or systems approach towards asset optimization and thus have the functional information at all levels of the organization in order to make the right decisions. This paper discusses how the use of predictive analytics in the oil and gas industry is redefining the dynamics of this sector. The paper is also supported by real-time data and an evaluation of the data for a given oil production asset on an application tool, SAS. The reason for using SAS as an application for our analysis is that SAS provides an analytics-based framework to improve uptimes, performance, and availability of crucial assets while reducing the amount of unscheduled maintenance, thus minimizing maintenance-related costs and operational disruptions. With state-of-the-art analytics and reporting, maintenance problems can be predicted before they happen, and root causes can be determined in order to update processes for future prevention.

Keywords: hydrocarbon, information technology, SAS, predictive analytics

Procedia PDF Downloads 324
876 Prediction of Damage to Cutting Tools in an Earth Pressure Balance Tunnel Boring Machine (EPB TBM): A Case Study of the L3 Guadalajara Metro Line (Mexico)

Authors: Silvia Arrate, Waldo Salud, Eloy París

Abstract:

The wear of cutting tools is one of the most decisive elements when planning tunneling works, programming the maintenance stops, and sizing the optimum stock of spare parts during the evolution of the excavation. Being able to predict the behavior of cutting tools can give a very competitive advantage in terms of costs and excavation performance, optimized to the needs of the TBM itself. The remarkable evolution of data science in recent years makes it possible to apply it when analyzing the key and most critical machine parameters, with the purpose of knowing how the cutting head is performing against the excavated ground. Taking Metro Line 3 of Guadalajara in Mexico as a case study, the feasibility of using Specific Energy versus data science applied to parameters such as torque, penetration, and contact force, among others, is developed to predict the behavior and status of the cutting tools. The results obtained through both techniques are analyzed and verified as a function of the wear and the field situations observed in the excavation, in order to determine their effectiveness regarding predictive capacity. In conclusion, the possibilities and improvements offered by the application of digital tools and the programming of calculation algorithms for the analysis of cutting head element wear, compared to purely empirical methods, allow early detection of possible damage to cutting tools, which is reflected in optimized excavation performance and a significant improvement in costs and deadlines.
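
One common definition of the Specific Energy mentioned above (after Teale) combines thrust and rotary work per unit of excavated volume; the sketch below uses that generic form with invented machine values, since the abstract does not state the exact formulation applied on L3 Guadalajara.

```python
import math

# Specific energy for rotary excavation (after Teale):
#   SE = F/A + 2*pi*N*T / (A*v)
# i.e. thrust work plus rotary work per unit excavated volume.
# All machine values below are illustrative, not project data.
def specific_energy(thrust_kN, torque_kNm, rpm, advance_m_per_min, diameter_m):
    area = math.pi * diameter_m**2 / 4                 # excavated face area, m^2
    translational = thrust_kN / area                   # kJ/m^3 (== kPa)
    rotary = 2 * math.pi * rpm * torque_kNm / (area * advance_m_per_min)
    return translational + rotary                      # kJ/m^3

se = specific_energy(thrust_kN=12000, torque_kNm=4500,
                     rpm=1.8, advance_m_per_min=0.05, diameter_m=6.5)
print(f"SE = {se / 1000:.1f} MJ/m^3")                  # ~31 MJ/m^3 here
```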

Keywords: cutting tools, data science, prediction, TBM, wear

Procedia PDF Downloads 24
875 Smart Campus Digital Twin: Basic Framework - Current State, Trends and Challenges

Authors: Enido Fabiano de Ramos, Ieda Kanashiro Makiya, Francisco I. Giocondo Cesar

Abstract:

This study presents an analysis of the Digital Twin concept applied to the academic environment, focusing on the development of a Digital Twin Smart Campus Framework. Using bibliometric analysis methodologies and a literature review, the research investigates the evolution and applications of the Digital Twin in educational contexts, comparing these findings with the advances of Industry 4.0. Gaps in the existing literature were identified, highlighting the need to adapt Digital Twin principles to meet the specific demands of a smart campus. By integrating Industry 4.0 concepts such as automation, the Internet of Things, and real-time data analytics, we propose an innovative framework for the successful implementation of the Digital Twin in academic settings. The results of this study provide valuable insights for university campus managers, allowing for a better understanding of the potential applications of the Digital Twin for operations, security, and user experience optimization. In addition, our framework offers practical guidance for transitioning from a digital campus to a digital twin smart campus, promoting innovation and efficiency in the educational environment. This work contributes to the growing literature on Digital Twins and Industry 4.0 while offering a specific and tailored approach to transforming university campuses into the smart and connected spaces demanded by Society 5.0 trends. It is hoped that this framework will serve as a basis for future research and practical implementations in the field of higher education and educational technology.

Keywords: smart campus, digital twin, industry 4.0, education trends, society 5.0

Procedia PDF Downloads 26
874 Success Factors for Innovations in SME Networks

Authors: J. Gochermann

Abstract:

Due to complex markets and products and an increasing need to innovate, forms of cooperation between small and medium-sized enterprises arose during the last decades that are not primarily driven by process optimization or sales enhancement. Small and medium-sized enterprises (SMEs) in particular collaborate increasingly in innovation and knowledge networks to enhance their knowledge and innovation potential and to find strategic partners for product and market development. These networks are characterized by dual objectives: the superordinate goal of the total network and the specific objectives of the network members, which can cause target conflicts. Moreover, most SMEs do not have structured innovation processes, and they are not accustomed to collaborating on complex innovation projects in an open network structure. On the other hand, SMEs have characteristics suitable for promising networking. They are flexible and spontaneous, they have flat hierarchies, and the acting people are not anonymous. These characteristics indeed distinguish them from bigger concerns. Investigations of German SME networks have been carried out to identify success factors for SME innovation networks. The fundamental network principles, donation-return and confidence, could be confirmed and identified as basic success factors. Further factors are voluntariness, an adequate number of network members, quality of communication, neutrality and competence of the network management, as well as reliability and obligingness of the network services. Innovation and knowledge networks with an appreciable number of members from science and technology institutions also need active sense-making to bring different disciplines into successful collaboration. It has also been investigated whether and how involvement in an innovation network impacts the innovation structure and culture inside the member companies. The degree of reaction grows with the time and intensity of commitment.

Keywords: innovation and knowledge networks, SME, success factors, innovation structure and culture

Procedia PDF Downloads 261
873 A Numerical Study on Semi-Active Control of a Bridge Deck under Seismic Excitation

Authors: A. Yanik, U. Aldemir

Abstract:

This study investigates the benefits of implementing semi-active devices in relation to passive viscous damping in the context of seismically isolated bridge structures. Since the intrinsically nonlinear nature of semi-active devices prevents the direct evaluation of Laplace transforms, frequency response functions are compiled from the computed time history response to sinusoidal and pulse-like seismic excitation. A simple semi-active control policy is evaluated against passive linear viscous damping and an optimal non-causal semi-active control strategy. The control strategy requires optimization; Euler-Lagrange equations are solved numerically during this procedure. The optimal closed-loop performance is evaluated for an idealized controllable dashpot. A simplified single-degree-of-freedom model of an isolated bridge is used as a numerical example. Two bridge cases are investigated: the bridge deck without the isolation bearing and the bridge deck with the isolation bearing. To compare the performance of the passive and semi-active control cases, frequency-dependent acceleration, velocity, and displacement response transmissibility ratios Ta(ω), Tv(ω), and Td(ω) are defined. To fully investigate the behavior of the structure subjected to sinusoidal and pulse-type excitations, different damping levels are considered. Numerical results showed that, under external excitation, the bridge deck with semi-active control exhibited better structural performance than the passive bridge deck case.
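
For a flavor of what a simple semi-active policy on a single-degree-of-freedom isolated deck looks like, here is a sketch using the common on-off skyhook rule with a bounded controllable dashpot; the parameters, excitation, and policy are all illustrative assumptions, not the paper's optimal non-causal strategy.

```python
import numpy as np

# SDOF isolated-deck model under base excitation with an on-off ("skyhook")
# semi-active damper: high damping when it opposes the absolute velocity,
# low damping otherwise. All parameter values are assumed.
m, k = 1.0e5, 4.0e6            # mass (kg), isolation stiffness (N/m)
c_lo, c_hi = 2.0e4, 2.0e5      # controllable dashpot bounds (N.s/m)

def simulate(ag, dt=0.005):
    """ag: ground acceleration record (m/s^2). Symplectic Euler for brevity."""
    x, v, vg, peak = 0.0, 0.0, 0.0, 0.0   # relative disp./vel., ground vel.
    for a_g in ag:
        vg += a_g * dt
        v_abs = v + vg                         # absolute deck velocity
        c = c_hi if v_abs * v > 0 else c_lo    # on-off skyhook rule
        a = -(c * v + k * x) / m - a_g         # relative-coordinate EOM
        v += a * dt
        x += v * dt
        peak = max(peak, abs(x))
    return peak

t = np.arange(0, 10, 0.005)
ag = 0.3 * 9.81 * np.sin(2 * np.pi * 1.0 * t)  # 1 Hz sinusoidal excitation
print(f"peak relative displacement: {simulate(ag) * 1000:.1f} mm")
```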

Keywords: bridge structures, passive control, seismic, semi-active control, viscous damping

Procedia PDF Downloads 219
872 Fracture Behaviour of Functionally Graded Materials Using Graded Finite Elements

Authors: Mohamad Molavi Nojumi, Xiaodong Wang

Abstract:

In this research, the fracture behaviour of linear elastic isotropic functionally graded materials (FGMs) is investigated using a modified finite element method (FEM). FGMs are advantageous because they enhance the bonding strength of two incompatible materials and reduce residual stress and thermal stress. Ceramic/metal composites are a main type of FGM. Ceramic materials are brittle, so there is a high possibility of cracks forming during fabrication or in-service loading. In addition, damage analysis is necessary for a safe and efficient design. FEM is a strong numerical tool for analyzing complicated problems; thus, FEM is used to investigate the fracture behaviour of FGMs. Here, an accurate 9-node biquadratic quadrilateral graded element is proposed, in which the influence of the variation of material properties is considered at the element level. The stiffness matrix of graded elements is obtained using the principle of minimum potential energy. The implementation of graded elements avoids the artificial sudden jump of material properties that arises when traditional finite elements are used to model FGMs. Numerical results are verified against existing solutions. Different numerical simulations are carried out to model stationary crack problems in nonhomogeneous plates. In these simulations, the material variation is assumed to occur in directions perpendicular and parallel to the crack line. Linear and exponential functions, the two forms most discussed in the literature, are utilized to model the material gradient. Various crack lengths are also considered. A major difference between the fracture behaviour of FGMs and that of homogeneous materials is related to the break of material symmetry. For example, when the material gradation direction is normal to the crack line, even under mode I loading there exist coupled modes I and II of fracture, originating from the shear induced in the model. Therefore, the need for proper modelling of the material variation should be considered in capturing the fracture behaviour of FGMs, especially when the material gradient index is high. Fracture properties such as mode I and mode II stress intensity factors (SIFs), energy release rates, and field variables near the crack tip are investigated and compared with results obtained using conventional homogeneous elements. It is revealed that graded elements provide higher accuracy with less effort in comparison with conventional homogeneous elements.
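
The essence of a graded element, sampling material properties at the integration points instead of holding them constant per element, can be shown in one dimension; the sketch below compares a graded bar element with a conventional homogeneous one for an exponential modulus, with all geometry and material values illustrative.

```python
import numpy as np

# Graded-element idea in 1D: a two-node bar with exponential modulus
# E(x) = E0 * exp(beta * x), integrated by Gauss quadrature so the
# property varies inside the element. Values are illustrative.
E0, beta, A = 200e9, 5.0, 1e-4        # Pa, 1/m, m^2
x1, x2 = 0.0, 0.1                     # element end coordinates (m)
L = x2 - x1

gauss = [(-1 / np.sqrt(3), 1.0), (1 / np.sqrt(3), 1.0)]  # 2-point rule

def k_graded():
    """Bar stiffness with E sampled at Gauss points (graded element)."""
    k = 0.0
    for xi, w in gauss:
        x = x1 + (xi + 1) / 2 * L     # map xi in [-1,1] to physical x
        k += w * E0 * np.exp(beta * x) * A / L**2 * (L / 2)
    return k  # scalar k such that K = k * [[1, -1], [-1, 1]]

def k_homogeneous():
    """Conventional element: one modulus (midpoint value) per element."""
    return E0 * np.exp(beta * (x1 + x2) / 2) * A / L

print(f"graded: {k_graded():.4e} N/m, homogeneous: {k_homogeneous():.4e} N/m")
```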

Keywords: finite element, fracture mechanics, functionally graded materials, graded element

Procedia PDF Downloads 153
871 Optimization of Reaction Parameters' Influences on Production of Bio-Oil from Fast Pyrolysis of Oil Palm Empty Fruit Bunch Biomass in a Fluidized Bed Reactor

Authors: Chayanoot Sangwichien, Taweesak Reungpeerakul, Kyaw Thu

Abstract:

Oil palm mills in Southern Thailand produce a large amount of solid biomass waste. Lignocellulosic biomass is the main source for the production of biofuel, which can be blended with or used as an alternative to fossil fuels. Biomass is composed of three main constituents: cellulose, hemicellulose, and lignin. Thermochemical conversion processes are applied to produce biofuel from biomass, and fast pyrolysis is one of the best ways to thermochemically convert biomass into pyrolytic products (bio-oil, gas, and char). Operating parameters play an important role in optimizing the product yields from fast pyrolysis of biomass. The present work concerns the modeling of reaction kinetics parameters for fast pyrolysis of empty fruit bunch in a fluidized bed reactor. A global kinetic model is used to predict the product yields from fast pyrolysis of empty fruit bunch. The product yields of EFB pyrolysis are mainly affected by the reaction temperature and the vapor residence time; these effects are considered for reaction temperatures in the range of 450-500 °C and vapor residence times of up to 2 s. The optimum simulated bio-oil yield of 53 wt.% was obtained at a reaction temperature of 450 °C with a vapor residence time of 2 s, and at 500 °C with a residence time of 1 s. The simulated data are in good agreement with the reported experimental data and can be applied to the design and performance of experimental work on the fast pyrolysis of biomass.
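
A minimal sketch of the global kinetic idea follows: parallel Arrhenius reactions converting biomass to gas, bio-oil, and char, integrated over the vapor residence time. The rate constants below are of the order reported for woody biomass in the literature, not the values fitted for EFB in this paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Global competing-reaction scheme often used for fast pyrolysis:
# biomass -> gas, bio-oil (tar), char in parallel, each with an Arrhenius
# rate k_i = A_i * exp(-E_i / (R*T)). A/E values are illustrative.
R = 8.314
A = np.array([4.4e9, 1.1e10, 3.3e6])     # 1/s: gas, oil, char
E = np.array([153e3, 148e3, 112e3])      # J/mol

def rhs(t, y, T):
    biomass = y[0]
    k = A * np.exp(-E / (R * T))
    return [-k.sum() * biomass,
            k[0] * biomass, k[1] * biomass, k[2] * biomass]

T = 450 + 273.15                          # reaction temperature (K)
sol = solve_ivp(rhs, (0, 2.0), [1, 0, 0, 0], args=(T,))
biomass, gas, oil, char = sol.y[:, -1]
# Bio-oil is the largest product fraction under these illustrative constants.
print(f"after 2 s at 450 degC: oil={oil:.2f}, gas={gas:.2f}, char={char:.2f}")
```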

Keywords: kinetics, empty fruit bunch, fast pyrolysis, modeling

Procedia PDF Downloads 180
870 Acceptability Process of a Congestion Charge

Authors: Amira Mabrouk

Abstract:

This paper deals with the acceptability of urban tolls in Tunisia. Price-based regulation, i.e. the urban toll, is the outcome of a political process shaped by three-fold objectives: effectiveness, equity, and social acceptability. This produces economic interest groups and functions with incongruent preferences. The plausibility of this observation goes hand in hand with the fact that these economic interest groups are also taxpayers, who undeniably perceive the urban toll as an additional charge. This wariness is coupled with inquiries about the conditions of usage and the redistribution of the collected tax revenue, and the idea of the leviathan state completes the picture. In a nutshell, although research related to road congestion has proliferated, no de facto legitimacy can be claimed. Nonetheless, the theory of urban tolls leads economists to question ways of reducing the negative external effects linked to congestion. Only then does the urban toll appear to bear an answer to these issues. Undeniably, the urban toll entails inherent conflicts due to the apparent principle of non-payment for a public asset, as well as to the social perception of the new measure as a mere additional charge. However, when the main concern is effectiveness in its broad sense and social well-being, the main factors that determine the acceptability of such a tariff measure, along with the type of incentives, should be the object of a thorough, in-depth analysis. Before adopting this economic role, one has to recognize the factors that intervene in the acceptability of a congestion toll, a subject that has brought about a copious number of articles and reports that mostly lack solid theoretical content. It is noticeable that nowadays uncertainties float over the exact nature of the acceptability process. Acceptance of a congestion tariff could differ from one era to another, from one region to another, from one population to another, etc. Notably, this article attempts, within a convenient time frame, to bring into focus the link between the social acceptability of the urban congestion toll and the value of time, through a survey method rarely employed in Tunisia: the stated preference method. How can the urban toll, as a tax, be defined, justified, and made acceptable? How can an equitable and effective congestion toll tariff be reached? How can the costs of this urban toll be covered? In what way can we make the redistribution of the urban toll revenue visible and economically equitable? How can the redistribution of the revenue of the urban toll compensate the disadvantaged while introducing such a tariff measure? This paper offers answers to these research questions, following the line of contribution of Jules Dupuit in 1844.

Keywords: congestion charge, social perception, acceptability, stated preferences

Procedia PDF Downloads 260
869 Modeling of Virtual Power Plant

Authors: Muhammad Fanseem E. M., Rama Satya Satish Kumar, Indrajeet Bhausaheb Bhavar, Deepak M.

Abstract:

Keeping the right balance between the supply and demand sides of the grid is one of the most important objectives of electrical grid operation. Power generation and demand forecasting are at the core of power management and generation scheduling. Conventional power systems were built around large, centralized generating units, and a certain level of balance was achievable because generation could follow demand. Integrating renewable energy sources into power networks, however, has proven to be a difficult challenge due to their intermittent nature. The power imbalance caused by rising demand and peak loads negatively affects power quality and reliability. Demand-side management and demand response are among the proposed solutions, keeping generation unchanged while altering, rescheduling, or entirely shedding the load; however, shedding or rescheduling load is not an efficient approach. This is where virtual power plants (VPPs) become significant. A virtual power plant integrates distributed generation, dispatchable loads, and distributed energy storage organically, using complementary control approaches and communication technologies, which ultimately increases the utilization rate and financial benefits of distributed energy resources. Most of the literature on virtual power plant modeling has ignored technical limitations, favoring a financial or commercial viewpoint. This paper therefore addresses the modeling intricacies of VPPs and their technical limitations, shedding light on a holistic understanding of this innovative power management approach.
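
As a minimal sketch of the technical constraints a VPP dispatch model must respect, the following linear program schedules one dispatchable unit and one battery against a fixed demand profile; all capacities, costs, and profiles are invented for illustration and are not taken from the paper.

```python
# Toy VPP dispatch: one dispatchable unit + one battery serving net demand
# over T hours, minimizing fuel cost subject to power balance, generator
# limits, and battery state-of-charge limits. All numbers are illustrative.
import numpy as np
from scipy.optimize import linprog

T = 4
demand = np.array([50.0, 60.0, 80.0, 70.0])   # MW
renew  = np.array([30.0, 40.0, 20.0, 10.0])   # MW, non-dispatchable
g_max, cost_g = 50.0, 30.0                    # MW, cost per MWh
cap, soc0, eta = 40.0, 20.0, 0.9              # MWh, MWh, charge efficiency

# Decision vector x = [g_0..g_{T-1}, ch_0..ch_{T-1}, dis_0..dis_{T-1}]
c = np.concatenate([np.full(T, cost_g), np.zeros(2 * T)])

# Power balance each hour: g_t - ch_t + dis_t = demand_t - renew_t
A_eq = np.hstack([np.eye(T), -np.eye(T), np.eye(T)])
b_eq = demand - renew

# State of charge after hour t: soc0 + sum_{k<=t}(eta*ch_k - dis_k) in [0, cap]
L = np.tril(np.ones((T, T)))                  # cumulative-sum operator
A_soc = np.hstack([np.zeros((T, T)), eta * L, -L])
A_ub = np.vstack([A_soc, -A_soc])             # soc <= cap and soc >= 0
b_ub = np.concatenate([np.full(T, cap - soc0), np.full(T, soc0)])

bounds = [(0, g_max)] * T + [(0, None)] * (2 * T)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
g, ch, dis = np.split(res.x, 3)
print("generation:", g.round(1), "discharge:", dis.round(1), "cost:", res.fun)
```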

Keywords: cost optimization, distributed energy resources, dynamic modeling, model quality tests, power system modeling

Procedia PDF Downloads 32
868 Organotin (IV) Based Complexes as Promiscuous Antibacterials: Synthesis in vitro, in Silico Pharmacokinetic, and Docking Studies

Authors: Wajid Rehman, Sirajul Haq, Bakhtiar Muhammad, Syed Fahad Hassan, Amin Badshah, Muhammad Waseem, Fazal Rahim, Obaid-Ur-Rahman Abid, Farzana Latif Ansari, Umer Rashid

Abstract:

Five novel triorganotin (IV) compounds have been synthesized and characterized. The tin atom is penta-coordinated, assuming a trigonal-bipyramidal geometry. Using in silico derived parameters, the objective of our study is to design and synthesize promiscuous antibacterials potent enough to combat resistance. Among the synthesized organotin (IV) complexes, compound 5 was found to be a potent antibacterial agent against various bacterial strains. Further lead optimization of its drug-like properties was evaluated through in silico predictions. Data mining and computational analysis were used to characterize compound promiscuity with a view to reducing drug attrition rates in antibacterial design. Xanthine oxidase and human glucose-6-phosphatase were found to be the only true-positive off-target hits, identified via the ChEMBL database and other tools employing the similarity ensemble approach. Propensity towards the α-3 receptor, human macrophage migration factor, and thiazolidinedione emerged as false-positive off-targets, with E-values ≥ 10⁻⁴ for compounds 1, 3, and 4. Further, a positive drug-drug interaction of compound 1 as a uricosuric agent was validated by all databases and by docked protein targets identified through sequence similarity and compositional matrix alignment using BLAST. The promiscuity of compound 5 was further confirmed by in silico binding to different antibacterial targets.
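
As a schematic of the ligand-similarity logic behind such off-target prediction (not the authors' actual ChEMBL/SEA pipeline), the sketch below scores a query molecule against hypothetical ligand sets using Morgan-fingerprint Tanimoto similarity in RDKit; the SMILES strings are arbitrary placeholders, since the real compounds here are organotin complexes.

```python
# Schematic off-target screen by ligand similarity (not the SEA pipeline
# itself): compare a query molecule against ligand sets of candidate
# targets using Morgan fingerprints. All SMILES below are placeholders.
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def fingerprint(smiles):
    mol = Chem.MolFromSmiles(smiles)
    return AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=2048)

# Hypothetical ligand sets for two candidate off-targets.
target_ligands = {
    "xanthine_oxidase": ["Cn1cnc2c1c(=O)n(C)c(=O)n2C",   # caffeine, placeholder
                         "O=C1Nc2ccccc2C(=O)N1"],        # quinazolinedione, placeholder
    "glucose_6_phosphatase": ["OC1OC(COP(=O)(O)O)C(O)C(O)C1O"],  # G6P, placeholder
}
query = fingerprint("CC(=O)Oc1ccccc1C(=O)O")  # placeholder query (aspirin)

for target, smiles_list in target_ligands.items():
    sims = [DataStructs.TanimotoSimilarity(query, fingerprint(s))
            for s in smiles_list]
    print(f"{target}: max Tanimoto = {max(sims):.2f}")
```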

Keywords: antibacterial activity, drug promiscuity, ADMET prediction, metallo-pharmaceutical, antimicrobial resistance

Procedia PDF Downloads 479
867 Dynamic Wetting and Solidification

Authors: Yulii D. Shikhmurzaev

Abstract:

The modelling of non-isothermal free-surface flows coupled with solidification has become a topic of intensive research with the advent of additive manufacturing, where complex three-dimensional structures are produced by successive deposition and solidification of microscopic droplets of different materials. The issue is that both the spreading of liquids over solids and the propagation of the solidification front into the fluid and along the solid substrate pose fundamental difficulties for mathematical modelling. The first of these processes, known as ‘dynamic wetting’, leads to the well-known ‘moving contact-line problem’, where, as shown recently both experimentally and theoretically, the contact angle formed by the free surface with the solid substrate is not a function of the contact-line speed but rather a functional of the flow field. The modelling of the propagating solidification front requires a generalization of the classical Stefan problem capable of describing the onset of the process and the non-equilibrium regime of solidification. Furthermore, given that dynamic wetting and solidification occur concurrently and interactively, they should be described within the same conceptual framework. The present work addresses this formidable problem and presents a mathematical model capable of describing the key element of additive manufacturing in a self-consistent and singularity-free way. The model is illustrated by simple examples highlighting its main features. The main idea of the work is that dynamic wetting and solidification, as well as some other fluid flows, are particular cases in a general class of flows in which interfaces form and/or disappear. This conceptual framework allows one to derive a mathematical model from first principles using the methods of irreversible thermodynamics. Crucially, the interfaces are not treated as zero-mass entities introduced via a Gibbsian ‘dividing surface’ but as two-dimensional surface phases produced by the continuum limit, in which the thickness of what is physically an interfacial layer vanishes and its properties are characterized by ‘surface’ parameters (surface tension, surface density, etc.). This approach allows for mass exchange between the surface and bulk phases, which is the essence of interface formation. As shown numerically, the onset of solidification is preceded by a pure interface-formation stage, whilst the Stefan regime is the final stage, in which the temperature at the solidification front asymptotically approaches the solidification temperature. The developed model can also be applied to flows with substrate melting, as well as to complex flows where both types of phase transition take place.
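
For orientation, the classical Stefan problem that this work generalizes can be stated as follows (a standard textbook form, not the author's extended model):

```latex
% Classical Stefan problem at a solidification front \Gamma(t):
% heat conduction in each bulk phase,
\[
  \rho c_p \,\frac{\partial T_i}{\partial t} = k_i \nabla^2 T_i ,
  \qquad i = \text{solid},\ \text{liquid},
\]
% the front held at the equilibrium melting temperature,
\[
  T = T_m \quad \text{on } \Gamma(t),
\]
% and the latent heat released balancing the jump in conductive flux:
\[
  \rho L \, v_n = k_s \,\frac{\partial T_s}{\partial n}
                - k_l \,\frac{\partial T_l}{\partial n} .
\]
% The generalization discussed above relaxes T = T_m, allowing the front
% temperature to deviate from equilibrium during the onset stage.
```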

Keywords: dynamic wetting, interface formation, phase transition, solidification

Procedia PDF Downloads 44
866 Optimizing Hydrogen Production from Biomass Pyro-Gasification in a Multi-Staged Fluidized Bed Reactor

Authors: Chetna Mohabeer, Luis Reyes, Lokmane Abdelouahed, Bechara Taouk

Abstract:

In the transition to sustainability and the increasing use of renewable energy, hydrogen will play a key role as an energy carrier, and biomass has the potential to accelerate its realization as a major fuel of the future. Pyro-gasification converts organic matter mainly into synthesis gas, or “syngas”, composed chiefly of CO, H2, CH4, and CO2. A second, condensable fraction of the biomass pyro-gasification products is “tars”. Under certain conditions, tars may decompose into hydrogen and other light hydrocarbons. These conditions include two types of cracking: homogeneous cracking, where tars decompose under the effect of temperature (> 1000 °C), and heterogeneous cracking, where catalysts such as olivine, dolomite, or biochar are used. The latter process favors tar cracking at temperatures close to pyro-gasification temperatures (~ 850 °C). Pyro-gasification of biomass coupled with water-gas shift is the most widely practiced route from biomass to hydrogen today. In this work, an innovative solution is proposed for this conversion route, in which all the pyro-gasification products, not only methane, undergo processes aimed at optimizing hydrogen production. First, a heterogeneous cracking step was included in the reaction scheme, using biochar (the solid remaining from the pyro-gasification reaction) as catalyst and CO2 and H2O as gasifying agents. This was followed by a catalytic steam methane reforming (SMR) step, for which a Ni-based catalyst was tested under different reaction conditions to optimize the H2 yield. Finally, a water-gas shift (WGS) step with a Fe-based catalyst was added to maximize the H2 yield from CO. The cracking reactor was a fluidized bed, while SMR and WGS were carried out in a fixed bed reactor; the gaseous products were analyzed continuously using a µ-GC (Fusion PN 074-594-P1F). With biochar as bed material, more H2 was obtained with steam as gasifying agent (32 mol. % vs. 15 mol. % with CO2 at 900 °C); CO and CH4 productions were also higher with steam than with CO2. Steam as gasifying agent and biochar as bed material were hence deemed efficient parameters for the first step. Among all parameters tested, CH4 conversions approaching 100 % were obtained from SMR using a Ni/γ-Al2O3 catalyst at 800 °C and a steam/methane ratio of 5, giving rise to about 45 mol. % H2. Experiments on the WGS reaction are currently being conducted. At the end of this phase, the four reaction steps will be performed consecutively and the results analyzed. The final aim is the development of a global kinetic model of the whole system in a multi-staged fluidized bed reactor that can be transferred to ASPEN Plus™.
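
For reference, the stoichiometry of the two catalytic steps bounds the hydrogen yield per methane molecule; the standard reaction enthalpies are quoted for orientation:

```latex
% Steam methane reforming (strongly endothermic):
\[
  \mathrm{CH_4 + H_2O \;\rightleftharpoons\; CO + 3\,H_2},
  \qquad \Delta H^\circ \approx +206\ \mathrm{kJ/mol},
\]
% Water-gas shift (mildly exothermic):
\[
  \mathrm{CO + H_2O \;\rightleftharpoons\; CO_2 + H_2},
  \qquad \Delta H^\circ \approx -41\ \mathrm{kJ/mol},
\]
% Net: up to four moles of H2 per mole of CH4 at full conversion:
\[
  \mathrm{CH_4 + 2\,H_2O \;\longrightarrow\; CO_2 + 4\,H_2}.
\]
```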

Keywords: multi-staged fluidized bed reactor, pyro-gasification, steam methane reforming, water-gas shift

Procedia PDF Downloads 110
865 A Survey of Field Programmable Gate Array-Based Convolutional Neural Network Accelerators

Authors: Wei Zhang

Abstract:

With the rapid development of deep learning, neural networks and deep learning algorithms play a significant role in various practical applications. Owing to their high accuracy and good performance, Convolutional Neural Networks (CNNs) in particular have become a research hotspot in the past few years. However, network sizes are becoming increasingly large due to the demands of practical applications, which poses a significant challenge to constructing high-performance implementations of deep learning neural networks. Meanwhile, many of these application scenarios also place strict requirements on the performance and power consumption of the hardware. It is therefore critical to choose a suitable computing platform for hardware acceleration of CNNs. This article surveys recent advances in Field Programmable Gate Array (FPGA)-based acceleration of CNNs. Various designs and implementations of FPGA-based accelerators across different devices and network models are reviewed, and counterparts on Graphics Processing Units (GPUs), Application-Specific Integrated Circuits (ASICs), and Digital Signal Processors (DSPs) are compared, together with our own critical analysis and comments. Finally, we discuss these acceleration and optimization methods on FPGA platforms from different perspectives to explore the opportunities and challenges for future research, and offer an outlook on the future development of FPGA-based accelerators.
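
To make concrete the loop-level structure these accelerators exploit, the sketch below writes out a tiled direct-convolution loop nest in Python; on an FPGA, an HLS tool would unroll the channel-tile loops into a parallel array of multiply-accumulate units. The tile sizes and tensor shapes are arbitrary illustrations.

```python
# Tiled direct convolution: the loop nest FPGA CNN accelerators optimize.
# On hardware, the tile loops (tm, tn) would be fully unrolled into a
# parallel array of multiply-accumulate units; here they run sequentially.
import numpy as np

def conv2d_tiled(x, w, Tm=4, Tn=4):
    """x: (C_in, H, W) input; w: (C_out, C_in, K, K) weights.
    Valid convolution: output is (C_out, H-K+1, W-K+1)."""
    C_out, C_in, K, _ = w.shape
    _, H, W = x.shape
    Ho, Wo = H - K + 1, W - K + 1
    y = np.zeros((C_out, Ho, Wo))
    for m0 in range(0, C_out, Tm):            # tile over output channels
        for n0 in range(0, C_in, Tn):         # tile over input channels
            for i in range(Ho):
                for j in range(Wo):
                    for tm in range(m0, min(m0 + Tm, C_out)):    # unrolled on FPGA
                        for tn in range(n0, min(n0 + Tn, C_in)): # unrolled on FPGA
                            for ki in range(K):
                                for kj in range(K):
                                    y[tm, i, j] += w[tm, tn, ki, kj] * x[tn, i + ki, j + kj]
    return y

x = np.random.rand(8, 16, 16)
w = np.random.rand(16, 8, 3, 3)
print(conv2d_tiled(x, w).shape)   # (16, 14, 14)
```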

Keywords: deep learning, field programmable gate array, FPGA, hardware accelerator, convolutional neural networks, CNN

Procedia PDF Downloads 98