Search results for: reliability modeling
338 Vapour Liquid Equilibrium Measurement of CO₂ Absorption in Aqueous 2-Aminoethylpiperazine (AEP)
Authors: Anirban Dey, Sukanta Kumar Dash, Bishnupada Mandal
Abstract:
Carbon dioxide (CO₂) is a major greenhouse gas responsible for global warming, and fossil fuel power plants are its main emitting sources. Therefore, the capture of CO₂ is essential to maintain emission levels according to the standards. Carbon capture and storage (CCS) is considered an important option for the stabilization of atmospheric greenhouse gases and minimizing global warming effects. There are three approaches towards CCS: pre-combustion capture, where carbon is removed from the fuel prior to combustion; oxy-fuel combustion, where coal is combusted with oxygen instead of air; and post-combustion capture, where the fossil fuel is combusted to produce energy and CO₂ is removed from the flue gases left after the combustion process. Post-combustion technology offers some advantages, as existing combustion technologies can still be used without adopting major changes to them. A number of separation processes could be utilized as part of post-combustion capture technology. These include (a) physical absorption, (b) chemical absorption, (c) membrane separation, and (d) adsorption. Chemical absorption is one of the most extensively used technologies for large-scale CO₂ capture systems. The industrially important solvents used are primary amines like monoethanolamine (MEA) and diglycolamine (DGA), secondary amines like diethanolamine (DEA) and diisopropanolamine (DIPA), and tertiary amines like methyldiethanolamine (MDEA) and triethanolamine (TEA). Primary and secondary amines react fast and directly with CO₂ to form stable carbamates, while tertiary amines do not react directly with CO₂; in aqueous solution they catalyze the hydrolysis of CO₂ to form a bicarbonate ion and a protonated amine. Concentrated piperazine (PZ) has been proposed as a better solvent as well as an activator for CO₂ capture from flue gas, with a 10% energy benefit compared to conventional amines such as MEA. However, the application of concentrated PZ is limited due to its low solubility in water at low temperature and lean CO₂ loading. Following the performance of PZ, its derivative 2-aminoethylpiperazine (AEP), a cyclic amine, can be explored as an activator for the absorption of CO₂. Vapour liquid equilibrium (VLE) in CO₂ capture systems is an important factor for the design of separation equipment and gas treating processes. For proper thermodynamic modeling, accurate equilibrium data for the solvent system over a wide range of temperatures, pressures and compositions are essential. The present work focuses on the determination of VLE data for the (AEP + H₂O) system at 40 °C over a range of compositions.
Keywords: absorption, aminoethyl piperazine, carbon dioxide, vapour liquid equilibrium
Procedia PDF Downloads 267
337 Calibration of Contact Model Parameters and Analysis of Microscopic Behaviors of Cuxhaven Sand Using the Discrete Element Method
Authors: Anjali Uday, Yuting Wang, Andres Alfonso Pena Olare
Abstract:
The Discrete Element Method is a promising approach to modeling microscopic behaviors of granular materials. The quality of the simulations, however, depends on the model parameters utilized. The present study focuses on calibration and validation of the discrete element parameters for Cuxhaven sand based on the experimental data from triaxial and oedometer tests. A sensitivity analysis was conducted during the sample preparation stage and the shear stage of the triaxial tests. The influence of parameters like rolling resistance, inter-particle friction coefficient, confining pressure and effective modulus on the void ratio of the generated sample was investigated. During the shear stage, the effects of parameters like inter-particle friction coefficient, effective modulus, rolling resistance friction coefficient and normal-to-shear stiffness ratio are examined. The calibration of the parameters is carried out such that the simulations reproduce the macro mechanical characteristics like dilation angle, peak stress, and stiffness. The above-mentioned calibrated parameters are then validated by simulating an oedometer test on the sand. The oedometer test results are in good agreement with experiments, which proves the suitability of the calibrated parameters. In the next step, the calibrated and validated model parameters are applied to forecast the micromechanical behavior, including the evolution of contact force chains, buckling of columns of particles, observation of non-coaxiality, and sample inhomogeneity during a simple shear test. The evolution of contact force chains vividly shows the distribution and alignment of strong contact forces. The changes in coordination number are in good agreement with the volumetric strain exhibited during the simple shear test. The vertical inhomogeneity of void ratios is documented throughout the shearing phase, which shows looser structures in the top and bottom layers. Buckling of columns is not observed due to the small rolling resistance coefficient adopted for the simulations. The non-coaxiality of principal stress and strain rate is also well captured. Thus, the micromechanical behaviors are well described using the calibrated and validated material parameters.
Keywords: discrete element model, parameter calibration, triaxial test, oedometer test, simple shear test
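To make the calibration idea above concrete, the sketch below runs a brute-force search over a few candidate parameter sets and keeps the set whose simulated response best matches target macro-mechanical values from a triaxial test. It is only an illustrative sketch: run_dem_triaxial is a hypothetical stand-in for a full DEM run (e.g., in PFC or YADE), and the parameter ranges and target values are assumed rather than taken from the study.

```python
import itertools

def run_dem_triaxial(friction, rolling_resistance, effective_modulus):
    """Hypothetical surrogate for a full DEM triaxial simulation; a real
    calibration would launch the DEM code here and post-process its output."""
    peak_stress = 250.0 * friction + 80.0 * rolling_resistance + 0.4 * effective_modulus / 1e6
    dilation_angle = 20.0 * friction + 15.0 * rolling_resistance
    return peak_stress, dilation_angle

# Target macro-mechanical characteristics from the laboratory test (assumed values)
target_peak, target_dilation = 180.0, 12.0   # kPa, degrees

best_set, best_err = None, float("inf")
for mu, mu_r, e_mod in itertools.product([0.3, 0.5, 0.7], [0.05, 0.1, 0.2], [50e6, 100e6, 200e6]):
    peak, dilation = run_dem_triaxial(mu, mu_r, e_mod)
    err = ((peak - target_peak) / target_peak) ** 2 + ((dilation - target_dilation) / target_dilation) ** 2
    if err < best_err:
        best_set, best_err = (mu, mu_r, e_mod), err

print("calibrated (friction, rolling resistance, effective modulus):", best_set)
```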
Procedia PDF Downloads 120
336 Measurement of in-situ Horizontal Root Tensile Strength of Herbaceous Vegetation for Improved Evaluation of Slope Stability in the Alps
Authors: Michael T. Lobmann, Camilla Wellstein, Stefan Zerbe
Abstract:
Vegetation plays an important role in the stabilization of slopes against erosion processes, such as shallow erosion and landslides. Plant roots reinforce the soil, increase soil cohesion and often cross possible shear planes. Hence, plant roots reduce the risk of slope failure. Generally, shrub and tree roots penetrate deeper into the soil vertically, while roots of forbs and grasses are concentrated horizontally in the topsoil and organic layer. Therefore, shrubs and trees have a higher potential for stabilization of slopes with deep soil layers than forbs and grasses. Consequently, research has mainly focused on the vertical root effects of shrubs and trees. Nevertheless, a better understanding of the stabilizing effects of grasses and forbs is needed for better evaluation of the stability of natural and artificial slopes with herbaceous vegetation. Despite the importance of vertical root effects, field observations indicate that horizontal root effects also play an important role in slope stabilization. Not only forbs and grasses, but also some shrubs and trees form tight horizontal networks of fine and coarse roots and rhizomes in the topsoil. These root networks increase soil cohesion and horizontal tensile strength. Available methods for physical measurements, such as shear-box tests, pullout tests and singular root tensile strength measurement, can only provide a detailed picture of the vertical effects of roots on slope stabilization. However, the assessment of horizontal root effects is largely limited to computer modeling. Here, a method for measurement of in-situ cumulative horizontal root tensile strength is presented. A traction machine was developed that allows fixation of rectangular grass sods (max. 30 x 60 cm) on the short ends, with a 30 x 30 cm measurement zone in the middle. On two alpine grass slopes in South Tyrol (northern Italy), 30 x 60 cm grass sods were cut out (max. depth 20 cm). The grass sods were pulled apart, measuring the horizontal tensile strength over the 30 cm width over time. The horizontal tensile strength of the sods was measured and compared for different soil depths, hydrological conditions, and root physiological properties. The results improve our understanding of horizontal root effects on slope stabilization and can be used for improved evaluation of grass slope stability.
Keywords: grassland, horizontal root effect, landslide, mountain, pasture, shallow erosion
Procedia PDF Downloads 166
335 Effect of Different Parameters of Converging-Diverging Vortex Finders on Cyclone Separator Performance
Abstract:
The present study is done to explore design modifications of the vortex finder, as it has a significant effect on cyclone separator performance. It is evident that modifications of the vortex finder improve the performance of the cyclone separator significantly. The study conducted strives to improve the overall performance of cyclone separators by utilizing a converging-diverging (CD) vortex finder instead of the traditional uniform diameter vortex finders. The velocity and pressure fields inside a Stairmand cyclone separator with a body diameter of 0.29 m and a vortex finder diameter of 0.1305 m are calculated. The commercial software ANSYS Fluent v14.0 is used to simulate the flow field in a uniform diameter cyclone and six cyclones modified with CD vortex finders. The Reynolds stress model is used to simulate the effects of turbulence on the fluid and particulate phases, and the discrete phase model is used to calculate the particle trajectories. The performance of the modified vortex finders is compared with the traditional vortex finder. The effects of the lengths of the converging and diverging sections, the throat diameter and the end diameters of the convergent-divergent section are also studied to achieve enhanced performance. The pressure and velocity fields inside the vortex finder are presented by means of contour plots and velocity vectors, and changes in the flow pattern due to variation of the geometrical variables are also analysed. Results indicate that a convergent-divergent vortex finder is capable of decreasing the pressure drop below that achieved through a uniform diameter vortex finder. It is also observed that the end diameters of the CD vortex finder, the throat diameter and the length of the diverging part of the vortex finder have a significant impact on cyclone separator performance. An increase in the lower diameter of the vortex finder by 66% results in an 11.5% decrease in the dimensionless pressure drop (Euler number) with a 5.8% decrease in separation efficiency, whereas a 50% decrease in the throat diameter gives a 5.9% increase in the Euler number with a 10.2% increase in the separation efficiency, and increasing the length of the diverging part gives a 10.28% increase in the Euler number with a 5.74% increase in the separation efficiency. Increasing the upper diameter of the CD vortex finder is seen to produce an adverse effect on the performance, as it increases the pressure drop significantly and decreases the separation efficiency. An increase in the length of the converging section is not seen to affect the performance significantly. From the present study, it is concluded that convergent-divergent vortex finders can be used in place of uniform diameter vortex finders to achieve better cyclone separator performance.
Keywords: convergent-divergent vortex finder, cyclone separator, discrete phase modeling, Reynolds stress model
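For reference, the Euler number quoted above is simply the pressure drop made dimensionless by the inlet dynamic pressure. The snippet below shows that calculation for a baseline vortex finder and for a CD variant with the 11.5% lower pressure drop mentioned above; the density, velocity and baseline pressure drop are assumed round numbers, not values reported in the study.

```python
def euler_number(pressure_drop_pa, gas_density, inlet_velocity):
    """Dimensionless pressure drop: Eu = dp / (0.5 * rho * v^2)."""
    return pressure_drop_pa / (0.5 * gas_density * inlet_velocity ** 2)

rho_air = 1.2          # kg/m^3, assumed
v_inlet = 15.0         # m/s, assumed inlet velocity
dp_uniform = 900.0     # Pa, assumed baseline pressure drop
dp_cd = dp_uniform * (1 - 0.115)   # 11.5 % reduction reported for one CD variant

print("Eu, uniform vortex finder:", round(euler_number(dp_uniform, rho_air, v_inlet), 2))
print("Eu, CD vortex finder:     ", round(euler_number(dp_cd, rho_air, v_inlet), 2))
```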
Procedia PDF Downloads 172
334 Well Inventory Data Entry: Utilization of Developed Technologies to Progress the Integrated Asset Plan
Authors: Danah Al-Selahi, Sulaiman Al-Ghunaim, Bashayer Sadiq, Fatma Al-Otaibi, Ali Ameen
Abstract:
In light of recent changes affecting the Oil & Gas Industry, optimization measures have become imperative for all companies globally, including Kuwait Oil Company (KOC). To keep abreast of the dynamic market, a detailed Integrated Asset Plan (IAP) was developed to drive optimization across the organization, facilitated through the in-house developed software "Well Inventory Data Entry" (WIDE). This comprehensive and integrated approach enabled centralization of all planned asset components for better well planning, enhanced performance, and facilitated continuous improvement through performance tracking and midterm forecasting. Traditionally, this was hard to achieve, as various legacy methods were used in the past. This paper briefly describes the methods successfully adopted to meet the company's objective. IAPs were initially designed using computerized spreadsheets. However, as the data captured became more complex and the number of stakeholders requiring and updating this information grew, the need to automate the conventional spreadsheets became apparent. WIDE, already existing in other aspects of the company (namely, the Workover Optimization project), was utilized to meet the dynamic requirements of the IAP cycle. With the growth of extensive features to enhance the planning process, the tool evolved into a centralized data hub for all asset groups and technical support functions to analyze and infer from, leading WIDE to become the reference two-year operational plan for the entire company. To achieve WIDE's goal of operational efficiency, asset groups continuously add their parameters in a series of predefined workflows that enable the creation of a structured process which allows risk factors to be flagged and helps mitigate them. This tool dictates assigned responsibilities for all stakeholders in a method that enables continuous updates for daily performance measures and operational use. The reliable availability of WIDE, combined with its user-friendliness and easy accessibility, created a platform of cross-functionality amongst all asset groups and technical support groups to update the contents of their respective planning parameters. The home-grown entity was implemented across the entire company and tailored to feed into the internal processes of several stakeholders across the company. Furthermore, the implementation of change management and root cause analysis techniques captured the dysfunctionality of previous plans, which in turn resulted in the improvement of already existing planning mechanisms within the IAP. The detailed elucidation of the two-year plan flagged any upcoming risks and shortfalls foreseen in the plan. All results were translated into a series of developments that propelled the tool's capabilities beyond planning and into operations (such as asset production forecasts, setting KPIs, and estimating operational needs). This process exemplifies the ability and reach of applying advanced development techniques to seamlessly integrate the planning parameters of various assets and technical support groups. These techniques enable the enhancement of integrated planning data workflows that ultimately lay the foundation for greater accuracy and reliability. As such, benchmarks of establishing a set of standard goals are created to ensure the constant improvement of the efficiency of the entire planning and operational structure.
Keywords: automation, integration, value, communication
Procedia PDF Downloads 146
333 Investigation of Rehabilitation Effects on Fire Damaged High Strength Concrete Beams
Authors: Eun Mi Ryu, Ah Young An, Ji Yeon Kang, Yeong Soo Shin, Hee Sun Kim
Abstract:
As the number of fire incidents has increased, they cause significant damage to the economy and human lives. Especially when high strength reinforced concrete is exposed to high temperature due to a fire, deterioration occurs, such as loss in strength and elastic modulus, cracking, and spalling of the concrete. Therefore, it is important to understand the risk to structural safety in building structures by studying the structural behaviors and rehabilitation of fire damaged high strength concrete structures. This paper aims at investigating the rehabilitation effect on fire damaged high strength concrete beams using experimental and analytical methods. In the experiments, flexural specimens with high strength concrete are exposed to high temperatures according to the ISO 834 standard time-temperature curve. After heating, the fire damaged reinforced concrete (RC) beams having different cover thicknesses and fire exposure time periods are rehabilitated by removing the damaged part of the cover thickness and filling polymeric mortar into the removed part. From four-point loading tests, results show that the maximum loads of the rehabilitated RC beams are 1.8–20.9% higher than that of the non-fire damaged RC beam. On the other hand, the ductility ratios of the rehabilitated RC beams are lower than that of the non-fire damaged RC beam. In addition, structural analyses are performed using ABAQUS 6.10-3 with the same conditions as the experiments to provide accurate predictions of the structural and mechanical behaviors of rehabilitated RC beams. For the rehabilitated RC beam models, integrated temperature-structural analyses are performed in advance to obtain the geometries of the fire damaged RC beams. After the spalled and damaged parts are removed, the rehabilitated part is added to the damaged model with the material properties of polymeric mortar. Three-dimensional continuum brick elements are used for both the temperature and structural analyses. The same loading and boundary conditions as in the experiments are applied to the rehabilitated beam models, and nonlinear geometrical analyses are performed. The structural analytical results show good rehabilitation effects when the results predicted from the rehabilitated models are compared to the structural behaviors of the non-damaged RC beams. In this study, fire damaged high strength concrete beams are rehabilitated using polymeric mortar. From the four-point loading tests, it is found that such rehabilitation is able to make the structural performance of fire damaged beams similar to non-damaged RC beams. The predictions from the finite element models show good agreement with the experimental results, and the modeling approaches can be used to investigate the applicability of various rehabilitation methods for further study.
Keywords: fire, high strength concrete, rehabilitation, reinforced concrete beam
Procedia PDF Downloads 445
332 Concept of a Pseudo-Lower Bound Solution for Reinforced Concrete Slabs
Authors: M. De Filippo, J. S. Kuang
Abstract:
In the construction industry, reinforced concrete (RC) slabs represent fundamental elements of buildings and bridges. Different methods are available for analysing the structural behaviour of slabs. In the early part of the last century, the yield-line method was proposed to solve this problem. Simple geometry problems could easily be solved by using traditional hand analyses which include plasticity theories. Nowadays, advanced finite element (FE) analyses have mainly found their way into applications in many engineering fields due to the wide range of geometries to which they can be applied. In such cases, the application of an elastic or a plastic constitutive model would completely change the approach of the analysis itself. Elastic methods are popular due to their easy applicability to automated computations. However, elastic analyses are limited since they do not consider any aspect of the material behaviour beyond its yield limit, which turns out to be an essential aspect of RC structural performance. Non-linear analyses that model plastic behaviour, on the other hand, give very reliable results; per contra, this type of analysis is computationally quite expensive, i.e. not well suited for solving daily engineering problems. In the past years, many researchers have worked on filling this gap between easy-to-implement elastic methods and computationally complex plastic analyses. This paper aims at proposing a numerical procedure through which a pseudo-lower bound solution, not violating the yield criterion, is achieved. The advantages of moment distribution are taken into account, hence the increase in strength provided by plastic behaviour is considered. The lower bound solution is improved by detecting over-yielded moments, which are used to artificially govern the moment distribution among the rest of the non-yielded elements. The proposed technique obeys Nielsen's yield criterion. The outcome of this analysis provides a simple, yet accurate, and non-time-consuming tool for predicting the lower-bound solution of the collapse load of RC slabs. By using this method, structural engineers can find the fracture patterns and ultimate load-bearing capacity. The collapse triggering mechanism is found by detecting yield-lines. An application to the simple case of a square clamped slab is shown, and a good match was found with the exact value of the collapse load.
Keywords: computational mechanics, lower bound method, reinforced concrete slabs, yield-line
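The redistribution step described above can be pictured with a deliberately simplified, one-dimensional sketch: moments exceeding the plastic capacity are clamped to the yield value and the excess is shared among elements that still have reserve, iterating until nothing violates the limit. This is an illustration of the idea only, not the paper's actual slab formulation or Nielsen's full criterion.

```python
import numpy as np

def pseudo_lower_bound(moments, m_yield, max_iter=200, tol=1e-9):
    """Clamp over-yielded moments to the capacity and redistribute the excess
    among non-yielded elements until the yield limit is respected everywhere."""
    m = np.asarray(moments, dtype=float).copy()
    for _ in range(max_iter):
        excess = np.clip(np.abs(m) - m_yield, 0.0, None)   # over-yielded portion
        if excess.sum() < tol:
            break
        m = np.clip(m, -m_yield, m_yield)                   # enforce the yield limit
        free = np.abs(m) < m_yield                          # elements with remaining reserve
        if not free.any():
            break
        m[free] += np.sign(m[free] + 1e-12) * excess.sum() / free.sum()
    return m

# Illustrative elastic moment field (kNm/m) and plastic moment capacity
print(pseudo_lower_bound([12.0, 18.0, 25.0, 9.0, 14.0], m_yield=20.0))
```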
Procedia PDF Downloads 178
331 Enhancing Scalability in Ethereum Network Analysis: Methods and Techniques
Authors: Stefan K. Behfar
Abstract:
The rapid growth of the Ethereum network has brought forth the urgent need for scalable analysis methods to handle the increasing volume of blockchain data. In this research, we propose efficient methodologies for making Ethereum network analysis scalable. Our approach leverages a combination of graph-based data representation, probabilistic sampling, and parallel processing techniques to achieve unprecedented scalability while preserving critical network insights. Data Representation: We develop a graph-based data representation that captures the underlying structure of the Ethereum network. Each block transaction is represented as a node in the graph, while the edges signify temporal relationships. This representation ensures efficient querying and traversal of the blockchain data. Probabilistic Sampling: To cope with the vastness of the Ethereum blockchain, we introduce a probabilistic sampling technique. This method strategically selects a representative subset of transactions and blocks, allowing for concise yet statistically significant analysis. The sampling approach maintains the integrity of the network properties while significantly reducing the computational burden. Graph Convolutional Networks (GCNs): We incorporate GCNs to process the graph-based data representation efficiently. The GCN architecture enables the extraction of complex spatial and temporal patterns from the sampled data. This combination of graph representation and GCNs facilitates parallel processing and scalable analysis. Distributed Computing: To further enhance scalability, we adopt distributed computing frameworks such as Apache Hadoop and Apache Spark. By distributing computation across multiple nodes, we achieve a significant reduction in processing time and enhanced memory utilization. Our methodology harnesses the power of parallelism, making it well-suited for large-scale Ethereum network analysis. Evaluation and Results: We extensively evaluate our methodology on real-world Ethereum datasets covering diverse time periods and transaction volumes. The results demonstrate its superior scalability, outperforming traditional analysis methods. Our approach successfully handles the ever-growing Ethereum data, empowering researchers and developers with actionable insights from the blockchain. Case Studies: We apply our methodology to real-world Ethereum use cases, including detecting transaction patterns, analyzing smart contract interactions, and predicting network congestion. The results showcase the accuracy and efficiency of our approach, emphasizing its practical applicability in real-world scenarios. Security and Robustness: To ensure the reliability of our methodology, we conduct thorough security and robustness evaluations. Our approach demonstrates high resilience against adversarial attacks and perturbations, reaffirming its suitability for security-critical blockchain applications. Conclusion: By integrating graph-based data representation, GCNs, probabilistic sampling, and distributed computing, we achieve network scalability without compromising analytical precision. This approach addresses the pressing challenges posed by the expanding Ethereum network, opening new avenues for research and enabling real-time insights into decentralized ecosystems. 
Our work contributes to the development of scalable blockchain analytics, laying the foundation for sustainable growth and advancement in the domain of blockchain research and application.
Keywords: Ethereum, scalable network, GCN, probabilistic sampling, distributed computing
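As a small illustration of the sampling and graph-representation steps described above, the sketch below keeps a random fraction of transactions per block and links consecutive sampled transactions with a temporal edge. The block structure, field names and the "consecutive transactions" reading of the temporal edges are assumptions made for the example; the actual pipeline described above is far more elaborate (GCNs, Spark/Hadoop, etc.).

```python
import random
import networkx as nx

def sample_transaction_graph(blocks, sample_rate=0.05, seed=42):
    """Build a graph in which sampled transactions are nodes and consecutive
    transactions inside a block are linked by a temporal edge."""
    rng = random.Random(seed)
    g = nx.DiGraph()
    for block in blocks:
        kept = [tx for tx in block["txs"] if rng.random() < sample_rate]
        for tx in kept:
            g.add_node(tx["hash"], value=tx["value"], block=block["number"])
        for a, b in zip(kept, kept[1:]):
            g.add_edge(a["hash"], b["hash"], kind="temporal")
    return g

# Tiny synthetic block stream (illustrative only)
blocks = [{"number": n, "txs": [{"hash": f"0x{n:04d}{i:02d}", "value": i} for i in range(50)]}
          for n in range(100)]
g = sample_transaction_graph(blocks, sample_rate=0.1)
print(g.number_of_nodes(), "sampled transactions,", g.number_of_edges(), "temporal edges")
```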
Procedia PDF Downloads 76
330 Water Quality in Buyuk Menderes Graben, Turkey
Authors: Tugbanur Ozen Balaban, Gultekin Tarcan, Unsal Gemici, Mumtaz Colak, I. Hakki Karamanderesi
Abstract:
Buyuk Menderes Graben is located in Western Anatolia (Turkey). The graben has become the largest industrial and agricultural area, with a total population exceeding 3,000,000. There are two big cities within the study area, from west to east: Aydın and Denizli. The study area is very rich with regard to cold ground waters and thermal waters. Electricity production using the geothermal potential has become very popular in this area in recent decades. Buyuk Menderes Graben is a tectonically active extensional region and is undergoing a north-south extensional tectonic regime which commenced at the latest during the Early-Middle Miocene period. The basement of the study area consists of Menderes massif rocks that are made up of high- to low-grade metamorphics, and they are aquifers for both cold ground waters and thermal waters depending on the location. Neogene terrestrial sediments, which are mainly composed of alluvial fan deposits that unconformably cover the basement rocks in different facies, have very low permeability and may locally act as cap rocks for the geothermal systems. The youngest unit is the Quaternary alluvium, which is the shallow regional aquifer and consists of Holocene alluvial deposits in the study area. All the waters are of meteoric origin and reflect shallow or deep circulation according to their ¹⁸O, ²H and ³H contents. Meteoric waters move to deep zones through the fractured system and rise to the surface along the faults. Water samples (drilling well, spring and surface waters) and local seawater were collected between 2010 and 2012. Geochemical modeling was used to calculate the distribution of the aqueous species and exchange processes with the PHREEQCi speciation code. Geochemical analyses show that cold ground water types are evolving from Ca-Mg-HCO₃ to Na-Cl-SO₄, and geothermal aquifer waters reflect the Na-Cl-HCO₃ water type in Aydın. Water types of Denizli are Ca-Mg-HCO₃ and Ca-Mg-HCO₃-SO₄. Thermal water types are generally Na-HCO₃-SO₄. The B versus Cl ratios increase from east to west with the proportion of seawater introduced into the fresh water aquifers and geothermal reservoirs. Concentrations of some elements (As, B, Fe and Ni) are higher than the tolerance limits of the drinking water standard of Turkey (TS 266) and international drinking water standards (WHO, FAO, etc.).
Keywords: Buyuk Menderes, isotope chemistry, geochemical modelling, water quality
Procedia PDF Downloads 536
329 Concentrated Whey Protein Drink with Orange Flavor: Protein Modification and Formulation
Authors: Shahram Naghizadeh Raeisi, Ali Alghooneh
Abstract:
The application of whey protein in the drink industry to enhance the nutritional value of products is important. However, the gelation of the protein during thermal treatment and shelf life limits its application. The main goal of this research was therefore the manufacturing of a highly concentrated whey protein orange drink with an appropriate shelf life. Whey protein was hydrolyzed from 5 to 30% (in 5 percent intervals, at six stages), then the thermal stability of samples with a 10% protein concentration was tested under acidic conditions (T = 90 °C, pH = 4.2, 5 minutes) and neutral conditions (T = 120 °C, pH = 6.7, 20 minutes). Furthermore, to study the shelf life of the heat-treated samples over 4 months at 4 and 24 °C, time sweep rheological tests were done. Under neutral conditions, the 5 to 20% hydrolyzed samples showed gelation during thermal treatment, whereas under acidic conditions this happened only in the 5 to 10 percent hydrolyzed samples. This phenomenon could be related to the difference in hydrodynamic radius and zeta potential of samples with different levels of hydrolysis under acidic and neutral conditions. To study the gelation of the heat-resistant protein solutions during shelf life, time sweep analyses were performed for 4 months at 7-day intervals. Crossover was observed for all heat-resistant neutral samples at both storage temperatures, while in heat-resistant acidic samples with degrees of hydrolysis of 25 and 30 percent at 4 and 20 °C it was not seen. It could be concluded that these samples were stable during heat treatment and 4 months of storage, which makes them a good choice for manufacturing high protein drinks. The Scheffe polynomial model and numerical optimization were employed for modeling and optimization of the high protein orange drink formula. The Scheffe model significantly predicted the overall acceptance index (P-value < 0.05) of the sensory analysis. The coefficient of determination (R²) of 0.94, the adjusted coefficient of determination (R²adj) of 0.90, the insignificance of the lack-of-fit test and an F value of 64.21 showed the accuracy of the model. Moreover, the coefficient of variation (C.V) was 6.8%, which suggested the replicability of the experimental data. The desirability function achieved was 0.89, which indicates the high accuracy of the optimization. The optimum formulation was found to be: modified whey protein solution (65.30%), natural orange juice (33.50%), stevia sweetener (0.05%), orange peel oil (0.15%) and citric acid (1%). It is worth mentioning that this study produced an appropriate model for the application of whey protein in the drink industry without bitter flavor or gelation during heat treatment and shelf life.
Keywords: cross over, orange beverage, protein modification, optimization
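A quadratic Scheffe mixture model of the kind referred to above has only linear and pairwise-product terms and no intercept, because the component proportions sum to one. The sketch below fits such a model by least squares; the mixture proportions and acceptance scores are invented for illustration, not the study's sensory data.

```python
import numpy as np
from itertools import combinations

def scheffe_design_matrix(x):
    """Linear terms plus all pairwise products (quadratic Scheffe model, no intercept)."""
    pairs = [x[:, [i]] * x[:, [j]] for i, j in combinations(range(x.shape[1]), 2)]
    return np.hstack([x] + pairs)

# Illustrative proportions of (whey protein solution, orange juice, other additives)
# and overall acceptance scores -- assumed numbers only.
x = np.array([[0.70, 0.28, 0.02], [0.65, 0.33, 0.02], [0.60, 0.38, 0.02],
              [0.68, 0.30, 0.02], [0.72, 0.26, 0.02], [0.63, 0.35, 0.02], [0.66, 0.32, 0.02]])
y = np.array([6.8, 7.4, 6.9, 7.2, 6.5, 7.3, 7.1])

a = scheffe_design_matrix(x)
coef, *_ = np.linalg.lstsq(a, y, rcond=None)
y_hat = a @ coef
r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print("coefficients:", np.round(coef, 2), " R^2:", round(r2, 3))
```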
Procedia PDF Downloads 62
328 Gender Bias After Failure: How Crowd Lenders Disadvantage Female-Led Social Ventures
Authors: Caroline Lindlar, Eva Jakob
Abstract:
Female entrepreneurs often face significant barriers in accessing funding due to biases from business angels, venture capitalists, and financial institutions, which tend to favor male entrepreneurs. These biases contribute to persistent funding disparities, with female entrepreneurs receiving less financial support than their male counterparts. The situation worsens when female entrepreneurs have prior experiences with venture failure, which diminishes their attractiveness to traditional investors. Venture failure, defined as the cessation of operations due to declining revenues, rising costs, or ownership changes, plays a substantial role in shaping funding opportunities. In response, female entrepreneurs frequently turn to alternative funding sources such as crowdlending, where gender biases are often reversed in favor of women, particularly when their ventures emphasize social value creation. While existing research highlights the positive impact of gender on crowdfunding success, it remains unclear how venture failure, known to negatively bias female entrepreneurs in traditional funding contexts, interacts with the positive effects of gender in crowdlending. This interaction is particularly relevant because crowdlending often involves non-professional funders who make repeated investment decisions under uncertainty, based on limited information and past experiences. Given that approximately one-third of ventures fail to deliver promised returns, the role of gender bias after failure in crowdlending is an important area of investigation. This study addresses how failure affects crowd funders' gender bias in future funding decisions. Drawing on social role and role congruity theory, we posit that societal perceptions of women as more communal conflict with the agentic qualities traditionally associated with entrepreneurship. This incongruence may result in reduced confidence in the success of female entrepreneurs after failure, limiting their access to future funding. However, we also hypothesize that social framing may mitigate this bias by aligning perceptions of female entrepreneurs with traits such as warmth and caring, enhancing their appeal after failure. To test these assertions, we conducted a between-subject audio vignette experiment with 155 participants who listened to entrepreneur pitches manipulated by gender (male vs. female) and venture framing (social vs. commercial). Participants made initial investment decisions, received failure-related news about the venture, and then made subsequent investment decisions. Pre-tests with 159 participants ensured the validity and reliability of the experimental manipulations. Moreover, we conducted a metric conjoint analysis with 100 participants, in which they had to decide between different crowdfunding campaigns based on the attributes of previous failure, gender, and venture mission. Our findings reveal that failure activates gender biases in crowdlending. Female-led ventures receive significantly less funding after failure compared to male-led ventures, suggesting the positive bias toward female entrepreneurs in the pre-funding phase does not persist post-failure. Moreover, framing a venture as socially oriented exacerbates the negative effect of failure for female entrepreneurs, as they secure fewer funds after failure compared to male entrepreneurs leading similar social ventures. This indicates that role-congruent framing does not mitigate gender bias after failure.
This study contributes to research on gender in entrepreneurship by exploring how failure impacts future funding for female entrepreneurs. It also expands social crowdfunding literature by examining social value framing and adds to the entrepreneurial failure literature by focusing on crowd funders' post-failure behavior.
Keywords: gender bias, crowdfunding, investment failure, investment behavior, social entrepreneurship
Procedia PDF Downloads 10
327 Composing Method of Decision-Making Function for Construction Management Using Active 4D/5D/6D Objects
Authors: Hyeon-Seung Kim, Sang-Mi Park, Sun-Ju Han, Leen-Seok Kang
Abstract:
As BIM (Building Information Modeling) application continually expands, the visual simulation techniques used for facility design and construction process information are becoming increasingly advanced and diverse. For building structures, BIM application is design-oriented, utilizing 3D objects for conflict management, whereas for civil engineering structures, the usability of nD object-oriented construction stage simulation is important in construction management. Simulations of 5D and 6D objects, for which cost and resources are linked along with process simulation in 4D objects, are commonly used, but they do not provide a decision-making function for process management problems that occur on site because they mostly focus on the visual representation of the current status of process information. In this study, an nD CAD system is constructed that facilitates an optimized schedule simulation that minimizes process conflict, a construction duration reduction simulation according to execution progress status, an optimized process plan simulation according to project cost change by year, and an optimized resource simulation for field resource mobilization capability. Through this system, the usability of conventional simple simulation objects is expanded to the usability of active simulation objects with which decision-making is possible. Furthermore, to close the gap between field process situations and planned 4D process objects, a technique is developed to facilitate a comparative simulation through the coordinated synchronization of an actual video object acquired by an on-site web camera and a VR concept 4D object. This synchronization and simulation technique can also be applied to smartphone video objects captured in the field in order to increase the usability of the 4D object. Because yearly project costs change frequently for civil engineering construction, an annual process plan should be recomposed appropriately according to project cost decreases/increases compared with the plan. In the 5D CAD system provided in this study, an active 5D object utilization concept is introduced to perform a simulation in an optimized process planning state by finding a process optimized for the changed project cost without changing the construction duration, through a technique such as a genetic algorithm. Furthermore, in resource management, an active 6D object utilization function is introduced that can analyze and simulate an optimized process plan within a possible scope of moving resources, by considering those resources that can be moved under a given field condition, instead of using a simple resource change simulation by schedule. The introduction of an active BIM function is expected to increase the field utilization of conventional nD objects.
Keywords: 4D, 5D, 6D, active BIM
Procedia PDF Downloads 275
326 Numerical Investigation of Plasma-Fuel System (PFS) for Coal Ignition and Combustion
Authors: Vladimir Messerle, Alexandr Ustimenko, Oleg Lavrichshev
Abstract:
To enhance the efficiency of solid fuel use, to decrease the fuel oil rate in the fuel balance of thermal power plants and to minimize harmful emissions, a plasma technology of coal ignition, gasification and incineration is successfully applied. This technology is plasma thermochemical preparation of fuel for burning (PTCPF). In the framework of this concept, some portion of the pulverized solid fuel (PF) is separated from the main PF flow and undergoes activation by arc plasma in a specific chamber with a plasma torch – the PFS. The air plasma flame is a source of heat and additional oxidation; it provides a high-temperature medium enriched with radicals, where the fuel mixture is heated, volatile components of coal are extracted, and carbon is partially gasified. This active blended fuel can ignite the main PF flow supplied into the furnace. This technology provides boiler start-up and stabilization of the PF flame and eliminates the necessity of adding highly reactive fuel. In this report, a model of PTCPF, implemented as the program PlasmaKinTherm for the PFS calculation, is described. The model combines thermodynamic and kinetic methods for describing the process of PTCPF in the PFS. The numerical investigation of the operational parameters of the PFS, depending on the electric power of the plasma generator and the steam coal ash content, revealed the dependences of the temperature and velocity of gas and coal particles, and of the concentrations of PTCPF products, on the PFS length. The main mechanisms of PTCPF were disclosed. It was found that, in the range of plasma generator electric power from 40 to 100 kW, high ash bituminous coal with a consumption of 1667 kg/h is ignited stably. The high temperature (1740 K) and concentration of combustible components (44%) at the PFS exit confirm this. Augmentation of the plasma generator power results in a displacement of the maxima of temperatures and velocities of PTCPF products upstream (in the direction of the plasma source). The maximum temperature and velocity vary in a narrow range of values and practically do not depend on the power of the plasma torch. The numerical study of the PTCPF process indicators, depending on the ash content in the range 20-70%, demonstrated that at the exit of the PFS the concentration of combustible components decreases with an increase in coal ash, the temperature of the gaseous products increases, and the coal carbon conversion rate increases to a maximum value at an ash content of 60%, decreasing dramatically with a further increase in the ash content.
Keywords: coal, efficiency, ignition, numerical modeling, plasma generator, plasma-fuel system
Procedia PDF Downloads 298
325 Modeling of IN 738 LC Alloy Mechanical Properties Based on Microstructural Evolution Simulations for Different Heat Treatment Conditions
Authors: M. Tarik Boyraz, M. Bilge Imer
Abstract:
Conventionally cast nickel-based superalloys, such as the commercial alloy IN 738 LC, are widely used in the manufacturing of industrial gas turbine blades. With a carefully designed microstructure and the existence of alloying elements, the blades show improved mechanical properties at high operating temperatures and in corrosive environments. The aim of this work is to model and estimate these mechanical properties of IN 738 LC alloy solely based on simulations for projected heat treatment or service conditions. The microstructure of IN 738 LC (size, fraction and frequency of the gamma prime (γ′) and carbide phases in the gamma (γ) matrix, and grain size) needs to be optimized to improve the high temperature mechanical properties by the heat treatment process. This process can be performed at different soaking temperatures, times and cooling rates. In this work, microstructural evolution studies were performed experimentally at various heat treatment process conditions, and these findings were used as input for further simulation studies. The operation time, soaking temperature and cooling rate provided by the experimental heat treatment procedures were used as microstructural simulation input. The results of this simulation were compared with the size, fraction and frequency of the γ′ and carbide phases, and the grain size, provided by SEM (EDS module and mapping), EPMA (WDS module) and optical microscopy before and after heat treatment. After iterative comparison of the experimental findings and simulations, an offset was determined to fit the measured and theoretical findings. Thereby, it was possible to estimate the final microstructure without any necessity to carry out the heat treatment experiment. The output of this heat-treatment-based microstructure simulation was used as input to estimate yield stress and creep properties. Yield stress was calculated mainly as a function of the precipitation, solid solution and grain boundary strengthening contributions in the microstructure. Creep rate was calculated as a function of stress, temperature and microstructural factors such as dislocation density, precipitate size and inter-particle spacing of precipitates. The estimated yield stress values were compared with the corresponding experimental hardness and tensile test values. The ability to determine the best heat treatment conditions that achieve the desired microstructural and mechanical properties was developed for IN 738 LC based completely on simulations.
Keywords: heat treatment, IN738LC, simulations, super-alloys
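The yield stress estimate described above is essentially an additive combination of strengthening terms driven by the simulated microstructure. The sketch below shows the shape of such a calculation; the base strength, Hall-Petch coefficient and precipitate term are placeholder values chosen for illustration and are not fitted IN 738 LC constants.

```python
import math

def estimate_yield_stress(grain_size_um, gp_fraction, gp_radius_nm,
                          sigma_0=200.0, solid_solution=120.0, k_hp=750.0):
    """Additive sketch: base + solid solution + grain boundary (Hall-Petch) + precipitate term.
    All coefficients are placeholders for illustration."""
    grain_boundary = k_hp / math.sqrt(grain_size_um)                                   # MPa
    precipitation = 900.0 * math.sqrt(gp_fraction) / math.sqrt(gp_radius_nm / 100.0)   # MPa
    return sigma_0 + solid_solution + grain_boundary + precipitation

# Example microstructure from a hypothetical heat treatment condition
print(round(estimate_yield_stress(grain_size_um=150.0, gp_fraction=0.42, gp_radius_nm=300.0), 1), "MPa")
```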
Procedia PDF Downloads 248
324 Environmental Monitoring by Using Unmanned Aerial Vehicle (UAV) Images and Spatial Data: A Case Study of Mineral Exploitation in Brazilian Federal District, Brazil
Authors: Maria De Albuquerque Bercot, Caio Gustavo Mesquita Angelo, Daniela Maria Moreira Siqueira, Augusto Assucena De Vasconcellos, Rodrigo Studart Correa
Abstract:
Mining is an important socioeconomic activity in Brazil, although it negatively impacts the environment. Mineral operations cause irreversible changes in topography, removal of vegetation and topsoil, habitat destruction, displacement of fauna, loss of biodiversity, soil erosion and siltation of watercourses, and have the potential to exacerbate climate change. Due to these impacts and its pollution potential, mining activity in Brazil is legally subject to environmental licensing. Unlicensed mining operations, or operations that do not abide by the terms of an obtained license, are treated as environmental crimes in the country. This work reports a case analyzed in the Forensic Institute of the Brazilian Federal District Civil Police. The case consisted of detecting illegal aspects of sand exploitation from a licensed mine in the Federal District, near Brasilia city. The fieldwork covered an area of roughly 6 ha, which was surveyed with an unmanned aerial vehicle (UAV) (PHANTOM 3 ADVANCED). The overflight with the UAV took about 20 min, with a maximum flight height of 100 m. 592 georeferenced UAV images were obtained and processed in a photogrammetric software (AGISOFT PHOTOSCAN 1.1.4), which generated a mosaic of georeferenced images and a 3D model in less than six working hours. The 3D model was analyzed in a forensic software (MAPTEK I-SITE FORENSIC 2.2) for accurate modeling and volumetric analysis. To ensure the 3D model was a true representation of the mine site, the coordinates of ten control points and reference measures were taken during fieldwork and compared to the respective spatial data in the model. Finally, these spatial data were used for measuring the mining area, excavation depth and volume of exploited sand. Results showed that the mine holder had not complied with some terms and conditions stated in the granted license, such as sand exploration beyond the authorized extension, depth and volume. The ease, accuracy and expedition of the procedures used in this case highlight the employment of UAV imagery and computational photogrammetry as efficient tools for outdoor forensic exams, especially on environmental issues.
Keywords: computational photogrammetry, environmental monitoring, mining, UAV
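Volumetric analysis of the kind mentioned above usually amounts to differencing two gridded surfaces of the pit, a reference terrain and the UAV-derived model, and summing the cut over the cell area. The sketch below shows that arithmetic on synthetic rasters; it is a simplified stand-in for what the forensic software does, and the numbers are invented.

```python
import numpy as np

def excavated_volume(surface_before, surface_after, cell_area_m2):
    """Cut volume between two elevation grids of the same extent (material removed only)."""
    cut = np.clip(surface_before - surface_after, 0.0, None)
    return float(cut.sum() * cell_area_m2)

# Synthetic 0.5 m rasters: flat reference terrain vs. a 50 m x 50 m pit excavated 4 m deep
before = np.full((200, 200), 100.0)
after = before.copy()
after[50:150, 50:150] -= 4.0
print(excavated_volume(before, after, cell_area_m2=0.25), "m^3")  # 100*100 cells * 4 m * 0.25 m^2 = 10000 m^3
```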
Procedia PDF Downloads 318
323 Investigations on Pyrolysis Model for Radiatively Dominant Diesel Pool Fire Using Fire Dynamic Simulator
Authors: Siva K. Bathina, Sudheer Siddapureddy
Abstract:
Pool fires are formed when a flammable liquid accidentally spills on the ground or water and ignites. A pool fire is a buoyancy-driven diffusion flame. Many pool fire accidents have been caused during the processing, handling and storing of liquid fuels in chemical and oil industries. Such accidents cause enormous damage to property as well as loss of lives. Pool fires are complex in nature due to the strong interaction among combustion, heat and mass transfer, and pyrolysis at the fuel surface. Moreover, the experimental study of such large complex fires involves fire safety issues and difficulties in performing experiments. In the present work, large eddy simulations are performed to study such complex fire scenarios using the Fire Dynamics Simulator. A 1 m diesel pool fire is considered for the studied cases, and diesel is chosen as it is the fuel most commonly involved in fire accidents. Fire simulations are performed by specifying two different boundary conditions: in one, the fuel is in the liquid state and a pyrolysis model is invoked; in the other, the fuel is assumed to be initially in the vapor state and the mass loss rate is prescribed. A domain of size 11.2 m × 11.2 m × 7.28 m with a uniform structured grid is chosen for the numerical simulations. A grid sensitivity analysis is performed, and a non-dimensional grid size of 12, corresponding to an 8 cm grid size, is considered. Flame properties like mass burning rate, irradiance, and the time-averaged axial flame temperature profile are predicted. The predicted steady-state mass burning rate is 40 g/s and is within the uncertainty limits of the previously reported experimental data (39.4 g/s). The profile of the irradiance at a distance from the fire along the height is somewhat in line with the experimental data, but the location of the maximum value of irradiance is shifted to a higher location. This may be due to the lack of sophisticated models for the species transport along with combustion and radiation in the continuous zone. Furthermore, the axial temperatures are not predicted well (for either of the boundary conditions) in any of the zones. The present study shows that the existing models are not sufficient for modeling blended fuels like diesel. The predictions are strongly dependent on the experimental values of the soot yield. Future experiments are necessary for generalizing the soot yield for different fires.
Keywords: burning rate, fire accidents, fire dynamic simulator, pyrolysis
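The non-dimensional grid size mentioned above is the ratio D*/δx, where D* is the characteristic fire diameter computed from the heat release rate. The snippet below evaluates that relation using a rough heat release rate built from the 40 g/s burning rate and an assumed effective heat of combustion for diesel; with these assumptions the cell size for D*/δx = 12 comes out near 0.1 m, close to the 8 cm grid quoted above, but the inputs are illustrative rather than the study's exact values.

```python
def characteristic_fire_diameter(q_kw, rho=1.204, cp=1.005, t_inf=293.0, g=9.81):
    """D* = (Q / (rho * cp * T_inf * sqrt(g)))**(2/5), with Q in kW."""
    return (q_kw / (rho * cp * t_inf * g ** 0.5)) ** 0.4

q_dot = 0.040 * 44000.0   # kW: 40 g/s burning rate x ~44 MJ/kg effective heat of combustion (assumed)
d_star = characteristic_fire_diameter(q_dot)
print(f"D* = {d_star:.2f} m")
for ratio in (4, 8, 12, 16):   # typical D*/dx values screened in grid sensitivity studies
    print(f"D*/dx = {ratio:2d} -> cell size ~ {d_star / ratio:.3f} m")
```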
Procedia PDF Downloads 196
322 Investigation of Fluid-Structure-Seabed Interaction of Gravity Anchor under Liquefaction and Scour
Authors: Vinay Kumar Vanjakula, Frank Adam, Nils Goseberg, Christian Windt
Abstract:
When a structure is installed on a seabed, the presence of the structure will influence the flow field around it. The changes in the flow field include the formation of vortices, turbulence generation, breaking of wave or current flow, and pressure differentials around the seabed sediment. These changes allow the local seabed sediment to be carried off and result in scour (erosion), which is a threat to the structure's stability. In recent decades, research on scour around fixed structures (bridges and monopiles) in rivers and oceans has developed rapidly, while very limited research work exists on scour and liquefaction for gravity anchors, particularly for floating tension leg platform (TLP) substructures. Due to the importance of, and the need to enhance, knowledge of scour and liquefaction around marine structures, MarTERA funded a three-year (2020-2023) research program called NuLIMAS (Numerical Modeling of Liquefaction Around Marine Structures), a consortium of European institutions (universities, laboratories, and consulting companies). The objective of this study is to build a numerical model that replicates reality, which helps to simulate (predict) underwater flow conditions and to study different marine scour and liquefaction situations. It helps to design a heavyweight anchor for the TLP substructure and to minimize the time and expenditure on experiments. The achieved results and the numerical model will also be a basis for the development of other designs and concepts for marine structures. The computational fluid dynamics (CFD) numerical model will be built in OpenFOAM. A conceptual design of a heavyweight anchor for the TLP substructure is developed by taking into consideration the available state-of-the-art knowledge on scour and liquefaction concepts and references to previous existing designs. These conceptual designs are validated with available similar experimental benchmark data and also with CFD numerical benchmark standards (CFD quality assurance study). A CFD optimization model/tool is designed to minimize the effects of fluid flow, scour, and liquefaction. A parameterized model is also developed to automate the calculation process and reduce user interactions. Parameters such as the anchor lowering process, flow-optimized outer contours, the seabed interaction study, and FSSI (fluid-structure-seabed interaction) are investigated and used to shape the model so as to build an optimized anchor.
Keywords: gravity anchor, liquefaction, scour, computational fluid dynamics
Procedia PDF Downloads 144
321 Railway Composite Flooring Design: Numerical Simulation and Experimental Studies
Authors: O. Lopez, F. Pedro, A. Tadeu, J. Antonio, A. Coelho
Abstract:
The future of the railway industry lies in the innovation of lighter, more efficient and more sustainable trains. Weight optimization in railway vehicles allows reducing power consumption and CO₂ emissions, increasing the efficiency of the engines and the maximum speed reached. Additionally, it reduces wear of wheels and rails, increases the space available for passengers, etc. Among the various systems that make up railway interiors, the flooring system is one with the greatest impact both on passenger safety and comfort and on the weight of the interior systems. Due to their high weight-saving potential, relatively high mechanical resistance, good acoustic and thermal performance, ease of modular design, cost-effectiveness and long life, new sustainable composite materials and panels provide the latest innovations for competitive solutions in the development of flooring systems. However, one of the main drawbacks of flooring systems is their relatively poor resistance to point loads. Point loads in railway interiors can be caused by passengers or by components fixed to the flooring system, such as seats and restraint systems, handrails, etc. In this way, they can originate higher fatigue solicitations under service loads or zones with high stress concentrations under exceptional loads (higher longitudinal, transverse and vertical accelerations), thus reducing the useful life of the flooring system. Therefore, to verify all the mechanical and functional requirements of the flooring systems, many physical prototypes would be created during the design phase, with all the high costs associated with them. Nowadays, the use of virtual prototyping methods through computer-aided design (CAD) and computer-aided engineering (CAE) software allows validating a product before committing to making physical test prototypes. The scope of this work was to use current computer tools and integrate the processes of innovation, development, and manufacturing to reduce the time from design to finished product and optimise the development of the product for higher levels of performance and reliability. In this case, the mechanical response of several sandwich panels with different cores, polystyrene foams and composite corks, was assessed to optimise the weight and the mechanical performance of a flooring solution for railways. Sandwich panels with aluminum face sheets were tested to characterise their mechanical performance and determine the polystyrene foam and cork properties when used as inner cores. Then, a railway flooring solution was fully modelled (including the elastomer pads that provide the required vibration isolation from the car body) and structural simulations were performed using FEM analysis to comply with all the technical product specifications for the supply of a flooring system. Zones with high stress concentrations are studied and tested. The influence of vibration modes on the comfort level and stability is discussed. The information obtained with the computer tools was then complemented with several mechanical tests performed on some solutions and on specific components. The results of the numerical simulations and the experimental campaign carried out are presented in this paper. This research work was performed as part of the POCI-01-0247-FEDER-003474 (coMMUTe) Project funded by Portugal 2020 through COMPETE 2020.
Keywords: cork agglomerate core, mechanical performance, numerical simulation, railway flooring system
Procedia PDF Downloads 179
320 Simulation of Solar Assisted Absorption Cooling and Electricity Generation along with Thermal Storage
Authors: Faezeh Mosallat, Eric L. Bibeau, Tarek El Mekkawy
Abstract:
The availability of a wide variety of renewable resources in Canada, such as large reserves of hydro, biomass, solar and wind, provides significant potential to improve the sustainability of energy use. As buildings represent a considerable portion of energy use in Canada, the application of distributed solar energy systems for heating and cooling may increase the amount of renewable energy use. Parabolic solar trough systems have seen limited deployment in cold northern climates, as they are more suitable for electricity production in southern latitudes. Heat production by concentrating solar rays using parabolic troughs can overcome the poor efficiencies of flat panels and evacuated tubes in cold climates. A numerical dynamic model is developed to simulate an installed parabolic solar trough facility in Winnipeg. The results of the numerical model are validated using the experimental data obtained from this system. The model is developed in Simulink and will be utilized to simulate a tri-generation system for heating, cooling and electricity generation in remote northern communities. The main objective of this simulation is to obtain operational data of solar troughs in cold climates, as such data are lacking in the literature. In this paper, the validated Simulink model is applied to simulate a solar assisted absorption cooling system along with electricity generation using an organic Rankine cycle (ORC) and thermal storage. A control strategy is employed to distribute the heated oil from the solar collectors among the above three systems, considering the temperature requirements. This modeling provides dynamic performance results using real-time, minute-resolution meteorological data collected at the same location where the solar system is installed. This is a big step ahead of current models, as it accurately calculates the available solar energy at each time step, considering the solar radiation fluctuations due to passing clouds. The solar absorption cooling is modeled to use the heat generated from the solar trough system and provide cooling in summer for a greenhouse located next to the solar field. A natural gas water heater provides the required excess heat for the absorption cooling during periods of low or no solar radiation. The results of the simulation are presented for a summer month in Winnipeg, including the amount of electric power generated from the ORC and the contribution of solar energy to the cooling load provision.
Keywords: absorption cooling, parabolic solar trough, remote community, validated model
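The control strategy mentioned above routes the collector-loop heat among the absorption chiller, the ORC and the thermal storage according to temperature requirements. The toy rule below illustrates one way such a priority dispatch can be expressed; the threshold temperatures and the priority order are assumptions for the example, not the plant's actual setpoints.

```python
def dispatch_solar_heat(oil_temp_c, cooling_demand_kw, storage_soc,
                        t_min_absorption=85.0, t_min_orc=120.0, soc_max=0.95):
    """Toy priority rule for routing hot oil from the collector field:
    absorption chiller first when cooling is needed, then ORC, then storage.
    Thresholds are illustrative assumptions only."""
    if cooling_demand_kw > 0 and oil_temp_c >= t_min_absorption:
        return "absorption_chiller"
    if oil_temp_c >= t_min_orc:
        return "orc_generator"
    if storage_soc < soc_max:
        return "thermal_storage"
    return "recirculate"

# Example: a sunny afternoon hour with greenhouse cooling demand
print(dispatch_solar_heat(oil_temp_c=140.0, cooling_demand_kw=35.0, storage_soc=0.6))
```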
Procedia PDF Downloads 216319 Wildlife Habitat Corridor Mapping in Urban Environments: A GIS-Based Approach Using Preliminary Category Weightings
Authors: Stefan Peters, Phillip Roetman
Abstract:
The global loss of biodiversity is threatening the benefits nature provides to human populations; it has become a more pressing issue than climate change and requires immediate attention. While there have been successful global agreements for environmental protection, such as the Montreal Protocol, these are rare, and we cannot rely on them solely. Thus, it is crucial to take national and local actions to support biodiversity. Australia is one of the 17 countries in the world with a high level of biodiversity, and its cities are vital habitats for endangered species, with more of them found in urban areas than in non-urban ones. However, the protection of biodiversity in metropolitan Adelaide has been inadequate, with over 130 species disappearing since European colonization in 1836. In this research project, we conceptualized, developed and implemented a framework for wildlife Habitat Hotspot and Habitat Corridor modelling in an urban context using geographic data and GIS modelling and analysis. We used detailed topographic and other geographic data provided by a local council, including spatial and attributive properties of trees, parcels, water features, vegetated areas, roads, verges, traffic, and census data. Weighted factors considered in our raster-based Habitat Hotspot model include parcel size, parcel shape, population density, canopy cover, habitat quality and proximity to habitats and water features. Weighted factors considered in our raster-based Habitat Corridor model include habitat potential (resulting from the Habitat Hotspot model), verge size, road hierarchy, road widths, human density, and presence of remnant indigenous vegetation species. We developed a GIS model, using Python scripting and ArcGIS Pro ModelBuilder, to establish an automated, reproducible and adjustable geoprocessing workflow adaptable to any study area of interest. Our habitat hotspot and corridor modelling framework allows existing habitat hotspots and wildlife habitat corridors to be determined and mapped. Our research was applied to the case study of Burnside, a local council area in Adelaide, Australia, which encompasses an area of 30 km². We applied end-user expertise-based category weightings to refine our models and optimize the use of our habitat map outputs towards informing local strategic decision-making.Keywords: biodiversity, GIS modeling, habitat hotspot, wildlife corridor
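The weighted-factor raster modelling described above can be illustrated with a small weighted-overlay sketch in Python. The layer names, weights and toy 3x3 rasters are placeholders; in the actual workflow the factor layers are produced by the ArcGIS Pro geoprocessing chain.

```python
# Sketch of a weighted raster overlay for a habitat-hotspot score.
# Layer names, weights and the 3x3 toy rasters are illustrative assumptions;
# in the actual workflow the layers come from ArcGIS Pro geoprocessing.
import numpy as np

def normalise(raster):
    """Rescale a factor raster to 0-1 so weights are comparable."""
    r = raster.astype(float)
    return (r - r.min()) / (r.max() - r.min() + 1e-9)

layers = {
    "canopy_cover":      np.array([[10, 40, 80], [5, 60, 90], [0, 20, 70]]),
    "parcel_size":       np.array([[300, 800, 1500], [200, 900, 1200], [100, 400, 700]]),
    "proximity_habitat": np.array([[900, 400, 50], [800, 300, 20], [950, 500, 100]]),
}
weights = {"canopy_cover": 0.4, "parcel_size": 0.3, "proximity_habitat": 0.3}

# Proximity is an inverse factor: cells closer to existing habitat score higher.
hotspot = (weights["canopy_cover"] * normalise(layers["canopy_cover"])
           + weights["parcel_size"] * normalise(layers["parcel_size"])
           + weights["proximity_habitat"] * (1.0 - normalise(layers["proximity_habitat"])))

print(np.round(hotspot, 2))   # higher cells = stronger habitat-hotspot potential
```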
Procedia PDF Downloads 115318 How Virtualization, Decentralization, and Network-Building Change the Manufacturing Landscape: An Industry 4.0 Perspective
Authors: Malte Brettel, Niklas Friederichsen, Michael Keller, Marius Rosenberg
Abstract:
The German manufacturing industry has to withstand increasing global competition on product quality and production costs. As labor costs are high, several industries have suffered severely from the relocation of production facilities to aspiring countries, which have managed to close the productivity and quality gap substantially. Established manufacturing companies have recognized that customers are not willing to pay large price premiums for incremental quality improvements. As a consequence, many companies in the German manufacturing industry are adjusting their production to focus on customized products and fast time to market. Leveraging the advantages of novel production strategies such as Agile Manufacturing and Mass Customization, manufacturing companies are transforming into integrated networks in which companies unite their core competencies. Hereby, virtualization of the process and supply chain ensures smooth inter-company operations, providing real-time access to relevant product and production information for all participating entities. Boundaries between companies dissolve as autonomous systems exchange data gained by embedded systems throughout the entire value chain. With the inclusion of Cyber-Physical Systems, advanced communication between machines becomes tantamount to their dialogue with humans. The increasing utilization of information and communication technology allows digital engineering of products and production processes alike. Modular simulation and modeling techniques allow decentralized units to flexibly alter products and thereby enable rapid product innovation. The present article describes the developments of Industry 4.0 within the literature and reviews the associated research streams. Hereby, we analyze eight scientific journals with regard to the following research fields: individualized production, end-to-end engineering in a virtual process chain, and production networks. We employ cluster analysis to assign sub-topics to the respective research fields. To assess the practical implications, we conducted face-to-face interviews with managers from industry as well as from the consulting business, using a structured interview guideline. The results reveal reasons for the adoption and refusal of Industry 4.0 practices from a managerial point of view. Our findings contribute to the emerging research stream of Industry 4.0 and support decision-makers in assessing their need for transformation towards Industry 4.0 practices.Keywords: Industry 4.0, mass customization, production networks, virtual process-chain
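The cluster-analysis step mentioned in the abstract (assigning sub-topics to research fields) can be sketched with a standard TF-IDF plus k-means pipeline, as below. The example texts and the number of clusters are placeholders, not the journal corpus analysed in the study.

```python
# Sketch of assigning article abstracts to sub-topics with TF-IDF + k-means.
# The example texts and number of clusters are placeholders, not the study corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

abstracts = [
    "mass customization of individualized products in smart factories",
    "virtual process chain and end-to-end engineering with digital twins",
    "collaborative production networks and shared core competencies",
    "customer specific product variants enabled by modular production",
]

X = TfidfVectorizer(stop_words="english").fit_transform(abstracts)
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

for text, label in zip(abstracts, labels):
    print(label, "-", text[:50])
```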
Procedia PDF Downloads 277317 Renewable Energy Micro-Grid Control Using Microcontroller in LabVIEW
Authors: Meena Agrawal, Chaitanya P. Agrawal
Abstract:
Power systems are transforming and becoming smarter with technological innovations that address sustainable energy needs, rising environmental concerns, economic benefits and quality requirements simultaneously. The advantages provided by the interconnection of renewable energy resources are becoming more viable and dependable with smart control technologies. The main limitations of most renewable resources, namely their diversity and intermittency, which cause problems in power quality, grid stability, reliability and security, are being mitigated by these efforts. The need for optimal energy management by intelligent Micro-Grids at the distribution end of the power system has been recognized as a way to accommodate sustainable renewable Distributed Energy Resources on a large scale across the power grid. All over the world, Smart Grids are now emerging as infrastructure upgrade programs of foremost concern. The hardware setup includes an NI cRIO-9022 Compact Reconfigurable Input Output controller board connected to the PC through a LAN router, with three hardware modules. The Real-Time Embedded Controller is a reconfigurable device consisting of an embedded real-time processor for communication and processing, a reconfigurable chassis housing the user-programmable FPGA, eight hot-swappable I/O modules, and graphical LabVIEW system design software. It has been employed for signal analysis, control, acquisition and logging of the renewable sources with LabVIEW Real-Time applications. The employed cRIO chassis controls the timing for the modules and handles communication with the PC over USB, Ethernet, or 802.11 Wi-Fi buses. It combines modular I/O, real-time processing, and NI LabVIEW programmability. In the presented setup, five channels of the NI 9205 analog input module have been used for analog voltage signals from the renewable energy sources, and four channels of the NI 9227 have been used for their analog current signals. For switching actions based on the programmed logic, a four-channel module of electrically isolated, single-pole single-throw electromechanical relays, each with an LED indicating the state of its channel, has been used to isolate the renewable sources on fault occurrence, as decided by the logic in the program. The Ethernet-based data acquisition interface, an ENET-9163 Ethernet carrier connected to the LAN router for data acquisition from a remote source over Ethernet, also has the NI 9229 module installed. The LabVIEW platform has been employed for efficient data acquisition, monitoring and control. The control logic used in the program to operate the hardware switching of the fault relays is portrayed as a flowchart. A communication system has been successfully developed among the sources and loads connected on different computers using the Hypertext Transfer Protocol (HTTP) or the Ethernet Local Area Network TCP/IP protocol. There are two main I/O interfacing clients controlling the switching of the renewable energy sources over the internet or an intranet. The paper presents experimental results of the described setup for intelligent control of a renewable-energy micro-grid, with data acquisition and control hardware based on the controller and a visual program developed in LabVIEW.Keywords: data acquisition and control, LabVIEW, microcontroller cRIO, Smart Micro-Grid
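The fault-isolation switching logic described above is implemented in LabVIEW on the cRIO; the Python sketch below outlines the same idea of tripping a source's relay when its measured voltage or current leaves an allowed band. The voltage and current thresholds and the source names are assumptions for illustration.

```python
# Sketch of the fault-isolation logic: trip a source's relay when its measured
# voltage or current leaves an allowed band. Thresholds are assumptions; the
# actual logic runs in LabVIEW on the cRIO with NI 9205/9227 inputs.

V_MIN, V_MAX = 200.0, 260.0   # volts
I_MAX = 15.0                  # amperes

def update_relays(measurements):
    """Return a relay command (True = stay connected) per renewable source."""
    commands = {}
    for source, (voltage, current) in measurements.items():
        fault = not (V_MIN <= voltage <= V_MAX) or current > I_MAX
        commands[source] = not fault          # open the relay on fault
    return commands

readings = {"solar_pv": (245.0, 8.2), "wind": (182.0, 6.5), "micro_hydro": (230.0, 16.3)}
print(update_relays(readings))   # wind and micro_hydro would be isolated
```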
Procedia PDF Downloads 333316 Modeling Standpipe Pressure Using Multivariable Regression Analysis by Combining Drilling Parameters and a Herschel-Bulkley Model
Authors: Seydou Sinde
Abstract:
The aims of this paper are to formulate mathematical expressions that can be used to estimate the standpipe pressure (SPP). The developed formulas take into account the main factors that, directly or indirectly, affect the behavior of SPP values. Fluid rheology and well hydraulics are some of these essential factors. Mud plastic viscosity, yield point, flow power, consistency index, flow rate, drillstring, and annular geometries are represented by the frictional pressure (Pf), which is one of the input independent parameters and is calculated, in this paper, using the Herschel-Bulkley rheological model. Other input independent parameters include the rate of penetration (ROP), applied load or weight on the bit (WOB), bit revolutions per minute (RPM), bit torque (TRQ), and hole inclination and direction coupled in the hole curvature or dogleg (DL). The technique of repeating parameters and the Buckingham Pi theorem are used to reduce the input independent parameters to the dimensionless revolutions per minute (RPMd), the dimensionless torque (TRQd), and the dogleg, which is already in the dimensionless form of radians. Multivariable linear and polynomial regression using PTC Mathcad Prime 4.0 is applied to analyze and determine the relationships between the dependent parameter, SPP, and the remaining three dimensionless groups. Three models proved sufficiently satisfactory for estimating the standpipe pressure: multivariable linear regression model 1, containing three regression coefficients, for vertical wells; multivariable linear regression model 2, containing four regression coefficients, for deviated wells; and a multivariable polynomial quadratic regression model, containing six regression coefficients, for both vertical and deviated wells. Although linear regression model 2 (with four coefficients) is relatively more complex and contains an additional term compared to linear regression model 1 (with three coefficients), the former did not add significant improvements over the latter except for some minor cases. Thus, the effect of the hole curvature or dogleg is insignificant and can be omitted from the input independent parameters without significant loss of accuracy. The polynomial quadratic regression model is considered the most accurate model due to its relatively higher accuracy in most of the cases. Data from nine wells in the Middle East were used to run the developed models, with satisfactory results provided by all of them, even though the multivariable polynomial quadratic regression model gave the best and most accurate results. Development of these models is useful not only to monitor and predict, with accuracy, the values of SPP but also to check the integrity of the well hydraulics early and to take corrective actions should any unexpected problems appear, such as pipe washouts, jet plugging, excessive mud losses, fluid gains, kicks, etc.Keywords: standpipe, pressure, hydraulics, nondimensionalization, parameters, regression
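The regression step can be sketched as follows: a quadratic design matrix in the two dimensionless groups (six coefficients, matching the polynomial model above) fitted by least squares. The data below are synthetic placeholders, not the nine Middle East wells, and the fit uses NumPy rather than PTC Mathcad Prime.

```python
# Sketch of fitting a multivariable quadratic regression for standpipe pressure
# from the dimensionless groups described above. The data below are synthetic
# placeholders, not the nine Middle East wells used in the study.
import numpy as np

rng = np.random.default_rng(0)
n = 40
rpm_d = rng.uniform(0.5, 2.0, n)     # dimensionless RPM
trq_d = rng.uniform(0.1, 1.0, n)     # dimensionless torque
spp   = 80 + 35 * rpm_d + 60 * trq_d + 12 * rpm_d * trq_d + rng.normal(0, 2, n)

# Quadratic design matrix: 1, x1, x2, x1^2, x2^2, x1*x2  (six coefficients)
X = np.column_stack([np.ones(n), rpm_d, trq_d,
                     rpm_d**2, trq_d**2, rpm_d * trq_d])
coeffs, *_ = np.linalg.lstsq(X, spp, rcond=None)

pred = X @ coeffs
r2 = 1 - np.sum((spp - pred) ** 2) / np.sum((spp - spp.mean()) ** 2)
print("coefficients:", np.round(coeffs, 2), " R^2:", round(r2, 3))
```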
Procedia PDF Downloads 84315 Predictive Modelling of Aircraft Component Replacement Using Imbalanced Learning and Ensemble Method
Authors: Dangut Maren David, Skaf Zakwan
Abstract:
Adequate monitoring of vehicle components in order to obtain high uptime is the goal of predictive maintenance; the major challenge faced by businesses is the significant cost associated with delays in service delivery due to system downtime. Most of those businesses are interested in predicting such problems and proactively preventing them before they occur, which is the core advantage of Prognostic Health Management (PHM) applications. The recent emergence of Industry 4.0, or the industrial internet of things (IIoT), has led to the need for monitoring system activities and enhancing system-to-system or component-to-component interactions; this has resulted in the generation of large volumes of data known as big data. Analysis of big data is increasingly important; however, due to complexity inherent in the datasets, such as imbalanced classification problems, it becomes extremely difficult to build models with accurate, high precision. Data-driven predictive modeling for condition-based maintenance (CBM) has recently drawn research interest, with growing attention from both academia and industry. The large data generated from industrial processes inherently come with different degrees of complexity, which pose a challenge for analytics. Thus, imbalanced classification problems exist pervasively in industrial datasets and can affect the performance of learning algorithms, yielding poor classifier accuracy in model development. Misclassification of faults can result in unplanned breakdowns leading to economic loss. In this paper, an advanced approach for handling the imbalanced classification problem is proposed, and a prognostic model for predicting aircraft component replacement is then developed to predict component replacement in advance by exploring aircraft historical data. The approach is based on a hybrid ensemble-based method that improves the prediction of the minority class during learning; we also investigate the impact of our approach on the multiclass imbalance problem. We validate the feasibility and effectiveness of our approach, in terms of its performance, using real-world aircraft operation and maintenance datasets spanning over 7 years. Our approach shows better performance compared to other similar approaches. We also validate the strength of our approach for handling multiclass imbalanced datasets; the results likewise show good performance compared to other baseline classifiers.Keywords: prognostics, data-driven, imbalance classification, deep learning
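A minimal sketch of handling class imbalance in a component-replacement classifier is shown below, using a class-weighted random forest on synthetic data as a stand-in for the hybrid ensemble method developed in the paper; the dataset, class ratio and model settings are assumptions for illustration.

```python
# Sketch of handling class imbalance with a class-weighted ensemble on
# synthetic data. This stands in for (and is simpler than) the hybrid
# ensemble method developed in the study.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic "replace component within horizon" labels: roughly 5% positive class.
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.95, 0.05],
                           random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=42)

clf = RandomForestClassifier(n_estimators=200, class_weight="balanced_subsample",
                             random_state=42)
clf.fit(X_tr, y_tr)

# Precision/recall per class matters more than raw accuracy on imbalanced data.
print(classification_report(y_te, clf.predict(X_te), digits=3))
```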
Procedia PDF Downloads 174314 Connecting MRI Physics to Glioma Microenvironment: Comparing Simulated T2-Weighted MRI Models of Fixed and Expanding Extracellular Space
Authors: Pamela R. Jackson, Andrea Hawkins-Daarud, Cassandra R. Rickertsen, Kamala Clark-Swanson, Scott A. Whitmire, Kristin R. Swanson
Abstract:
Glioblastoma Multiforme (GBM), the most common primary brain tumor, often presents with hyperintensity on T2-weighted or T2-weighted fluid attenuated inversion recovery (T2/FLAIR) magnetic resonance imaging (MRI). This hyperintensity corresponds with vasogenic edema; however, there are likely many infiltrating tumor cells within the hyperintensity as well. While MRIs do not directly indicate tumor cells, they do reflect the microenvironmental water abnormalities caused by the presence of tumor cells and edema. The inherent heterogeneity and resulting MRI features of GBMs complicate assessing disease response. To understand how hyperintensity on T2/FLAIR MRI may correlate with edema in the extracellular space (ECS), a multi-compartmental MRI signal equation, which takes into account tissue compartments and their associated volumes, was explored, with input coming from a mathematical model of glioma growth that incorporates edema formation. The reasonableness of two possible extracellular space schemes was evaluated by varying the T2 of the edema compartment and calculating the possible resulting T2s in tumor and peripheral edema. In the mathematical model, gliomas were comprised of vasculature and three tumor cellular phenotypes: normoxic, hypoxic, and necrotic. Edema was characterized as fluid leaking from abnormal tumor vessels. Spatial maps of tumor cell density and edema for virtual tumors were simulated with different rates of proliferation and invasion and various ECS expansion schemes. These spatial maps were then passed into a multi-compartmental MRI signal model for generating simulated T2/FLAIR MR images. The T2 values of the individual compartments in the signal equation were either taken from literature or estimated, and the T2 for edema specifically was varied over a wide range (200 ms – 9200 ms). T2 maps were calculated from the simulated images. T2 values based on simulated images were evaluated for regions of interest (ROIs) in normal-appearing white matter, tumor, and peripheral edema, and the ROI T2 values were compared to T2 values reported in literature. The expanding scheme of extracellular space had T2 values similar to the values calculated from literature. The static scheme of extracellular space had much lower T2 values, and no matter what T2 was associated with edema, the intensities did not come close to literature values. Expanding the extracellular space is therefore necessary to achieve simulated edema intensities commensurate with acquired MRIs.Keywords: extracellular space, glioblastoma multiforme, magnetic resonance imaging, mathematical modeling
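The multi-compartmental signal equation can be written compactly as S(TE) = sum_i v_i * exp(-TE / T2_i), and the sketch below evaluates it for one voxel and recovers an apparent mono-exponential T2 from two echoes, mirroring the ROI T2 maps described above. The volume fractions, compartment T2 values and echo times are illustrative assumptions, not the study's values.

```python
# Sketch of the multi-compartment T2-weighted signal for one voxel and the
# apparent T2 recovered from it. Volume fractions and compartment T2 values
# below are illustrative, not the ones used in the study.
import numpy as np

def voxel_signal(te_ms, volumes, t2_ms):
    """S(TE) = sum_i v_i * exp(-TE / T2_i), with volume fractions v_i summing to 1."""
    volumes = np.asarray(volumes, dtype=float)
    t2_ms = np.asarray(t2_ms, dtype=float)
    return np.sum(volumes * np.exp(-te_ms / t2_ms))

# Example voxel: tumor cells, normal tissue and an (expanded) edema compartment.
volumes = [0.30, 0.20, 0.50]
t2_ms   = [90.0, 80.0, 1500.0]

te1, te2 = 80.0, 160.0                    # two echo times [ms]
s1, s2 = voxel_signal(te1, volumes, t2_ms), voxel_signal(te2, volumes, t2_ms)

# Apparent mono-exponential T2 from the two echoes (as done for the ROI T2 maps).
t2_apparent = (te2 - te1) / np.log(s1 / s2)
print(f"apparent T2 of this voxel: {t2_apparent:.0f} ms")
```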
Procedia PDF Downloads 235313 GenAI Agents in Product Management: A Case Study from the Manufacturing Sector
Authors: Aron Witkowski, Andrzej Wodecki
Abstract:
Purpose: This study aims to explore the feasibility and effectiveness of utilizing Generative Artificial Intelligence (GenAI) agents as product managers within the manufacturing sector. It seeks to evaluate whether current GenAI capabilities can fulfill the complex requirements of product management and deliver comparable outcomes to human counterparts. Study Design/Methodology/Approach: This research involved the creation of a support application for product managers, utilizing high-quality sources on product management and generative AI technologies. The application was designed to assist in various aspects of product management tasks. To evaluate its effectiveness, a study was conducted involving 10 experienced product managers from the manufacturing sector. These professionals were tasked with using the application and providing feedback on the tool's responses to common questions and challenges they encounter in their daily work. The study employed a mixed-methods approach, combining quantitative assessments of the tool's performance with qualitative interviews to gather detailed insights into the user experience and perceived value of the application. Findings: The findings reveal that GenAI-based product management agents exhibit significant potential in handling routine tasks, data analysis, and predictive modeling. However, there are notable limitations in areas requiring nuanced decision-making, creativity, and complex stakeholder interactions. The case study demonstrates that while GenAI can augment human capabilities, it is not yet fully equipped to independently manage the holistic responsibilities of a product manager in the manufacturing sector. Originality/Value: This research provides an analysis of GenAI's role in product management within the manufacturing industry, contributing to the limited body of literature on the application of GenAI agents in this domain. It offers practical insights into the current capabilities and limitations of GenAI, helping organizations make informed decisions about integrating AI into their product management strategies. Implications for Academic and Practical Fields: For academia, the study suggests new avenues for research in AI-human collaboration and the development of advanced AI systems capable of higher-level managerial functions. Practically, it provides industry professionals with a nuanced understanding of how GenAI can be leveraged to enhance product management, guiding investments in AI technologies and training programs to bridge identified gaps.Keywords: generative artificial intelligence, GenAI, NPD, new product development, product management, manufacturing
Procedia PDF Downloads 49312 Mathematical Model to Simulate Liquid Metal and Slag Accumulation, Drainage and Heat Transfer in Blast Furnace Hearth
Authors: Hemant Upadhyay, Tarun Kumar Kundu
Abstract:
It is of utmost importance for a blast furnace operator to understand the mechanisms governing liquid flow, accumulation, drainage and heat transfer between the various phases in the blast furnace hearth for a stable and efficient blast furnace operation. Abnormal drainage behavior may lead to high liquid build-up in the hearth. Operational problems such as pressurization, low wind intake, and lower material descent rates are normally encountered if the liquid levels in the hearth exceed a critical limit, when hearth coke and the deadman start to float. Similarly, hot metal temperature is an important parameter to be controlled in BF operation; it should be kept at an optimal level to obtain the desired product quality and a stable BF performance. It is not possible to carry out any direct measurement of the above due to the hostile conditions in the hearth, with chemically aggressive hot liquids. The objective here is to develop a mathematical model to simulate the variation in hot metal / slag accumulation and temperature during the tapping of the blast furnace, based on the computed drainage rate, production rate, mass balance, and heat transfer between metal and slag, metal and solids, slag and solids, as well as among the various zones of metal and slag themselves. For modeling purposes, the BF hearth is considered as a pressurized vessel filled with solid coke particles. Liquids trickle down into the hearth from the top and accumulate in the voids between the coke particles, which are assumed thermally saturated. A set of generic mass balance equations gives the amount of metal and slag intake into the hearth. A small drainage opening (tap hole) is situated at the bottom of the hearth, and the flow rate of liquids from the tap hole is computed taking into account the amount of both phases accumulated, their levels in the hearth, the pressure from gases in the furnace and the erosion behavior of the tap hole itself. Heat transfer equations provide the exchange of heat between the various layers of liquid metal and slag and the heat loss to the cooling system through the refractories. Based on all this information, a dynamic simulation is carried out which provides real-time information on liquid accumulation in the hearth before and during tapping and on the drainage rate and its variation, predicts the timing of critical events during tapping, and gives the expected tapping temperature of metal and slag at preset time intervals. The model is in use at JSPL, India BF-II, and its output is regularly cross-checked against actual tapping data, with which it is in good agreement.Keywords: blast furnace, hearth, deadman, hot metal
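A highly simplified version of the liquid-level bookkeeping during tapping is sketched below: a constant production inflow and an orifice-type outflow through the tap hole driven by the liquid head and furnace gas overpressure. The geometry, rates, densities and the drainage law itself are illustrative assumptions, not the model used at JSPL BF-II.

```python
# Sketch of the hearth liquid-level bookkeeping during tapping: constant
# production inflow, orifice-type outflow through the tap hole. Geometry,
# rates and the drainage law are illustrative assumptions only.
import math

DT = 60.0                     # time step [s]
HEARTH_AREA = 25.0            # hearth cross-section times coke-bed voidage [m^2]
PRODUCTION = 0.02             # combined metal + slag production rate [m^3/s]
TAPHOLE_AREA = 0.005          # effective tap-hole area [m^2]
CD = 0.7                      # discharge coefficient
RHO = 5500.0                  # rough average liquid density [kg/m^3]
DP_GAS = 1.5e5                # furnace gas overpressure pushing the liquids out [Pa]
G = 9.81

def simulate(level_m=0.8, tapping=True, minutes=121):
    """March the hearth liquid level forward in time; tapping toggles the outflow."""
    for minute in range(minutes):
        if tapping and level_m > 0.0:
            velocity = math.sqrt(2.0 * (G * level_m + DP_GAS / RHO))
            outflow = CD * TAPHOLE_AREA * velocity
        else:
            outflow = 0.0
        level_m = max(level_m + DT * (PRODUCTION - outflow) / HEARTH_AREA, 0.0)
        if minute % 30 == 0:
            print(f"t = {minute:3d} min   level = {level_m:.3f} m")

simulate()
```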
Procedia PDF Downloads 184311 Stuttering Persistence in Children: Effectiveness of the Psicodizione Method in a Small Italian Cohort
Authors: Corinna Zeli, Silvia Calati, Marco Simeoni, Chiara Comastri
Abstract:
Developmental stuttering affects about 10% of preschool children; although the percentage of natural recovery is high, a quarter of them will become adults who stutter. An effective early intervention should help those children at high risk of persistence. The Psicodizione method for early stuttering is an Italian indirect behavioral treatment for preschool children who stutter, in which parents act as good guides for communication, modeling their own fluency. In this study, we provide a preliminary measure of the long-term effectiveness of the Psicodizione method on stuttering preschool children with a high persistence risk. Among all Italian children treated with the Psicodizione method between 2018 and 2019, we selected 8 children with at least 3 high persistence risk factors from the Illinois Prediction Criteria proposed by Yairi and Seery. The factors chosen for the selection were: one parent who stutters (1 pt mother; 1.5 pt father), male gender, ≥ 4 years old at onset, and ≥ 12 months from onset of symptoms before treatment. For this study, the families were contacted after an average period of 14.7 months (range 3 - 26 months). Parental reports were gathered with a standard online questionnaire in order to obtain data reflecting fluency across a wide range of the children’s life situations. The minimum worthwhile outcome was set at "mild evidence" on a 5-point Likert scale (1 mild evidence - 5 high severity evidence). A second group of 6 children, among those treated with the Psicodizione method, was selected as having high potential for spontaneous remission (low persistence risk). The children in this group had to fulfill all the following criteria: female gender, symptoms for less than 12 months before treatment, age of onset < 4 years old, and neither parent with persistent stuttering. At the time of this follow-up, the children in the high persistence risk group were aged 6–9 years, with a mean of 15 months post-treatment. Among them, 2 (25%) no longer stuttered, and 3 (37.5%) had a mild stutter, based on parental reports. In the low persistence risk group, the children were aged 4–6 years, with a mean of 14 months post-treatment, and 5 (84%) no longer stuttered (for the past 16 months on average). Thus, 62.5% of the children at high risk of persistence showed at most mild evidence of stuttering after Psicodizione treatment, and 75% of parents reported better fluency than before the treatment. The low persistence risk group seemed to be representative of spontaneous recovery. This study’s design could help to better evaluate the success of proposed interventions for stuttering preschool children and provides a preliminary measure of the effectiveness of the Psicodizione method on children at high persistence risk.Keywords: early treatment, fluency, preschool children, stuttering
Procedia PDF Downloads 215310 Analysis and Design Modeling for Next Generation Network Intrusion Detection and Prevention System
Authors: Nareshkumar Harale, B. B. Meshram
Abstract:
The continued exponential growth of successful cyber intrusions against today’s businesses has made it abundantly clear that traditional perimeter security measures are no longer adequate and effective. We have evolved the network trust architecture from trust-untrust to Zero Trust. With Zero Trust, essential security capabilities are deployed in a way that provides policy enforcement and protection for all users, devices, applications, data resources, and the communications traffic between them, regardless of their location. Information exchange over the Internet, in spite of the inclusion of advanced security controls, remains exposed to innovative and inventive cyberattacks. The TCP/IP protocol stack, the adopted standard for communication over networks, suffers from inherent design vulnerabilities; its communication and session management protocols, routing protocols and security protocols are the cause of major attacks. With the explosion of cyber security threats such as viruses, worms, rootkits, malware and Denial of Service attacks, accomplishing efficient and effective intrusion detection and prevention has become crucial and challenging. In this paper, we propose a design and analysis model for a next generation network intrusion detection and protection system as part of a layered security strategy. The proposed system design provides intrusion detection for a wide range of attacks with a layered architecture and framework. The proposed network intrusion classification framework deals with cyberattacks on the standard TCP/IP protocol suite, routing protocols and security protocols. It thereby forms the basis for the detection of attack classes and applies signature-based matching for known cyberattacks and data mining based machine learning approaches for unknown cyberattacks. Our implemented software can effectively detect attacks even when malicious connections are hidden within normal events. The unsupervised learning algorithm applied to network audit data trails results in unknown intrusion detection. Association rule mining algorithms generate new rules from the collected audit trail data, resulting in increased intrusion prevention through integrated firewall systems. Intrusion response mechanisms can be initiated in real time, thereby minimizing the impact of network intrusions. Finally, we show that our approach can be validated and how the analysis results can be used for detecting and protecting against new network anomalies.Keywords: network intrusion detection, network intrusion prevention, association rule mining, system analysis and design
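The two detection paths described above, signature-based matching for known attacks and rule mining over audit data for unknown ones, can be sketched in a few lines of Python. The signatures, audit events and support threshold below are toy placeholders rather than the framework's actual rule base.

```python
# Sketch of the two detection paths described above: signature matching for
# known attacks and simple frequent-pattern counting over audit events for
# unknown ones. Signatures, events and thresholds are toy placeholders.
from itertools import combinations
from collections import Counter

SIGNATURES = {
    "syn_flood":  {"proto": "tcp", "flags": "S", "rate_high": True},
    "dns_tunnel": {"proto": "udp", "port": 53, "payload_large": True},
}

def match_signatures(event):
    """Return the names of known-attack signatures this event matches."""
    return [name for name, sig in SIGNATURES.items()
            if all(event.get(k) == v for k, v in sig.items())]

def frequent_pairs(events, min_support=2):
    """Count co-occurring attribute pairs in audit events (a tiny Apriori-style step)."""
    counts = Counter()
    for ev in events:
        items = sorted(f"{k}={v}" for k, v in ev.items())
        counts.update(combinations(items, 2))
    return {pair: n for pair, n in counts.items() if n >= min_support}

events = [
    {"proto": "tcp", "flags": "S", "rate_high": True},
    {"proto": "tcp", "flags": "S", "rate_high": True},
    {"proto": "udp", "port": 53, "payload_large": False},
]
print([match_signatures(e) for e in events])
print(frequent_pairs(events))
```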
Procedia PDF Downloads 227309 Force Sensing Resistor Testing of Hand Forces and Grasps during Daily Functional Activities in the Covid-19 Pandemic
Authors: Monique M. Keller, Roline Barnes, Corlia Brandt
Abstract:
Introduction: Scientific evidence on hand forces and the types of grasps used during daily tasks is lacking, leaving a gap in the field of hand rehabilitation and robotics. Measuring the grasp forces and types produced by the individual fingers during daily functional tasks is valuable to inform and grade rehabilitation practices for second to fifth metacarpal fractures with robust scientific evidence. Feix et al. (2016) identified the most extensive and complete grasp study, which resulted in the GRASP taxonomy. The Covid-19 virus changed data collection across the globe, and safety precautions in research are essential to ensure the health of participants and researchers. Methodology: A cross-sectional pilot study investigated the hand forces of six healthy adults, aged 20 to 59 years, during 105 tasks. The tasks were categorized into five sections, namely personal care, transport and moving around, home environment and inside, gardening and outside, and office. The predominant grasp of each task was identified, guided by the GRASP taxonomy. Grasp forces were measured with 13 mm force-sensing resistors glued onto a glove attached to each of the individual fingers of the dominant and non-dominant hands. Testing equipment included FlexiForce 13 mm (0.5") circle FSRs, calibrated prior to testing, 10k 1/4 W resistors, an Arduino Pro Mini 5.0 V compatible board, an ESP-01 kit, an Arduino Uno R3 compatible board, a 1 m USB AB cable, an FTDI FT232 mini USB-to-serial converter, SIL 40 inline connectors, ribbon cable with male header pins (female-to-female and male-to-female), two gloves, glue to attach the FSRs to the gloves, and the Arduino software programme downloaded on a laptop. Grip strength measurements with a Jamar dynamometer were taken prior to testing and after every 25 daily tasks to avoid fatigue and ensure reliability in testing. Covid-19 precautions included wearing face masks at all times, screening questionnaires, temperature checks, wearing surgical gloves before putting on the testing gloves, and 1.5-metre-long wires attaching the FSRs to the Arduino to maintain social distance. Findings: Predominant grasps observed during the 105 tasks included adducted thumb (17), lateral tripod (10), prismatic three fingers (12), small diameter (9), prismatic two fingers (9), medium wrap (7), fixed hook (5), sphere four fingers (4), palmar (4), parallel extension (4), index finger extension (3), distal (3), power sphere (2), tripod (2), quadpod (2), prismatic four fingers (2), lateral (2), large diameter (2), ventral (2), precision sphere (1), palmar pinch (1), light tool (1), inferior pincher (1), and writing tripod (1). The range of forces applied per category was: personal care (1-25 N), transport and moving around (1-9 N), home environment and inside (1-41 N), gardening and outside (1-26.5 N), and office (1-20 N). Conclusion: Scientific measurement of finger forces, with careful consideration of the types of grasps used in daily tasks, should guide rehabilitation practices and robotic design to ensure the individual's return to full participation in the community.Keywords: activities of daily living (ADL), Covid-19, force-sensing resistors, grasps, hand forces
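The FSR readings are converted to force through the voltage-divider relation and a per-sensor calibration curve; the Python sketch below shows a host-side version of that conversion for a 10-bit Arduino ADC with the 10k series resistor mentioned above. The power-law calibration constants are hypothetical, standing in for the calibration performed before testing.

```python
# Sketch of converting a 10-bit ADC reading from an FSR voltage divider into an
# estimated force. The power-law calibration constants are hypothetical; in the
# study each 13 mm FSR was calibrated before testing.

V_SUPPLY = 5.0        # volts feeding the divider
R_FIXED = 10_000.0    # 10k pull-down resistor in series with the FSR [ohm]
ADC_MAX = 1023        # Arduino 10-bit ADC full scale

def adc_to_force(adc_count, k=5.0e5, exponent=-1.3):
    """ADC count -> divider voltage -> FSR resistance -> force via F = k * R^exponent."""
    v_out = V_SUPPLY * adc_count / ADC_MAX            # voltage across the fixed resistor
    if v_out <= 0.0:
        return 0.0                                    # no measurable load
    r_fsr = R_FIXED * (V_SUPPLY - v_out) / v_out      # divider solved for the FSR
    return k * r_fsr ** exponent                      # hypothetical calibration curve

for count in (50, 300, 700):
    print(count, "->", round(adc_to_force(count), 1), "N (illustrative)")
```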
Procedia PDF Downloads 190