Search results for: Francisco J. Real
4420 Numerical Investigation of the Needle Opening Process in a High Pressure Gas Injector
Authors: Matthias Banholzer, Hagen Müller, Michael Pfitzner
Abstract:
Gas internal combustion engines are widely used as propulsion systems or in power plants to generate heat and electricity. While there are different injection methods, including manifold port fuel injection and direct injection, the latter has more potential to increase the specific power by avoiding air displacement in the intake and to reduce combustion anomalies such as backfire or pre-ignition. During the opening process of the injector, multiple flow regimes occur: subsonic, transonic and supersonic. To cover the wide range of Mach numbers, a compressible pressure-based solver is used. While the standard Pressure Implicit with Splitting of Operators (PISO) method is used for the coupling between velocity and pressure, a high-resolution non-oscillatory central scheme established by Kurganov and Tadmor calculates the convective fluxes. A blending function based on the local Mach and CFL numbers switches between the compressible and incompressible regimes of the developed model. As the considered operating points are well above the critical state of the fluids used, the ideal gas assumption is no longer valid. For the real-gas thermodynamics, models based on the Soave-Redlich-Kwong equation of state were implemented. The caloric properties are corrected using a departure formalism, while the viscosity and thermal conductivity are obtained from the empirical correlation of Chung. For the injector geometry, the dimensions of a diesel injector were adapted. Simulations were performed using different nozzle and needle geometries and opening curves. A significant influence of all three parameters can clearly be seen.
Keywords: high pressure gas injection, hybrid solver, hydrogen injection, needle opening process, real-gas thermodynamics
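The Soave-Redlich-Kwong equation of state named in the abstract has a standard closed form. As an illustration only (the abstract does not give the authors' implementation, and the fluid properties below are generic textbook values for nitrogen, not data from the study), a minimal sketch of the SRK pressure evaluation for a pure fluid might look like:

```python
import math

R = 8.314462618  # universal gas constant, J/(mol K)

def srk_pressure(T, v, Tc, pc, omega):
    """Soave-Redlich-Kwong pressure for a pure fluid.

    T : temperature [K], v : molar volume [m^3/mol],
    Tc, pc : critical temperature [K] and pressure [Pa],
    omega : acentric factor [-].
    """
    a = 0.42748 * R**2 * Tc**2 / pc          # attraction parameter
    b = 0.08664 * R * Tc / pc                # covolume
    m = 0.480 + 1.574 * omega - 0.176 * omega**2
    alpha = (1.0 + m * (1.0 - math.sqrt(T / Tc)))**2
    return R * T / (v - b) - a * alpha / (v * (v + b))

# Example (hypothetical conditions): nitrogen near ambient pressure,
# where SRK should deviate only slightly from the ideal gas law.
p = srk_pressure(300.0, 0.024, 126.2, 3.3958e6, 0.0372)
```

At near-atmospheric conditions the result stays within a fraction of a percent of RT/v; the real-gas correction only becomes significant at the trans- and supercritical injection conditions the abstract is concerned with.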
Procedia PDF Downloads 461
4419 Design and Construction Validation of Pile Performance through High Strain Pile Dynamic Tests for both Contiguous Flight Auger and Drilled Displacement Piles
Authors: S. Pirrello
Abstract:
Sydney’s booming real estate market has pushed property developers to invest in historically “no-go” areas, which were previously too expensive to develop. These areas are usually near rivers, where the sites are underlain by deep alluvial and estuarine sediments. In these ground conditions, conventional bored pile techniques are often not competitive. Contiguous Flight Auger (CFA) and Drilled Displacement (DD) pile techniques are, on the other hand, suitable for these ground conditions. This paper deals with the design and construction challenges encountered with these piling techniques for a series of high-rise towers in Sydney’s West. The advantages of DD over CFA piles, such as reduced overall spoil with substantial cost savings and achievable rock sockets in medium-strength bedrock, are discussed. Design performances were assessed with PIGLET. Pile performances are validated in two stages: during construction, with the interpretation of real-time data from the piling rigs’ on-board computers, and after construction, with analyses of results from high strain pile dynamic testing (PDA). Results are then presented and discussed. High strain testing data are presented as Case Pile Wave Analysis Program (CAPWAP) analyses.
Keywords: contiguous flight auger (CFA), DEFPIG, case pile wave analysis program (CAPWAP), drilled displacement piles (DD), pile dynamic testing (PDA), PIGLET, PLAXIS, repute, pile performance
Procedia PDF Downloads 283
4418 Erosion Modeling of Surface Water Systems for Long Term Simulations
Authors: Devika Nair, Sean Bellairs, Ken Evans
Abstract:
Flow and erosion modeling provides an avenue for simulating fine suspended sediment in surface water systems like streams and creeks. Fine suspended sediment is highly mobile, and many contaminants that may have been released by any sort of catchment disturbance attach themselves to these sediments. Therefore, knowledge of fine suspended sediment transport is important in assessing contaminant transport. The CAESAR-Lisflood Landform Evolution Model, which includes a hydrologic model (TOPMODEL) and a hydraulic model (Lisflood), is being used to assess sediment movement in tropical streams on account of a disturbance in the catchment of the creek and to determine the dynamics of sediment quantity in the creek through the years by simulating the model for future years. The accuracy of future simulations depends on the calibration and validation of the model against past and present events. Calibration and validation involve finding a combination of model parameters which, when applied and simulated, gives model outputs similar to those observed for the real site scenario for the corresponding input data. Calibrating the sediment output of the CAESAR-Lisflood model at the catchment level and using it to study the equilibrium conditions of the landform is an area yet to be explored. Therefore, the aim of the study was to calibrate the CAESAR-Lisflood model and then validate it so that it could be run for future simulations to study how the landform evolves over time. To achieve this, the model was run for a rainfall event with a set of parameters, plus discharge and sediment data for the input point of the catchment, to analyze how closely the model output matched the discharge and sediment data observed at the output point of the catchment. The model parameters were then adjusted until the model closely approximated the real site values of the catchment.
It was then validated by running the model for a different set of events and checking that the model gave similar results to the real site values. The outcomes demonstrated that while the model can be calibrated to a great extent for hydrology (discharge output) throughout the year, the sediment output calibration could be further improved by the ability to change parameters to take into account the seasonal vegetation growth at the start and end of the wet season. This study is important for assessing hydrology and sediment movement in seasonal biomes. The understanding of sediment-associated metal dispersion processes in rivers can be used in a practical way to help river basin managers more effectively control and remediate catchments affected by present and historical metal mining.
Keywords: erosion modelling, fine suspended sediments, hydrology, surface water systems
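Calibration against observed discharge and sediment series, as described above, is typically scored with an objective function. A common choice in hydrologic calibration (shown here as an illustration; the abstract does not state which metric the authors used) is the Nash-Sutcliffe efficiency:

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit, 0 means the model
    is no better than predicting the mean of the observations."""
    mean_obs = sum(observed) / len(observed)
    ss_res = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_res / ss_tot

obs = [2.0, 5.0, 9.0, 4.0]        # e.g. observed discharge at the outlet (made-up)
sim = [2.2, 4.6, 8.5, 4.3]        # model output for the same events (made-up)
score = nash_sutcliffe(obs, sim)   # parameters are adjusted until this is high
```

In a calibration loop, parameter sets are varied and the set maximizing the score over the calibration events is retained before moving on to the validation events.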
Procedia PDF Downloads 86
4417 Cytotoxicological Evaluation of a Folate Receptor Targeting Drug Delivery System Based on Cyclodextrins
Authors: Caroline Mendes, Mary McNamara, Orla Howe
Abstract:
For chemotherapy, a drug delivery system should be able to specifically target cancer cells and deliver the therapeutic dose without affecting normal cells. Folate receptors (FR) can be considered key targets since they are commonly over-expressed in cancer cells, and they are the molecular marker used in this study. Here, cyclodextrin (CD) has been studied as a vehicle for delivering the chemotherapeutic drug methotrexate (MTX). CDs have the ability to form inclusion complexes, in which molecules of suitable dimensions are included within the CD cavity. In this study, β-CD has been modified using folic acid so as to specifically target the FR molecular marker. Thus, the system studied here for drug delivery consists of β-CD, folic acid and MTX (CDEnFA:MTX). Cellular uptake of folic acid is mediated with high affinity by folate receptors, while the cellular uptake of antifolates, such as MTX, is mediated with high affinity by the reduced folate carriers (RFCs). This study addresses the gene (mRNA) and protein expression levels of FRs and RFCs in the cancer cell lines CaCo-2, SKOV-3, HeLa, MCF-7 and A549 and the normal cell line BEAS-2B, quantified by real-time polymerase chain reaction (real-time PCR) and flow cytometry, respectively. From that, four cell lines with different levels of FRs were chosen for cytotoxicity assays of MTX and CDEnFA:MTX using the MTT assay. Real-time PCR and flow cytometry data demonstrated that all cell lines ubiquitously express moderate levels of RFC. These experiments have also shown that levels of FR protein in CaCo-2 cells are high, while levels in SKOV-3, HeLa and MCF-7 cells are moderate. A549 and BEAS-2B cells express low levels of FR protein. FRs are highly expressed in all the cancer cell lines analysed when compared to the normal cell line BEAS-2B. The cell lines CaCo-2, MCF-7, A549 and BEAS-2B were used in the cell viability assays.
48-hour treatment with the free drug and the complex resulted in IC50 values of 93.9 ± 9.2 µM and 56.0 ± 4.0 µM for CaCo-2 for free MTX and CDEnFA:MTX, respectively; 118.2 ± 10.8 µM and 97.8 ± 12.3 µM for MCF-7; 36.4 ± 6.9 µM and 75.0 ± 8.5 µM for A549; and 132.6 ± 12.1 µM and 288.1 ± 16.3 µM for BEAS-2B. These results demonstrate that MTX is more toxic towards cell lines expressing low levels of FR, such as BEAS-2B. More importantly, they demonstrate that the inclusion complex CDEnFA:MTX showed greater cytotoxicity than the free drug towards the high FR-expressing CaCo-2 cells, indicating that it has the potential to target this receptor, enhancing the specificity and efficiency of the drug.
Keywords: cyclodextrins, cancer treatment, drug delivery, folate receptors, reduced folate carriers
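IC50 values like those reported above are read off dose-response curves built from MTT viability measurements. A simplified sketch of that step (log-linear interpolation between measured points rather than the full sigmoidal curve fit usually applied; the readings below are hypothetical, not data from the study):

```python
import math

def ic50(doses, viability):
    """Interpolate the dose at 50% viability.

    doses : increasing drug concentrations (e.g. in µM)
    viability : measured % viability at each dose (decreasing)
    """
    for i in range(len(doses) - 1):
        v_hi, v_lo = viability[i], viability[i + 1]
        if v_hi >= 50.0 >= v_lo:
            # interpolate on a log-concentration axis, as dose-response
            # data are conventionally plotted
            frac = (v_hi - 50.0) / (v_hi - v_lo)
            log_d = math.log10(doses[i]) + frac * (
                math.log10(doses[i + 1]) - math.log10(doses[i]))
            return 10.0 ** log_d
    raise ValueError("viability never crosses 50%")

# Hypothetical MTT readings at four doses:
est = ic50([1.0, 10.0, 100.0, 1000.0], [95.0, 80.0, 50.0, 20.0])
```

A production analysis would instead fit a four-parameter logistic model to all points, but the crossing-point idea is the same.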
Procedia PDF Downloads 302
4416 Real Time Classification of Political Tendency of Twitter Spanish Users based on Sentiment Analysis
Authors: Marc Solé, Francesc Giné, Magda Valls, Nina Bijedic
Abstract:
What people say on social media has turned into a rich source of information for understanding social behavior. Specifically, the growing use of the Twitter social medium for political communication has created great opportunities to know the opinion of large numbers of politically active individuals in real time and to predict the global political tendencies of a specific country. This has led to an increasing body of research on the topic. The majority of these studies have focused on polarized political contexts characterized by only two alternatives. Unlike them, this paper tackles the challenge of forecasting Spanish political trends, characterized by multiple political parties, by means of analyzing Twitter users' political tendency. Accordingly, a new strategy, named the Tweets Analysis Strategy (TAS), is proposed. It is based on analyzing users' tweets by discovering their sentiment (positive, negative or neutral) and classifying them according to the political party they support. From this individual political tendency, the global political prediction for each political party is calculated. In order to do this, two different strategies for the sentiment analysis are proposed: one is based on Positive and Negative word Matching (PNM) and the second is based on a Neural Network Strategy (NNS). The complete TAS strategy has been performed in a Big Data environment. The experimental results presented in this paper reveal that the NNS strategy performs much better than the PNM strategy in analyzing tweet sentiment. In addition, this research analyzes the viability of the TAS strategy to obtain the global trend in a political context made up of multiple parties, with an error lower than 23%.
Keywords: political tendency, prediction, sentiment analysis, Twitter
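The PNM strategy described above rests on matching tweet words against positive and negative lexicons. A toy sketch of the idea (the mini word lists here are illustrative stand-ins, not the lexicons used in the study):

```python
# Illustrative mini-lexicons; a real PNM setup would use full Spanish lexicons.
POSITIVE = {"excelente", "apoyo", "gran", "mejor", "ganar"}
NEGATIVE = {"corrupto", "fracaso", "peor", "mentira", "crisis"}

def pnm_sentiment(tweet):
    """Classify a tweet as positive, negative or neutral by word matching."""
    words = tweet.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

label = pnm_sentiment("gran apoyo al candidato")  # -> "positive"
```

The abstract's finding that NNS outperforms PNM is intuitive from this sketch: plain word matching misses negation, sarcasm and context that a trained neural classifier can pick up.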
Procedia PDF Downloads 239
4415 Estimation of Energy Losses of Photovoltaic Systems in France Using Real Monitoring Data
Authors: Mohamed Amhal, Jose Sayritupac
Abstract:
Photovoltaic (PV) systems have risen as one of the modern renewable energy sources that are widely used to produce electricity and deliver it to the electrical grid. In parallel, monitoring systems have been deployed as a key element to track energy production and to forecast total production for the coming days. The reliability of PV energy production has become a crucial point in the analysis of PV systems. A deeper understanding of each phenomenon that causes a gain or a loss of energy is needed to better design, operate and maintain PV systems. This work analyzes the current losses distribution in PV systems, starting from the available solar energy, going through the DC side and the AC side, to the delivery point. Most of the phenomena linked to energy losses and gains are considered and modeled, based on real-time monitoring data and datasheets of the PV system components. The order of magnitude of each loss is compared to the current literature and commercial software. To date, the analysis of PV system performance based on a breakdown structure of energy losses and gains is not covered enough in the literature, except in some software where the concept is very common. The cutting edge of the current analysis is the implementation of software tools for energy loss estimation in PV systems based on several energy loss definitions and estimation techniques. The developed tools have been validated and tested on some PV plants in France that have been operating for years. Among the major findings of the current study: first, PV plants in France show very low rates of soiling and aging; second, the distribution of other losses is comparable to the literature; third, all losses reported are correlated to operational and environmental conditions.
For future work, an extended analysis of further PV plants in France and abroad will be performed.
Keywords: energy gains, energy losses, losses distribution, monitoring, photovoltaic, photovoltaic systems
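The aggregate figure that loss-breakdown analyses of this kind decompose is the performance ratio, which compares actual AC yield with the yield an ideal plant would produce from the same in-plane irradiation. A minimal sketch of the standard calculation (illustrative only, with made-up numbers; this is not the authors' tooling):

```python
def performance_ratio(e_ac_kwh, p_stc_kwp, h_poa_kwh_m2, g_stc_kw_m2=1.0):
    """Performance ratio PR = final yield / reference yield.

    e_ac_kwh     : AC energy delivered over the period [kWh]
    p_stc_kwp    : nameplate DC power at standard test conditions [kWp]
    h_poa_kwh_m2 : in-plane irradiation over the period [kWh/m^2]
    g_stc_kw_m2  : STC irradiance [kW/m^2]
    """
    final_yield = e_ac_kwh / p_stc_kwp            # hours of equivalent full power
    reference_yield = h_poa_kwh_m2 / g_stc_kw_m2  # hours of STC-level sunshine
    return final_yield / reference_yield

# Example: a 5 kWp plant delivering 4000 kWh over a year of 1000 kWh/m2 irradiation
pr = performance_ratio(4000.0, 5.0, 1000.0)  # 0.8, i.e. 20% total losses
```

The individual loss mechanisms (soiling, aging, inverter, wiring, temperature) then account for the gap between the PR and 1.0.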
Procedia PDF Downloads 177
4414 Numerical Simulation of Large-Scale Landslide-Generated Impulse Waves With a Soil‒Water Coupling Smooth Particle Hydrodynamics Model
Authors: Can Huang, Xiaoliang Wang, Qingquan Liu
Abstract:
Soil‒water coupling is an important process in landslide-generated impulse wave (LGIW) problems, accompanied by large deformation of soil, strong interface coupling and three-dimensional effects. A meshless particle method, smooth particle hydrodynamics (SPH), has great advantages in dealing with complex interface and multiphase coupling problems. This study presents an improved soil‒water coupled model to simulate LGIW problems based on the open-source code DualSPHysics (v4.0). To address the low efficiency of modeling real large-scale LGIW problems, graphics processing unit (GPU) acceleration technology is implemented in this code. An experimental example, subaerial landslide-generated water waves, is simulated to demonstrate the accuracy of this model. Then the Huangtian LGIW, a real large-scale LGIW problem, is modeled to reproduce the entire disaster chain, including landslide dynamics, fluid‒solid interaction, and surge wave generation. The convergence analysis shows that a particle distance of 5.0 m can provide a converged landslide deposit and surge wave for this example. Numerical simulation results are in good agreement with the limited field survey data. The application example of the Huangtian LGIW provides a typical reference for large-scale LGIW assessments, which can provide reliable information on landslide dynamics, interface coupling behavior, and surge wave characteristics.
Keywords: soil‒water coupling, landslide-generated impulse wave, large-scale, SPH
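At the heart of any SPH discretization is a smoothing kernel that weights the influence of neighboring particles. As background only (the abstract does not state which kernel the authors' DualSPHysics setup uses, and the smoothing length below is an arbitrary illustration, not the reported 5.0 m particle distance), the classic cubic spline kernel looks like:

```python
import math

def cubic_spline_kernel(r, h):
    """Standard SPH cubic spline kernel with 3D normalization 1/(pi h^3)
    and compact support radius 2h.

    r : distance between two particles, h : smoothing length.
    """
    q = r / h
    sigma = 1.0 / (math.pi * h**3)
    if q <= 1.0:
        return sigma * (1.0 - 1.5 * q**2 + 0.75 * q**3)
    if q <= 2.0:
        return sigma * 0.25 * (2.0 - q)**3
    return 0.0

w0 = cubic_spline_kernel(0.0, 1.0)  # peak value at zero separation
```

Field quantities at a particle are then sums of neighbor contributions weighted by this kernel, which is why the particle distance chosen in the convergence analysis directly controls solution quality.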
Procedia PDF Downloads 64
4413 Multi-Criteria Inventory Classification Process Based on Logical Analysis of Data
Authors: Diana López-Soto, Soumaya Yacout, Francisco Ángel-Bello
Abstract:
Although inventories are considered stocks of money sitting on shelves, they are needed in order to secure constant and continuous production. Therefore, companies need to have control over the amount of inventory in order to find the balance between excess and shortage of inventory. The classification of items according to certain criteria, such as price, usage rate and lead time before arrival, allows any company to concentrate its investment in inventory according to a certain ranking or priority of items. This makes the decision-making process for inventory management easier and more justifiable. The purpose of this paper is to present a new approach for the classification of new items based on already existing criteria. This approach is called Logical Analysis of Data (LAD). It is used in this paper to assist the process of ABC item classification based on multiple criteria. LAD is a data mining technique based on Boolean theory that is used for pattern recognition. This technique has been tested in medicine, industry, credit risk analysis, and engineering with remarkable results. An application to ABC inventory classification is presented for the first time, and the results are compared with those obtained when using the well-known AHP technique and the ANN technique. The results show that LAD presents very good classification accuracy.
Keywords: ABC multi-criteria inventory classification, inventory management, multi-class LAD model, multi-criteria classification
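As background to the multi-criteria extension above, classical single-criterion ABC classification ranks items by annual usage value and cuts the cumulative share at fixed thresholds. A minimal sketch (the 80%/95% cut-offs are the textbook convention, not values taken from the paper, and the item values are made up):

```python
def abc_classify(annual_values, a_cut=0.80, b_cut=0.95):
    """Classic ABC split: items covering the first ~80% of total value
    are 'A', the next ~15% are 'B', the rest are 'C'."""
    total = sum(annual_values.values())
    ranked = sorted(annual_values, key=annual_values.get, reverse=True)
    classes, cumulative = {}, 0.0
    for item in ranked:
        cumulative += annual_values[item] / total
        if cumulative <= a_cut:
            classes[item] = "A"
        elif cumulative <= b_cut:
            classes[item] = "B"
        else:
            classes[item] = "C"
    return classes

# Hypothetical annual usage values (price x usage rate) for four items:
result = abc_classify({"i1": 70, "i2": 20, "i3": 6, "i4": 4})
```

LAD, AHP and ANN come into play precisely because this single ranking cannot reconcile several criteria (price, usage rate, lead time) at once.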
Procedia PDF Downloads 884
4412 Simultaneous Removal of Arsenic and Toxic Metals from Contaminated Soil: a Pilot-Scale Demonstration
Authors: Juan Francisco Morales Arteaga, Simon Gluhar, Anela Kaurin, Domen Lestan
Abstract:
Contaminated soils are recognized as one of the most pressing global environmental problems. Arsenic (As) is one of the most hazardous elements: chronic exposure to arsenic has devastating effects on health, causing cardiovascular diseases, cancer, and eventually death. Pb, Zn and Cd are highly toxic metals that affect almost every organ in the body. With this in mind, new technologies for soil remediation processes are urgently needed. Calcareous artificially contaminated soil containing 231 mg kg-1 As and historically contaminated with Pb, Zn and Cd was washed with a 1:1.5 solid-liquid ratio of 90 mM EDTA, 100 mM oxalic acid, and 50 mM sodium dithionite to remove 59, 75, 29, and 53% of As, Pb, Zn, and Cd, respectively. To reduce emissions of residual EDTA and chelated metals from the remediated soil, zero-valent iron (ZVI) was added (1% w/w) to the slurry of the washed soil immediately prior to rinsing. Experimental controls were conducted without the addition of ZVI after remediation. The use of ZVI reduced metal leachability and minimized toxic emissions 21 days after remediation. After this time, NH4NO3 extraction was performed to determine the mobility of toxic elements in the soil. In addition, the Unified Human Bioaccessibility Method (UBM) was performed to quantify the bioaccessibility of the metals in simulated human gastric and gastrointestinal phases.
Keywords: soil remediation, soil science, soil washing, toxic metals removal
Procedia PDF Downloads 175
4411 Discovering Causal Structure from Observations: The Relationships between Technophile Attitude, Users Value and Use Intention of Mobility Management Travel App
Authors: Aliasghar Mehdizadeh Dastjerdi, Francisco Camara Pereira
Abstract:
The increasing complexity of and demand for transport services strain transportation systems, especially in urban areas with limited possibilities for building new infrastructure. The solution to this challenge requires changes in travel behavior. One of the proposed means to induce such change is multimodal travel apps. This paper describes a study of the intention to use a real-time multimodal travel app aimed at motivating travel behavior change in the Greater Copenhagen Region (Denmark) toward promoting sustainable transport options. The proposed app is a multi-faceted smartphone app including both travel information and persuasive strategies such as health and environmental feedback, tailored travel options, self-monitoring, tunneling users toward green behavior, social networking, nudging and gamification elements. The prospect for mobility management travel apps to stimulate sustainable mobility rests not only on the original and proper employment of behavior change strategies, but also on explicitly anchoring them in established theoretical constructs from behavioral theories. The theoretical foundation is important because it positively and significantly influences the effectiveness of the system. However, there is a gap in current knowledge regarding the study of mobility management travel apps grounded in behavioral theories, which should be explored further. This study addresses this gap through a social cognitive theory-based examination. In contrast to conventional methods in technology adoption research, this study adopts a reverse approach in which the associations between theoretical constructs are explored by the Max-Min Hill-Climbing (MMHC) algorithm as a hybrid causal discovery method. A technology-use preference survey was designed to collect data.
The survey elicited different groups of variables, including (1) three groups of users’ motives for using the app, namely gain motives (e.g., saving travel time and cost), hedonic motives (e.g., enjoyment) and normative motives (e.g., less travel-related CO2 production), (2) technology-related self-concepts (i.e., technophile attitude) and (3) use intention of the travel app. The questionnaire items led to the formulation of a causal discovery task to learn the causal structure of the data. Causal relationship discovery from observational data is a critical challenge with applications in different research fields. The estimated causal structure shows that the two constructs of gain motives and technophilia have a causal effect on adoption intention. Likewise, there is a causal relationship from technophilia to both gain and hedonic motives. In line with the findings of prior studies, this highlights the importance of the functional value of the travel app as well as technology self-concept as two important variables for adoption intention. Furthermore, the results indicate the effect of technophile attitude on developing gain and hedonic motives. The causal structure shows hierarchical associations between the three groups of user motives. They can be explained by the “frustration-regression” principle of Alderfer's ERG (Existence, Relatedness and Growth) theory of needs, meaning that when a higher-level need remains unfulfilled, a person may regress to lower-level needs that appear easier to satisfy. To conclude, this study shows the capability of causal discovery methods to learn the causal structure of a theoretical model and accordingly interpret established associations.
Keywords: travel app, behavior change, persuasive technology, travel information, causality
Procedia PDF Downloads 143
4410 Statistically Accurate Synthetic Data Generation for Enhanced Traffic Predictive Modeling Using Generative Adversarial Networks and Long Short-Term Memory
Authors: Srinivas Peri, Siva Abhishek Sirivella, Tejaswini Kallakuri, Uzair Ahmad
Abstract:
Effective traffic management and infrastructure planning are crucial for the development of smart cities and intelligent transportation systems. This study addresses the challenge of data scarcity by generating realistic synthetic traffic data using the PeMS-Bay dataset, improving the accuracy and reliability of predictive modeling. Advanced synthetic data generation techniques, including TimeGAN, GaussianCopula, and PAR Synthesizer, are employed to produce synthetic data that replicates the statistical and structural characteristics of real-world traffic. Future integration of Spatial-Temporal Generative Adversarial Networks (ST-GAN) is planned to capture both spatial and temporal correlations, further improving data quality and realism. The performance of each synthetic data generation model is evaluated against real-world data to identify the best models for accurately replicating traffic patterns. Long Short-Term Memory (LSTM) networks are utilized to model and predict complex temporal dependencies within traffic patterns. This comprehensive approach aims to pinpoint areas with low vehicle counts, uncover underlying traffic issues, and inform targeted infrastructure interventions. By combining GAN-based synthetic data generation with LSTM-based traffic modeling, this study supports data-driven decision-making that enhances urban mobility, safety, and the overall efficiency of city planning initiatives.
Keywords: GAN, long short-term memory, synthetic data generation, traffic management
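Before an LSTM can learn the temporal dependencies mentioned above, the traffic series (real or synthetic) must be cut into supervised (input window, next value) pairs. A framework-agnostic sketch of that preprocessing step (the window length and the vehicle counts are arbitrary illustrations, not settings from the study):

```python
def make_windows(series, window):
    """Turn a univariate series into (input window, next value) pairs,
    the supervised format an LSTM trains on."""
    inputs, targets = [], []
    for i in range(len(series) - window):
        inputs.append(series[i:i + window])
        targets.append(series[i + window])
    return inputs, targets

# e.g. 5-minute vehicle counts from a loop detector (made-up numbers):
counts = [120, 135, 150, 160, 155, 140]
X, y = make_windows(counts, window=3)
# X[0] == [120, 135, 150] is paired with target y[0] == 160
```

The same windowing is applied to both real and synthetic series, which is what makes a like-for-like evaluation of the generators possible.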
Procedia PDF Downloads 29
4409 Induced Thermo-Osmotic Convection for Heat and Mass Transfer
Authors: Francisco J. Arias
Abstract:
Consideration is given to a mechanism of heat and mass transport in solutions similar to that of natural convection, but with one important difference. Here the mechanism is not driven by density differences in the fluid arising from temperature gradients (coefficient of thermal expansion) but rather by solubility differences due to the thermal dependence of the solubility (coefficient of thermal solubility). Utilizing a simplified physical model, it is shown that, by the proper choice of the concentration of a given solution, convection might be induced by the alternating precipitation of the solute when the solution becomes supersaturated and its subsequent recombination when changes in temperature occur. The spontaneous change in the Gibbs free energy during mixing is the driving force for the mechanism. The maximum extractable energy from this new type of thermal convection was derived. Experimental data from a closed-loop circuit were obtained, demonstrating the feasibility of continuous separation and recombination of the solution. This type of heat and mass transport, which does not depend on gravity, might potentially be interesting for heat and mass transport downwards (as in solar-roof collectors to inside homes), horizontally (e.g., microelectronic applications), and in microgravity (space technology). Also, because the coefficient of thermal solubility can be positive or negative, the investigated thermo-osmotic convection can be used either for heating or cooling.
Keywords: natural convection, thermal gradient, solubility, osmotic pressure
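The spontaneous Gibbs free energy change on mixing that drives the mechanism has a simple ideal-solution form, ΔG_mix = RT Σ xᵢ ln xᵢ, which is always negative for a true mixture. A sketch of that textbook expression (an illustration of the driving-force sign only, not the authors' full derivation of the maximum extractable energy):

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def gibbs_mixing_ideal(T, x):
    """Ideal Gibbs free energy of mixing per mole of solution,
    Delta_G = R T * sum(x_i ln x_i).  Negative values mean mixing
    (recombination of the solution) is spontaneous."""
    return R * T * sum(xi * math.log(xi) for xi in x if xi > 0)

dg = gibbs_mixing_ideal(300.0, [0.5, 0.5])  # J/mol at equimolar composition
```

The magnitude of this (negative) quantity bounds the work recoverable per precipitation-recombination cycle, which is the quantity the abstract reports deriving.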
Procedia PDF Downloads 295
4408 Social Networks Global Impact on Protest Movements and Human Rights Activism
Authors: Marcya Burden, Savonna Greer
Abstract:
In the wake of social unrest around the world, protest movements have been captured like never before. As protest movements have evolved, so too have their visibility and sources of coverage. Long gone are the days of print media as our only glimpse into the action surrounding a protest. Now, with social networks such as Facebook, Instagram and Snapchat, we have access to real-time video footage of protest movements and human rights activism that can reach millions of people within seconds. This research paper investigated various social media platforms’ statistical usage data in the areas of human rights activism and protest movements, drawing parallels with past forms of media coverage. This research demonstrates that social networks are extremely important to protest movements and human rights activism. With over 2.9 billion users across social media networks globally, these platforms are at the heart of most recent protests and human rights activism. This research shows the paradigm shift from the Selma March of 1965 to the more recent protests of Ferguson in 2014, Ni Una Menos in 2015, and End Sars in 2018. The research findings demonstrate that today almost anyone may use their social networks to become protest movement leaders and human rights activists. From a student to an 80-year-old professor, the possibility of reaching billions of people all over the world is limitless. Findings show that 82% of the world’s internet population is on social networks, accounting for 1 in every 5 minutes spent online, and that over 65% of Americans believe social media highlights important issues. Thus, there is no need to have a formalized group of people or even to be known online. A person simply needs to be engaged on their respective social media networks (Facebook, Twitter, Instagram, Snapchat) regarding any cause they are passionate about. Information may be exchanged in real time around the world, and a successful protest can begin.
Keywords: activism, protests, human rights, networks
Procedia PDF Downloads 96
4407 Security Issues on Smart Grid and Blockchain-Based Secure Smart Energy Management Systems
Authors: Surah Aldakhl, Dafer Alali, Mohamed Zohdy
Abstract:
The next generation of electricity grid infrastructure, known as the "smart grid," integrates smart ICT (information and communication technology) into existing grids in order to alleviate the drawbacks of existing one-way grid systems. The efficiency and dependability of future power systems are anticipated to increase significantly thanks to the smart grid, especially given the demand for renewable energy sources. The security of the smart grid's cyber infrastructure is a growing concern, though, as a result of the interconnection of significant power plants through communication networks. Cyber-attacks can destroy energy data, beginning with personal information leaked from grid members, and can result in serious incidents such as huge outages and the destruction of power network infrastructure. We therefore propose a secure smart energy management system based on the blockchain as a remedy for this problem. The power transmission and distribution system may undergo a transformation as a result of the inclusion of optical fiber sensors and blockchain technology in smart grids. While optical fiber sensors allow real-time monitoring and management of electrical energy flow, the blockchain offers a secure platform to safeguard the smart grid against cyber-attacks and unauthorized access. Additionally, this integration makes it possible to see how energy is produced, distributed, and used in real time, increasing transparency. This strategy has advantages in terms of improved security, efficiency, dependability, and flexibility in energy management. An in-depth analysis of the advantages and drawbacks of combining blockchain technology with optical fiber is provided in this paper.
Keywords: smart grids, blockchain, fiber optic sensor, security
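The tamper-evidence that makes a blockchain useful for energy-metering records comes from chaining each block's hash to its predecessor. A deliberately minimal sketch of that core idea (illustrative only; the meter readings are invented, and a real smart-grid ledger adds consensus, digital signatures and networking on top):

```python
import hashlib
import json

def block_hash(body):
    """Hash a block's contents, which include the previous block's hash."""
    payload = json.dumps(body, sort_keys=True).encode()
    return hashlib.sha256(payload).hexdigest()

def append_block(chain, data):
    """Append a block carrying e.g. a meter reading to the chain."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "data": data, "prev_hash": prev}
    block["hash"] = block_hash({k: v for k, v in block.items() if k != "hash"})
    chain.append(block)
    return chain

def is_valid(chain):
    """Recompute every hash; any tampered block breaks the chain."""
    for i, block in enumerate(chain):
        body = {k: v for k, v in block.items() if k != "hash"}
        if block["hash"] != block_hash(body):
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

ledger = []
append_block(ledger, {"meter": "A12", "kwh": 3.4})  # hypothetical reading
append_block(ledger, {"meter": "A12", "kwh": 2.9})
```

Altering any stored reading changes its recomputed hash, invalidating that block and every block after it, which is exactly the property that protects grid data against the unauthorized modification the abstract discusses.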
Procedia PDF Downloads 121
4406 Analytical Modelling of the Moment-Rotation Behavior of Top and Seat Angle Connection with Stiffeners
Authors: Merve Sagiroglu
Abstract:
Earthquake-resistant steel structure design requires taking into account the behavior of beam-column connections in addition to the basic properties of the structure, such as material and geometry. Beam-column connections play an important role in the behavior of frame systems. Taking the behavior of the connections into account in the analysis and design of steel frames is important because it represents the actual behavior of the frames, so the behavior of the connections should be well known. The most important force transmitted by connections in the structural system is the moment, and the rotational deformation is customarily expressed as a function of the moment in the connection. The moment-rotation curves are therefore the best expression of the behavior of beam-to-column connections. Designed connections produce various moment-rotation curves according to the elements of the connection and their placement. The only way to obtain such a curve is through real-scale experiments. Experiments on some connections have been carried out and compiled in a databank, and models have been formed using this databank to express connection behavior. In this study, theoretical studies have been carried out to model the real behavior of top and seat angle connections with stiffeners. Two stiffeners in the top and seat angles to increase the stiffness of the connection, and two stiffeners in the beam web to prevent local buckling, are used in this beam-to-column connection. Mathematical models have been developed using the database of beam-to-column connection experiments previously carried out by the authors. Using the test data, the aim was to develop analytical expressions to obtain the moment-rotation curve for connection details for which test data are not available.
The connection has been dimensioned in various shapes and the effect of the dimensions of the connection elements on the behavior has been examined.Keywords: top and seat angle connection, stiffener, moment-rotation curves, analytical study
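A common analytical form for such moment-rotation curves is the Kishi-Chen three-parameter power model, which needs only an initial stiffness, an ultimate moment and a shape parameter. The sketch below is illustrative; the parameter values are hypothetical, not the authors' fitted values:

```python
def moment_rotation(theta, r_ki, m_u, n):
    """Kishi-Chen three-parameter power model:
    M = R_ki * theta / (1 + (theta / theta0)^n)^(1/n), with theta0 = M_u / R_ki.
    The curve starts with slope R_ki and asymptotically approaches M_u."""
    theta0 = m_u / r_ki
    return r_ki * theta / (1.0 + (theta / theta0) ** n) ** (1.0 / n)

# illustrative (hypothetical) connection parameters, not test-derived values
R_KI = 40000.0   # initial connection stiffness, kN*m/rad
M_U = 200.0      # ultimate moment capacity, kN*m
N_SHAPE = 1.5    # shape parameter controlling the knee of the curve

m_large = moment_rotation(0.1, R_KI, M_U, N_SHAPE)   # approaches M_U at large rotation
```

Fitting these three parameters to databank test results is one standard way to generate curves for connection details that were never tested directly.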
Procedia PDF Downloads 1824405 The Other Dreamers: A Study of the Relationship between Returned Migration and Entrepreneurship
Authors: Pascual García, Francisco Ochoa, Jessica Ordoñez
Abstract:
The links between migration and development have been widely analyzed from different perspectives. However, the nexus between entrepreneurship and migration is of more recent interest. Studies in this area have focused on ventures in ethnic enclaves or on transnational businesses that link the communities of origin and destination. Going beyond this perspective, this work analyzes return migration, which has been studied comparatively little but forms part of the theoretical body of migration research. As a result of the European crisis that began in 2007-2008, many Ecuadorians who lived in Europe decided to return to their place of origin. The rise in the price of oil and commodities presented a better panorama in Ecuador than in Europe. Given the magnitude of the return flow, the opportunities for entrepreneurship in Ecuador increased through the accumulation of human capital, social capital, learned skills, and financial capital. This raised interest in the possibility that returned migrants would start businesses in their places of origin, and the following study is the result. A survey of 110 returned migrants was carried out in the south of Ecuador and, using a Probit econometric model, we determined that variables such as geographic area, sex, and education level are not significant, so they are not determinants of entrepreneurship. However, time abroad and skills learned were significant in the decision to start a business.Keywords: entrepreneurship, development, migration, returned migration
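The Probit specification used here models the probability of starting a business as P(y=1|x) = Φ(x'β). As the survey data are not public, the following numpy sketch fits a probit by Fisher scoring on synthetic data with invented coefficients, purely to illustrate the estimator:

```python
import math

import numpy as np

def norm_cdf(z):
    return np.array([0.5 * (1.0 + math.erf(v / math.sqrt(2.0))) for v in z])

def norm_pdf(z):
    return np.exp(-0.5 * z ** 2) / math.sqrt(2.0 * math.pi)

def probit_fit(X, y, iters=25):
    """Probit MLE, P(y=1|x) = Phi(x'beta), fitted by Fisher scoring."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        z = X @ beta
        cdf = np.clip(norm_cdf(z), 1e-10, 1.0 - 1e-10)
        pdf = norm_pdf(z)
        resid = pdf * (y - cdf) / (cdf * (1.0 - cdf))   # generalised residual
        score = X.T @ resid                             # gradient of the log-likelihood
        w = pdf ** 2 / (cdf * (1.0 - cdf))              # Fisher information weights
        info = X.T @ (w[:, None] * X)
        beta = beta + np.linalg.solve(info, score)
    return beta

# synthetic stand-in for the survey: one regressor plus a constant
rng = np.random.default_rng(0)
n = 800
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
beta_true = np.array([0.5, 1.0])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(float)
beta_hat = probit_fit(X, y)
```

A regressor is judged significant or not, as in the abstract, by comparing each coefficient with its standard error from the inverse Fisher information.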
Procedia PDF Downloads 2104404 Bank, Stock Market Efficiency and Economic Growth: Lessons for ASEAN-5
Authors: Tan Swee Liang
Abstract:
This paper estimates the association of bank and stock market efficiency with real per capita GDP growth by examining panel data across three different regions using the Panel-Corrected Standard Errors (PCSE) regression developed by Beck and Katz (1995). Data from five economies in ASEAN (Singapore, Malaysia, Thailand, the Philippines, and Indonesia), five economies in Asia (Japan, China, Hong Kong SAR, South Korea, and India) and seven economies in the OECD (Australia, Canada, Denmark, Norway, Sweden, the United Kingdom, and the United States), between 1990 and 2017, are used. Empirical findings suggest, first, that for Asia-5 a high bank net interest margin means greater bank profitability, hence spurring economic growth. Second, for OECD-7, low bank overhead costs (as a share of total assets) may reflect weak competition and weak investment in providing superior banking services, hence dampening economic growth. Third, the stock market turnover ratio has a negative association with OECD-7 economic growth but a positive association with Asia-5, which suggests the relationship between liquidity and growth is ambiguous. Lastly, for ASEAN-5, high bank overhead costs (as a share of total assets) may suggest expenses have not been channelled efficiently to income-generating activities. One practical implication of the findings is that policy makers should take necessary measures toward financial liberalisation policies that boost growth through the efficiency channel, so that funds are efficiently allocated through the financial system between the financial and real sectors.Keywords: financial development, banking system, capital markets, economic growth
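The Beck-Katz PCSE estimator keeps the OLS coefficients but corrects their covariance for contemporaneous correlation across panel units. A minimal numpy sketch of that correction, on a synthetic balanced panel (the real country data are not reproduced here), might look like:

```python
import numpy as np

def pcse(X, y, n_units, n_periods):
    """OLS coefficients with panel-corrected standard errors (Beck & Katz, 1995).
    Rows are stacked unit-by-unit: unit 0's T periods first, then unit 1, ..."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    resid = y - X @ beta
    E = resid.reshape(n_units, n_periods).T        # T x N residual matrix
    sigma = (E.T @ E) / n_periods                  # N x N contemporaneous covariance
    omega = np.kron(sigma, np.eye(n_periods))      # error covariance of the stacked system
    xtx_inv = np.linalg.inv(X.T @ X)
    cov = xtx_inv @ X.T @ omega @ X @ xtx_inv      # sandwich estimator
    return beta, np.sqrt(np.diag(cov))

# synthetic 5-unit x 28-period panel (mimicking 1990-2017) with a common shock
rng = np.random.default_rng(1)
N, T = 5, 28
X = np.column_stack([np.ones(N * T), rng.normal(size=N * T)])
common = rng.normal(size=T)
err = np.concatenate([0.7 * common + 0.7 * rng.normal(size=T) for _ in range(N)])
y = X @ np.array([1.0, 0.5]) + err
beta_hat, se = pcse(X, y, N, T)
```

The common shock makes the units' errors correlated at each date, exactly the situation PCSE is designed to handle in cross-country growth panels like ASEAN-5.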
Procedia PDF Downloads 1394403 Selective Extraction of Lithium from Native Geothermal Brines Using Lithium-ion Sieves
Authors: Misagh Ghobadi, Rich Crane, Karen Hudson-Edwards, Clemens Vinzenz Ullmann
Abstract:
Lithium, often termed 'white gold', is recognized as the critical energy metal of the 21st century, comparable in importance to coal in the 19th century and oil in the 20th. Current global demand for lithium, estimated at 0.95-0.98 million metric tons (Mt) of lithium carbonate equivalent (LCE) annually in 2024, is projected to rise to 1.87 Mt by 2027 and 3.06 Mt by 2030. Despite anticipated short-term stability in supply and demand, meeting the forecasted 2030 demand will require the lithium industry to develop an additional capacity of 1.42 Mt of LCE annually, exceeding current planned and ongoing efforts. Brine resources constitute nearly 65% of global lithium reserves, underscoring the importance of exploring lithium recovery from underutilized sources, especially geothermal brines. However, conventional lithium extraction from brine deposits faces challenges due to its time-intensive process, low efficiency (30-50% lithium recovery), unsuitability for low lithium concentrations (<300 mg/l), and notable environmental impacts. Addressing these challenges, direct lithium extraction (DLE) methods have emerged as promising technologies capable of economically extracting lithium even from low-concentration brines (>50 mg/l) with high recovery rates (75-98%). However, most studies (70%) have focused on synthetic brines instead of native (natural) brines, with limited application of these approaches in real-world case studies or industrial settings. This study aims to bridge this gap by investigating a geothermal brine sample collected from a real case study site in the UK. A Mn-based lithium-ion sieve (LIS) adsorbent was synthesized and employed to selectively extract lithium from the sample brine. Adsorbents with a Li:Mn molar ratio of 1:1 demonstrated superior lithium selectivity and adsorption capacity. 
Furthermore, the pristine Mn-based adsorbent was modified through doping with transition metals, resulting in enhanced lithium selectivity and adsorption capacity. The modified adsorbent exhibited a higher separation factor for lithium over major co-existing cations such as Ca, Mg, Na, and K, with separation factors exceeding 200. The adsorption behaviour was well described by the Langmuir model, indicating monolayer adsorption, and the kinetics followed a pseudo-second-order mechanism, suggesting chemisorption at the solid surface. Thermodynamically, negative ΔG° values and positive ΔH° and ΔS° values were observed, indicating the spontaneity and endothermic nature of the adsorption process.Keywords: adsorption, critical minerals, DLE, geothermal brines, geochemistry, lithium, lithium-ion sieves
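The Langmuir fit and the separation factor quoted above follow standard definitions, which can be sketched as follows. The equilibrium data and parameter values below are synthetic, chosen only to show the fitting procedure, not the paper's measurements:

```python
import numpy as np

def langmuir_q(ce, qm, kl):
    """Langmuir isotherm: q = qm * KL * Ce / (1 + KL * Ce)."""
    return qm * kl * ce / (1.0 + kl * ce)

def fit_langmuir(ce, q):
    """Linearised fit Ce/q = Ce/qm + 1/(qm*KL); recover (qm, KL) from the line."""
    slope, intercept = np.polyfit(ce, ce / q, 1)
    return 1.0 / slope, slope / intercept

def separation_factor(q_li, c_li, q_m, c_m):
    """alpha(Li/M): ratio of distribution coefficients Kd = q/C at equilibrium."""
    return (q_li / c_li) / (q_m / c_m)

# synthetic equilibrium data generated from assumed parameters qm = 25 mg/g, KL = 0.8 L/mg
ce = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])   # equilibrium concentrations, mg/L
q = langmuir_q(ce, 25.0, 0.8)                     # equilibrium uptakes, mg/g
qm_hat, kl_hat = fit_langmuir(ce, q)
```

With hypothetical uptakes such as 12 mg/g of Li at 3 mg/L against 0.2 mg/g of Na at 800 mg/L, the Li/Na separation factor is far above the 200 threshold reported for the doped sieve.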
Procedia PDF Downloads 484402 Exergy Analysis of a Green Dimethyl Ether Production Plant
Authors: Marcello De Falco, Gianluca Natrella, Mauro Capocelli
Abstract:
CO₂ capture and utilization (CCU) is a promising approach to reduce greenhouse gas (GHG) emissions, and many technologies in this field are attracting attention. However, since CO₂ is a very stable compound, its utilization as a reagent is energy-intensive. As a consequence, it is unclear whether CCU processes allow a net reduction of environmental impacts from a life cycle perspective and whether these solutions are sustainable. Among the tools for quantifying the real environmental benefits of CCU technologies, exergy analysis is the most rigorous from a scientific point of view. The exergy of a system is the maximum obtainable work during a process that brings the system into equilibrium with its reference environment through a series of reversible processes in which the system can only interact with that environment. In other words, exergy is an "opportunity for doing work", and in real processes it is destroyed by entropy generation. Exergy-based analysis is useful for evaluating the thermodynamic inefficiencies of processes, understanding and locating the main consumption of fuels or primary energy, comparing different process configurations, and identifying solutions that reduce the energy penalties of a process. In this work, the exergy analysis of a process for the production of dimethyl ether (DME) from green hydrogen generated through an electrolysis unit and pure CO₂ captured from flue gas is performed. The model simulates the behavior of all units composing the plant (electrolyzer, carbon capture section, DME synthesis reactor, purification step), with the scope of quantifying performance indices based on the Second Law of Thermodynamics and identifying the points of entropy generation. A plant optimization strategy is then proposed to maximize the exergy efficiency.Keywords: green DME production, exergy analysis, energy penalties, exergy efficiency
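The core quantity behind such an analysis is the physical exergy of each stream, ex = (h - h0) - T0(s - s0), evaluated against a dead state (T0, p0). The ideal-gas sketch below (with air-like cp and R as placeholder property values, not the paper's real-fluid model) shows the calculation and the Second-Law efficiency ratio:

```python
import math

def physical_exergy(T, p, T0=298.15, p0=101325.0, cp=1005.0, R=287.0):
    """Specific physical exergy [J/kg] of an ideal-gas stream:
    ex = (h - h0) - T0*(s - s0), relative to the dead state (T0, p0).
    Default cp and R are air-like placeholder values for illustration."""
    dh = cp * (T - T0)
    ds = cp * math.log(T / T0) - R * math.log(p / p0)
    return dh - T0 * ds

def exergy_efficiency(ex_products, ex_inputs):
    """Second-law (exergy) efficiency: useful exergy out over exergy in."""
    return ex_products / ex_inputs

# a stream hotter than, colder than, or compressed above the dead state
ex_hot = physical_exergy(500.0, 101325.0)
ex_cold = physical_exergy(250.0, 101325.0)
ex_compressed = physical_exergy(298.15, 5.0e5)
```

Note that both hot and cold streams carry positive exergy, which is why even sub-ambient streams represent recoverable work potential, and why the difference between exergy in and exergy out locates the entropy-generation points in the plant.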
Procedia PDF Downloads 2594401 Face Recognition Using Eigen Faces Algorithm
Authors: Shweta Pinjarkar, Shrutika Yawale, Mayuri Patil, Reshma Adagale
Abstract:
Face recognition is a technique that can be applied to a wide variety of problems such as image and film processing, human-computer interaction, and criminal identification. This has motivated researchers to develop computational models for identifying faces that are easy and simple to implement. This work demonstrates a face recognition system on an Android device using eigenfaces. The system can be used as the basis for the development of human identity recognition. Test images and training images are taken directly with the camera of the Android device, and the test results show that the system produces high accuracy. The goal is to implement a model for a particular face and distinguish it from a large number of stored faces. The face recognition system detects faces in pictures taken by a web camera or digital camera, and these images are then checked against the training image dataset based on descriptive features. The algorithm can further be extended to recognize the facial expressions of a person, and recognition can be carried out under widely varying conditions such as frontal view and scaled frontal view of subjects with spectacles. The algorithm models real-time varying lighting conditions. The implemented system is able to perform real-time face detection and face recognition, and can give feedback by opening a window with the subject's information from the database and sending an e-mail notification to interested institutions using the Android application.Keywords: face detection, face recognition, eigen faces, algorithm
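The eigenface method is PCA on flattened face images followed by nearest-neighbour matching in the reduced "face space". A minimal numpy sketch of that pipeline, using tiny synthetic arrays in place of camera images, is:

```python
import numpy as np

def train_eigenfaces(faces, k):
    """faces: (n_images, n_pixels) matrix, one flattened image per row.
    Returns the mean face, the top-k eigenfaces and the training projections."""
    mean = faces.mean(axis=0)
    centred = faces - mean
    # rows of Vt are the principal components of the centred data: the eigenfaces
    _, _, Vt = np.linalg.svd(centred, full_matrices=False)
    eigenfaces = Vt[:k]
    weights = centred @ eigenfaces.T         # each training face as a point in face space
    return mean, eigenfaces, weights

def recognise(face, mean, eigenfaces, weights):
    """Project a probe image into face space; return the nearest training index."""
    w = (face - mean) @ eigenfaces.T
    return int(np.argmin(np.linalg.norm(weights - w, axis=1)))

# tiny synthetic stand-in for camera images: three 8x8 "faces", flattened
rng = np.random.default_rng(42)
faces = rng.normal(size=(3, 64))
mean, eig, weights = train_eigenfaces(faces, k=2)
probe = faces[1] + 0.05 * rng.normal(size=64)   # a noisy new shot of person 1
match = recognise(probe, mean, eig, weights)
```

The same projection-and-compare step is what runs on the device after detection crops and normalises the face region.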
Procedia PDF Downloads 3614400 Consequences of Transformation of Modern Monetary Policy during the Global Financial Crisis
Authors: Aleksandra Szunke
Abstract:
Monetary policy is an important pillar of the economy that directly affects the condition of the banking sector. Depending on the strategy, it may both support the functioning of banking institutions and limit their excessively risky activities. The literature includes a large number of publications characterizing the initiatives implemented by central banks during the global financial crisis and the potential effects of non-standard monetary policy instruments. However, the empirical evidence about their effects and real consequences for financial markets is still not conclusive. Even before the escalation of instability, Bernanke, Reinhart, and Sack (2004) analyzed the effectiveness of various unconventional monetary tools in lowering long-term interest rates in the United States and Japan. The results largely confirmed the effectiveness of the zero-interest-rate policy and quantitative easing (QE) in reducing long-term interest rates. Japan, considered the precursor of QE policy, also conducted research on the consequences of the non-standard instruments implemented to restore the country's financial stability. Although the literature on the effectiveness of quantitative easing in Japan is extensive, it does not uniquely establish whether it brought permanent effects. The main aim of the study is to identify the implications of the non-standard monetary policy implemented by selected central banks (the Federal Reserve System, the Bank of England and the European Central Bank), paying particular attention to the consequences in three areas: the size of the money supply, financial markets, and the real economy.Keywords: consequences of modern monetary policy, quantitative easing policy, banking sector instability, global financial crisis
Procedia PDF Downloads 4804399 The Underestimation of Cultural Risk in the Execution of Megaprojects
Authors: Alan Walsh, Peter Walker, Michael Ellis
Abstract:
There is a real danger that both practitioners and researchers considering risks associated with megaprojects ignore or underestimate the impacts of cultural risk. The paper investigates the potential impacts of a failure to achieve cultural unity between the principal actors executing a megaproject. The principal relationships include those between the principal contractors and the project stakeholders, or between the project stakeholders and their principal advisors, Western consultants. This study confirms that cultural dissonance between these parties can delay or disrupt megaproject execution and examines why cultural issues should be prioritized as a significant risk factor in megaproject delivery. The paper addresses the practical impacts and potential mitigation measures that may reduce cultural dissonance during a megaproject's delivery. The information is retrieved from ongoing case studies of live infrastructure megaprojects in Europe and the Middle East's GCC states, from the Western consultants' perspective. The collaborating researchers each have at least 30 years of construction experience and are engaged in architecture, project management, and contracts management, dealing with megaprojects in Europe or the GCC. After examining the cultural interfaces they have observed during the execution of megaprojects, they conclude that, globally, culture significantly influences efficient delivery. The study finds that cultural risk is ever-present where different nationalities co-manage megaprojects and that cultural conflict poses a real threat to the timely delivery of megaprojects. The study indicates that the higher the cultural distance between the principal actors, the more pronounced the risk, with the risk of cultural dissonance more prominent in GCC megaprojects. 
The findings support a more culturally aware and cohesive team approach and recommend cross-cultural training to mitigate the effects of cultural disparity.Keywords: cultural risk underestimation, cultural distance, megaproject characteristics, megaproject execution
Procedia PDF Downloads 1074398 Design of Seismically Resistant Tree-Branching Steel Frames Using Theory and Design Guides for Eccentrically Braced Frames
Authors: R. Gary Black, Abolhassan Astaneh-Asl
Abstract:
The International Building Code (IBC) and the California Building Code (CBC) both recognize four basic types of steel seismic-resistant frames: moment frames, concentrically braced frames, shear walls, and eccentrically braced frames. Based on specified geometries and detailing, the seismic performance of these steel frames is well understood. In 2011, the authors designed an innovative steel braced frame system with tapering members in the general shape of a branching tree as a seismic retrofit solution for an existing four-story "lift-slab" building. Located in the seismically active San Francisco Bay Area of California, a frame of this configuration, not covered by the governing codes, would typically require model or full-scale testing to obtain jurisdiction approval. This paper describes how the theories, protocols, and code requirements of eccentrically braced frames (EBFs) were employed to satisfy the 2009 International Building Code (IBC) and the 2010 California Building Code (CBC) for seismically resistant steel frames and to permit construction of these nonconforming geometries.Keywords: eccentrically braced frame, lift slab construction, seismic retrofit, shear link, steel design
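EBF design hinges on the link element, which AISC 341 classifies by length relative to Mp/Vp. The sketch below applies those standard limits; the section properties are illustrative values for a mid-size wide-flange shape, not the members of the retrofitted building:

```python
def classify_link(e, fy, d, tf, tw, zx):
    """Classify an EBF link by the length limits used in AISC 341:
    shear link if e <= 1.6*Mp/Vp, flexural if e >= 2.6*Mp/Vp, else intermediate.
    Vp = 0.6*Fy*(d - 2*tf)*tw, Mp = Fy*Zx. Units must be consistent (here N, mm)."""
    vp = 0.6 * fy * (d - 2.0 * tf) * tw    # shear yield strength of the web
    mp = fy * zx                           # plastic moment capacity
    ratio = e * vp / mp
    if ratio <= 1.6:
        return "shear link"
    if ratio >= 2.6:
        return "flexural link"
    return "intermediate link"

# illustrative section properties (roughly a mid-size W-shape, Fy = 345 MPa)
SECTION = dict(fy=345.0, d=455.0, tf=13.3, tw=8.0, zx=1.28e6)
short_link = classify_link(800.0, **SECTION)
```

Short (shear-yielding) links dissipate energy most reliably, which is why the link classification drives the detailing checks the authors borrowed for the tree-branching geometry.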
Procedia PDF Downloads 4724397 Graph Neural Network-Based Classification for Disease Prediction in Health Care Heterogeneous Data Structures of Electronic Health Record
Authors: Raghavi C. Janaswamy
Abstract:
In the healthcare sector, heterogeneous data elements such as patients, diagnoses, symptoms, conditions, observation text from physician notes, and prescriptions form the essentials of the Electronic Health Record (EHR). The data, in the form of clear text and images, are stored or processed in a relational format in most systems. However, the intrinsic structural restrictions and complex joins of relational databases limit their widespread utility. In this regard, the design and development of realistic mappings and deep connections as real-time objects offer unparalleled advantages. Herein, a graph neural network-based classification of EHR data has been developed. Patient conditions have been predicted as a node classification task using graph-based open-source EHR data from the Synthea database, stored in TigerGraph. The Synthea dataset is leveraged because it is voluminous and closely represents real-world data. The graph model is built from the heterogeneous EHR data using Python modules: pyTigerGraph to get nodes and edges from the TigerGraph database, PyTorch to tensorize the nodes and edges, and PyTorch Geometric (PyG) to train the Graph Neural Network (GNN), adopting self-supervised learning techniques with autoencoders to generate the node embeddings and eventually perform the node classifications using those embeddings. The model predicts patient conditions ranging from common to rare. The outcome is expected to open up opportunities for data querying toward better predictions and accuracy.Keywords: electronic health record, graph neural network, heterogeneous data, prediction
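The propagation rule at the heart of such a GNN can be shown without the PyTorch-Geometric stack. The numpy sketch below implements a single graph-convolution layer on a toy patient graph with invented adjacency and features; the full pipeline in the paper trains stacked layers of this kind with learned weights:

```python
import numpy as np

def gcn_layer(A, X, W):
    """One graph-convolution layer: H = ReLU(D^-1/2 (A + I) D^-1/2 X W).
    Each node's new features mix its own features with its neighbours'."""
    A_hat = A + np.eye(A.shape[0])                 # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]   # symmetric normalisation
    return np.maximum(0.0, A_norm @ X @ W)

# toy graph: 4 patient nodes, edges between patients sharing conditions (invented)
A = np.array([[0., 1., 1., 0.],
              [1., 0., 0., 1.],
              [1., 0., 0., 1.],
              [0., 1., 1., 0.]])
rng = np.random.default_rng(3)
X = rng.normal(size=(4, 3))    # per-node feature vectors (e.g. encoded observations)
W = rng.normal(size=(3, 2))    # layer weights (random here; learned in training)
H = gcn_layer(A, X, W)
```

Node classification then attaches a softmax head to the final layer's embeddings, so each patient node receives a predicted condition label.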
Procedia PDF Downloads 874396 The Impact of Artificial Intelligence on Higher Education in Latin America
Authors: Luis Rodrigo Valencia Perez, Francisco Flores Aguero, Gibran Aguilar Rangel
Abstract:
Artificial Intelligence (AI) is rapidly transforming diverse sectors, and higher education in Latin America is no exception. This article explores the impact of AI on higher education institutions in the region, highlighting the imperative need for well-trained teachers in emerging technologies and a cultural shift towards the adoption and efficient use of these tools. AI offers significant opportunities to improve learning personalization, optimize administrative processes, and promote more inclusive and accessible education. However, the effectiveness of its implementation depends largely on the preparation and willingness of teachers to integrate these technologies into their pedagogical practices. Furthermore, it is essential that Latin American countries develop and implement public policies that encourage the adoption of AI in the education sector, thus ensuring that institutions can compete globally. Policies should focus on the continuous training of educators, investment in technological infrastructure, and the creation of regulatory frameworks that promote innovation and the ethical use of AI. Only through a comprehensive and collaborative approach will it be possible to fully harness the potential of AI to transform higher education in Latin America, thereby boosting the region's development and competitiveness on the global stage.Keywords: artificial intelligence (AI), higher education, teacher training, public policies, latin america, global competitiveness
Procedia PDF Downloads 304395 Human-Centred Data Analysis Method for Future Design of Residential Spaces: Coliving Case Study
Authors: Alicia Regodon Puyalto, Alfonso Garcia-Santos
Abstract:
This article presents a method to analyze the use of indoor spaces based on data analytics obtained from in-built digital devices. The study uses the data generated by in-place devices, such as smart locks, Wi-Fi routers, and electrical sensors, to gain additional insights on space occupancy, user behaviour, and comfort. Those devices, originally installed to facilitate remote operations, report data through the internet that the research uses to analyze information on the real-time human use of spaces. Using an in-place Internet of Things (IoT) network enables a faster, more affordable, seamless, and scalable solution for analyzing building interior spaces without incorporating external data collection systems such as sensors. The methodology is applied to a real case study of coliving: a residential building of 3000 m², 7 floors, and 80 users in the centre of Madrid. The case study applies the method to classify IoT devices and to assess, clean, and analyze the collected data based on the analysis framework. The information is collected remotely through the devices' different platforms. The first step is to curate the data and understand what insights each device can provide according to the objectives of the study; this generates an analysis framework that can be scaled to future building assessments even beyond the residential sector. The method adjusts the parameters to be analyzed to the dataset available in the IoT network of each building. The research demonstrates how human-centred data analytics can improve the future spatial design of indoor spaces.Keywords: in-place devices, IoT, human-centred data-analytics, spatial design
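One concrete example of the occupancy insight described above is deriving a room's occupancy profile from smart-lock entry/exit events. The sketch below uses an invented event log, not the Madrid building's data, and simply accumulates entries and exits over time:

```python
def occupancy_profile(events):
    """events: (iso_timestamp, delta) pairs, delta = +1 for an entry, -1 for an exit.
    ISO-8601 timestamps sort correctly as plain strings, so no date parsing is needed.
    Returns the running occupancy after each event and the peak occupancy."""
    occ, peak, profile = 0, 0, []
    for ts, delta in sorted(events):
        occ += delta
        peak = max(peak, occ)
        profile.append((ts, occ))
    return profile, peak

# invented smart-lock log for one morning in a single shared room
events = [
    ("2021-03-01T08:00", +1),
    ("2021-03-01T08:30", +1),
    ("2021-03-01T09:00", -1),
    ("2021-03-01T09:15", +1),
    ("2021-03-01T10:00", -1),
]
profile, peak = occupancy_profile(events)
```

Aggregating such profiles per room and per hour is the kind of curated metric the analysis framework would feed back into spatial design decisions.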
Procedia PDF Downloads 1974394 Solving a Micromouse Maze Using an Ant-Inspired Algorithm
Authors: Rolando Barradas, Salviano Soares, António Valente, José Alberto Lencastre, Paulo Oliveira
Abstract:
This article reviews Ant Colony Optimization, a nature-inspired algorithm, and its implementation in the Scratch/mBlock programming environment. Ant Colony Optimization belongs to the family of swarm intelligence algorithms, a subset of biologically inspired algorithms. The starting problem is a maze in which one needs to find the path to the center and return to the starting position, similar to an ant looking for a path to a food source and returning to its nest. Starting with the implementation of a simple wall-follower simulator, the proposed solution uses a dynamic graphical interface that allows young students to observe the ants' movement while the algorithm optimizes the routes to the maze's center. Details such as interface usability, data structures, and the conversion of algorithmic language to Scratch syntax were addressed during this implementation. This gives young students an easier way to understand the computational concepts of sequences, loops, parallelism, data, events, and conditionals, as these are used throughout the implemented algorithms. Future work includes simulations with real contest mazes and two different pheromone-update methods, together with a comparison against the optimized results of the winners of each edition of the contest. It will also include the creation of a digital twin relating the virtual simulator to a real micromouse in a full-size maze. The first test results show that the algorithm found the same optimized solutions as the winners of each edition of the Micromouse contest, making this a good solution for maze pathfinding.Keywords: nature inspired algorithms, scratch, micromouse, problem-solving, computational thinking
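The pheromone mechanism the students observe can be sketched outside Scratch as well. The following Python sketch runs a simplified ACO on a small invented weighted graph (node names and costs are hypothetical, not a contest maze): ants pick edges with probability proportional to pheromone and inverse distance, pheromone evaporates, and completed paths deposit pheromone in proportion to their quality:

```python
import random

def aco_shortest_path(graph, start, goal, n_ants=20, n_iters=30,
                      alpha=1.0, beta=2.0, rho=0.5, q=1.0, seed=0):
    """Ant Colony Optimization on a directed weighted graph: graph[u][v] = edge cost."""
    rng = random.Random(seed)
    tau = {(u, v): 1.0 for u in graph for v in graph[u]}   # pheromone per edge
    best_path, best_cost = None, float("inf")
    for _ in range(n_iters):
        completed = []
        for _ in range(n_ants):
            node, path, visited = start, [start], {start}
            while node != goal:
                choices = [v for v in graph[node] if v not in visited]
                if not choices:            # dead end: abandon this ant
                    path = None
                    break
                weights = [tau[(node, v)] ** alpha * (1.0 / graph[node][v]) ** beta
                           for v in choices]
                node = rng.choices(choices, weights=weights)[0]
                path.append(node)
                visited.add(node)
            if path:
                cost = sum(graph[a][b] for a, b in zip(path, path[1:]))
                completed.append((path, cost))
                if cost < best_cost:
                    best_path, best_cost = path, cost
        for edge in tau:                   # evaporation
            tau[edge] *= (1.0 - rho)
        for path, cost in completed:       # deposit: shorter paths get more pheromone
            for a, b in zip(path, path[1:]):
                tau[(a, b)] += q / cost
    return best_path, best_cost

graph = {"S": {"A": 1.0, "B": 2.0, "G": 5.0},
         "A": {"G": 1.0}, "B": {"G": 2.0}, "G": {}}
best_path, best_cost = aco_shortest_path(graph, "S", "G")
```

Over iterations the pheromone concentrates on the cheapest route, which is the convergence behaviour the dynamic interface visualises for the maze.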
Procedia PDF Downloads 1264393 Study on the DC Linear Stepper Motor to Industrial Applications
Authors: Nolvi Francisco Baggio Filho, Roniele Belusso
Abstract:
Many industrial processes require a precise linear motion. Usually, this movement is achieved with rotary motors combined with electrical control systems and mechanical systems such as gears, pulleys, and bearings. Other types of devices are based on linear motors, where the linear motion is obtained directly. The linear stepper motor (MLP) is an excellent solution for industrial applications that require precise positioning and high speed. This study presents an MLP formed by a static linear structure of ferromagnetic material and a mover structure on which three coils are mounted. Mechanical suspension systems allow linear movement between the static and mover parts while maintaining a constant air gap. The operating principle is based on the tendency of the magnetic flux to align through the path of least reluctance. The force is proportional to the intensity of the electric current, and the speed is proportional to the frequency of excitation of the coils. The study of this device is based on numerical and experimental analyses to verify the relationship between the applied electric current and the planar force developed. In addition, the magnetic field in the air gap region is also monitored.Keywords: linear stepper motor, planar traction force, magnetic reluctance, industry applications
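The two proportionalities stated above follow from the standard variable-reluctance relations, sketched below with hypothetical parameter values (the authors' geometry and winding data are not reproduced):

```python
def reluctance_force(current_a, dL_dx):
    """Traction force of a variable-reluctance actuator: F = 0.5 * i^2 * dL/dx,
    where dL/dx [H/m] is the change of coil inductance with mover position."""
    return 0.5 * current_a ** 2 * dL_dx

def linear_speed(step_m, freq_hz):
    """Mover speed when the motor advances one step per excitation pulse."""
    return step_m * freq_hz

# hypothetical values: 2 A excitation, 0.05 H/m inductance gradient,
# 0.5 mm step length at 200 steps per second
force_n = reluctance_force(2.0, 0.05)        # N
speed_m_s = linear_speed(0.0005, 200.0)      # m/s
```

Note the force scales with the square of the current (quadrupling when the current doubles), while the speed tracks the excitation frequency linearly, which is what the numerical and experimental analyses set out to verify.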
Procedia PDF Downloads 5014392 Entropy Measures on Neutrosophic Soft Sets and Its Application in Multi Attribute Decision Making
Authors: I. Arockiarani
Abstract:
The focus of this paper is to furnish entropy measures for a neutrosophic set and a neutrosophic soft set, as measures of the uncertainty that permeates discourse and systems. Various characterizations of the entropy measures are derived. Further, we exemplify the concept by applying entropy to various real-time decision-making problems.Keywords: entropy measure, Hausdorff distance, neutrosophic set, soft set
Procedia PDF Downloads 2574391 Simultaneous Removal of Phosphate and Ammonium from Eutrophic Water Using Dolochar Based Media Filter
Authors: Prangya Ranjan Rout, Rajesh Roshan Dash, Puspendu Bhunia
Abstract:
With the aim of enhancing nutrient (ammonium and phosphate) removal from eutrophic wastewater at reduced cost, a novel media-based multistage bio-filter with drop aeration was developed in this work. The bio-filter was packed with a discarded sponge iron industry by-product, 'dolochar', primarily to remove phosphate via a physicochemical approach. In the multistage bio-filter, drop aeration was achieved by the gravity-fed wastewater percolating through the filter media and dropping down from stage to stage. Ammonium present in the wastewater was adsorbed by the filter media and the biomass grown on the filter media and was subsequently converted to nitrate through biological nitrification under the aerobic conditions created by the drop aeration. The performance of the bio-filter in treating real eutrophic wastewater was monitored for a period of about 2 months. The influent phosphate concentration was in the range of 16-19 mg/L, and the ammonium concentration was in the range of 65-78 mg/L. The average nutrient removal efficiencies observed during the study period were 95.2% for phosphate and 88.7% for ammonium, with mean final effluent concentrations of 0.91 and 8.74 mg/L, respectively. Furthermore, the subsequent release of nutrients from the saturated filter media after completion of the treatment process was studied, and thin-layer funnel analytical test results reveal the slow nutrient-release nature of spent dolochar, recommending its potential agricultural application. Thus, the bio-filter displays immense potential for treating real eutrophic wastewater, significantly decreasing the level of nutrients, keeping the effluent nutrient concentrations within the permissible limits and, more importantly, facilitating the conversion of waste materials into usable ones.Keywords: ammonium removal, phosphate removal, multi-stage bio-filter, dolochar
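The removal efficiencies quoted above follow from the standard definition, sketched here. The influent values below are assumptions chosen within the reported 16-19 and 65-78 mg/L ranges so that the arithmetic reproduces the stated averages; they are not the study's measured influent means:

```python
def removal_efficiency(c_in, c_out):
    """Percent removal across the bio-filter: 100 * (Cin - Cout) / Cin."""
    return 100.0 * (c_in - c_out) / c_in

# assumed influent concentrations within the reported ranges, mg/L
phosphate_eff = removal_efficiency(19.0, 0.91)   # ~95.2 %
ammonium_eff = removal_efficiency(77.0, 8.74)    # ~88.7 %
```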
Procedia PDF Downloads 194