Search results for: predictive models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7254

4674 The Hallmarks of War Propaganda: The Case of Russia-Ukraine Conflict

Authors: Veronika Solopova, Oana-Iuliana Popescu, Tim Landgraf, Christoph Benzmüller

Abstract:

Beginning in 2014, slowly building geopolitical tensions in Eastern Europe led to a full-blown conflict between the Russian Federation and Ukraine that generated an unprecedented amount of news articles and social media data, reflecting the opposing ideologies and narratives that form both the background and the essence of the ongoing war. These polarized informational campaigns have led to countless mutual accusations of misinformation and fake news, shaping an atmosphere of confusion and mistrust for many readers all over the world. In this study, we analyzed scraped news articles from Ukrainian, Russian, Romanian and English-speaking news outlets on the eve of 24 February 2022 and on day five of the conflict (28 February) to see how the media influenced and mirrored the changes in public opinion. We also contrast the sources opposing and supporting the stance of the Russian government in the Ukrainian, Russian and Romanian media spaces. In a data-driven way, we describe how the narratives are spread throughout Eastern and Central Europe. We present predictive linguistic features surrounding war propaganda. Our results indicate strong similarities in the rhetorical strategies of pro-Kremlin media in both Ukraine and Russia, which, while relatively neutral in surface structure, use aggressive vocabulary. This suggests that automatic propaganda identification systems have to be tailored to each new case, as they have to rely on situationally specific words. Both Ukrainian and Russian outlets lean towards strongly opinionated news, pointing towards the use of war propaganda to achieve strategic goals.

Keywords: linguistic, news, propaganda, Russia, Ukraine

Procedia PDF Downloads 114
4673 Understanding Space, Citizenship and Assimilation in the Context of Migration in North-Eastern Region of India

Authors: Mukunda Upadhyay, Rakesh Mishra, Rajni Singh

Abstract:

This paper is an attempt to understand the abstract concepts of space, citizenship and migration in the north-eastern region of India. In the twentieth century, researchers and thinkers related citizenship and migration to national models. The national models of jus soli and jus sanguinis provide space and rights only to those who are either born in the territory or share a common descent. Space ensures rights, and citizenship ensures space; for many migrants, citizenship is the ultimate goal in the host country. Migrants with the intention of settling down in the destination region begin to adapt and assimilate in their new homes. In many cases, migrants may also retain the culture and values of their place of origin. In such cases, the difference in the degree of retention and assimilation may determine the chances of conflict between the host society and migrants. Such conflicts are fueled by the political aspirations of a few individuals on both sides. The north-eastern part of India is a mixed community, with many linguistic and religious groups sharing a common geopolitical space. Every community has its own unique history, culture and identity. Since the latter half of the nineteenth century, this region has been experiencing both internal migration from other states and immigration from the neighboring countries, which has resulted in the interaction of various cultures and ethnicities. Over time, migration has taken a bitter form, with problems concentrated around acquiring rights through space and citizenship. Political tensions resulting from host hostility and migrant resistance have disturbed the social order in a few areas. In order to resolve these issues, proper intervention has to be carried out with the involvement of the national and international community.

Keywords: space, citizenship, assimilation, migration, rights

Procedia PDF Downloads 412
4672 Credit Card Fraud Detection with Ensemble Model: A Meta-Heuristic Approach

Authors: Gong Zhilin, Jing Yang, Jian Yin

Abstract:

The purpose of this paper is to develop a novel system for credit card fraud detection based on sequential modeling of data using hybrid deep learning models. The proposed model encapsulates five major phases: pre-processing, imbalanced-data handling, feature extraction, optimal feature selection, and fraud detection with an ensemble classifier. The collected raw data (input) is pre-processed to enhance data quality by alleviating missing data, noisy data and null values. The pre-processed data are class-imbalanced in nature and are therefore handled with a K-means clustering-based SMOTE model. From the class-balanced data, the most relevant features are extracted, including improved Principal Component Analysis (PCA) features, statistical features (mean, median, standard deviation) and higher-order statistical features (skewness and kurtosis). Among the extracted features, the most optimal subset is selected with the Self-improved Arithmetic Optimization Algorithm (SI-AOA), a conceptual improvement of the standard Arithmetic Optimization Algorithm. The detection stage employs deep learning models: Long Short-Term Memory (LSTM), Convolutional Neural Network (CNN), and an optimized Quantum Deep Neural Network (QDNN). The LSTM and CNN are trained with the selected optimal features, and their outcomes enter as input to the optimized QDNN, which provides the final detection outcome. Since the QDNN is the ultimate detector, its weight function is fine-tuned with the Self-improved Arithmetic Optimization Algorithm (SI-AOA).
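
As an illustration of the balancing and feature-extraction stages described in this abstract, the sketch below uses standard libraries as stand-ins (imbalanced-learn's SMOTE in place of the paper's K-means-clustering-based variant, plain PCA in place of the improved PCA); the SI-AOA selector and the LSTM/CNN/QDNN ensemble are not reproduced.

```python
# Minimal sketch of class balancing plus feature extraction on synthetic data.
# SMOTE and PCA stand in for the paper's K-means-SMOTE and "improved PCA";
# imbalanced-learn also provides KMeansSMOTE, which is closer to the described method.
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from imblearn.over_sampling import SMOTE

# Synthetic stand-in for a pre-processed transaction table (fraud is the rare class).
X, y = make_classification(n_samples=5000, n_features=20, weights=[0.97, 0.03],
                           random_state=0)

# Oversample the minority (fraud) class to balance the labels.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)

# PCA components plus simple statistical and higher-order statistical features.
pca_feats = PCA(n_components=5, random_state=0).fit_transform(X_bal)
stat_feats = np.column_stack([X_bal.mean(axis=1), np.median(X_bal, axis=1),
                              X_bal.std(axis=1),
                              skew(X_bal, axis=1), kurtosis(X_bal, axis=1)])
features = np.hstack([pca_feats, stat_feats])
print(features.shape, y_bal.mean())  # combined feature matrix, balanced labels
```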

Keywords: credit card, data mining, fraud detection, money transactions

Procedia PDF Downloads 123
4671 The Determinants of Corporate Hedging Strategy

Authors: Ademola Ajibade

Abstract:

Previous studies have explored several rationales for hedging strategies, but the evidence provided by these studies remains ambiguous. Using a hand-collected dataset of 2,460 observations of non-financial firms in eight African countries covering 2013-2022, this paper investigates the determinants and extent of corporate hedge use. In particular, it focuses on the link between country-specific conditions and the corporate hedging behaviour of firms. To our knowledge, this is the first African study investigating the association between country-specific factors and corporate hedging policy. The evidence, based on both univariate and multivariate analyses, reveals that country-level corruption and government quality are important indicators of the decision and extent of hedge use among African firms. However, the connection between country-specific factors and corporate hedge use is stronger for firms located in highly corrupt countries. This suggests that firms located in corrupt countries are more motivated to hedge because of the larger exposure they face. In addition, we test the risk management theories and observe that CEOs' educational qualifications and experience shape corporate hedging behaviour. We implement lagged variables in a panel data setting to address endogeneity concerns and include an interaction term between governance indices and firm-specific variables to test for robustness. Overall, our findings reveal that institutional factors shape risk management decisions and have predictive power in explaining corporate hedging strategy.
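
A minimal sketch of the kind of panel specification the abstract describes, with a lagged firm-level regressor and a governance-by-firm-characteristic interaction, is shown below; the variable names and synthetic data are illustrative only, and the estimator (pooled OLS with firm fixed effects) merely stands in for the authors' panel setup.

```python
# Sketch of a panel regression with a lagged regressor (to ease endogeneity
# concerns) and a corruption x firm-size interaction. Data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
firms, years = range(50), range(2013, 2023)
df = pd.DataFrame([(f, t) for f in firms for t in years], columns=["firm", "year"])
df["corruption"] = rng.normal(size=len(df))        # country-level corruption index
df["size"] = rng.normal(size=len(df))              # firm size (e.g. log assets)
df["hedge"] = (0.4 * df["corruption"] + 0.2 * df["size"]
               + 0.1 * df["corruption"] * df["size"] + rng.normal(size=len(df)))

# Lag the firm-level regressor within each firm.
df["size_lag"] = df.groupby("firm")["size"].shift(1)

# Pooled OLS with firm fixed effects (C(firm)) and an interaction term;
# clustered or dedicated panel estimators could be substituted here.
model = smf.ols("hedge ~ corruption * size_lag + C(firm)", data=df.dropna()).fit()
print(model.params.filter(like="corruption"))
```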

Keywords: corporate hedging, governance quality, corruption, derivatives

Procedia PDF Downloads 84
4670 Prediction of Damage to Cutting Tools in an Earth Pressure Balance Tunnel Boring Machine (EPB TBM): A Case Study of the L3 Guadalajara Metro Line (Mexico)

Authors: Silvia Arrate, Waldo Salud, Eloy París

Abstract:

The wear of cutting tools is one of the most decisive elements when planning tunneling works, programming maintenance stops and keeping an optimum stock of spare parts during the evolution of the excavation. Being able to predict the behavior of cutting tools can give a very competitive advantage in terms of costs and excavation performance, optimized to the needs of the TBM itself. The remarkable evolution of data science in recent years makes it possible to analyse the key and most critical machine parameters in order to know how the cutting head is performing against the excavated ground. Taking Metro Line 3 of Guadalajara (Mexico) as a case study, this work assesses the feasibility of using Specific Energy versus data science applied to parameters such as torque, penetration and contact force, among others, to predict the behavior and status of the cutting tools. The results obtained through both techniques are analyzed and verified as a function of the wear and the field situations observed in the excavation in order to determine their effectiveness regarding predictive capacity. In conclusion, the possibilities and improvements offered by the application of digital tools and the programming of calculation algorithms for the analysis of cutting head element wear, compared to purely empirical methods, allow early detection of possible damage to cutting tools, which is reflected in optimized excavation performance and a significant improvement in costs and deadlines.
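
For context, one widely used field measure of Specific Energy for rotary excavation (after Teale) combines thrust, torque, rotation speed and advance rate; the sketch below computes such an indicator from TBM log columns, whose names and values are assumptions rather than the project's actual database fields.

```python
# Sketch of a specific-energy indicator computed from TBM operating parameters
# (after Teale's rotary-excavation formulation), as one possible input to be
# contrasted with a data-driven wear model. Column names and values are illustrative.
import numpy as np
import pandas as pd

def specific_energy(thrust_kN, torque_kNm, rpm, advance_m_per_min, diameter_m):
    """Specific energy in MJ/m3: SE = F/A + 2*pi*N*T / (A*u)."""
    area = np.pi * diameter_m ** 2 / 4.0                     # excavated face area, m2
    n = rpm / 60.0                                           # rev/s
    u = advance_m_per_min / 60.0                             # advance rate, m/s
    return thrust_kN / area / 1e3 + 2 * np.pi * n * torque_kNm / (area * u) / 1e3

# Example ring-by-ring log (synthetic values).
log = pd.DataFrame({"thrust_kN": [12000, 15000], "torque_kNm": [2500, 3100],
                    "rpm": [1.5, 1.6], "advance_m_per_min": [0.04, 0.03],
                    "diameter_m": [6.2, 6.2]})
log["SE_MJ_m3"] = specific_energy(log.thrust_kN, log.torque_kNm, log.rpm,
                                  log.advance_m_per_min, log.diameter_m)
print(log[["SE_MJ_m3"]])
```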

Keywords: cutting tools, data science, prediction, TBM, wear

Procedia PDF Downloads 42
4669 Analytical Modelling of the Moment-Rotation Behavior of Top and Seat Angle Connection with Stiffeners

Authors: Merve Sagiroglu

Abstract:

Earthquake-resistant steel structure design requires taking into account the behavior of beam-column connections in addition to the basic properties of the structure, such as material and geometry. Beam-column connections play an important role in the behavior of frame systems, so accounting for connection behavior in the analysis and design of steel frames is important for representing the actual response of the frame; the behavior of the connections should therefore be well known. The most important action transmitted by connections in the structural system is the moment, and the rotational deformation is customarily expressed as a function of the moment in the connection. Moment-rotation curves are therefore the best expression of the behaviour of beam-to-column connections. Different connection designs produce different moment-rotation curves, depending on the connection elements and their placement. The only direct way to obtain such a curve is through full-scale experiments. Experiments on some connections have been carried out and collected in a databank, and models have been formed using this databank to express connection behavior. In this study, theoretical work has been carried out to model the real behavior of top and seat angle connections with stiffeners. Two stiffeners in the top and seat angles increase the stiffness of the connection, and two stiffeners in the beam web prevent local buckling in this beam-to-column connection. Mathematical models have been developed using the database of beam-to-column connection experiments previously carried out by the authors. Using the test data, analytical expressions have been developed to obtain the moment-rotation curve for connection details whose test data are not available. The connection has been dimensioned in various ways, and the effect of the dimensions of the connection elements on the behavior has been examined.
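
One common analytical form for semi-rigid moment-rotation curves is the three-parameter power model (after Kishi and Chen); the sketch below fits it to synthetic "databank" points as an illustration of how such an expression can be calibrated, not as the specific expressions derived in the paper.

```python
# Sketch: fitting a three-parameter power model, a common analytical form for
# semi-rigid connection moment-rotation curves, to test data. The synthetic
# points below stand in for databank measurements.
import numpy as np
from scipy.optimize import curve_fit

def power_model(theta, Rki, M0, n):
    """M(theta) = Rki*theta / (1 + (theta/theta0)^n)^(1/n), with theta0 = M0/Rki."""
    theta0 = M0 / Rki
    return Rki * theta / (1.0 + (theta / theta0) ** n) ** (1.0 / n)

theta = np.linspace(1e-4, 0.05, 40)                     # rotation, rad
M_test = power_model(theta, 25000.0, 120.0, 1.6)        # "measured" moments, kNm
M_test += np.random.default_rng(1).normal(0, 2.0, theta.size)

params, _ = curve_fit(power_model, theta, M_test, p0=[20000.0, 100.0, 2.0])
Rki, M0, n = params
print(f"initial stiffness ~ {Rki:.0f} kNm/rad, ultimate moment ~ {M0:.1f} kNm, shape n ~ {n:.2f}")
```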

Keywords: top and seat angle connection, stiffener, moment-rotation curves, analytical study

Procedia PDF Downloads 171
4668 Supply Network Design for Production-Distribution of Fish: A Sustainable Approach Using Mathematical Programming

Authors: Nicolás Clavijo Buriticá, Laura Viviana Triana Sanchez

Abstract:

This research addresses a productive context associated with the aquaculture industry in northern Tolima, Colombia, specifically in the town of Lerida. Strategic aspects of the fish production-distribution chain are addressed, especially those related to the supply network design of an association devoted to the cultivation, farming, processing and marketing of fish. The research adopts a particular approach to Supply Chain Management (SCM) that directs management objectives towards system sustainability, known as Sustainable Supply Chain Management (SSCM). The network design of the fish production-distribution system is obtained for the case study by two mathematical programming models that aim to maximize the economic benefits of the chain and minimize total supply chain costs, taking into account restrictions to protect the environment and their implications on system productivity. The results of the mathematical models, validated in the productive situation of the partnership under study (Asopiscinorte), show the variation in the number of open or closed locations in the supply network that determines the final network configuration. For the case study, the proposed configuration generates an increase of 31.5% in the partial productivity of storage and processing, in addition to possible favorable long-term implications, such as whether or not to serve a given consumer area in an agile way, whether or not to increase the level of sales in several areas, and the ability to deliver work in progress and finished goods to the various actors in the chain in the required quantity, time and cost.
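
The open/close location decisions at the heart of such a network design can be written as a small facility-location program; the sketch below uses PuLP with illustrative costs, capacities and demands that are placeholders, not the Asopiscinorte data or the paper's full sustainable formulation.

```python
# Minimal facility-location sketch of the open/close decisions behind a
# production-distribution network design. All numbers are illustrative.
from pulp import LpProblem, LpVariable, LpMinimize, lpSum, LpBinary, value

plants = ["Lerida", "Armero", "Venadillo"]           # candidate processing locations
zones = ["Z1", "Z2"]                                 # consumer areas
fixed = {"Lerida": 800, "Armero": 600, "Venadillo": 500}      # opening cost
cap = {"Lerida": 120, "Armero": 80, "Venadillo": 70}          # tons/period
demand = {"Z1": 90, "Z2": 60}
ship = {(p, z): 2 + i + j for i, p in enumerate(plants) for j, z in enumerate(zones)}

prob = LpProblem("fish_network_design", LpMinimize)
open_ = {p: LpVariable(f"open_{p}", cat=LpBinary) for p in plants}
x = {(p, z): LpVariable(f"x_{p}_{z}", lowBound=0) for p in plants for z in zones}

prob += lpSum(fixed[p] * open_[p] for p in plants) + lpSum(ship[k] * x[k] for k in x)
for z in zones:                                      # meet each zone's demand
    prob += lpSum(x[p, z] for p in plants) == demand[z]
for p in plants:                                     # capacity available only if opened
    prob += lpSum(x[p, z] for z in zones) <= cap[p] * open_[p]

prob.solve()
print({p: int(value(open_[p])) for p in plants}, value(prob.objective))
```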

Keywords: sustainable supply chain, mathematical programming, aquaculture industry, supply chain design, supply chain configuration

Procedia PDF Downloads 534
4667 Faster Pedestrian Recognition Using Deformable Part Models

Authors: Alessandro Preziosi, Antonio Prioletti, Luca Castangia

Abstract:

Deformable part models achieve high precision in pedestrian recognition, but all publicly available implementations are too slow for real-time applications. We implemented a deformable part model algorithm fast enough for real-time use by exploiting information about the camera position and orientation. This implementation is both faster and more precise than alternative DPM implementations. These results are obtained by computing convolutions in the frequency domain and using lookup tables to speed up feature computation. This approach is almost an order of magnitude faster than the reference DPM implementation, with no loss in precision. Knowing the position of the camera with respect to the horizon, it is also possible to prune many hypotheses based on their size and location. The range of acceptable sizes and positions is set by looking at the statistical distribution of bounding boxes in labelled images. With this approach it is not necessary to compute the entire feature pyramid: for example, higher-resolution features are only needed near the horizon. This results in an increase in mean average precision of 5% and an increase in speed by a factor of two. Furthermore, to reduce misdetections involving small pedestrians near the horizon, input images are supersampled near the horizon. Supersampling the image at 1.5 times the original scale results in an increase in precision of about 4%. The implementation was tested against the public KITTI dataset, obtaining an 8% improvement in mean average precision over the best performing DPM-based method. By allowing for a small loss in precision, computational time can easily be brought down to our target of 100 ms per image, reaching a solution that is faster and still more precise than all publicly available DPM implementations.
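
The frequency-domain trick mentioned above can be illustrated directly: correlating a feature-map level with a part filter via FFT gives the same response map as direct correlation while scaling much better for the many filters a DPM evaluates. Sizes below are illustrative.

```python
# Sketch of frequency-domain filter evaluation: FFT-based cross-correlation of a
# HOG-like feature channel with a small part filter matches direct correlation.
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
feature_map = rng.normal(size=(120, 160))     # one channel of a feature-pyramid level
part_filter = rng.normal(size=(6, 6))         # one DPM part filter

# Cross-correlation = convolution with a flipped kernel.
response_fft = fftconvolve(feature_map, part_filter[::-1, ::-1], mode="valid")
response_direct = np.array([[np.sum(feature_map[i:i + 6, j:j + 6] * part_filter)
                             for j in range(160 - 5)] for i in range(120 - 5)])
print(np.allclose(response_fft, response_direct))   # True: identical score maps
```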

Keywords: autonomous vehicles, deformable part model, dpm, pedestrian detection, real time

Procedia PDF Downloads 272
4666 Survey Research Assessment for Renewable Energy Integration into the Mining Industry

Authors: Kateryna Zharan, Jan C. Bongaerts

Abstract:

Mining operations are energy intensive, and the share of energy costs in total costs is often quoted in the range of 40%. Saving on energy costs is, therefore, a key concern of any mine operator. With the improving reliability and security of renewable energy (RE) sources, and with requirements to reduce carbon dioxide emissions, perspectives for using RE in mining operations emerge. These aspects are stimulating mining companies to search for ways to substitute fossil energy with RE. The main purpose of this study is to present a survey-based assessment of the key issues related to the integration of RE into mining activities, based on the opinions of mining and renewable energy experts. The survey research was developed as follows: first, mining and renewable energy experts were chosen based on specific criteria; second, they were offered a questionnaire to gather their knowledge and opinions on incentives for mining operators to turn to RE, barriers and challenges to be expected, environmental effects, appropriate business models and the overall impact of RE on mining operations. The outcomes of the survey allow for the identification of factors which favor and disfavor decision-making on the use of RE in mining operations. The study concludes with a set of recommendations for further work. One of them relates to a deeper analysis of the benefits for mining operators of using RE, and another suggests that appropriate business models considering economic and environmental issues need to be studied and developed. The results of the paper will be used to develop a hybrid optimized model which might be adopted at mines according to their operational processes as well as economic and environmental perspectives.

Keywords: carbon dioxide emissions, mining industry, photovoltaic, renewable energy, survey research, wind generation

Procedia PDF Downloads 352
4665 An Elasto-Viscoplastic Constitutive Model for Unsaturated Soils: Numerical Implementation and Validation

Authors: Maria Lazari, Lorenzo Sanavia

Abstract:

Mechanics of unsaturated soils has been an active field of research in the last decades. Efficient constitutive models that take into account the partial saturation of soil are necessary to solve a number of engineering problems, e.g. the instability of slopes and cuts due to heavy rainfall. A large number of constitutive models can now be found in the literature that consider fundamental issues associated with unsaturated soil behaviour, like the volume change and shear strength behaviour with suction or saturation changes. Partially saturated soils may either expand or collapse upon wetting depending on the stress level, and it is also possible that a soil might experience a reversal in the volumetric behaviour during wetting. Shear strength of soils also changes dramatically with changes in the degree of saturation, and a related engineering problem is slope failure caused by rainfall. Several state-of-the-art reviews of the topic have appeared over the last years, usually providing a thorough discussion of the stress state, the advantages and disadvantages of specific constitutive models, as well as the latest developments in the area of unsaturated soil modelling. However, only a few studies have focused on the coupling between partial saturation states and time effects on the behaviour of geomaterials. Rate dependency is experimentally observed in the mechanical response of granular materials, and a viscoplastic constitutive model is capable of reproducing creep and relaxation processes. Therefore, in this work an elasto-viscoplastic constitutive model for unsaturated soils is proposed and validated on the basis of experimental data. The model constitutes an extension of an existing elastoplastic strain-hardening constitutive model capable of capturing the behaviour of variably saturated soils, based on energy-conjugated stress variables in the framework of superposed continua. The purpose was to develop a model able to deal with possible mechanical instabilities within a consistent energy framework. The model shares the same conceptual structure as the elastoplastic laws proposed to deal with bonded geomaterials subject to weathering or diagenesis and is capable of modelling several kinds of instabilities induced by the loss of hydraulic bonding contributions. The novelty of the proposed formulation is enhanced with the incorporation of density-dependent stiffness and hardening coefficients in order to allow the modelling of the pycnotropic behaviour of granular materials with a single set of material constants. The model has been implemented in the commercial FE platform PLAXIS, widely used in Europe for advanced geotechnical design. The algorithmic strategies adopted for the stress-point algorithm had to be revised to take into account the different approach adopted by the PLAXIS developers in the solution of the discrete non-linear equilibrium equations. An extensive comparison of the model with a series of experimental data reported by different authors is presented to validate the model and illustrate its capabilities. After the validation, the effectiveness of the viscoplastic model is displayed by numerical simulations of a partially saturated slope failure at the laboratory scale, and the effect of viscosity and degree of saturation on the slope's stability is discussed.
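
For readers unfamiliar with viscoplasticity, the sketch below shows a minimal one-dimensional Perzyna-type update, only to illustrate how rate dependency enters an elasto-viscoplastic law; it contains none of the suction, hardening or density-dependent ingredients of the model developed in the paper.

```python
# Minimal 1D Perzyna-type viscoplastic update (illustrative only; not the
# unsaturated-soil model of the paper).
import numpy as np

E, sigma_y = 50e3, 100.0        # elastic modulus and yield stress, kPa
gamma, N = 1e-3, 1.5            # fluidity parameter and overstress exponent
dt, rate = 1.0e-2, 1.0e-3       # time step (s) and applied strain rate (1/s)

eps = eps_vp = 0.0
history = []
for step in range(2000):
    eps += rate * dt                          # drive with a constant strain rate
    sigma = E * (eps - eps_vp)                # elastic predictor
    f = abs(sigma) - sigma_y                  # overstress (yield function)
    if f > 0.0:                               # Perzyna flow: dEps_vp = gamma*<f/sy>^N
        eps_vp += gamma * (f / sigma_y) ** N * np.sign(sigma) * dt
    history.append(sigma)

print(f"stress after loading: {history[-1]:.1f} kPa (creep/relaxation use the same update)")
```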

Keywords: PLAXIS software, slope, unsaturated soils, viscoplasticity

Procedia PDF Downloads 219
4664 Evaluation of Firearm Injury Syndromic Surveillance in Utah

Authors: E. Bennion, A. Acharya, S. Barnes, D. Ferrell, S. Luckett-Cole, G. Mower, J. Nelson, Y. Nguyen

Abstract:

Objective: This study aimed to evaluate the validity of a firearm injury query in the Early Notification of Community-based Epidemics syndromic surveillance system. Syndromic surveillance data are used at the Utah Department of Health for early detection of and rapid response to unusually high rates of violence and injury, among other health outcomes. The query of interest was defined by the Centers for Disease Control and Prevention and used chief complaint and discharge diagnosis codes to capture initial emergency department encounters for firearm injury of all intents. Design: Two epidemiologists manually reviewed electronic health records of emergency department visits captured by the query from April-May 2020, compared results, and sent conflicting determinations to two arbiters. Results: Of the 85 unique records captured, 67 were deemed probable, 19 were ruled out, and two were undetermined, resulting in a positive predictive value of 75.3%. Common reasons for false positives included non-initial encounters and misleading keywords. Conclusion: Improving the validity of syndromic surveillance data would better inform outbreak response decisions made by state and local health departments. The firearm injury definition could be refined to exclude non-initial encounters by negating words such as “last month,” “last week,” and “aftercare”; and to exclude non-firearm injury by negating words such as “pellet gun,” “air gun,” “nail gun,” “bullet bike,” and “exit wound” when a firearm is not mentioned.
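
The kind of keyword logic and validation arithmetic described above can be sketched as follows; the terms, records and counts are illustrative, not the CDC query definition or Utah's data.

```python
# Sketch of a chief-complaint keyword classifier with negation terms, plus the
# positive-predictive-value arithmetic used to evaluate such a query.
import re

FIREARM = re.compile(r"\b(gsw|gunshot|firearm|gun)\b", re.I)
NEGATE = re.compile(r"\b(pellet gun|air gun|nail gun|bullet bike|last (week|month)|aftercare)\b", re.I)

def flag_firearm_injury(chief_complaint: str) -> bool:
    """Flag probable initial firearm-injury encounters, excluding known false-positive phrases."""
    return bool(FIREARM.search(chief_complaint)) and not NEGATE.search(chief_complaint)

records = ["GSW to left leg", "follow up aftercare gunshot wound last month",
           "shot with pellet gun", "firearm injury right hand"]
print([flag_firearm_injury(r) for r in records])   # [True, False, False, True]

# Positive predictive value from manual review = true positives / all flagged records.
true_positives, flagged = 75, 100                  # hypothetical counts
print(f"PPV = {true_positives / flagged:.1%}")
```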

Keywords: evaluation, health information system, firearm injury, syndromic surveillance

Procedia PDF Downloads 163
4663 Perception of Public Transport Quality of Service among Regular Private Vehicle Users in Five European Cities

Authors: Juan de Ona, Esperanza Estevez, Rocío de Ona

Abstract:

Urban traffic levels can be reduced by drawing travelers away from private vehicles and over to public transport. This modal change can be achieved either by introducing restrictions on private vehicles or by introducing measures which increase people's satisfaction with public transport. For public transport users, quality of service affects customer satisfaction, which, in turn, influences behavioral intentions towards the service. This paper aims to identify the main attributes which influence the perception private vehicle users have of the public transport services provided in five European cities: Berlin, Lisbon, London, Madrid and Rome. Ordinal logit models have been applied to an online panel survey with a sample size of 2,500 regular private vehicle users (approximately 500 inhabitants per city). To achieve a comprehensive analysis and to deal with heterogeneity in perceptions, 15 models have been developed: one for the entire sample and 14 for user segments. The results show differences between the cities and among the segments. Madrid was taken as the reference city, and the results indicate that its inhabitants are satisfied with public transport and that the most important public transport service attributes for private vehicle users are frequency, speed and intermodality. Frequency is an important attribute for all the segments, while speed and intermodality are important for most of the segments. The analysis by segments has identified attributes which, although not important in most cases, are relevant for specific segments. This study also points out important differences between the five cities. Findings from this study can be used to develop policies and recommendations for persuading private vehicle users to switch to public transport.
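
An ordered (ordinal) logit of overall satisfaction on service attributes, the type of model applied in the study, can be sketched with statsmodels as below; the synthetic ratings are placeholders for the panel-survey responses.

```python
# Sketch of an ordinal logit relating attribute ratings to an ordered satisfaction
# response, on synthetic data.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({"frequency": rng.integers(1, 6, n),
                   "speed": rng.integers(1, 6, n),
                   "intermodality": rng.integers(1, 6, n)})
latent = 0.6 * df.frequency + 0.4 * df.speed + 0.3 * df.intermodality + rng.normal(0, 1, n)
df["satisfaction"] = pd.cut(latent, bins=[-np.inf, 3, 5, np.inf],
                            labels=["low", "medium", "high"])   # ordered categorical

model = OrderedModel(df["satisfaction"],
                     df[["frequency", "speed", "intermodality"]], distr="logit")
result = model.fit(method="bfgs", disp=False)
print(result.params)   # attribute coefficients followed by threshold parameters
```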

Keywords: service quality, satisfaction, public transportation, private vehicle users, car users, segmentation, ordered logit

Procedia PDF Downloads 109
4662 Consumer Value and Purchase Behaviour: The Mediating Role of Consumers' Expectations of Corporate Social Responsibility in Durban, South Africa

Authors: Abosede Ijabadeniyi, Jeevarathnam P. Govender

Abstract:

Prevailing strategic Corporate Social Responsibility (CSR) research is predominantly centred around the predictive implications of the construct on behavioural outcomes. This phenomenon limits the depth of our understanding of the trajectory of strategic CSR. The purpose of this paper is to investigate the mediating effects of CSR expectations on the relationship between consumer value and purchase behaviour by identifying the implications of the multidimensionality of CSR (economic, legal, ethical and philanthropic) on the latter. Drawing from stakeholder theory and its interplay with the prevalence of Ubuntu values, the underlying force which governs the values of South African camaraderie, we hypothesise that the multidimensionality of CSR expectations has positive mediating effects on the relationship between consumer value and purchase behaviour. Partial Least Squares (PLS) path modelling was employed, using six measures of the average path coefficient (APC) to test the relationships between the constructs. Results from a sample of mall shoppers (n=411), based on a survey conducted across five major malls in Durban, South Africa, indicate that only the legal dimension of CSR serves as a mediating factor in the relationship among the constructs. South Africa's unique history of segregation, leading to the proliferation of a spontaneous organisational approach to CSR and higher expectations of organisational legitimacy, is identified as an antecedent of consumers' reliance on the law (legal CSR) to redress the ills of the past, sustainable development, and socially responsible behaviour. The paper also highlights theoretical and managerial implications for future research.
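
As a simplified illustration of mediation testing (not the PLS path modelling and APC measures actually used in the paper), the sketch below bootstraps the indirect effect of consumer value on purchase behaviour through a legal-CSR expectation, on synthetic data.

```python
# Illustrative OLS-based bootstrap of an indirect (mediation) effect:
# consumer value -> legal CSR expectation -> purchase behaviour. Synthetic data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 411
value = rng.normal(size=n)                           # consumer value (X)
legal_csr = 0.5 * value + rng.normal(size=n)         # legal CSR expectation (mediator)
purchase = 0.3 * legal_csr + 0.2 * value + rng.normal(size=n)   # purchase behaviour (Y)

def indirect_effect(x, m, y):
    a = sm.OLS(m, sm.add_constant(x)).fit().params[1]                       # X -> M
    b = sm.OLS(y, sm.add_constant(np.column_stack([m, x]))).fit().params[1]  # M -> Y | X
    return a * b

boot = [indirect_effect(value[idx], legal_csr[idx], purchase[idx])
        for idx in (rng.integers(0, n, n) for _ in range(1000))]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")   # CI excluding 0 suggests mediation
```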

Keywords: consumer value, corporate marketing, corporate social responsibility, purchase behaviour, Ubuntu

Procedia PDF Downloads 361
4661 A 'Systematic Literature Review' of Specific Types of Inventory Faced by the Management of Firms

Authors: Rui Brito

Abstract:

This contribution is a literature review of inventory management, a relevant topic for firms due to its important use of capital, with implications for firm profitability within the complexity of a more competitive and globalized world. Firms look for small inventories in order to reduce holding costs, namely opportunity cost, warehousing and handling costs, deterioration and obsolescence, but larger inventories are required for several reasons, such as customer service, ordering cost, transportation cost, supplier payment terms to reduce unit costs or to take advantage of an expected price increase, and equipment setup cost. Thus, management must address a trade-off between small and large inventories. This literature review concerns three types of inventory (spare parts, safety stock, and vendor managed inventory) whose management usually lies beyond the scope of logistics. The applied methodology consisted of an online search of databases of scientific documents in English, namely Elsevier, Springer, Emerald, Wiley, and Taylor & Francis, excluding books unless edited, using search engines such as Google Scholar and B-on. The search was based on three keywords/strings (themes) which had to appear in the article title, suggesting the themes were very relevant to the researchers. The whole search period was between 2009 and 2018, with the aim of collecting between twenty and forty studies considered relevant within each of the keywords/strings specified. Documents were sorted by relevance, and to prevent the exclusion of more recent articles, which tend to have fewer citations partially because they have had less time to be cited, the search period was divided into two sub-periods (2009-2015 and 2016-2018). The number of surveyed articles by theme varied from 40 to 200, and the number of citations of those articles showed a wider variation, from 3 to 216. Selected articles from the three themes were analyzed, and the seven most cited articles of the first sub-period and the three most cited of the second sub-period were read in full to make a synopsis of each article. Overall, the findings show that the majority of articles were models, namely mathematical, although with different sub-types for each theme. Almost all articles suggest further studies, some mentioning them for their own author(s), which widens the diversity of the previous research. Identified research gaps concern the use of surveys to find out which models are more used by firms, the reasons for not using the models with the best performance and accuracy, and the satisfaction levels with the outcomes of inventory management and its effect on the improvement of the firm's overall performance. The review ends with the limitations and contributions of the study.

Keywords: inventory management, safety stock, spare parts inventory, vendor managed inventory

Procedia PDF Downloads 89
4660 A Comprehensive Finite Element Model for Incremental Launching of Bridges: Optimizing Construction and Design

Authors: Mohammad Bagher Anvari, Arman Shojaei

Abstract:

Incremental launching, a widely adopted bridge erection technique, offers numerous advantages for bridge designers. However, accurately simulating and modeling the dynamic behavior of the bridge during each step of the launching process proves to be tedious and time-consuming. The perpetual variation of internal forces within the deck during construction stages adds complexity, exacerbated further by considerations of other load cases, such as support settlements and temperature effects. As a result, there is an urgent need for a reliable, simple, economical, and fast algorithmic solution to model bridge construction stages effectively. This paper presents a novel Finite Element (FE) model that focuses on studying the static behavior of bridges during the launching process. Additionally, a simple method is introduced to normalize all quantities in the problem. The new FE model overcomes the limitations of previous models, enabling the simulation of all stages of launching, which conventional models fail to achieve due to their underlying assumptions. By leveraging the results obtained from the new FE model, this study proposes solutions to improve the accuracy of conventional models, particularly for the initial stages of bridge construction that have been neglected in previous research. The research highlights the critical role played by the first span of the bridge during the initial stages, a factor often overlooked in existing studies. Furthermore, a new and simplified model, termed the "semi-infinite beam" model, is developed to address this oversight. By utilizing this model alongside a simple optimization approach, optimal values for the launching nose specifications are derived. The practical applications of this study extend to optimizing the nose-deck system of incrementally launched bridges, providing valuable insights for design practice. In conclusion, this paper introduces a comprehensive Finite Element model for studying the static behavior of bridges during incremental launching. The proposed model addresses limitations found in previous approaches and offers practical solutions to enhance accuracy. The study emphasizes the importance of considering the initial stages and introduces the "semi-infinite beam" model. Through the developed model and optimization approach, optimal specifications for launching nose configurations are determined. This research holds significant practical implications and contributes to the optimization of incrementally launched bridges, benefiting both the construction industry and bridge designers.

Keywords: incremental launching, bridge construction, finite element model, optimization

Procedia PDF Downloads 87
4659 A Physiological Approach for Early Detection of Hemorrhage

Authors: Rabie Fadil, Parshuram Aarotale, Shubha Majumder, Bijay Guargain

Abstract:

Hemorrhage is the loss of blood from the circulatory system and a leading cause of battlefield and postpartum-related deaths. Early detection of hemorrhage remains the most effective strategy to reduce the mortality rate caused by traumatic injuries. In this study, we investigated the physiological changes via non-invasive cardiac signals at rest and under different hemorrhage conditions simulated through graded lower-body negative pressure (LBNP). Simultaneous electrocardiogram (ECG), photoplethysmogram (PPG), blood pressure (BP), impedance cardiogram (ICG), and phonocardiogram (PCG) were acquired from 10 participants (age: 28 ± 6 years, weight: 73 ± 11 kg, height: 172 ± 8 cm). The LBNP protocol consisted of applying -20, -30, -40, -50, and -60 mmHg pressure to the lower half of the body. Beat-to-beat heart rate (HR), systolic blood pressure (SBP), diastolic blood pressure (DBP), and mean arterial pressure (MAP) were extracted from the ECG and blood pressure. Systolic amplitude (SA), systolic time (ST), diastolic time (DT), and left ventricular ejection time (LVET) were extracted from the PPG during each stage. Preliminary results showed that the application of -40 mmHg, i.e. the moderate stage of simulated hemorrhage, resulted in significant changes in HR (85 ± 4 bpm vs 68 ± 5 bpm, p < 0.01), ST (191 ± 10 ms vs 253 ± 31 ms, p < 0.05), LVET (350 ± 14 ms vs 479 ± 47 ms, p < 0.05) and DT (551 ± 22 ms vs 683 ± 59 ms, p < 0.05) compared to rest, while no change was observed in SA (p > 0.05) as a consequence of LBNP application. These findings demonstrate the potential of cardiac signals in detecting moderate hemorrhage. In the future, we will analyze all the LBNP stages and investigate the feasibility of other physiological signals to develop a predictive machine learning model for early detection of hemorrhage.
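
One of the feature-extraction steps described above, beat-to-beat heart rate from ECG R-peaks, can be sketched with a simple peak detector; a synthetic spike train stands in for a real ECG recording, which would need filtering first.

```python
# Sketch of beat-to-beat heart-rate extraction from R-peaks using a simple
# peak detector on a synthetic ECG-like signal.
import numpy as np
from scipy.signal import find_peaks

fs = 250                                   # sampling rate, Hz
t = np.arange(0, 30, 1 / fs)               # 30 s record
hr_true = 75                               # beats per minute
beat_times = np.arange(0, 30, 60 / hr_true)
ecg = np.zeros_like(t)
for bt in beat_times:                      # narrow Gaussian "R waves"
    ecg += np.exp(-((t - bt) ** 2) / (2 * 0.01 ** 2))
ecg += 0.05 * np.random.default_rng(0).normal(size=t.size)

peaks, _ = find_peaks(ecg, height=0.5, distance=int(0.4 * fs))   # ~400 ms refractory gap
rr = np.diff(peaks) / fs                   # RR intervals in seconds
hr_beat_to_beat = 60.0 / rr
print(f"mean HR = {hr_beat_to_beat.mean():.1f} bpm from {len(peaks)} detected beats")
```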

Keywords: blood pressure, hemorrhage, lower-body negative pressure, LBNP, machine learning

Procedia PDF Downloads 162
4658 Face Recognition Using Eigen Faces Algorithm

Authors: Shweta Pinjarkar, Shrutika Yawale, Mayuri Patil, Reshma Adagale

Abstract:

Face recognition is a technique which can be applied to a wide variety of problems, such as image and film processing, human-computer interaction, and criminal identification. This has motivated researchers to develop computational models for identifying faces which are easy and simple to implement. This work demonstrates a face recognition system on an Android device using eigenfaces. The system can be used as the basis for the development of human identity recognition. Test images and training images are taken directly with the camera on the Android device. The test results showed that the system produces high accuracy. The goal is to implement a model for a particular face and distinguish it within a large number of stored faces. The face recognition system detects faces in pictures taken by a web camera or digital camera, and these images are then checked against the training image dataset based on descriptive features. Recognition can be carried out under widely varying conditions, such as frontal view and scaled frontal view, including subjects with spectacles. The algorithm models real-time varying lighting conditions. The implemented system is able to perform real-time face detection and face recognition, and can give feedback by showing a window with the subject's information from the database and sending an e-mail notification to interested institutions through the Android application. The algorithm can further be extended to recognize the facial expressions of a person.
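
The eigenface idea itself can be sketched in a few lines: PCA projects face images onto a low-dimensional face space and a nearest-neighbour rule matches a test face to the closest stored identity. This mirrors the recognition core described above, not the Android implementation.

```python
# Minimal eigenfaces sketch: PCA "face space" plus nearest-neighbour matching
# on the Olivetti faces dataset (downloaded by scikit-learn on first use).
from sklearn.datasets import fetch_olivetti_faces
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

faces = fetch_olivetti_faces()                       # 400 images of 40 subjects
X_train, X_test, y_train, y_test = train_test_split(
    faces.data, faces.target, test_size=0.25, stratify=faces.target, random_state=0)

pca = PCA(n_components=50, whiten=True, random_state=0).fit(X_train)   # eigenfaces
clf = KNeighborsClassifier(n_neighbors=1).fit(pca.transform(X_train), y_train)

accuracy = clf.score(pca.transform(X_test), y_test)
print(f"identification accuracy on held-out images: {accuracy:.2%}")
```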

Keywords: face detection, face recognition, eigen faces, algorithm

Procedia PDF Downloads 352
4657 Play Based Practices in Early Childhood Curriculum: The Contribution of High Scope, Modern School Movement and Pedagogy of Participation

Authors: Dalila Lino

Abstract:

The power of play for learning and development in early childhood education is beyond question. The main goal of this study is to analyse how three contemporary early childhood pedagogical approaches, High Scope, the Modern School Movement (MEM) and the Pedagogy of Participation, integrate play in their curriculum development. From this main goal the following objectives emerged: (i) to characterize how play is integrated in the daily routine of the pedagogical approaches under study; (ii) to analyse the teachers' role during children's play situations; (iii) to identify the types of play in which children are most often involved. The methodology used is the qualitative approach, situated within the interpretative paradigm. Data were collected through semi-structured interviews with 30 preschool teachers and through observations of typical daily routines. The participants are 30 Portuguese preschool classrooms serving children from 3 to 6 years and working with the High Scope curriculum (10 classrooms), MEM (10 classrooms) and the Pedagogy of Participation (10 classrooms). The qualitative method of content analysis was used to analyse the data. To ensure confidentiality, no information is disclosed without participants' consent, and the interviews were transcribed and sent to the participants for a final revision. The results show that there are differences in how play is integrated and promoted in the three pedagogical approaches. The teachers' role when children are at play varies according to the pedagogical approach adopted, and also according to the teachers' understanding of the meaning of play. The study highlights the key role that early childhood curriculum models have in promoting opportunities for children to play, and therefore in involving them in meaningful learning.

Keywords: curriculum models, early childhood education, pedagogy, play

Procedia PDF Downloads 200
4656 A Methodology to Integrate Data in the Company Based on the Semantic Standard in the Context of Industry 4.0

Authors: Chang Qin, Daham Mustafa, Abderrahmane Khiat, Pierre Bienert, Paulo Zanini

Abstract:

Nowadays, companies are facing many challenges in the process of digital transformation, which can be a complex and costly undertaking. Digital transformation involves the collection and analysis of large amounts of data, which can create challenges around data management and governance. Furthermore, integrating data from multiple systems and technologies is also a challenge. Despite these pains, companies are still pursuing digitalization because, by embracing advanced technologies, they can improve efficiency, quality, decision-making, and customer experience while also creating different business models and revenue streams. This paper focuses on the issue that data is stored in data silos with different schemas and structures. The conventional approaches to addressing this issue involve utilizing data warehousing, data integration tools, data standardization, and business intelligence tools. However, these approaches primarily focus on the grammar and structure of the data and neglect the importance of semantic modeling and semantic standardization, which are essential for achieving data interoperability. In this work, the challenge of data silos in Industry 4.0 is addressed by developing a semantic modeling approach compliant with Asset Administration Shell (AAS) models as an efficient standard for communication in Industry 4.0. The paper highlights how our approach can facilitate the data mapping process and semantic lifting according to existing industry standards such as ECLASS and other industrial dictionaries. It also incorporates the Asset Administration Shell technology to model and map the company's data and utilizes a knowledge graph for data storage and exploration.
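
The semantic-lifting idea can be sketched with RDF: a raw column value from a data silo is expressed as triples whose property points to a dictionary concept, so different systems can be queried through one knowledge graph. The namespaces and the ECLASS-style identifier below are hypothetical placeholders, not official IRIs.

```python
# Sketch of semantic lifting into a knowledge graph with rdflib. All IRIs are
# illustrative placeholders, not real ECLASS or AAS identifiers.
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, XSD

EX = Namespace("http://example.org/plant/")          # company namespace (assumed)
DICT = Namespace("http://example.org/eclass/")       # stand-in for a dictionary IRI base

g = Graph()
asset = EX["pump-4711"]                              # an asset modelled via an AAS-like shell
g.add((asset, RDF.type, EX.Asset))
# Semantic lifting: the silo column "max_rot_speed" becomes a dictionary-backed property.
g.add((asset, DICT["0173-1#02-AAA001"], Literal(2900, datatype=XSD.integer)))
g.add((asset, EX.locatedIn, EX["hall-3"]))

print(g.serialize(format="turtle"))                  # store/explore the graph as Turtle
```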

Keywords: data interoperability in industry 4.0, digital integration, industrial dictionary, semantic modeling

Procedia PDF Downloads 88
4655 Neural Network based Risk Detection for Dyslexia and Dysgraphia in Sinhala Language Speaking Children

Authors: Budhvin T. Withana, Sulochana Rupasinghe

Abstract:

The educational system faces a significant concern with regard to dyslexia and dysgraphia, which are learning disabilities impacting reading and writing abilities. This is particularly challenging for children who speak the Sinhala language, due to its complexity and uniqueness. Commonly used methods to detect the risk of dyslexia and dysgraphia rely on subjective assessments, leading to limited coverage and time-consuming processes. Consequently, delays in diagnosis and missed opportunities for early intervention can occur. To address this issue, the project developed a hybrid model that incorporates various deep learning techniques to detect the risk of dyslexia and dysgraphia. Specifically, ResNet50, VGG16, and YOLOv8 models were integrated to identify handwriting issues. The outputs of these models were then combined with other input data and fed into an MLP model. Hyperparameters of the MLP model were fine-tuned using Grid Search CV, enabling the identification of optimal values for the model. This approach proved to be highly effective in accurately predicting the risk of dyslexia and dysgraphia, providing a valuable tool for early detection and intervention. The ResNet50 model exhibited a training accuracy of 0.9804 and a validation accuracy of 0.9653. The VGG16 model achieved a training accuracy of 0.9991 and a validation accuracy of 0.9891. The MLP model demonstrated impressive results with a training accuracy of 0.99918, a testing accuracy of 0.99223, and a loss of 0.01371. These outcomes showcase the high accuracy achieved by the proposed hybrid model in predicting the risk of dyslexia and dysgraphia.
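
The final fusion stage described above can be sketched as follows: CNN-derived handwriting scores (stand-ins for the ResNet50/VGG16/YOLOv8 outputs) are concatenated with other inputs and fed to an MLP whose hyperparameters are tuned with GridSearchCV; the data are synthetic.

```python
# Sketch of the MLP fusion stage with grid-search hyperparameter tuning.
import numpy as np
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 600
cnn_scores = rng.uniform(0, 1, size=(n, 3))          # per-model handwriting risk scores
other_inputs = rng.normal(size=(n, 4))               # e.g. questionnaire/assessment items
X = np.hstack([cnn_scores, other_inputs])
y = (cnn_scores.mean(axis=1) + 0.2 * other_inputs[:, 0] > 0.6).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
grid = GridSearchCV(
    make_pipeline(StandardScaler(), MLPClassifier(max_iter=2000, random_state=0)),
    param_grid={"mlpclassifier__hidden_layer_sizes": [(16,), (32, 16)],
                "mlpclassifier__alpha": [1e-4, 1e-3]},
    cv=3)
grid.fit(X_tr, y_tr)
print(grid.best_params_, f"test accuracy: {grid.score(X_te, y_te):.3f}")
```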

Keywords: neural networks, risk detection system, dyslexia, dysgraphia, deep learning, learning disabilities, data science

Procedia PDF Downloads 58
4654 Federated Knowledge Distillation with Collaborative Model Compression for Privacy-Preserving Distributed Learning

Authors: Shayan Mohajer Hamidi

Abstract:

Federated learning has emerged as a promising approach for distributed model training while preserving data privacy. However, the challenges of communication overhead, limited network resources, and slow convergence hinder its widespread adoption. On the other hand, knowledge distillation has shown great potential in compressing large models into smaller ones without significant loss in performance. In this paper, we propose an innovative framework that combines federated learning and knowledge distillation to address these challenges and enhance the efficiency of distributed learning. Our approach, called Federated Knowledge Distillation (FKD), enables multiple clients in a federated learning setting to collaboratively distill knowledge from a teacher model. By leveraging the collaborative nature of federated learning, FKD aims to improve model compression while maintaining privacy. The proposed framework utilizes a coded teacher model that acts as a reference for distilling knowledge to the client models. To demonstrate the effectiveness of FKD, we conduct extensive experiments on various datasets and models. We compare FKD with baseline federated learning methods and standalone knowledge distillation techniques. The results show that FKD achieves superior model compression, faster convergence, and improved performance compared to traditional federated learning approaches. Furthermore, FKD effectively preserves privacy by ensuring that sensitive data remains on the client devices and only distilled knowledge is shared during the training process. In our experiments, we explore different knowledge transfer methods within the FKD framework, including Fine-Tuning (FT), FitNet, Correlation Congruence (CC), Similarity-Preserving (SP), and Relational Knowledge Distillation (RKD). We analyze the impact of these methods on model compression and convergence speed, shedding light on the trade-offs between size reduction and performance. Moreover, we address the challenges of communication efficiency and network resource utilization in federated learning by leveraging the knowledge distillation process. FKD reduces the amount of data transmitted across the network, minimizing communication overhead and improving resource utilization. This makes FKD particularly suitable for resource-constrained environments such as edge computing and IoT devices. The proposed FKD framework opens up new avenues for collaborative and privacy-preserving distributed learning. By combining the strengths of federated learning and knowledge distillation, it offers an efficient solution for model compression and convergence speed enhancement. Future research can explore further extensions and optimizations of FKD, as well as its applications in domains such as healthcare, finance, and smart cities, where privacy and distributed learning are of paramount importance.
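
The distillation signal at the heart of such training can be sketched as a generic knowledge-distillation loss, a temperature-softened KL term plus the usual supervised loss; this is not the full federated FKD protocol, only the per-client objective it builds on.

```python
# Generic knowledge-distillation loss in PyTorch (illustrative; not the FKD framework).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    """alpha * KL(teacher || student) at temperature T + (1 - alpha) * cross-entropy."""
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    log_student = F.log_softmax(student_logits / T, dim=1)
    kd = F.kl_div(log_student, soft_targets, reduction="batchmean") * T * T
    ce = F.cross_entropy(student_logits, labels)
    return alpha * kd + (1 - alpha) * ce

# Example: a batch of 8 samples, 10 classes.
student = torch.randn(8, 10, requires_grad=True)   # client (student) outputs
teacher = torch.randn(8, 10)                       # teacher outputs (no gradient needed)
labels = torch.randint(0, 10, (8,))
loss = distillation_loss(student, teacher, labels)
loss.backward()                                    # gradients flow only into the student
print(float(loss))
```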

Keywords: federated learning, knowledge distillation, knowledge transfer, deep learning

Procedia PDF Downloads 64
4653 Modeling of Drug Distribution in the Human Vitreous

Authors: Judith Stein, Elfriede Friedmann

Abstract:

The injection of a drug into the vitreous body for the treatment of retinal diseases like wet age-related macular degeneration (AMD) is the most common medical intervention worldwide. We develop mathematical models for drug transport in the vitreous body of a human eye to analyse the impact of different rheological models of the vitreous on drug distribution. In addition to the convection-diffusion equation characterizing the drug spreading, we use porous media modeling for the healthy vitreous with a dense collagen network and include the steady permeating flow of the aqueous humor described by Darcy's law, driven by a pressure drop. Additionally, the vitreous body in a healthy human eye behaves like a viscoelastic gel through the collagen fibers suspended in the network of hyaluronic acid and acts as a drug depot for the treatment of retinal diseases. In a completely liquefied vitreous, we couple the drug diffusion with the classical Navier-Stokes flow equations. We prove the global existence and uniqueness of the weak solution of the developed initial-boundary value problem describing the drug distribution in the healthy vitreous, considering the permeating aqueous humor flow, in the realistic three-dimensional setting. In particular, for the drug diffusion equation, results from the literature are extended from homogeneous Dirichlet boundary conditions to our mixed boundary conditions that describe the eye, using Galerkin's method, the Cauchy-Schwarz inequality and the trace theorem. Because there is only a small effective drug concentration range and higher concentrations may be toxic, the ability to model the drug transport could improve the therapy by considering patient-individual differences and give a better understanding of the physiological and pathological processes in the vitreous.
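
Schematically, the coupled system described above for the healthy (porous) vitreous can be written as follows, with assumed notation (c drug concentration, D diffusivity, v aqueous-humor velocity, K permeability, mu viscosity, p pressure) and with boundary and initial data omitted.

```latex
% Schematic form of the coupled transport model (notation assumed; boundary and
% initial conditions omitted).
\begin{align}
  \partial_t c + \mathbf{v}\cdot\nabla c - D\,\Delta c &= 0
      && \text{(convection--diffusion of the drug)}\\
  \mathbf{v} = -\frac{K}{\mu}\,\nabla p, \qquad \nabla\cdot\mathbf{v} &= 0
      && \text{(Darcy flow of the permeating aqueous humor)}
\end{align}
```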

Keywords: coupled PDE systems, drug diffusion, mixed boundary conditions, vitreous body

Procedia PDF Downloads 130
4652 Virtual Reality and Avatars in Education

Authors: Michael Brazley

Abstract:

Virtual Reality (VR) and 3D videos are the most current generation of learning technology today. Virtual Reality and 3D videos are now being used in professional offices and schools for marketing and education. Technology in the field of design has progressed from two-dimensional drawings to 3D models, using computers and sophisticated software. Virtual Reality is being used as a collaborative means to allow designers and others to meet and communicate inside models or VR platforms using avatars. This research proposes to teach students from different backgrounds how to take a digital model into a 3D video, then into VR, and finally into VR with multiple avatars communicating with each other in real time. The next step would be to develop the model so that people from three or more different locations can meet as avatars in real time, in the same model, and talk to each other. This research is longitudinal, studying the use of 3D videos in graduate design and Virtual Reality in XR (Extended Reality) courses. The research methodology is a combination of quantitative and qualitative methods. The qualitative methods begin with the literature review and case studies; the quantitative methods come by way of students' 3D videos, a survey, and Extended Reality (XR) course work. The end product is a VR platform with multiple avatars being able to communicate in real time. This research is important because it will allow multiple users to remotely enter a model or VR platform from any location in the world and effectively communicate in real time. It will lead to improved learning and training using Virtual Reality and avatars, and is generalizable because most colleges, universities, and many citizens own VR equipment and computer labs. The research did produce a VR platform with multiple avatars having the ability to move and speak to each other in real time. Major implications of the research include, but are not limited to, improved learning, teaching, communication, marketing, designing, and planning. Both hardware and software played a major role in project success.

Keywords: virtual reality, avatars, education, XR

Procedia PDF Downloads 95
4651 Numerical Tools for Designing Multilayer Viscoelastic Damping Devices

Authors: Mohammed Saleh Rezk, Reza Kashani

Abstract:

Auxiliary damping has gained popularity in recent years, especially in structures such as mid- and high-rise buildings. Distributed damping systems (typically viscous and viscoelastic) and reactive damping systems (such as tuned mass dampers) are the two types of damping choices for such structures. Distributed VE dampers are normally configured as braces or damping panels, which are engaged through relatively small movements between the structural members when the structure sways under wind or earthquake loading. In addition to being used as stand-alone dampers in distributed damping applications, VE dampers can also be incorporated into the suspension element of tuned mass dampers (TMDs). In this study, analytical and numerical tools for the modeling and design of multilayer viscoelastic damping devices, to be used in dampening the vibration of large structures, are developed. Considering the limitations of analytical models for the synthesis and analysis of realistic, large, multilayer VE dampers, the emphasis of the study has been on numerical modeling using the finite element method. To verify the finite element models, a two-layer VE damper using ½ inch synthetic viscoelastic urethane polymer was built and tested, and the measured parameters were compared with the numerically predicted ones. The numerical model predictions and the experimentally evaluated damping and stiffness of the test VE damper were in very good agreement. The effectiveness of VE dampers in adding auxiliary damping to larger structures is demonstrated by numerically chevron-bracing one such damper into the model of a massive frame subject to an abrupt lateral load. A comparison of the responses of the frame to the aforementioned load, without and with the VE damper, clearly shows the efficacy of the damper in lowering the extent of frame vibration.
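
The linearized quantities compared between test and FE model, the storage stiffness and damping of a multilayer VE shear damper, can be sketched as below, assuming the layers shear in parallel; the material values are illustrative, not the tested urethane polymer's data.

```python
# Sketch of linearized multilayer VE shear-damper properties: storage stiffness
# from G', loss factor from G''/G', and an equivalent viscous damping at a given
# frequency. Material values are illustrative placeholders.
import numpy as np

def ve_damper_properties(G_storage, G_loss, area, thickness, n_layers, freq_hz):
    """Return (storage stiffness N/m, loss factor, equivalent viscous damping N*s/m)."""
    k_storage = n_layers * G_storage * area / thickness    # layers assumed to shear in parallel
    eta = G_loss / G_storage                               # material loss factor
    omega = 2 * np.pi * freq_hz
    c_eq = eta * k_storage / omega                         # equivalent dashpot at this frequency
    return k_storage, eta, c_eq

k, eta, c = ve_damper_properties(G_storage=1.0e6, G_loss=0.9e6,   # Pa
                                 area=0.04, thickness=0.0127,     # m^2, m (1/2 inch layer)
                                 n_layers=2, freq_hz=1.5)
print(f"K' = {k/1e6:.2f} MN/m, loss factor = {eta:.2f}, C_eq = {c/1e3:.1f} kN*s/m")
```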

Keywords: viscoelastic, damper, distributed damping, tuned mass damper

Procedia PDF Downloads 99
4650 Capacity of Cold-Formed Steel Warping-Restrained Members Subjected to Combined Axial Compressive Load and Bending

Authors: Maryam Hasanali, Syed Mohammad Mojtabaei, Iman Hajirasouliha, G. Charles Clifton, James B. P. Lim

Abstract:

Cold-formed steel (CFS) elements are increasingly being used as main load-bearing components in the modern construction industry, including low- to mid-rise buildings. In typical multi-storey buildings, CFS structural members act as beam-column elements since they are exposed to combined axial compression and bending actions, both in moment-resisting frames and stud wall systems. Current design specifications, including the American Iron and Steel Institute (AISI S100) and the Australian/New Zealand Standard (AS/NZS 4600), neglect the beneficial effects of warping-restrained boundary conditions in the design of beam-column elements. Furthermore, while a non-linear relationship governs the interaction of axial compression and bending, the combined effect of these actions is taken into account through a simplified linear expression combining pure axial and flexural strengths. This paper aims to evaluate the reliability of the well-known Direct Strength Method (DSM) as well as design proposals found in the literature, to provide a better understanding of the efficiency of the code-prescribed linear interaction equation in the strength prediction of CFS beam-columns and of the effects of warping-restrained boundary conditions on their behavior. To this end, experimentally validated finite element (FE) models of CFS elements under compression and bending were developed in ABAQUS software, accounting for both non-linear material properties and geometric imperfections. The validated models were then used for a comprehensive parametric study containing 270 FE models, covering a wide range of key design parameters, such as length (i.e., 0.5, 1.5, and 3 m), thickness (i.e., 1, 2, and 4 mm) and cross-sectional dimensions, under ten different load eccentricity levels. The results of this parametric study demonstrated that using the DSM led to conservative strength predictions for beam-column members, by up to 55%, depending on the element's length and thickness. This can be traced to the errors associated with (i) the absence of warping-restrained boundary condition effects, (ii) the equations for the calculation of buckling loads, and (iii) the linear interaction equation. While the influence of warping restraint is generally less than 6%, the code-suggested interaction equation led to an average error of 4% to 22%, depending on the element length. This paper highlights the need to provide more reliable design solutions for CFS beam-column elements for practical design purposes.
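
For reference, the code-prescribed linear interaction check discussed above has, in its simplest form, the shape below (required versus available axial and flexural strengths); the actual AISI S100 and AS/NZS 4600 expressions include additional resistance/safety factors and moment-amplification terms, so this is only a schematic.

```latex
% Simplified form of the linear beam-column interaction check (resistance and
% safety factors and amplification terms omitted for clarity).
\begin{equation}
  \frac{\bar{P}}{P_{a}} + \frac{\bar{M}}{M_{a}} \;\le\; 1.0
\end{equation}
```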

Keywords: beam-columns, cold-formed steel, finite element model, interaction equation, warping-restrained boundary conditions

Procedia PDF Downloads 95
4649 Adjusting Electricity Demand Data to Account for the Impact of Loadshedding in Forecasting Models

Authors: Migael van Zyl, Stefanie Visser, Awelani Phaswana

Abstract:

The electricity landscape in South Africa is characterized by frequent occurrences of loadshedding, a measure implemented by Eskom to manage electricity generation shortages by curtailing demand. Loadshedding, classified into stages ranging from 1 to 8 based on severity, involves the systematic rotation of power cuts across municipalities according to predefined schedules. However, this practice introduces distortions in recorded electricity demand, posing challenges to the accurate forecasting essential for budgeting, network planning, and generation scheduling. Addressing this challenge requires a methodology to quantify the impact of loadshedding and integrate it back into metered electricity demand data. Fortunately, comprehensive records of loadshedding impacts are maintained in a database, enabling the alignment of loadshedding effects with hourly demand data. This adjustment ensures that forecasts accurately reflect true demand patterns, independent of loadshedding's influence, thereby enhancing the reliability of electricity supply management in South Africa. This paper presents a methodology for determining the hourly impact of loadshedding and subsequently adjusting historical demand data to account for it. Furthermore, two forecasting models are developed: one using the original dataset and the other using the adjusted data. A comparative analysis is conducted to evaluate the improvement in forecast accuracy resulting from the adjustment process. By implementing this methodology, stakeholders can make more informed decisions regarding electricity infrastructure investments, resource allocation, and operational planning, contributing to the overall stability and efficiency of South Africa's electricity supply system.
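
A minimal pandas sketch of the adjustment step is shown below; the column names (`timestamp`, `metered_mw`, `curtailed_mw`) are hypothetical placeholders for the actual database fields.

```python
import pandas as pd

def adjust_for_loadshedding(demand: pd.DataFrame, shedding: pd.DataFrame) -> pd.DataFrame:
    """Add estimated curtailed load back onto metered hourly demand.

    Assumes `demand` has columns ['timestamp', 'metered_mw'] and `shedding`
    has columns ['timestamp', 'curtailed_mw'] (hypothetical names), both at
    hourly resolution. Hours without loadshedding keep their metered value.
    """
    merged = demand.merge(shedding, on="timestamp", how="left")
    merged["curtailed_mw"] = merged["curtailed_mw"].fillna(0.0)
    merged["adjusted_mw"] = merged["metered_mw"] + merged["curtailed_mw"]
    return merged[["timestamp", "metered_mw", "adjusted_mw"]]

# The adjusted series, rather than the raw metered one, then feeds the forecasting model.
```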

Keywords: electricity demand forecasting, load shedding, demand side management, data science

Procedia PDF Downloads 53
4648 Study of the Diaphragm Flexibility Effect on the Inelastic Seismic Response of Thin Wall Reinforced Concrete Buildings (TWRCB): A Purpose to Reduce the Uncertainty in the Vulnerability Estimation

Authors: A. Zapata, Orlando Arroyo, R. Bonett

Abstract:

Over the last two decades, the growing demand for housing in Latin American countries has led to the development of construction projects based on low- and medium-rise buildings with thin reinforced concrete walls. This system, known as Thin Wall Reinforced Concrete Buildings (TWRCB), uses walls with thicknesses from 100 to 150 millimetres, with flexural reinforcement formed by welded wire mesh (WWM) with diameters between 5 and 7 millimetres, arranged in one or two layers. These walls often have irregular structural configurations, including combinations of rectangular shapes. Experimental and numerical research conducted in regions where this structural system is commonplace indicates inherent weaknesses, such as limited ductility due to the WWM reinforcement and the thin element dimensions. Because of its complexity, numerical analyses have relied on two-dimensional models that do not explicitly account for the floor system, even though it plays a crucial role in distributing seismic forces among the resisting elements; instead, these analyses assume a rigid diaphragm. For this study, two case-study buildings were selected, representative of low-rise and mid-rise TWRCB in Colombia. The buildings were modelled in OpenSees using MVLEM-3D elements for the walls and shell elements for the slabs, so that the coupling effect of the diaphragm is included in the nonlinear behaviour. Three cases were considered: a) models without a slab, b) models with rigid slabs, and c) models with flexible slabs. Incremental static (pushover) and nonlinear dynamic analyses were carried out using the set of 44 far-field ground motions of FEMA P-695, scaled by factors of 1.0 and 1.5 to assess the probability of collapse at the design basis earthquake (DBE) and maximum considered earthquake (MCE) levels, according to the sites and hazard zone of the archetypes in the Colombian NSR-10. Base shear capacity, maximum roof displacement, individual wall base shear demands, and probabilities of collapse were calculated to evaluate the effect of omitting the slab, or of modelling it as rigid or flexible, on the nonlinear behaviour of the archetype buildings. The pushover results show that the buildings exhibit an overstrength between 1.1 and 2 when the slab is modelled explicitly, depending on the plan configuration of the structural walls; in addition, the nonlinear response predicted without the slab is more conservative than when the slab is represented. Including the flexible slab in the analysis highlights the importance of accounting for the slab's contribution to the distribution of shear forces among structural elements according to their design strength and stiffness. The dynamic analyses revealed that including the slab reduces the collapse probability of this system, since displacements and deformations are lower, enhancing the seismic performance and the safety of residents. Explicitly modelling the slab is therefore important to capture the real coupling effect on the distribution of shear forces among walls, to estimate the nonlinear behaviour of the system correctly, and to proportion the strength and stiffness of the elements adequately in design, reducing the likelihood of element damage during an earthquake.
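
As an illustration of the post-processing behind the quantities reported above (overstrength and collapse probability), the sketch below uses illustrative base shear values, drift results, and a drift-based collapse criterion; it is not the OpenSees model itself.

```python
import numpy as np

def overstrength_factor(pushover_base_shear_kn, design_base_shear_kn):
    """Overstrength = peak base shear from the pushover curve / design base shear."""
    return np.max(pushover_base_shear_kn) / design_base_shear_kn

def collapse_probability(peak_drifts, collapse_drift_limit=0.02):
    """Fraction of the ground-motion suite whose peak storey drift exceeds
    an assumed collapse drift limit (illustrative threshold)."""
    return np.mean(np.asarray(peak_drifts) > collapse_drift_limit)

# Illustrative numbers only:
omega = overstrength_factor(pushover_base_shear_kn=[0, 850, 1400, 1620, 1580],
                            design_base_shear_kn=900.0)
p_col = collapse_probability(np.random.default_rng(0).uniform(0.005, 0.03, 44))
print(f"Overstrength ~ {omega:.2f}, P(collapse) ~ {p_col:.2f} over the 44-record suite")
```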

Keywords: thin wall reinforced concrete buildings, coupling slab, rigid diaphragm, flexible diaphragm

Procedia PDF Downloads 67
4647 A West Coast Estuarine Case Study: A Predictive Approach to Monitor Estuarine Eutrophication

Authors: Vedant Janapaty

Abstract:

Estuaries are wetlands where fresh water from streams mixes with salt water from the sea. Known as the “kidneys of our planet”, they are extremely productive environments that filter pollutants, absorb floods from sea level rise, and shelter a unique ecosystem. However, eutrophication and loss of native species are ailing our wetlands. There is a lack of uniform data collection and sparse research on correlations between satellite data and in situ measurements. Remote sensing (RS) has shown great promise in environmental monitoring. This project attempts to use satellite data and correlate the derived metrics with in situ observations collected at five estuaries. Satellite images were processed in Python to calculate 7 satellite indices (SIs), and average SI values were calculated per month over 23 years. Publicly available data from 6 sites at ELK were used to obtain 10 parameters (OPs), whose average values were likewise calculated per month over 23 years. Linear correlations between the 7 SIs and 10 OPs were computed and found to be inadequate (correlations of 1% to 64%). A Fourier transform analysis was then performed on the 7 SIs; dominant frequencies and amplitudes were extracted, and a machine learning (ML) model was trained, validated, and tested for the 10 OPs. Better correlations between SIs and OPs were observed when time delays of 0, 3, 4, and 6 months were introduced and the ML modeling was repeated, with the OPs showing improved R² values in the range of 0.2 to 0.93. This approach can be used to obtain periodic analyses of overall wetland health from satellite indices. It demonstrates that remote sensing can be correlated with critical in situ parameters that measure eutrophication and can be used by practitioners to easily monitor wetland health.
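
A hedged sketch of the two-stage pipeline described above, Fourier decomposition of a monthly satellite-index series followed by a lagged regression against an in situ parameter, is given below; the synthetic data, the 3-month lag, and the random forest regressor are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def dominant_components(monthly_series, n_components=3):
    """Return the n strongest (frequency, amplitude) pairs of a monthly series."""
    x = np.asarray(monthly_series, dtype=float)
    x = x - x.mean()                         # remove the DC offset
    spectrum = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0)   # cycles per month
    amps = np.abs(spectrum)
    top = np.argsort(amps)[::-1][:n_components]
    return list(zip(freqs[top], amps[top]))

def fit_lagged_model(si_matrix, op_series, lag_months=3):
    """Fit a regressor predicting an in situ parameter from satellite indices
    observed `lag_months` earlier (illustrative lag and model choice)."""
    X = si_matrix[:-lag_months] if lag_months else si_matrix
    y = op_series[lag_months:] if lag_months else op_series
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X, y)
    return model, model.score(X, y)          # R^2 on the training window, for illustration

# Synthetic example: 23 years of monthly data, 7 indices, one parameter.
rng = np.random.default_rng(1)
si = rng.normal(size=(23 * 12, 7))
op = si[:, 0] * 0.6 + rng.normal(scale=0.3, size=23 * 12)
print(dominant_components(si[:, 0])[:2])
print(fit_lagged_model(si, op, lag_months=3)[1])
```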

Keywords: estuary, remote sensing, machine learning, Fourier transform

Procedia PDF Downloads 94
4646 Obstetric Outcome after Hysteroscopic Septum Resection in Patients with Uterine Septa of Various Sizes

Authors: Nilanchali Singh, Alka Kriplani, Reeta Mahey, Garima Kachhawa

Abstract:

Objective: Resection of larger uterine septa improves obstetric performance, but whether smaller septa need resection, and their impact on obstetric outcome, is not clear. We aimed to evaluate the role of resection of septa of various sizes in obstetric performance. Methods: This retrospective cohort study comprised 107 patients with a uterine septum. The patients were categorized on the basis of the extent of the uterine septum into four groups: a) subsepta (< 1/3 of the cavity), b) septum > 1/3 to 1/2, c) septum > 1/2 of the cavity, up to the internal os, d) septum traversing the whole of the uterine cavity and cervix. Of these 107 patients, 74 could be contacted telephonically and their outcomes recorded. Sensitivity and specificity of the investigative modalities were calculated. Results: Infertility was most frequent in patients with complete septa (100%), whereas abortions were more common in those with subsepta (18%). MRI had the highest sensitivity and positive predictive value, followed by hysterosalpingography. Tubal block, fibroids, endometriosis, pelvic adhesions, and ovarian pathologies were seen in some patients, but no definite association of these pathologies with any septum subgroup was observed. Almost five years of follow-up were recorded in all subgroups. After septum resection, a significant reduction in infertility was seen in all septal subgroups (p = 0.046, 0.032 and 0.05) except the subsepta group (< 1/3 of the uterine cavity). Abortions were significantly reduced (p = 0.048) in the third subgroup (i.e., septum > 1/2, up to the internal os) after hysteroscopic septum resection. The take-home baby rate was 33% in the subsepta group and around 50% in the remaining septum subgroups. Conclusions: Septal resection improves obstetric performance in patients with uterine septa of various sizes. Whether it improves obstetric performance in patients with subsepta or very small septa remains controversial. Larger studies addressing this issue need to be planned.
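
For clarity, the diagnostic accuracy measures referred to above (sensitivity, specificity, positive predictive value) can be computed from a 2×2 table as in the sketch below; the counts are placeholders, not study data.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity and positive predictive value from a 2x2 table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    return sensitivity, specificity, ppv

# Placeholder counts for one imaging modality versus the reference diagnosis:
sens, spec, ppv = diagnostic_metrics(tp=40, fp=2, fn=5, tn=27)
print(f"Sensitivity {sens:.2f}, specificity {spec:.2f}, PPV {ppv:.2f}")
```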

Keywords: septal resection, obstetric outcome, infertility, septum size

Procedia PDF Downloads 315
4645 Optimization of Springback Prediction in U-Channel Process Using Response Surface Methodology

Authors: Muhamad Sani Buang, Shahrul Azam Abdullah, Juri Saedon

Abstract:

There is little effective guidance on the selection of design parameters governing springback of advanced high-strength steel sheet in the U-channel cold forming process. This paper presents the development of a predictive model for springback in the U-channel process on advanced high-strength steel sheet, employing Response Surface Methodology (RSM). The experiments were performed on dual-phase steel sheet (DP590) in the U-channel forming process, while a design of experiments (DoE) approach was used to investigate the effects of four factors, namely blank holder force (BHF), clearance (C), punch travel (Tp), and rolling direction (R), as input parameters at two levels each, using a full factorial design (2⁴). A statistical analysis of variance (ANOVA) showed that blank holder force (BHF), clearance (C), and punch travel (Tp) had a significant effect on the springback of the flange angle (β2) and the wall opening angle (β1), while the rolling direction (R) factor was insignificant. The significant parameters were then optimized to reduce springback using a central composite design (CCD) within RSM, and the optimum parameters were determined. A regression model for springback was developed, and the effect of the individual parameters and their responses was also evaluated. The results obtained from the optimum model are in agreement with the experimental values.
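
As an illustration of the RSM workflow described above, the sketch below fits a second-order response surface to a small central composite design in coded units; the design, responses, and candidate setting are synthetic placeholders, not the DP590 measurements.

```python
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression

# Central composite design (CCD) in coded units for the three significant factors
# (BHF, clearance, punch travel): 8 factorial, 6 axial and 1 centre run.
alpha = 1.682
factorial = np.array([[-1, -1, -1], [ 1, -1, -1], [-1,  1, -1], [ 1,  1, -1],
                      [-1, -1,  1], [ 1, -1,  1], [-1,  1,  1], [ 1,  1,  1]], dtype=float)
axial = np.array([[ alpha, 0, 0], [-alpha, 0, 0], [0,  alpha, 0],
                  [0, -alpha, 0], [0, 0,  alpha], [0, 0, -alpha]])
X = np.vstack([factorial, axial, np.zeros((1, 3))])

# Synthetic springback responses (degrees) standing in for the measured angles.
rng = np.random.default_rng(0)
y = (5.5 - 0.6 * X[:, 0] + 0.4 * X[:, 1] - 0.3 * X[:, 2]
     + 0.2 * X[:, 0] ** 2 + rng.normal(scale=0.1, size=len(X)))

# Fit the usual second-order RSM model (linear, interaction and quadratic terms).
poly = PolynomialFeatures(degree=2, include_bias=False)
model = LinearRegression().fit(poly.fit_transform(X), y)

# Evaluate a candidate setting inside the coded design space.
candidate = np.array([[1.0, -1.0, 1.0]])
print("Predicted flange-angle springback:", model.predict(poly.transform(candidate))[0])
```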

Keywords: advance high strength steel, u-channel process, springback, design of experiment, optimization, response surface methodology (rsm)

Procedia PDF Downloads 538