Search results for: reduced order macro models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 22883

18953 Application of the Piloting Law Based on Adaptive Differentiators via Second Order Sliding Mode for a Fixed Wing Aircraft

Authors: Zaouche Mohammed, Amini Mohammed, Foughali Khaled, Hamissi Aicha, Aktouf Mohand Arezki, Boureghda Ilyes

Abstract:

In this paper, we present a piloting law based on adaptive differentiators via a high order sliding mode controller, using an aircraft in a virtual simulated environment. To deal with the design of an autopilot controller, we propose a framework based on the Software in the Loop (SIL) methodology and use Microsoft™ Flight Simulator (FS-2004) as the environment for plane simulation. The aircraft dynamic model is nonlinear, Multi-Input Multi-Output (MIMO) and tightly coupled. The nonlinearity resides in the dynamic equations and also in the variability of the aerodynamic coefficients. In our case, two aircraft are used in the flight tests, the Zlin-142 and the MQ-1 Predator. For both aircraft, in a very low-altitude flight, we send the piloting control inputs to the aircraft, which has stalled due to a command disconnection. Then, we present an analysis of the aircraft's dynamic behavior while reestablishing the command transmission. Finally, a comparative study of the two aircraft's dynamic behaviors is presented.
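A second-order sliding mode law of the kind referenced above can be sketched numerically with the super-twisting algorithm. The gains, the sliding variable and the Euler step below are illustrative assumptions, not the authors' actual adaptive piloting law:

```python
import math

def super_twisting_step(sigma, z, k1=1.0, k2=1.0, dt=0.01):
    """One Euler step of a super-twisting (second-order sliding mode)
    controller: u = -k1*sqrt(|sigma|)*sign(sigma) + z, with z' = -k2*sign(sigma).
    sigma is the sliding variable; z is the integral (twisting) state."""
    sgn = (sigma > 0) - (sigma < 0)
    u = -k1 * math.sqrt(abs(sigma)) * sgn + z
    z_next = z - k2 * sgn * dt
    return u, z_next
```

In the adaptive variant discussed in the paper, the gains k1 and k2 are adjusted online rather than held constant.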

Keywords: adaptive differentiators, second order sliding modes, dynamic adaptation of the gains, microsoft flight simulator, Zlin-142, MQ-1 predator

Procedia PDF Downloads 425
18952 Improvements in Double Q-Learning for Anomalous Radiation Source Searching

Authors: Bo-Bin Xiao, Chia-Yi Liu

Abstract:

In the task of searching for anomalous radiation sources, personnel holding radiation detectors may be exposed to unnecessary radiation risk, so automated search by machines becomes a necessity. This research uses several sophisticated deep reinforcement learning algorithms, namely double Q-learning, the dueling network, and NoisyNet, to search for radiation sources. The simulation environment, a 10×10 grid with one shielding wall set in it, supports development of the AI model by training over 1 million episodes. In each training episode, the radiation source position, radiation source intensity, agent position, shielding wall position, and shielding wall length are all set randomly. The three algorithms are applied to train AI models in four environments, in which the shielding wall is a full-shielding wall, a lead wall, a concrete wall, or a lead or concrete wall appearing randomly. The 12 best-performing AI models are selected by observing the reward value during training and are evaluated by comparing them with a gradient search algorithm. The results show that the performance of the AI models, regardless of algorithm, is far better than that of the gradient search algorithm. In addition, as the simulation environment becomes more complex, the AI model applying Double DQN combined with the Dueling and NoisyNet algorithms performs better.
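The double Q-learning update at the core of the approach above can be sketched in tabular form (the states, actions and gains below are hypothetical; the paper's Double DQN replaces the tables with neural networks):

```python
def double_q_update(qa, qb, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One double Q-learning update: the next action is *selected* with
    table qa but *evaluated* with table qb, which reduces the
    maximisation bias of ordinary Q-learning."""
    best_next = max(qa[s_next], key=qa[s_next].get)
    target = r + gamma * qb[s_next][best_next]
    qa[s][a] += alpha * (target - qa[s][a])
```

In practice, the roles of qa and qb are swapped at random on each step so that both tables are updated.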

Keywords: double Q learning, dueling network, NoisyNet, source searching

Procedia PDF Downloads 116
18951 Flexible Design Solutions for Complex Free form Geometries Aimed to Optimize Performances and Resources Consumption

Authors: Vlad Andrei Raducanu, Mariana Lucia Angelescu, Ion Cinca, Vasile Danut Cojocaru, Doina Raducanu

Abstract:

By using smart digital tools, such as generative design (GD) and digital fabrication (DF), highly topical problems of resource optimization (materials, energy, time) can be solved and free-form applications or products can be created. In this new digital technology, materials are active, designed in response to a set of performance requirements, which imposes a total rethinking of old material practices. The article presents the key steps of the design procedure for a free-form architectural object, a column-type object with connections forming an adaptive 3D surface, using the parametric design methodology and exploiting the properties of conventional metallic materials. In parametric design, the form of the created object or space is shaped by varying parameter values, and relationships between forms are described by mathematical equations. Digital parametric design is based on specific procedures, such as shape grammars, Lindenmayer systems, cellular automata, genetic algorithms or swarm intelligence, each of which has limitations that make it applicable only in certain cases. In the paper, the stages of the design process and the shape-grammar-type algorithm are presented. The generative design process relies on two basic principles: the modeling principle and the generative principle. The generative method is based on a form-finding process that creates many 3D spatial forms, using an algorithm conceived to apply its generating logic to different input geometries. Once the algorithm is realized, it can be applied repeatedly to generate the geometry for a number of different input surfaces. The generated configurations are then analyzed through a technical or aesthetic selection criterion, and finally the optimal solution is selected.
The endless generative capacity of the codes and algorithms used in digital design offers various conceptual possibilities and optimal solutions for the increasing technical and environmental demands of the building industry and architecture. Constructions or spaces generated by parametric design can be specifically tuned to meet certain technical or aesthetic requirements. The proposed approach has direct applicability in sustainable architecture, offering important potential economic advantages, a flexible design (which can be changed until the end of the design process) and unique geometric models of high performance.
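Of the algorithmic procedures listed above, Lindenmayer systems are the easiest to illustrate: a string is rewritten in parallel by production rules, and the symbols are later interpreted geometrically. The axiom and rules below are hypothetical examples, not the paper's actual grammar:

```python
def lsystem(axiom, rules, iterations):
    """Rewrite every symbol of the current string in parallel using the
    production rules; symbols without a rule are copied unchanged."""
    s = axiom
    for _ in range(iterations):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s
```

In a generative design pipeline, each symbol of the resulting string would be mapped to a geometric operation (extrusion, rotation, branching) to produce one candidate 3D form per run.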

Keywords: parametric design, algorithmic procedures, free-form architectural object, sustainable architecture

Procedia PDF Downloads 381
18950 3D Interactions in Underwater Acoustic Simulations

Authors: Prabu Duplex

Abstract:

Due to stringent emission-reduction targets, a large-scale transition to renewable energy sources is a global challenge, and wind power plays a significant role in the solution. This scenario has led to the construction of offshore wind farms, and several wind farms are planned in shallow waters where marine habitats exist. This raises concerns over the impacts of underwater noise on marine species, for example from bridge constructions in ocean straits. Environmental organisations warn that such bridges could be devastating to aquatic life, since ocean straits are important places of transit for marine mammals, and some of the highest concentrations of biodiversity in the world are found in these areas. The investigation of ship noise and piling noise that may occur during bridge construction and operation is therefore vital. Once the source levels are known, the receiver levels can be modelled. With this objective, this work investigates the key requirement of software that can model transmission loss at the high frequencies that may occur during the construction or operation phases. Most propagation models are 2D solutions, calculating the propagation loss along a transect, which does not include horizontal refraction, reflection or diffraction. In many cases, such models provide sufficient accuracy and can produce three-dimensional maps by combining, through interpolation, several two-dimensional (distance and depth) transects. However, in some instances, the use of 2D models may not be sufficient to accurately model the sound propagation. A possible example is a scenario where an island or land mass is situated between the source and receiver. The 2D model will produce a shadow behind the land mass where the modelled transects intersect it, whereas in reality diffraction will bend the sound around the land mass.
In such cases, it may be necessary to use a 3D model, which accounts for horizontal diffraction, to accurately represent the sound field. Other scenarios where 2D models may not provide sufficient accuracy are environments characterised by a strongly up-sloping or down-sloping seabed, such as propagation around continental shelves. In line with these objectives, by means of a case study, this work addresses the importance of 3D interactions in underwater acoustics. The methodology used in this study can also be applied to other 3D underwater sound propagation studies. This work assumes special significance given the increasing interest in using underwater acoustic modeling for environmental impact assessments. Future work includes inter-model comparison in shallow-water environments considering more of the physical processes known to influence sound propagation, such as scattering from the sea surface. Passive acoustic monitoring of the underwater soundscape with distributed hydrophone arrays is also suggested to investigate the 3D propagation effects discussed in this article.
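As a baseline for the transmission-loss modelling discussed above, the simplest transect-style estimate combines spherical spreading with linear absorption. The absorption coefficient below is a placeholder; real propagation codes (parabolic equation, ray tracing, normal modes) additionally account for refraction, boundary interactions and, in 3D, horizontal diffraction:

```python
import math

def transmission_loss_db(r_m, absorption_db_per_m=0.0):
    """Spherical spreading (20*log10 r) plus linear absorption, in dB,
    at range r_m metres. Ignores refraction, reflection and diffraction."""
    return 20.0 * math.log10(r_m) + absorption_db_per_m * r_m
```

A 2D model evaluates something like this along each transect; the shadow-zone example in the text is precisely where such a per-transect estimate breaks down.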

Keywords: underwater acoustics, naval, maritime, cetaceans

Procedia PDF Downloads 26
18949 Studying the Simultaneous Effect of Petroleum and DDT Pollution on the Geotechnical Characteristics of Sands

Authors: Sara Seyfi

Abstract:

DDT and petroleum contamination in coastal sand alters the physical and mechanical properties of contaminated soils. This article aims to understand the effects of DDT and petroleum pollution on the geotechnical characteristics of three sand groups: sand, silty sand, and clayey sand. First, previous studies on the topic of the article are reviewed. In the initial stage of the tests, the sands used (sand, silty sand, clayey sand) are identified by FTIR, µ-XRF and SEM methods. Then, the geotechnical characteristics of these sand groups, including density, permeability, shear strength, compaction, and plasticity, are investigated using a sand cone, a head permeability test, a vane shear test, a strain gauge penetrometer, and a plastic limit test. The sand groups are artificially contaminated with petroleum substances at 1, 2, 4, 8, 10, and 12% by weight. In a separate experiment, 2, 4, 8, 12, 16, and 20 mg/litre of DDT are added to the sand groups. Geotechnical characterisation and identification analyses are performed on the contaminated samples. In the final tests, the above amounts of petroleum pollution and DDT are added simultaneously to the sand groups, and the identification and measurement processes are repeated. The test results showed that petroleum contamination reduced the optimal moisture content, permeability, and plasticity of all samples, except for the plasticity of silty sand, which petroleum increased at 1-4% and decreased at 8-12% content. The dry density of sand and clayey sand increased, but that of silty sand decreased. Also, the shear strength of sand and silty sand increased, but that of clayey sand decreased. DDT contamination increased the maximum dry density and decreased the permeability of all samples. It also reduced the optimum moisture content of the sand. The shear resistance of silty sand and clayey sand decreased, while the plasticity of clayey sand increased and that of silty sand decreased.
The simultaneous effect of petroleum and DDT pollution on the maximum dry density of sand and clayey sand was synergistic, while the effect on the plasticity of clayey sand and silty sand was antagonistic. The combination also acted antagonistically on the optimal moisture content of sand and on the shear strength of silty sand and clayey sand. In the other cases, no synergistic or antagonistic effect was observed.

Keywords: DDT contamination, geotechnical characteristics, petroleum contamination, sand

Procedia PDF Downloads 55
18948 Iris Recognition Based on the Low Order Norms of Gradient Components

Authors: Iman A. Saad, Loay E. George

Abstract:

The iris pattern is an important biological feature of the human body; it has become a very hot topic in both research and practical applications. In this paper, an algorithm is proposed for iris recognition, and a simple, efficient and fast method is introduced to extract a set of discriminatory features using a first order gradient operator applied to grayscale images. The gradient-based features are robust, up to a certain extent, against the variations that may occur in the contrast or brightness of iris image samples; such variations mostly occur due to lighting differences and camera changes. First, the iris region is located, after which it is remapped to a rectangular area of size 360x60 pixels. Also, a new method is proposed for detecting eyelash and eyelid points; it depends on a statistical analysis of the image to mark the eyelash and eyelid as noise points. In order to cover feature localization (variation), the rectangular iris image is partitioned into N overlapping sub-images (blocks); then, from each block, a set of different average directional gradient density values is calculated to be used as a texture feature vector. The gradient operators are applied along the horizontal, vertical and diagonal directions. The low order norms of the gradient components are used to establish the feature vector. A Euclidean distance based classifier was used as the matching metric for determining the degree of similarity between the feature vector extracted from the tested iris image and the template feature vectors stored in the database. Experimental tests were performed using 2639 iris images from the CASIA V4-Interval database; the attained recognition accuracy reached 99.92%.
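The block-wise directional gradient features and the Euclidean matching described above might be sketched as follows. Only horizontal and vertical first-order differences with an L1 (mean absolute) norm are shown, and the non-overlapping block layout is an illustrative simplification of the paper's overlapped blocks and diagonal directions:

```python
def block_gradient_features(img, bh, bw):
    """img: 2D list of grayscale values (e.g. the 360x60 remapped iris).
    For each bh x bw block, append the mean absolute horizontal and
    vertical first-order gradients (a low order norm of each component)."""
    h, w = len(img), len(img[0])
    feats = []
    for by in range(0, h - bh + 1, bh):
        for bx in range(0, w - bw + 1, bw):
            gh = gv = 0.0
            n = bh * bw
            for y in range(by, by + bh):
                for x in range(bx, bx + bw):
                    if x + 1 < w:
                        gh += abs(img[y][x + 1] - img[y][x])
                    if y + 1 < h:
                        gv += abs(img[y + 1][x] - img[y][x])
            feats.extend([gh / n, gv / n])
    return feats

def euclidean(u, v):
    """Matching metric between a probe feature vector and a template."""
    return sum((a - b) ** 2 for a, b in zip(u, v)) ** 0.5
```

Recognition then reduces to finding the template vector with the smallest Euclidean distance to the probe's feature vector.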

Keywords: iris recognition, contrast stretching, gradient features, texture features, Euclidean metric

Procedia PDF Downloads 338
18947 Using Machine Learning to Extract Patient Data from Non-standardized Sports Medicine Physician Notes

Authors: Thomas Q. Pan, Anika Basu, Chamith S. Rajapakse

Abstract:

Machine learning requires data that is categorized into features that models train on. This topic is important to the field of sports medicine due to the many tools it provides to physicians, such as diagnosis support and risk assessment. The notes that healthcare professionals take are usually unclean and not suitable for model training. The objective of this study was to develop and evaluate an advanced approach for extracting key features from sports medicine data without the need for extensive model training or data labeling. A Large Language Model (LLM) was given a narrative (physician's notes) and prompted to extract four features (details about the patient). The narrative was found in a datasheet containing six columns: Case Number, Validation Age, Validation Gender, Validation Diagnosis, Validation Body Part, and Narrative. The validation columns represent the accurate responses that the LLM attempts to output. Given the narrative, the LLM would output its response and extract the age, gender, diagnosis, and injured body part, with each category on one line. The output was then cleaned, matched, and added to new columns containing the extracted responses. Five ways of checking the accuracy were used: unclear count, substring comparison, LLM comparison, LLM re-check, and hand evaluation. The unclear count essentially represented the extractions the LLM missed; it can also be understood as the recall score ([total - false negatives] over total). The rest correspond to the precision score ([total - false positives] over total). Substring comparison evaluated the likeness of the validation (X) and extracted (Y) columns by checking whether X's result was a substring of Y's and vice versa. LLM comparison directly asked an LLM whether X's and Y's results were similar. LLM re-check prompted the LLM to see whether the extracted results could be found in the narrative.
Lastly, a selection of 1,000 random narratives was hand-evaluated to estimate how well the LLM-based feature extraction model performed. On a selection of 10,000 narratives, the LLM-based approach had a recall score of roughly 98%. However, the precision scores of the substring comparison and LLM comparison models were around 72% and 76%, respectively. These low figures are due to minute differences between answers; for example, the 'chest' is part of the 'upper trunk', but these models cannot detect that. On the other hand, the LLM re-check and the hand-tested subset of narratives showed precision scores of 96% and 95%, respectively. If this subset is used to extrapolate the likely outcome over the whole 10,000 narratives, the LLM-based approach would be strong in both precision and recall. These results indicate that an LLM-based feature extraction model could be a useful way for medical data in sports to be collected and analyzed by machine learning models. Wide use of this method could potentially increase the availability of data, thus improving machine learning algorithms and supporting doctors with more enhanced tools.
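The unclear-count and substring-comparison scoring described above can be sketched as follows. The 'unclear' sentinel, the case-folding and the bidirectional substring rule are assumptions about the cleaning step, not the authors' exact pipeline:

```python
def substring_match(expected, extracted):
    """True if either value contains the other (case-insensitive)."""
    e, x = expected.strip().lower(), extracted.strip().lower()
    return e in x or x in e

def extraction_scores(pairs, missing="unclear"):
    """pairs: (validation_value, extracted_value) tuples.
    Recall    = (total - false negatives) / total, a false negative being
                an extraction the LLM missed entirely (the unclear count).
    Precision = (total - false positives) / total, a false positive being
                an extraction that disagrees with the validation column."""
    total = len(pairs)
    fn = sum(1 for _, x in pairs if x == missing)
    fp = sum(1 for e, x in pairs if x != missing and not substring_match(e, x))
    return (total - fp) / total, (total - fn) / total
```

The 'chest' vs. 'upper trunk' failure mode in the text is visible here: substring matching counts semantically compatible answers as false positives, which is why the LLM re-check scores higher.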

Keywords: AI, LLM, ML, sports

Procedia PDF Downloads 18
18946 Life Cycle Assessment of Today's and Future Electricity Grid Mixes of EU27

Authors: Johannes Gantner, Michael Held, Rafael Horn, Matthias Fischer

Abstract:

At the United Nations Climate Change Conference 2015, a global agreement on mitigating climate change was achieved, stating CO₂ reduction targets for all countries. For instance, the EU targets a 40 percent reduction in emissions by 2030 compared to 1990. In order to achieve this ambitious goal, the environmental performance of the different European electricity grid mixes is crucial. First, electricity is directly needed for everyone's daily life (e.g. heating, plug loads, mobility), and therefore a reduction of the environmental impacts of the electricity grid mix reduces the overall environmental impacts of a country. Secondly, the manufacturing of every product depends on electricity, so a reduction of the environmental impacts of the electricity mix results in a further decrease of the environmental impacts of every product. As a result, meeting the two-degree goal highly depends on the decarbonization of the European electricity mixes. Currently, the production of electricity in the EU27 is based on fossil fuels and therefore bears a high GWP impact per kWh. Due to the importance of the environmental impacts of the electricity mix, not only today but also in the future, time-dynamic Life Cycle Assessment models for all EU27 countries were set up within the European research projects CommONEnergy and Senskin. As the methodology, a combination of scenario modeling and life cycle assessment according to ISO 14040 and ISO 14044 was conducted. Based on EU27 trends regarding energy, transport, and buildings, the different national electricity mixes were investigated, taking into account future changes such as the amount of electricity generated in each country, changes in electricity carriers, the COP of the power plants, distribution losses, and imports and exports. As results, time-dynamic environmental profiles for the electricity mixes of each country and for Europe overall were set up.
Thereby, for each European country, the decarbonization strategies for the electricity mix are critically investigated in order to identify decisions that can lead to negative environmental effects, for instance on the reduction of the global warming potential of the electricity mix. For example, the withdrawal of the nuclear energy program in Germany, with simultaneous compensation of the missing energy by non-renewable energy carriers like lignite and natural gas, resulted in an increase in the global warming potential of the electricity grid mix. Only after two years was this increase counterbalanced by the higher share of renewable energy carriers such as wind power and photovoltaics. Finally, as an outlook, a first qualitative picture is provided, illustrating from an environmental perspective which country has the highest potential for low-carbon electricity production and, therefore, how investments in a connected European electricity grid could decrease the environmental impacts of the electricity mix in Europe.
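The core bookkeeping behind such a time-dynamic profile is a share-weighted average of per-carrier emission factors, evaluated per country and per year as the shares follow the scenario trends. The shares and factors below are illustrative placeholders, not the projects' actual inventory data:

```python
def grid_mix_gwp(shares, factors_kg_co2e_per_kwh):
    """GWP of a grid mix (kg CO2-eq per kWh) as the generation-share-weighted
    average of the carriers' life cycle emission factors."""
    assert abs(sum(shares.values()) - 1.0) < 1e-9, "shares must sum to 1"
    return sum(share * factors_kg_co2e_per_kwh[c] for c, share in shares.items())
```

The German example in the text corresponds to temporarily shifting share from a low-factor carrier (nuclear) to high-factor ones (lignite, gas), which raises the weighted average until renewable shares grow enough to offset it.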

Keywords: electricity grid mixes, EU27 countries, environmental impacts, future trends, life cycle assessment, scenario analysis

Procedia PDF Downloads 187
18945 Study and Analysis of the Factors Affecting Road Safety Using Decision Tree Algorithms

Authors: Naina Mahajan, Bikram Pal Kaur

Abstract:

The purpose of traffic accident analysis is to find the possible causes of an accident. Road accidents cannot be totally prevented, but by suitable traffic engineering and management, the accident rate can be reduced to a certain extent. This paper discusses the classification techniques C4.5 and ID3 using the WEKA data mining tool, applied to an NH (National Highway) dataset. The C4.5 and ID3 techniques give good results, with high accuracy, low computation time, and a low error rate.
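Both classifiers rank attributes by information gain (ID3 directly, C4.5 via the gain ratio) when growing the tree. A minimal sketch with a hypothetical accident attribute:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum(c / n * math.log2(c / n) for c in Counter(labels).values())

def information_gain(rows, labels, attr):
    """Entropy reduction from splitting the rows on attribute attr."""
    n = len(labels)
    groups = {}
    for row, label in zip(rows, labels):
        groups.setdefault(row[attr], []).append(label)
    remainder = sum(len(g) / n * entropy(g) for g in groups.values())
    return entropy(labels) - remainder
```

ID3 greedily splits on the attribute with the highest gain; C4.5 additionally normalises by the split entropy and handles continuous attributes and missing values, which is why it is usually preferred on real road-safety data.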

Keywords: C4.5, ID3, NH(National highway), WEKA data mining tool

Procedia PDF Downloads 341
18944 Maintenance Performance Measurement Derived Optimization: A Case Study

Authors: James M. Wakiru, Liliane Pintelon, Peter Muchiri, Stanley Mburu

Abstract:

Maintenance performance measurement (MPM) represents an integrated aspect that considers both operational and maintenance-related aspects while evaluating the effectiveness and efficiency of maintenance to ensure assets are working as they should. Three salient issues need to be addressed for an asset-intensive organization to employ an MPM-based framework to optimize maintenance. Firstly, the organization should establish the important performance metric(s), in this case the maintenance objective(s), on which it will focus. The second issue entails aligning the maintenance objective(s) with maintenance optimization. This is achieved by deriving maintenance performance indicators that subsequently form an objective function for the optimization program. Lastly, the objective function is employed in an optimization program to derive maintenance decision support. In this study, we develop a framework that first identifies the crucial maintenance performance measures and then employs them to derive maintenance decision support. The proposed framework is demonstrated in a case study of a geothermal drilling rig, where the objective function is evaluated using a simulation-based model whose parameters are derived from empirical maintenance data. Availability, reliability and maintenance inventory are identified as essential objectives requiring further attention. A simulation model is developed mimicking drilling rig operations and maintenance, in which the sub-systems undergo imperfect corrective (CM) and preventive (PM) maintenance, with the total cost as the primary performance measure. Moreover, three maintenance spare inventory policies are considered: classical (retaining stocks for a contractual period), vendor-managed inventory with consignment stock, and a periodic-monitoring order-up-to-stock (s, S) policy.
Optimization results indicate that the adoption of the (s, S) inventory policy, an increased PM interval and reduced reliance on CM actions offer improved availability and a reduction in total costs.
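The periodic-monitoring (s, S) policy named above can be sketched as follows; the review cadence, the instantaneous-replenishment assumption and the demand sequence are illustrative, not the rig's actual spares data:

```python
def s_S_order(on_hand, s, S):
    """Periodic-review (s, S) rule: if stock has fallen to the reorder
    point s or below, order up to the target level S; otherwise order nothing."""
    return S - on_hand if on_hand <= s else 0

def simulate(demands, s, S, start):
    """Apply the rule at every review epoch, after consuming that
    period's spare-parts demand."""
    stock, orders = start, []
    for d in demands:
        stock = max(stock - d, 0)
        q = s_S_order(stock, s, S)
        orders.append(q)
        stock += q  # simplification: the order arrives before the next review
    return stock, orders
```

In the paper's optimization, a loop like this sits inside the discrete-event rig simulation, and s and S are among the decision variables traded off against PM intervals and total cost.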

Keywords: maintenance, vendor-managed, decision support, performance, optimization

Procedia PDF Downloads 128
18943 Effects of Tillage and Poultry Manure on Soil Properties and Yam Performance on Alfisol in Southwest Nigeria

Authors: Adeleye Ebenezer Omotayo

Abstract:

The main effects of tillage and poultry manure and the interaction effects of tillage-poultry manure combinations on soil characteristics and yam yield were investigated in a factorial experiment involving four tillage techniques, namely ploughing (P), ploughing plus harrowing (PH), manual ridging (MR) and manual heaping (MH), and poultry manure at two levels, 0 t ha-1 and 10 t ha-1, arranged in a split-plot design. Data obtained were subjected to analysis of variance using the Statistical Analysis System (SAS) Institute package. Soil moisture content, bulk density and total porosity were significantly (p<0.05) influenced by soil tillage techniques. Manually heaped and ridged plots had the lowest soil bulk density and moisture content and the highest total porosity. Soil total N, exchangeable Mg and K, base saturation and CEC were better enhanced in manually tilled plots. Soil nutrient status declined at the end of the second cropping for all the tillage techniques, in the order PH>P>MH>MR. Yam tuber yields were better enhanced in manually tilled plots than in mechanically tilled plots. Poultry manure application reduced soil bulk density and temperature, and increased total porosity and soil moisture content. It also improved soil organic matter, total N, available P, exchangeable Mg, Ca and K, and lowered exchange acidity. It also increased yam tuber yield significantly. Tillage-technique plots amended with poultry manure enhanced yam tuber yield relative to tillage-technique plots without poultry manure application. It is concluded that yam production on alfisol in Southwest Nigeria requires a loose soil structure for tuber development and that the use of poultry manure in combination with tillage is recommended, as it will ensure stability of the soil structure, improve soil organic matter status and nutrient availability, and give high yam tuber yield. Also, it will help reduce the possible deleterious effects of tillage on soil properties and yam performance.

Keywords: ploughing, poultry manure, yam, yield

Procedia PDF Downloads 276
18942 Analysis of Overall Thermo-Elastic Properties of Random Particulate Nanocomposites with Various Interphase Models

Authors: Lidiia Nazarenko, Henryk Stolarski, Holm Altenbach

Abstract:

In the paper, a (hierarchical) approach to the analysis of the thermo-elastic properties of random composites with interphases is outlined and illustrated. It is based on a statistical homogenization method, the method of conditional moments, combined with the recently introduced notion of the energy-equivalent inhomogeneity, which, in this paper, is extended to include thermal effects. After exposition of the general principles, the approach is applied in the investigation of the effective thermo-elastic properties of a material with randomly distributed nanoparticles. The basic idea of the equivalent inhomogeneity is to replace the inhomogeneity and its surrounding interphase by a single equivalent inhomogeneity with a constant stiffness tensor and coefficient of thermal expansion, combining the thermal and elastic properties of both. The equivalent inhomogeneity is then perfectly bonded to the matrix, which allows composites with interphases to be analyzed using techniques devised for problems without interphases. From the mechanical viewpoint, the definition of the equivalent inhomogeneity is based on Hill's energy equivalence principle, applied to the problem consisting only of the original inhomogeneity and its interphase. It is more general than the definitions proposed in the past in that, conceptually and practically, it allows inhomogeneities of various shapes and various models of interphases to be considered. This is illustrated by considering spherical particles with two models of interphases, the Gurtin-Murdoch material surface model and the spring layer model. The resulting equivalent inhomogeneities are subsequently used to determine the effective thermo-elastic properties of randomly distributed particulate composites. The effective stiffness tensor and coefficient of thermal expansion of the material with the so-defined equivalent inhomogeneities are determined by the method of conditional moments.
Closed-form expressions for the effective thermo-elastic parameters of a composite consisting of a matrix and randomly distributed spherical inhomogeneities are derived for the bulk and the shear moduli as well as for the coefficient of thermal expansion. Dependence of the effective parameters on the interphase properties is included in the resulting expressions, exhibiting analytically the nature of the size-effects in nanomaterials. As a numerical example, the epoxy matrix with randomly distributed spherical glass particles is investigated. The dependence of the effective bulk and shear moduli, as well as of the effective thermal expansion coefficient on the particle volume fraction (for different radii of nanoparticles) and on the radius of nanoparticle (for fixed volume fraction of nanoparticles) for different interphase models are compared to and discussed in the context of other theoretical predictions. Possible applications of the proposed approach to short-fiber composites with various types of interphases are discussed.
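The energy equivalence at the heart of the approach can be stated compactly; the notation below is a schematic rendering, not the paper's exact expressions:

```latex
% Hill's energy equivalence defining the equivalent inhomogeneity:
% under matching boundary conditions, the equivalent particle stores the
% same energy as the original inhomogeneity plus its interphase.
E\bigl(\mathbf{C}_{\mathrm{eq}},\,\boldsymbol{\alpha}_{\mathrm{eq}}\bigr)
  \;=\;
E_{\mathrm{inh}}\bigl(\mathbf{C}_{\mathrm{inh}},\,\boldsymbol{\alpha}_{\mathrm{inh}}\bigr)
  \;+\;
E_{\mathrm{interphase}}
```

Solving this identity for the constant equivalent properties C_eq and α_eq is what lets the perfectly bonded equivalent particle stand in for the particle-plus-interphase pair inside the method of conditional moments.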

Keywords: effective properties, energy equivalence, Gurtin-Murdoch surface model, interphase, random composites, spherical equivalent inhomogeneity, spring layer model

Procedia PDF Downloads 188
18941 Image Ranking to Assist Object Labeling for Training Detection Models

Authors: Tonislav Ivanov, Oleksii Nedashkivskyi, Denis Babeshko, Vadim Pinskiy, Matthew Putman

Abstract:

Training a machine learning model for object detection that generalizes well is known to benefit from a training dataset with diverse examples. However, training datasets usually contain many repeats of common examples of a class and lack rarely seen examples. This is due to the process commonly used during human annotation where a person would proceed sequentially through a list of images labeling a sufficiently high total number of examples. Instead, the method presented involves an active process where, after the initial labeling of several images is completed, the next subset of images for labeling is selected by an algorithm. This process of algorithmic image selection and manual labeling continues in an iterative fashion. The algorithm used for the image selection is a deep learning algorithm, based on the U-shaped architecture, which quantifies the presence of unseen data in each image in order to find images that contain the most novel examples. Moreover, the location of the unseen data in each image is highlighted, aiding the labeler in spotting these examples. Experiments performed using semiconductor wafer data show that labeling a subset of the data, curated by this algorithm, resulted in a model with a better performance than a model produced from sequentially labeling the same amount of data. Also, similar performance is achieved compared to a model trained on exhaustive labeling of the whole dataset. Overall, the proposed approach results in a dataset that has a diverse set of examples per class as well as more balanced classes, which proves beneficial when training a deep learning model.
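The iterative selection loop described above can be sketched as follows. The novelty scorer is assumed given (in the paper it is a U-shaped deep network quantifying unseen data per image), and the function and parameter names are illustrative:

```python
def active_labeling(images, novelty_score, label_image, batch_size, rounds):
    """Alternate algorithmic selection and manual labeling: each round,
    rank the unlabeled pool by estimated novelty and label the top batch."""
    pool = list(images)
    labeled = {}
    for _ in range(rounds):
        pool.sort(key=novelty_score, reverse=True)
        batch, pool = pool[:batch_size], pool[batch_size:]
        for img in batch:
            labeled[img] = label_image(img)  # the human-in-the-loop step
    return labeled
```

In the full pipeline, the novelty scores would be recomputed after each round, since every newly labeled batch changes what still counts as unseen.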

Keywords: computer vision, deep learning, object detection, semiconductor

Procedia PDF Downloads 142
18940 Management of Acute Appendicitis with Preference on Delayed Primary Suturing of Surgical Incision

Authors: N. A. D. P. Niwunhella, W. G. R. C. K. Sirisena

Abstract:

Appendicitis is one of the most frequently encountered abdominal emergencies worldwide. Prompt clinical diagnosis and appendicectomy with minimal postoperative complications are therefore priorities. The aim of this study was to assess the overall management of acute appendicitis in Sri Lanka, with particular reference to delayed primary suturing of the surgical site, and to compare the outcomes with other local and international studies. Data were collected prospectively from 155 patients who underwent appendicectomy following clinical and radiological diagnosis with ultrasonography. Histological assessment was done for all specimens. All perforated appendices were managed with delayed primary closure. Patients were followed up for 28 days to assess complications. The mean age at presentation was 27 years; the mean pre-operative waiting time following admission was 24 hours; the average hospital stay was 72 hours; the accuracy of clinical diagnosis of appendicitis, as confirmed by histology, was 87.1%; the postoperative wound infection rate was 8.3%, and among these patients 5% had perforated appendices; 4 patients had postoperative complications managed without re-opening. No fistula formation or mortality was reported. The current study was compared with previously published data on the management of acute appendicitis in Sri Lanka vs. the United Kingdom (UK). Diagnostic accuracy was comparable, but postoperative complications were significantly reduced (current study: 9.6%; compared Sri Lankan study: 16.4%; compared UK study: 14.1%). In recent years there has been an exponential rise in the use of Computerised Tomography (CT) imaging in the assessment of patients with suspected acute appendicitis. Nevertheless, the diagnostic accuracy achieved without CT, and the treatment outcomes of acute appendicitis in this study, match other local studies as well as the UK data; CT usage therefore does not appear to have increased the diagnostic accuracy of acute appendicitis significantly. In particular, delayed primary closure may have reduced the postoperative wound infection rate for ruptured appendices, so this approach is suggested for further evaluation as a safe and effective practice in other hospitals worldwide.
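As a quick sanity check, the headline rates can be recomputed from the reported figures. A minimal sketch, noting that the count of 135 histology-confirmed diagnoses is inferred from the stated 87.1% of 155 patients rather than taken directly from the abstract:

```python
# Recompute reported percentages from counts.
# NOTE: 135 confirmed cases is inferred (87.1% of 155), not stated in the abstract.
def rate(events: int, total: int) -> float:
    """Return a percentage rounded to one decimal place."""
    return round(100.0 * events / total, 1)

print(rate(135, 155))  # diagnostic accuracy -> 87.1
print(rate(4, 155))    # complications managed without re-opening -> 2.6
```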

Keywords: acute appendicitis, computerised tomography, diagnostic accuracy, delayed primary closure

Procedia PDF Downloads 171
18939 Effects of Aircraft Wing Configuration on Aerodynamic Efficiency

Authors: Aderet Pantierer, Shmuel Pantierer, Atif Saeed, Amir Elzawawy

Abstract:

In recent years, air travel has seen rapid growth, making the maximization of efficiency and space utilization a major issue for aircraft manufacturers. Elongating the wingspan of an aircraft increases lift and, thereby, efficiency. However, extended wingspans complicate manufacturing and have led to airport congestion, requiring airport reconfiguration to accommodate them. This project outlines differing wing configurations of a commercial aircraft and their effects on the aerodynamic loads produced. Multiple wing configurations are analyzed using finite element models. These models are then validated by testing one wing configuration in a wind tunnel under laminar and turbulent flow conditions. The wing configurations to be tested include high- and low-wing aircraft, as well as various combinations of the two, including a unique model referred to hereafter as an infinity wing. The infinity wing configuration consists of both a high and a low wing, with the two wings connected by a vertical airfoil. This project seeks to determine whether a wing configuration consisting of multiple airfoils produces more lift than the standard wing configurations and can provide a solution to manufacturing limitations as well as airport congestion. If the analysis confirms the hypothesis, a trade study will be performed to determine if and when an arrangement of multiple wings would be cost-effective.

Keywords: aerodynamics, aircraft design, aircraft efficiency, wing configuration, wing design

Procedia PDF Downloads 272
18938 Unleashing the Potential of Green Finance in Architecture: A Promising Path for Balkan Countries

Authors: Luan Vardari, Dena Arapi Vardari

Abstract:

The Balkan countries, known for their diverse landscapes and cultural heritage, face the dual challenge of promoting economic growth while addressing pressing environmental concerns. In recent years, the concept of green finance has emerged as a powerful tool to achieve sustainable development and mitigate the environmental impact of various sectors, including architecture. This extended abstract explores the untapped potential of green finance in architecture within the Balkan region and highlights its role in driving sustainable construction practices and fostering a greener future. The abstract begins by defining green finance and emphasizing its relevance to the architectural sector in Balkan countries. It underlines the benefits of green finance, such as economic growth, environmental conservation, and social well-being, and stresses the importance of integrating green finance into architectural projects as a means to achieve sustainable development goals while preserving financial viability. It also delves into the current state of green building practices in the Balkan countries and identifies the need for financial support to drive further adoption. It explores the existing regulatory frameworks and policies that promote sustainable architecture and discusses how green finance can complement these initiatives. The unique challenges faced by Balkan countries are highlighted, along with the opportunities that green finance presents in overcoming them. We highlight successful sustainable architectural projects in the region to showcase the practical application of green finance in the Balkans. These projects exemplify the effective utilization of green finance mechanisms, resulting in tangible economic and environmental impacts, including job creation, energy efficiency, and reduced carbon emissions.
The abstract concludes by identifying replicable models and lessons learned from these projects that can serve as a blueprint for future sustainable architecture initiatives in the Balkans. The importance of collaboration and knowledge sharing among stakeholders is emphasized. Engaging architects, financial institutions, governments, and local communities is crucial to promoting green finance in architecture. The abstract suggests the establishment of knowledge exchange platforms and regional/international networks to foster collaboration and facilitate the sharing of expertise among Balkan countries.

Keywords: sustainable finance, renewable energy, Balkan region, investment opportunities, green infrastructure, ESG criteria, architecture

Procedia PDF Downloads 73
18937 Evaluation of Modern Natural Language Processing Techniques via Measuring a Company's Public Perception

Authors: Burak Oksuzoglu, Savas Yildirim, Ferhat Kutlu

Abstract:

Opinion mining (OM) is one of the natural language processing (NLP) problems concerned with determining the polarity of opinions, mostly represented on a positive-neutral-negative axis. The data for OM are usually collected from various social media platforms. In an era where social media has considerable influence over companies' futures, it is worth understanding social media and acting accordingly. OM comes to the fore here as the scale of the discussion about companies increases and it becomes unfeasible to gauge opinion at the individual level. Thus, companies opt to automate this process by applying machine learning (ML) approaches to their data. For the last two decades, OM, or sentiment analysis (SA), has mainly been performed by applying ML classification algorithms such as support vector machines (SVM) and Naïve Bayes to bag-of-n-gram representations of textual data. With the advent of deep learning and its apparent success in NLP, traditional methods have become obsolete. The transfer learning paradigm, long common in computer vision (CV), has lately begun to shape NLP approaches and language models (LM). This gave a sudden rise to the usage of pretrained language models (PTMs), which contain language representations obtained by training on large datasets with self-supervised learning objectives. PTMs are further fine-tuned on a specialized downstream task dataset to produce efficient models for various NLP tasks such as OM, Named-Entity Recognition (NER), Question Answering (QA), and so forth. In this study, traditional and modern NLP approaches have been evaluated for OM using a sizable corpus belonging to a large private company, containing about 76,000 comments in Turkish: an SVM with a bag of n-grams, and two chosen pre-trained models, the multilingual universal sentence encoder (MUSE) and bidirectional encoder representations from transformers (BERT). The MUSE model is a multilingual model supporting 16 languages, including Turkish, and is based on convolutional neural networks. BERT, in our case, is a monolingual, transformer-based model. It uses masked language modelling and next-sentence prediction objectives that allow bidirectional training of the transformers. During training, pre-processing operations such as morphological parsing, stemming, and spelling correction were not used, since experiments showed their contribution to model performance to be insignificant, even though Turkish is a highly agglutinative and inflective language. The results show that deep learning methods with pre-trained models and fine-tuning achieve about an 11% improvement over the SVM for OM. The BERT model achieved around 94% prediction accuracy, the MUSE model around 88%, and the SVM around 83%. The multilingual MUSE model outperforms the SVM but still performs worse than the monolingual BERT model.
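The traditional baseline described above, a classifier over a bag of word n-grams, can be sketched in a few lines. The study used an SVM on a large Turkish corpus; the toy example below substitutes a self-contained multinomial Naive Bayes over unigrams and bigrams purely to illustrate the representation, with made-up English comments standing in for the real data:

```python
import math
from collections import Counter

def ngrams(text, n=2):
    """Word unigrams plus n-grams (here bigrams) of a lowercased text."""
    toks = text.lower().split()
    return toks + [" ".join(toks[i:i + n]) for i in range(len(toks) - n + 1)]

class NaiveBayesOM:
    """Multinomial Naive Bayes over a bag of uni+bigrams, Laplace smoothing."""

    def fit(self, docs, labels):
        self.classes = sorted(set(labels))
        self.priors = {c: labels.count(c) / len(labels) for c in self.classes}
        self.counts = {c: Counter() for c in self.classes}
        for doc, lab in zip(docs, labels):
            self.counts[lab].update(ngrams(doc))
        self.vocab = set().union(*self.counts.values())
        return self

    def predict(self, doc):
        def logp(c):  # log P(class) + sum of smoothed log-likelihoods
            total = sum(self.counts[c].values()) + len(self.vocab)
            return math.log(self.priors[c]) + sum(
                math.log((self.counts[c][g] + 1) / total) for g in ngrams(doc))
        return max(self.classes, key=logp)
```

In practice the SVM variant would swap the probabilistic scoring for a max-margin classifier over the same n-gram feature vectors.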

Keywords: BERT, MUSE, opinion mining, pretrained language model, SVM, Turkish

Procedia PDF Downloads 151
18936 Fatigue Life Evaluation of Al6061/Al2O3 and Al6061/SiC Composites under Uniaxial and Multiaxial Loading Conditions

Authors: C. E. Sutton, A. Varvani-Farahani

Abstract:

Fatigue damage and life prediction of particle metal matrix composites (PMMCs) under uniaxial and multiaxial loading conditions were investigated. Three PMM composite materials, Al6061/Al2O3/20p-T6, Al6061/Al2O3/22p-T6 and Al6061/SiC/17w-T6, tested under tensile, torsional, and combined tension-torsion fatigue cycling, were evaluated with various fatigue damage models. The fatigue damage models of Smith-Watson-Topper (S.W.T.), Ellyin, Brown-Miller, Fatemi-Socie, and Varvani were compared for their capability to assess the fatigue damage of materials undergoing various loading conditions. Fatigue life prediction results were then evaluated by implementing material-dependent coefficients that factor the effects of the particle reinforcement into the earlier developed Varvani model. The critical plane-energy approach takes the critical plane as the plane of crack initiation and early crack growth. The strain energy density was calculated on the critical plane from the stress and strain components acting on that plane. This approach successfully evaluated fatigue damage versus fatigue life within a narrower band for both uniaxial and multiaxial loading conditions compared with the other damage approaches studied in this paper.
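The critical plane-energy idea can be illustrated with the Smith-Watson-Topper damage parameter named above. A minimal sketch that solves the standard SWT life equation, sigma_max * eps_a = (sf^2/E)(2N)^(2b) + sf*ef*(2N)^(b+c), for life by bisection; the strain-life coefficients used in the example are hypothetical, not the PMMC values from the study:

```python
def swt_rhs(two_n, E, sf, b, ef, c):
    """Right-hand side of the SWT life equation at 2N = two_n reversals."""
    return (sf ** 2 / E) * two_n ** (2 * b) + sf * ef * two_n ** (b + c)

def life_from_swt(damage, E, sf, b, ef, c):
    """Solve sigma_max*eps_a = swt_rhs(2N) for cycles N by bisection.
    The RHS is monotonically decreasing in 2N because b and c are negative."""
    lo, hi = 1.0, 1e9
    for _ in range(200):
        mid = (lo * hi) ** 0.5              # bisect in log space
        if swt_rhs(mid, E, sf, b, ef, c) > damage:
            lo = mid                        # damage lower than curve: longer life
        else:
            hi = mid
    return lo / 2.0                         # reversals -> cycles
```

A higher damage parameter should map to a shorter predicted life, which is the basic check in the test below.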

Keywords: fatigue damage, life prediction, critical plane approach, energy approach, PMM composites

Procedia PDF Downloads 404
18935 Study of the Phenomenon Nature of Order and Disorder in BaMn(Fe/V)F7 Fluoride Glass by the Hybrid Reverse Monte Carlo Method

Authors: Sidi Mohamed Mesli, Mohamed Habchi, Mohamed Kotbi, Rafik Benallal, Abdelali Derouiche

Abstract:

Fluoride glasses with a nominal composition of BaMnMF7 (M = Fe, V, assuming isomorphous replacement) have been structurally modelled through the simultaneous simulation of their neutron diffraction patterns by a reverse Monte Carlo (RMC) model and by the Rietveld method for disordered materials (RDM). The model is consistent with an expected network of interconnected [MF6] polyhedra. The RMC results, however, are accompanied by artificial satellite peaks. To remedy this problem, we use an extension of the RMC algorithm that introduces an energy penalty term into the acceptance criteria, called the Hybrid Reverse Monte Carlo (HRMC) method. The idea of this paper is to apply the HRMC method to the title glasses in order to study the nature of order and disorder by displaying and discussing the partial pair distribution functions (PDFs) g(r). We suggest that this method can be used to describe average correlations between components of fluoride glasses or similar systems.
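The HRMC modification described above amounts to adding an energy penalty to the usual Metropolis acceptance test. A minimal sketch, assuming a simple weighted sum of the chi-square change and a Lennard-Jones pair-energy change (the actual weighting and potential parameters for the fluoride glasses are not specified in the abstract):

```python
import math
import random

def hrmc_accept(d_chi2, d_energy, weight=1.0, T=1.0, rng=random):
    """Hybrid RMC Metropolis test: the usual chi^2 fit-to-data term is
    augmented with an energy penalty so that unphysical moves which
    improve the fit to the diffraction data can still be rejected."""
    cost = d_chi2 + weight * d_energy / T
    if cost <= 0:
        return True                     # move improves combined cost
    return rng.random() < math.exp(-cost)

def lj(r, eps=1.0, sigma=1.0):
    """12-6 Lennard-Jones pair energy used as the penalty term."""
    s6 = (sigma / r) ** 6
    return 4.0 * eps * (s6 * s6 - s6)
```

The Lennard-Jones minimum sits at r = 2^(1/6)*sigma with depth -eps, which gives an easy correctness check.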

Keywords: fluoride glasses, RMC simulation, neutron scattering, hybrid RMC simulation, Lennard-Jones potential, partial pair distribution functions

Procedia PDF Downloads 543
18934 Detection of Cyberattacks on the Metaverse Based on First-Order Logic

Authors: Sulaiman Al Amro

Abstract:

There are currently considerable challenges concerning data security and privacy, particularly in relation to modern technologies. This includes the virtual world known as the Metaverse, a virtual space that integrates various technologies and is therefore susceptible to cyber threats such as malware, phishing, and identity theft. This has led recent studies to propose the development of Metaverse forensic frameworks and the integration of advanced technologies, including machine learning for intrusion detection and security. In this context, the application of first-order logic offers a formal and systematic approach to defining the conditions of cyberattacks, thereby contributing to the development of effective detection mechanisms. In addition, formalizing the rules and patterns of cyber threats has the potential to enhance the overall security posture of the Metaverse and, thus, the integrity and safety of this virtual environment. The current paper focuses on the primary actions employed by avatars in potential attacks, using Interval Temporal Logic (ITL) and behavior-based detection to detect an avatar's abnormal activities within the Metaverse. The proposed framework attained an accuracy of 92.307%, and the experimental results demonstrate the efficacy of ITL, including its superior performance in addressing the threats posed by avatars within the Metaverse domain.
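Behavior-based detection of abnormal avatar activity can be illustrated with a toy rule set. The predicates below (authenticate-before-trade, and a cap on teleport actions within a sliding window) are hypothetical stand-ins for the paper's ITL formulas, chosen only to show the flavor of temporal-window checks over an avatar's event stream:

```python
def abnormal(events, window=5, teleport_limit=3):
    """Flag an avatar event stream as abnormal if either rule fires:
    (1) a 'trade' occurs before any 'authenticate' event;
    (2) more than `teleport_limit` 'teleport' events appear in any
        window of `window` consecutive events."""
    authed = False
    for i, e in enumerate(events):
        if e == "authenticate":
            authed = True
        if e == "trade" and not authed:
            return True
        recent = events[max(0, i - window + 1): i + 1]
        if recent.count("teleport") > teleport_limit:
            return True
    return False
```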

Keywords: security, privacy, metaverse, cyberattacks, detection, first-order logic

Procedia PDF Downloads 46
18933 Genotypic and Allelic Distribution of Polymorphic Variants of Gene SLC47A1 Leu125Phe (rs77474263) and Gly64Asp (rs77630697) and Their Association to the Clinical Response to Metformin in Adult Pakistani T2DM Patients

Authors: Sadaf Moeez, Madiha Khalid, Zoya Khalid, Sania Shaheen, Sumbul Khalid

Abstract:

Background: Inter-individual variation in response to metformin, which is considered a first-line therapy for T2DM, is considerable. The current study aimed to investigate the impact of two genetic variants in the gene SLC47A1, Leu125Phe (rs77474263) and Gly64Asp (rs77630697), on the clinical efficacy of metformin in Pakistani T2DM patients. Methods: The study included 800 T2DM patients (400 metformin responders and 400 metformin non-responders) along with 400 ethnically matched healthy individuals. Genotypes were determined by allele-specific polymerase chain reaction. In-silico analysis was done to confirm the effect of the two SNPs on the structure of the gene products. Association was statistically determined using SPSS software. Results: The minor allele frequencies for rs77474263 and rs77630697 were 0.13 and 0.12, respectively. For SLC47A1 rs77474263, heterozygous carriers of the mutant allele 'T' (CT) were fewer among metformin responders than among non-responders (29.2% vs. 35.5%). Likewise, efficacy was further reduced (7.2% vs. 4.0%) in homozygotes for the 'T' allele (TT). Remarkably, T2DM cases with two copies of the 'C' allele (CC) were 2.11 times more likely to respond to metformin monotherapy. For SLC47A1 rs77630697, heterozygous carriers of the mutant allele 'A' (GA) were fewer among metformin responders than among non-responders (33.5% vs. 43.0%). Likewise, efficacy was further reduced (8.5% vs. 4.5%) in homozygotes for the 'A' allele (AA). Remarkably, T2DM cases with two copies of the 'G' allele (GG) were 2.41 times more likely to respond to metformin monotherapy. In-silico analysis revealed that these two variants affect the structure and stability of their corresponding proteins. Conclusion: The present data suggest that the SLC47A1 Leu125Phe (rs77474263) and Gly64Asp (rs77630697) polymorphisms are associated with the therapeutic response to metformin in T2DM patients of Pakistan.
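The reported effect sizes (2.11 and 2.41 times higher response probability) are odds ratios from 2x2 genotype-by-response tables. A minimal sketch with the standard Woolf (log) 95% confidence interval; the counts used in the example are illustrative, not the study's data:

```python
import math

def odds_ratio(a, b, c, d):
    """2x2 table: a = responders with the favourable genotype,
    b = responders without it, c = non-responders with it,
    d = non-responders without it. Returns (OR, (lo95, hi95))."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # Woolf standard error of ln(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, (lo, hi)
```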

Keywords: diabetes, T2DM, SLC47A1, Pakistan, polymorphism

Procedia PDF Downloads 163
18932 Prediction of Pounding between Two SDOF Systems by Using Link Element Based On Mathematic Relations and Suggestion of New Equation for Impact Damping Ratio

Authors: Seyed M. Khatami, H. Naderpour, R. Vahdani, R. C. Barros

Abstract:

Many previous studies have calculated the impact force and the dissipated energy between two neighboring buildings that collide during seismic excitation. Numerical simulation is an important part of impact analysis, and several researchers have tried to simulate impact using different formulas. Estimation of the impact force and the dissipated energy depends significantly on several impact parameters: the masses of the bodies, the spring stiffness, the coefficient of restitution, the damping ratio of the dashpot, and the impact velocity are the known and unknown parameters used to simulate impact and to measure the energy dissipated during a collision. A collision is usually represented by a force-displacement hysteresis curve, whose enclosed area gives the energy dissipated during impact. In this paper, the effect of using different types of impact model on the calculated impact force is investigated. To increase the accuracy of the impact model and to optimize the simulation results, a new damping equation, called the "n-m" relation, is proposed and validated to obtain the best estimates of impact force and dissipated energy, demonstrating the accuracy of the suggested equation of motion in comparison with other formulas. Based on this mathematical relation, initial values are selected for the coefficients and the kinetic energy loss is calculated. After each simulation, the kinetic energy loss and the dissipated energy are compared; if they are equal, the selected parameters are correct, and if not, the parameters are modified and a new analysis is performed. Finally, two unknown parameters are suggested to estimate the impact force and calculate the dissipated energy.
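The consistency check described above, comparing the energy dissipated by the dashpot against the kinetic energy lost between approach and separation, can be sketched with a single mass striking a Kelvin-Voigt (spring-dashpot) contact element. Parameters are illustrative, not from the paper:

```python
def impact(m=1.0, k=1e4, c=10.0, v0=1.0, dt=1e-6):
    """Simulate one impact against a Kelvin-Voigt contact element.
    x is the penetration (contact while x >= 0); returns the dashpot-
    dissipated energy and the kinetic energy loss, which should agree
    since the elastic spring returns its stored energy on rebound."""
    x, v, dissipated = 0.0, v0, 0.0
    while x >= 0.0:
        f = -k * x - c * v            # Kelvin-Voigt contact force
        dissipated += c * v * v * dt  # dashpot power integrated over time
        v += f / m * dt               # semi-implicit Euler step
        x += v * dt
    ke_loss = 0.5 * m * (v0 ** 2 - v ** 2)
    return dissipated, ke_loss
```

With light damping (damping ratio 0.05 here) the two energy measures agree to within the integration error, which is the equality the authors iterate their parameters toward.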

Keywords: impact force, dissipated energy, kinetic energy loss, damping relation

Procedia PDF Downloads 555
18931 Effects of Wearable Garments on Postural Regulation in Community-Dwelling Elderly Adults

Authors: Mei Teng Woo, Keith Davids, Jarmo Liukkonen, Jia Yi Chow, Timo Jaakkola

Abstract:

Wearable garments such as tapes, compression garments, and braces can improve proprioception and reduce postural sway. The aim of this study was to examine the effects of wearable garments on postural regulation in a sample of community-dwelling elderly individuals aged 65 years. It was hypothesized that wearable garments such as socks would stimulate lower-leg mechanoreceptors and help participants achieve better postural regulation. Participants (N=63) performed a 30-s Romberg balance test protocol under four conditions (barefoot; wearing commercial socks; wearing clinical compression socks; wearing non-clinical compression socks), in a counterbalanced order, with four levels of performance difficulty: (1) standing on a stable surface with open eyes (SO); (2) a stable surface with closed eyes (SC); (3) a foam surface with open eyes (FO); and (4) a foam surface with closed eyes (FC). Centre of pressure (CoP) measurements included postural sway area (C90 area), trace length (TL) and sway velocity. Thirty-five participants (55.6%) showed positive effects of wearing the socks (the responder group). In the responder group, the socks produced significant differences in the SO, SC and FO conditions for two of the CoP measurements, TL and sway velocity (p < 0.05). In contrast, in the non-responder group, the barefoot condition significantly decreased TL and velocity in the SO condition. From the positive effects observed in the responder group, it is possible that wearable garments provide sensory cues that interact with a biological cueing system to enhance postural regulation. This study suggests that individuals respond to the sock treatments differently, and future research should examine the factors that benefited the responder group of participants.
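Two of the CoP measures reported, trace length (TL) and sway velocity, are straightforward to compute from sampled centre-of-pressure coordinates. A minimal sketch (the sampling interval and coordinate units are placeholders, not the study's equipment settings):

```python
import math

def trace_length(cop, dt):
    """cop: list of (x, y) centre-of-pressure samples taken every dt seconds.
    Returns the total path length (TL) and the mean sway velocity TL/duration."""
    tl = sum(math.dist(p, q) for p, q in zip(cop, cop[1:]))
    duration = dt * (len(cop) - 1)
    return tl, tl / duration
```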

Keywords: community-dwelling, elderly adults, postural regulation, wearable garments

Procedia PDF Downloads 339
18930 Trigonella foenum-graecum Seeds Extract as Therapeutic Candidate for Treatment of Alzheimer's Disease

Authors: Mai M. Farid, Ximeng Yang, Tomoharu Kuboyama, Yuna Inada, Chihiro Tohda

Abstract:

Introduction: Trigonella foenum-graecum (fenugreek), of the Fabaceae family, is a well-known plant traditionally used as food and medicine. Many pharmacological effects of Trigonella foenum-graecum seed extract (TF extract), such as anti-diabetic, anti-tumor and anti-dementia effects, have been evaluated using in vivo models. Regarding the anti-dementia effects of TF extract, diabetic rats, aluminum chloride-induced amnesia rats and scopolamine-injected mice were previously used for evaluation, but these are not well established as Alzheimer's disease models. In addition, in those previous studies, the active constituents of TF extract responsible for memory function were not identified. Method: This study aimed to clarify the effect of TF extract in an Alzheimer's disease model, the 5XFAD mouse, which overexpresses mutated APP and PS1 genes, and to determine the major active constituent reaching the brain after oral intake of TF extract. Results: Trigonelline was detected by LC-MS/MS in the cerebral cortex of 5XFAD mice 24 hours after oral administration of TF extract. Oral administration of TF extract for 17 days improved object location memory in 5XFAD mice. Conclusion: These results suggest that TF extract and its active constituent could be a promising therapeutic candidate for Alzheimer's disease.

Keywords: Alzheimer's disease, LC-MS/MS, memory recovery, Trigonella foenum-graecum Seeds, 5XFAD mice

Procedia PDF Downloads 152
18929 Analysis of the Internal Mechanical Conditions in the Lower Limb Due to External Loads

Authors: Kent Salomonsson, Xuefang Zhao, Sara Kallin

Abstract:

Human soft tissue is loaded and deformed by any activity, an effect known as the stress-strain relationship, often described by a load versus tissue-elongation curve. Several advances have been made in the fields of biology and mechanics of soft human tissue. However, there is limited information available on in vivo tissue mechanical characteristics and behavior, and reliable mechanical properties of human soft tissue cannot be extrapolated from, e.g., animal testing. Thus, there is a need for non-invasive methods to analyze the mechanical characteristics of soft human tissue. In the present study, the internal mechanical conditions of the lower limb subject to an external load are studied by use of the finite element method. A detailed finite element model of the lower limb is made possible by use of MRI scans. Skin, fat, bones, fascia and muscles are represented separately, and their material properties are obtained from the literature. Previous studies have largely addressed macroscopic deformation features, e.g. indentation depth. However, the level of detail in which the internal anatomical features have been modeled does not reveal the critical internal strains that may induce hypoxia and/or eventual tissue damage. The results of the present study reveal that lumped material models, i.e. models that average the material properties of the different constituents, fail to capture regions of critical strains, in contrast to more detailed models.

Keywords: FEM, tissue, indentation, properties

Procedia PDF Downloads 362
18928 Global Optimization Techniques for Optimal Placement of HF Antennas on a Shipboard

Authors: Mustafa Ural, Can Bayseferogulari

Abstract:

In this work, radio frequency (RF) coupling between two HF antennas on a shipboard platform is minimized by determining an optimal antenna placement. Unlike previous works, the coupling is minimized not only at a single frequency but over the whole frequency band of operation. Two global optimization techniques, genetic algorithm optimization (GAO) and particle swarm optimization (PSO), are used to determine the optimal antenna placement. Throughout this work, the outputs of the two optimization techniques are compared with each other in terms of antenna placements and coupling results. Finally, the far-field radiation pattern performances of the antennas at their optimal locations are analyzed in terms of directivity and coverage.
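Of the two techniques, PSO is simple enough to sketch end-to-end. The objective in the example (a smooth bowl over a 2-D deck area) is merely a stand-in for the broadband coupling figure of merit computed in the study:

```python
import random

def pso(f, bounds, n=20, iters=200, w=0.7, c1=1.5, c2=1.5, seed=1):
    """Minimize f over a box. bounds: list of (lo, hi) per dimension.
    Standard inertia-weight PSO with cognitive (c1) and social (c2) pulls."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]                 # personal bests
    pbest_f = [f(p) for p in pos]
    g = pbest[min(range(n), key=lambda i: pbest_f[i])][:]  # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (g[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < f(g):
                    g = pos[i][:]
    return g, f(g)
```

For the shipboard problem the decision variables would be the two antenna coordinates and f would evaluate the worst-case (or integrated) coupling over the HF band.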

Keywords: electromagnetic compatibility, antenna placement, optimization, genetic algorithm optimization, particle swarm optimization

Procedia PDF Downloads 245
18927 Active Thermography Technique for High-Entropy Alloy Characterization Deposited with Cold Spray Technique

Authors: Nazanin Sheibanian, Raffaella Sesana, Sedat Ozbilen

Abstract:

In recent years, high-entropy alloys (HEAs) have attracted considerable attention due to their unique properties and potential applications. In this study, novel HEA coatings were prepared on Mg substrates using mechanically alloyed HEA powder feedstocks based on the Al_(0.1-0.5)CoCrCuFeNi and MnCoCrCuFeNi multi-material systems. The coatings were deposited by the Cold Spray (CS) process at three different process gas (N2) temperatures (650°C, 750°C, and 850°C) to examine the effect of gas temperature on coating properties. Infrared thermography, a non-destructive technique, was examined as a possible quality-control method for HEA coatings applied to magnesium substrates: active thermography was employed to characterize coating properties through the thermal response of the coating. Various HEA chemical compositions and deposition temperatures were investigated. As part of this study, a comprehensive macro- and microstructural analysis of the CS HEA coatings was conducted using macrophotography, optical microscopy, scanning electron microscopy with energy-dispersive X-ray spectroscopy (SEM+EDS), X-ray diffraction (XRD), X-ray photoelectron spectroscopy (XPS), microhardness tests, roughness measurements, and porosity assessments. These analyses provided insight into phase identification, microstructure characterization, deposition and particle deformation behavior, and bonding mechanisms, and helped identify a possible relationship between physical properties and thermal response. The results show that the maximum relative radiance (∆RMax) of each sample differs depending on both the HEA chemical composition and the cold spray gas temperature.

Keywords: active thermography, coating, cold spray, high-entropy alloy, material characterization

Procedia PDF Downloads 76
18926 Design and Application of a Model Eliciting Activity with Civil Engineering Students on Binomial Distribution to Solve a Decision Problem Based on Samples Data Involving Aspects of Randomness and Proportionality

Authors: Martha E. Aguiar-Barrera, Humberto Gutierrez-Pulido, Veronica Vargas-Alejo

Abstract:

Identifying and modeling random phenomena is a fundamental cognitive process for understanding and transforming reality. Recognizing situations governed by chance, and giving them a scientific interpretation without being carried away by beliefs or intuitions, is basic training for citizens. Hence the importance of generating teaching-learning processes, supported by technology, that pay attention to model creation rather than only the execution of mathematical calculations. In order to develop students' knowledge of basic probability distributions and decision making, this work reports a model eliciting activity (MEA). The intention was to apply the Models and Modeling Perspective to design an activity related to civil engineering that would be understandable for students while involving them in its solution. Furthermore, the activity should involve a decision-making challenge based on sample data, and the use of the computer should be considered. The activity was designed following the six design principles for MEAs proposed by Lesh and collaborators: model construction, reality, self-evaluation, model documentation, shareable and reusable, and prototype. The application and refinement of the activity were carried out during three school cycles in the Probability and Statistics class for civil engineering students at the University of Guadalajara. The way in which the students sought to solve the activity was analyzed using audio and video recordings, as well as the students' individual and team reports. The information obtained was categorized according to the activity phase (individual or team) and the category of analysis (sample, linearity, probability, distributions, mechanization, and decision-making).
With the results obtained through the MEA, four obstacles to understanding and applying the binomial distribution were identified: first, the students' resistance to moving from the linear to the probabilistic model; second, the difficulty of visualizing (inferring) the behavior of the population through the sample data; third, viewing the sample as an isolated event rather than as part of a random process that must be seen in the context of a probability distribution; and fourth, the difficulty of decision-making with the support of probabilistic calculations. These obstacles have also been identified in the literature on the teaching of probability and statistics. Recognizing these concepts as obstacles to understanding probability distributions, and that they do not change after a single intervention, allows the interventions and the MEA to be modified so that students may themselves identify erroneous solutions while carrying out the MEA. The MEA also proved to be democratic, since several students who had participated little and obtained low grades in the first units improved their participation. Regarding the use of the computer, the RStudio software was useful in several tasks, such as plotting the probability distributions and exploring different sample sizes. In conclusion, with the models created to solve the MEA, the civil engineering students improved their probabilistic knowledge and their understanding of fundamental concepts such as sample, population, and probability distribution.
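The probabilistic model at the heart of the MEA is the binomial distribution, and the decision step reduces to tail probabilities computed from sample data. The activity used RStudio; the equivalent computation is sketched here in Python, with an illustrative defect-sampling question rather than the actual MEA data:

```python
from math import comb

def binom_tail(n, p, k):
    """P(X >= k) for X ~ Binomial(n, p): the chance of seeing at least
    k 'successes' in n independent trials with success probability p."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k, n + 1))

# Illustrative decision question: if 10% of concrete beams are defective,
# how likely is a sample of 10 to contain at least one defective beam?
print(round(binom_tail(10, 0.10, 1), 4))  # 0.6513
```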

Keywords: linear model, models and modeling, probability, randomness, sample

Procedia PDF Downloads 121
18925 Implementation of Cord- Blood Derived Stem Cells in the Regeneration of Two Experimental Models: Carbon Tetrachloride and S. Mansoni Induced Liver Fibrosis

Authors: Manal M. Kame, Zeinab A. Demerdash, Hanan G. El-Baz, Salwa M. Hassan, Faten M. Salah, Wafaa Mansour, Olfat Hammam

Abstract:

Cord blood (CB)-derived unrestricted somatic stem cells (USSCs), with their multipotentiality, hold great promise in liver regeneration. This work aims to evaluate the therapeutic potential of USSCs in two experimental models of chronic liver injury, induced either by S. mansoni infection in Balb/c mice or by CCl4 injection in hamsters. Isolation, propagation, and characterization of USSCs from CB samples were performed, and the USSCs were induced to differentiate into osteoblasts, adipocytes and hepatocyte-like cells. Cells of the third passage were transplanted in two models of liver fibrosis. (1) CCl4 hamster model: twenty hamsters were induced to liver fibrosis by repeated i.p. injection of 100 μl CCl4 per hamster for 8 weeks. This model was designed as follows: 10 hamsters with liver fibrosis treated with i.h. injection of 3×10⁶ USSCs (USSC-transplanted group), 10 hamsters with liver fibrosis (pathological control group), and 10 hamsters with healthy livers (normal control group). (2) Chronic murine S. mansoni model: twenty mice were infected with S. mansoni cercariae (60 cercariae per mouse) using the tail immersion method and left for 12 weeks. In this model, 10 mice with liver fibrosis were transplanted with an i.v. injection of 1×10⁶ USSCs (USSC-transplanted group); the other two groups were designed as in the hamster model. Animals were sacrificed 12 weeks after USSC transplantation, and their liver sections were examined for human hepatocyte-like cells by immunohistochemical staining. Moreover, liver sections were examined for fibrosis level, and fibrotic indices were calculated. Sera of the sacrificed animals were tested for liver functions. CB USSCs, with fibroblast-like morphology, expressed high levels of CD44, CD90, CD73 and CD105 and were negative for CD34, CD45, and HLA-DR. USSCs showed high expression of transcripts for Oct4 and Sox2 and were differentiated in vitro into osteoblasts and adipocytes.
In both animal models, the in vitro-induced hepatocyte-like cells were confirmed by cytoplasmic expression of glycogen, alpha-fetoprotein, and cytokeratin 18. Livers of the USSC-transplanted groups showed engraftment with human hepatocyte-like cells, as proved by cytoplasmic expression of human alpha-fetoprotein, cytokeratin 18, and OV6, and showed less fibrosis than the pathological control group. Liver functions, in the form of serum AST and ALT levels and serum total bilirubin, were significantly lower in the USSC-transplanted group than in the pathological control group (p < 0.001). Moreover, the fibrotic index was significantly lower (p < 0.001) in the USSC-transplanted group than in the pathological control group. In addition, liver sections of the mice given an i.v. injection of 1×10⁶ USSCs, stained with either H&E or Sirius red, showed diminished granuloma size and a relative decrease in hepatic fibrosis. Our experimental liver fibrosis models transplanted with CB-USSCs showed liver engraftment with human hepatocyte-like cells as well as signs of liver regeneration, in the form of improvement in liver function assays and fibrosis level. These data support the introduction of human CB-derived USSCs as multipotent stem cells with great potential in regenerative medicine and strengthen the concept of cellular therapy for the treatment of liver fibrosis.

Keywords: cord blood, liver fibrosis, stem cells, transplantation

Procedia PDF Downloads 311
18924 Structural Testing and the Finite Element Modelling of Anchors Loaded Against Partially Confined Surfaces

Authors: Ali Karrech, Alberto Puccini, Ben Galvin, Davide Galli

Abstract:

This paper summarises the laboratory tests, numerical models, and statistical approach developed to investigate the behaviour of concrete blocks loaded in shear through metallic anchors. The research bridges a gap in the state of the art and practice concerning anchors loaded against partially confined concrete surfaces. Eight concrete blocks (420 mm x 500 mm x 1000 mm) with 150 mm and/or 250 mm deep anchors were tested. The stainless-steel anchors, of diameter 16 mm, were bonded with HIT-RE 500 V4 injection epoxy resin and subjected to shear loading against partially supported edges. In addition, finite element models were constructed to validate the laboratory tests and explore the influence of key parameters such as anchor depth, anchor distance from the edge, and compressive strength on the stability of the block. Once validated experimentally, the numerical results were used to populate, develop, and interpret a systematic parametric study based on the Design of Experiments approach, through the Box-Behnken design and Response Surface Methodology. An empirical model has been derived from this approach, which predicts the load capacity with the desired confidence intervals.
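The Box-Behnken design mentioned above has a simple combinatorial structure: each pair of factors is run at the four (+/-1, +/-1) corner combinations while the remaining factors are held at the centre, plus replicate centre points. A minimal sketch in coded units (three centre points assumed here as a common default; the paper does not state its run count):

```python
from itertools import combinations

def box_behnken(k, centers=3):
    """Coded-unit Box-Behnken design for k factors: for every pair of
    factors, run the four (+/-1, +/-1) combinations with all other
    factors at 0, then append `centers` centre-point runs."""
    runs = []
    for i, j in combinations(range(k), 2):
        for a in (-1, 1):
            for b in (-1, 1):
                row = [0] * k
                row[i], row[j] = a, b
                runs.append(row)
    runs += [[0] * k for _ in range(centers)]
    return runs
```

For three factors (e.g. anchor depth, edge distance, compressive strength) this gives the classic 12 edge runs plus centre points, to which a quadratic response surface for load capacity can then be fitted.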

Keywords: finite element modelling, design of experiment, response surface methodology, Box-Behnken design, empirical model, interval of confidence, load capacity

Procedia PDF Downloads 30