Search results for: mode choice models

1654 Detecting Anomalous Matches: An Empirical Study from National Basketball Association

Authors: Jacky Liu, Dulani Jayasuriya, Ryan Elmore

Abstract:

Match fixing and anomalous sports events have increasingly threatened the integrity of professional sports, prompting concerns about existing detection methods. This study addresses the limitations of prior research on match-fixing detection, improving the identification of potentially fraudulent matches by incorporating advanced anomaly detection techniques. We develop a novel method to identify anomalous matches and player performances by examining series of matches, such as playoffs. Additionally, we investigate bettors' potential profits when avoiding anomalous matches and explore the factors behind unusual player performances. Our literature review covers match-fixing detection, match outcome forecasting models, and anomaly detection methods, underscoring current limitations and proposing a new sports anomaly detection method. Our findings reveal anomalous series in the 2022 NBA playoffs, with the Phoenix Suns vs Dallas Mavericks series having the lowest natural occurrence probability. We identify abnormal player performances and find that bettors' profits decrease significantly when post-season matches are included. This study contributes by developing a new approach to detect anomalous matches and player performances and by assisting investigators in identifying responsible parties. While we cannot conclusively establish the reasons behind unusual player performances, our findings suggest factors such as team financial difficulties, executive mismanagement, and individual player contract issues.
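
The idea of a series' "natural occurrence probability" can be illustrated with a deliberately simple model, sketched below in Python: assume the eventual series winner takes each game independently with a fixed probability. The 65% per-game probability and the independence assumption are illustrative only and are not the forecasting model used in the study.

```python
from math import comb

def series_score_probability(p_win, winner_games=4, loser_games=2):
    """Probability that a best-of-7 series ends with the given score, assuming
    the eventual winner takes each game independently with probability p_win."""
    total = winner_games + loser_games
    # the series winner must win the final game; the loser's wins are spread
    # over the first (total - 1) games
    return comb(total - 1, loser_games) * p_win**winner_games * (1 - p_win)**loser_games

# Example: natural-occurrence probability of a 4-2 result for a 65% per-game favourite
print(round(series_score_probability(0.65, 4, 2), 4))   # ~0.2187
```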

Keywords: anomaly match detection, match fixing, match outcome forecasting, problematic players identification

Procedia PDF Downloads 72
1653 Nonlinear Finite Element Analysis of Optimally Designed Steel Angelina™ Beams

Authors: Ferhat Erdal, Osman Tunca, Serkan Tas, Serdar Carbas

Abstract:

Web-expanded steel beams provide an easy and economical solution for systems with longer structural members. The main goal of manufacturing these beams is to increase the moment of inertia and section modulus, which results in greater strength and rigidity. Until the advent of sinusoidal web-expanded beams, there were two common types of open web-expanded beams: beams with hexagonal openings, also called castellated beams, and beams with circular openings, referred to as cellular beams. In the present research, the optimum design of this new generation of beams, namely sinusoidal web-expanded beams, is carried out, and the design results are compared with castellated and cellular beam solutions. Thanks to a reduced fabrication process and substantial material savings, the web-expanded beam with sinusoidal holes (Angelina™ Beam) meets the economic requirements of steel design problems while ensuring optimum safety. The objective of this research is to carry out non-linear finite element analysis (FEA) of the web-expanded beam with sinusoidal holes. The FE method has been used to predict the entire response of these beams to increasing values of external loading until they lose their load-carrying capacity. An FE model of each specimen used in the experimental studies is built. These models are used to simulate the experimental work, to verify the test results, and to investigate the non-linear behavior of failure modes such as web-post buckling, shear buckling and Vierendeel bending of the beams.
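
The kind of response tracing described above, increasing the load until the load-carrying capacity is lost, can be illustrated with a minimal incremental Newton-Raphson sketch on a single softening spring. This is an illustration of the solution strategy only; the softening law and all numbers are invented for the example and do not reproduce the ABAQUS shell models of the study.

```python
import math

def tangent_stiffness(u, k0=10.0, u_ref=0.5):
    """Tangent stiffness of a hypothetical softening spring (illustrative only)."""
    return k0 * math.exp(-u / u_ref)

def internal_force(u, k0=10.0, u_ref=0.5):
    """Resisting force: the integral of the tangent stiffness from 0 to u."""
    return k0 * u_ref * (1.0 - math.exp(-u / u_ref))

u, load, d_load = 0.0, 0.0, 0.3
for step in range(60):
    load += d_load                        # next load increment
    converged = False
    for _ in range(200):                  # Newton-Raphson iterations at this load level
        residual = load - internal_force(u)
        if abs(residual) < 1e-9:
            converged = True
            break
        kt = tangent_stiffness(u)
        if kt < 1e-9:                     # tangent stiffness exhausted
            break
        u += residual / kt
    if not converged:                     # equilibrium no longer attainable
        print(f"load-carrying capacity reached near {internal_force(u):.3f}")
        break
    print(f"load {load:.2f}  displacement {u:.4f}")
```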

Keywords: steel structures, web-expanded beams, angelina beam, optimum design, failure modes, finite element analysis

Procedia PDF Downloads 271
1652 Flood Modeling in Urban Area Using a Well-Balanced Discontinuous Galerkin Scheme on Unstructured Triangular Grids

Authors: Rabih Ghostine, Craig Kapfer, Viswanathan Kannan, Ibrahim Hoteit

Abstract:

Urban flooding resulting from a sudden release of water due to a dam-break or excessive rainfall is a serious environmental hazard that causes loss of human life and large economic losses. Anticipating floods before they occur could minimize human and economic losses through the implementation of appropriate protection, provision, and rescue plans. This work reports on the numerical modelling of flash flood propagation in urban areas after an excessive rainfall event or dam-break. A two-dimensional (2D) depth-averaged shallow water model is used with a refined unstructured grid of triangles for representing the urban area topography. The 2D shallow water equations are solved using a second-order well-balanced discontinuous Galerkin scheme. A theoretical test case and three flood events are described to demonstrate the potential benefits of the scheme: (i) wetting and drying in a parabolic basin; (ii) a flash flood over a physical model of the urbanized Toce River valley in Italy; (iii) wave propagation in the Reyran river valley following the Malpasset dam-break in 1959 (France); and (iv) the dam-break flood of October 1982 at the town of Sumacarcel (Spain). The capability of the scheme is also verified against alternative models. Computational results compare well with recorded data and show that the scheme is at least as efficient as comparable second-order finite volume schemes, with notable efficiency speedup due to parallelization.
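
For reference, the 2D depth-averaged shallow water equations solved by such a scheme can be written in the standard conservative form below; the exact source-term treatment used in the paper is not specified here, so the bed-slope and friction terms are shown in their usual generic form.

\[
\frac{\partial \mathbf{U}}{\partial t} + \frac{\partial \mathbf{F}(\mathbf{U})}{\partial x} + \frac{\partial \mathbf{G}(\mathbf{U})}{\partial y} = \mathbf{S}(\mathbf{U}),
\qquad
\mathbf{U} = \begin{pmatrix} h \\ hu \\ hv \end{pmatrix},\;
\mathbf{F} = \begin{pmatrix} hu \\ hu^2 + \tfrac{1}{2} g h^2 \\ huv \end{pmatrix},\;
\mathbf{G} = \begin{pmatrix} hv \\ huv \\ hv^2 + \tfrac{1}{2} g h^2 \end{pmatrix},\;
\mathbf{S} = \begin{pmatrix} 0 \\ -gh\,\partial_x z_b - \tau_{bx} \\ -gh\,\partial_y z_b - \tau_{by} \end{pmatrix}.
\]

Here \(h\) is the water depth, \((u, v)\) the depth-averaged velocities, \(g\) the gravitational acceleration, \(z_b\) the bed elevation and \(\tau_{bx}, \tau_{by}\) the friction terms; a scheme is called well-balanced when it preserves the lake-at-rest equilibrium \(h + z_b = \text{const}\), \(u = v = 0\) at the discrete level.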

Keywords: dam-break, discontinuous Galerkin scheme, flood modeling, shallow water equations

Procedia PDF Downloads 165
1651 Case Study on Innovative Aquatic-Based Bioeconomy for Chlorella sorokiniana

Authors: Iryna Atamaniuk, Hannah Boysen, Nils Wieczorek, Natalia Politaeva, Iuliia Bazarnova, Kerstin Kuchta

Abstract:

Over the last decade, owing to climate change and a strategy of natural resource preservation, interest in aquatic biomass has increased dramatically. Along with mitigating environmental pressure and connecting waste streams (including CO2 and heat emissions), a microalgae bioeconomy can supply food, feed, and the pharmaceutical and power industries with a number of value-added products. Furthermore, in comparison to conventional biomass, microalgae can be cultivated under a wide range of conditions without compromising food and feed production, thus addressing issues associated with negative social and environmental impacts. This paper presents the state-of-the-art technology for a microalgae bioeconomy, from the cultivation process to the production of valuable components and by-streams. The microalga Chlorella sorokiniana was cultivated in a pilot-scale innovation concept in Hamburg (Germany) using different systems, such as a raceway pond (5,000 L) and flat panel reactors (8 x 180 L). In order to achieve optimum growth conditions along with a cellular composition suitable for the subsequent extraction of value-added components, process parameters such as light intensity, temperature and pH are continuously monitored. Metabolic nutrient needs were met by adding micro- and macro-nutrients to the medium to ensure autotrophic growth conditions for the microalgae. Cultivation was followed by downstream processing and extraction of lipids, proteins and saccharides. Lipid extraction is conducted in a repeated-batch, semi-automatic mode using the hot extraction method according to Randall. Hexane and ethanol are used as solvents at ratios of 9:1 and 1:9, respectively. Depending on the cell disruption method and the solvent ratio, the total lipid content showed significant variation, between 8.1% and 13.9%. The highest percentage of extracted biomass was reached with a sample pretreated by microwave digestion using 90% hexane and 10% ethanol as solvents. The protein content of the microalgae was determined by two different methods, namely Total Kjeldahl Nitrogen (TKN), which was further converted to protein content, and the Bradford method using Brilliant Blue G-250 dye. The obtained results showed a good correlation between both methods, with the protein content being in the range of 39.8–47.1%. Characterization of neutral and acid saccharides from the microalgae was conducted by the phenol-sulfuric acid method at two wavelengths, 480 nm and 490 nm. The average concentrations of neutral and acid saccharides under the optimal cultivation conditions were 19.5% and 26.1%, respectively. Subsequently, the biomass residues are used as substrate for anaerobic digestion at the laboratory scale. The methane concentration, which was measured on a daily basis, showed some variation for the different samples after the extraction steps but was in the range between 48% and 55%. The CO2 formed during the fermentation process and after combustion in the Combined Heat and Power unit can potentially be used within the cultivation process as a carbon source for the photoautotrophic synthesis of biomass.

Keywords: bioeconomy, lipids, microalgae, proteins, saccharides

Procedia PDF Downloads 236
1650 Ontology-Driven Knowledge Discovery and Validation from Admission Databases: A Structural Causal Model Approach for Polytechnic Education in Nigeria

Authors: Bernard Igoche Igoche, Olumuyiwa Matthew, Peter Bednar, Alexander Gegov

Abstract:

This study presents an ontology-driven approach for knowledge discovery and validation from admission databases in Nigerian polytechnic institutions. The research aims to address the challenges of extracting meaningful insights from vast amounts of admission data and utilizing them for decision-making and process improvement. The proposed methodology combines the knowledge discovery in databases (KDD) process with a structural causal model (SCM) ontological framework. The admission database of Benue State Polytechnic Ugbokolo (Benpoly) is used as a case study. The KDD process is employed to mine and distill knowledge from the database, while the SCM ontology is designed to identify and validate the important features of the admission process. The SCM validation is performed using the conditional independence test (CIT) criteria, and an algorithm is developed to implement the validation process. The identified features are then used for machine learning (ML) modeling and prediction of admission status. The results demonstrate the adequacy of the SCM ontological framework in representing the admission process and the high predictive accuracies achieved by the ML models, with k-nearest neighbors (KNN) and support vector machine (SVM) achieving 92% accuracy. The study concludes that the proposed ontology-driven approach contributes to the advancement of educational data mining and provides a foundation for future research in this domain.
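
The kind of conditional independence test (CIT) used to validate edges of a structural causal model can be sketched with a partial-correlation Fisher z-test, as below. The admission-style variable names are hypothetical and the Gaussian/linear assumptions behind this particular test are illustrative; the paper's exact CIT criteria and validation algorithm are not reproduced.

```python
import numpy as np
from math import sqrt, log, erf

def partial_corr(x, y, z):
    """Correlation of x and y after regressing out the conditioning set z (2D array)."""
    z1 = np.column_stack([np.ones(len(x)), z])
    rx = x - z1 @ np.linalg.lstsq(z1, x, rcond=None)[0]
    ry = y - z1 @ np.linalg.lstsq(z1, y, rcond=None)[0]
    return np.corrcoef(rx, ry)[0, 1]

def ci_test(x, y, z, alpha=0.05):
    """Fisher z-test for X independent of Y given Z; True if independence is NOT rejected."""
    r = partial_corr(x, y, z)
    n, k = len(x), z.shape[1]
    z_stat = 0.5 * log((1 + r) / (1 - r)) * sqrt(n - k - 3)
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z_stat) / sqrt(2))))
    return p_value > alpha

# Hypothetical admission features: entry score, age, and an admission-status indicator
rng = np.random.default_rng(0)
score = rng.normal(size=500)
age = rng.normal(size=500)
status = 0.8 * score + rng.normal(scale=0.5, size=500)
print(ci_test(age, status, np.column_stack([score])))   # age independent of status given score
```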

Keywords: admission databases, educational data mining, machine learning, ontology-driven knowledge discovery, polytechnic education, structural causal model

Procedia PDF Downloads 49
1649 Short-Term Forecast of Wind Turbine Production with Machine Learning Methods: Direct Approach and Indirect Approach

Authors: Mamadou Dione, Eric Matzner-lober, Philippe Alexandre

Abstract:

The Energy Transition Act defined by the French State has precise implications for renewable energies, in particular for their remuneration mechanism. Until now, a purchase obligation contract has permitted the sale of wind-generated electricity at a fixed rate. Tomorrow, it will be necessary to sell this electricity on the market (at variable rates) before obtaining additional compensation intended to reduce the risk. This sale on the market requires announcing in advance (about 48 hours ahead) the production that will be delivered to the network, and hence being able to predict this production in the short term. The fundamental problem remains the variability of the wind, accentuated by the geographical situation. The objective of the project is to provide, every day, short-term forecasts (48-hour horizon) of wind production using weather data. The predictions of the GFS model and those of the ECMWF model are used as explanatory variables. The variable to be predicted is the production of a wind farm. We follow two approaches: a direct approach that predicts wind generation directly from weather data, and an indirect approach that estimates the wind from weather data and converts it into wind power through power curves. We used machine learning techniques to predict this production. The models tested are random forests, CART + Bagging, CART + Boosting, and SVM (Support Vector Machine). The application is made on a wind farm of 22 MW (11 wind turbines) of the Compagnie du Vent (now Engie Green France). Our results are very conclusive compared to the literature.
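
A minimal sketch of the direct approach, mapping numerical-weather-prediction features straight to farm output with a random forest, is given below. The synthetic wind-speed/direction features and the crude power curve stand in for the GFS/ECMWF forecasts and the 22 MW farm data, which are not available here.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Synthetic stand-ins for NWP forecasts (wind speed, direction) and farm output (MW)
rng = np.random.default_rng(1)
wind_speed = rng.uniform(0, 25, 2000)
wind_dir = rng.uniform(0, 360, 2000)
X = np.column_stack([wind_speed, wind_dir])
# crude power curve: cubic below rated speed, capped at the 22 MW farm capacity
power = np.clip(0.02 * wind_speed**3, 0, 22) + rng.normal(0, 0.5, 2000)

X_tr, X_te, y_tr, y_te = train_test_split(X, power, test_size=0.25, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("MAE (MW):", round(mean_absolute_error(y_te, model.predict(X_te)), 2))
```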

Keywords: forecast aggregation, machine learning, spatio-temporal dynamics modeling, wind power forecast

Procedia PDF Downloads 206
1648 Fiber Stiffness Detection of GFRP Using Combined ABAQUS and Genetic Algorithms

Authors: Gyu-Dong Kim, Wuk-Jae Yoo, Sang-Youl Lee

Abstract:

Composite structures offer numerous advantages over conventional structural systems in the form of higher specific stiffness and strength, lower life-cycle costs, and benefits such as easy installation and improved safety. Recently, there has been a considerable increase in the use of composites in engineering applications and as wraps for seismic upgrading and repairs. However, these composites deteriorate with time because of outdated materials, excessive use, repetitive loading, climatic conditions, manufacturing errors, and deficiencies in inspection methods. In particular, damaged fibers in a composite result in significant degradation of structural performance. In order to reduce the failure probability of composites in service, techniques to assess the condition of the composites to prevent continual growth of fiber damage are required. Condition assessment technology and nondestructive evaluation (NDE) techniques have provided various solutions for the safety of structures by means of detecting damage or defects from static or dynamic responses induced by external loading. A variety of techniques based on detecting the changes in static or dynamic behavior of isotropic structures has been developed in the last two decades. These methods, based on analytical approaches, are limited in their capabilities in dealing with complex systems, primarily because of their limitations in handling different loading and boundary conditions. Recently, investigators have introduced direct search methods based on metaheuristics techniques and artificial intelligence, such as genetic algorithms (GA), simulated annealing (SA) methods, and neural networks (NN), and have promisingly applied these methods to the field of structural identification. Among them, GAs attract our attention because they do not require a considerable amount of data in advance in dealing with complex problems and can make a global solution search possible as opposed to classical gradient-based optimization techniques. In this study, we propose an alternative damage-detection technique that can determine the degraded stiffness distribution of vibrating laminated composites made of Glass Fiber-reinforced Polymer (GFRP). The proposed method uses a modified form of the bivariate Gaussian distribution function to detect degraded stiffness characteristics. In addition, this study presents a method to detect the fiber property variation of laminated composite plates from the micromechanical point of view. The finite element model is used to study free vibrations of laminated composite plates for fiber stiffness degradation. In order to solve the inverse problem using the combined method, this study uses only first mode shapes in a structure for the measured frequency data. In particular, this study focuses on the effect of the interaction among various parameters, such as fiber angles, layup sequences, and damage distributions, on fiber-stiffness damage detection.
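
The inverse-problem step, using a genetic algorithm to recover a stiffness-degradation parameter that reproduces a measured first natural frequency, can be sketched as follows. The closed-form "frequency" function is a toy surrogate for the laminated-plate finite element model, and the GA operators and population sizes are illustrative choices, not the study's settings.

```python
import random

def first_frequency(damage):
    """Toy surrogate for the FE model: first natural frequency of a plate whose
    stiffness is reduced by the factor (1 - damage)."""
    return 50.0 * (1.0 - damage) ** 0.5        # frequency ~ sqrt(stiffness)

measured = first_frequency(0.30)               # pretend a 30% degradation was measured

def fitness(damage):
    return -abs(first_frequency(damage) - measured)

random.seed(0)
pop = [random.random() for _ in range(30)]
for gen in range(60):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]                                     # elitist selection
    children = []
    while len(children) < 20:
        a, b = random.sample(parents, 2)
        child = 0.5 * (a + b)                              # arithmetic crossover
        child = min(max(child + random.gauss(0, 0.02), 0.0), 0.99)   # mutation
        children.append(child)
    pop = parents + children
print("estimated degradation:", round(max(pop, key=fitness), 3))
```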

Keywords: stiffness detection, fiber damage, genetic algorithm, layup sequences

Procedia PDF Downloads 259
1647 A Multi-Dimensional Neural Network Using the Fisher Transform to Predict the Price Evolution for Algorithmic Trading in Financial Markets

Authors: Cristian Pauna

Abstract:

Trading the financial markets is a widespread activity today. A large number of investors, companies, and public or private funds buy and sell every day in order to make a profit. Algorithmic trading has become the prevalent method for making trading decisions since the advent of electronic trading. Orders are sent almost instantly by computers using mathematical models. This paper presents a price prediction methodology based on a multi-dimensional neural network. Using the Fisher transform, the neural network is trained in a low-latency, auto-adaptive process in order to predict the price evolution for the next period of time. The model is designed especially for algorithmic trading and uses real-time price series. It was found that the characteristics of the Fisher function applied at the node scale level can generate reliable trading signals using the neural network methodology. After real-time tests, it was found that this method can be applied in any timeframe to trade the financial markets. The paper also includes the steps to implement the presented methodology in an automated trading system. Real trading results are displayed and analyzed in order to qualify the model. In conclusion, the compared results reveal that the neural network methodology applied together with the Fisher transform at the node level can generate a good price prediction and build reliable trading signals for algorithmic trading.
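
The Fisher transform itself is F(x) = 0.5 ln((1 + x)/(1 - x)), which maps a series scaled into (-1, 1) to an approximately Gaussian one. A minimal sketch is shown below; the rolling-window normalisation is an assumed pre-processing step, not necessarily the paper's exact pipeline.

```python
import numpy as np

def fisher_transform(prices, window=10):
    """Scale each price into (-1, 1) over a rolling window, then apply
    F(x) = 0.5 * ln((1 + x) / (1 - x))."""
    prices = np.asarray(prices, dtype=float)
    out = np.full(prices.shape, np.nan)
    for i in range(window - 1, len(prices)):
        lo = prices[i - window + 1 : i + 1].min()
        hi = prices[i - window + 1 : i + 1].max()
        x = 0.0 if hi == lo else 2 * (prices[i] - lo) / (hi - lo) - 1
        x = np.clip(x, -0.999, 0.999)          # keep the log argument finite
        out[i] = 0.5 * np.log((1 + x) / (1 - x))
    return out

print(fisher_transform([1, 2, 3, 5, 4, 6, 7, 6, 8, 9, 10, 9], window=5))
```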

Keywords: algorithmic trading, automated trading systems, financial markets, high-frequency trading, neural network

Procedia PDF Downloads 145
1646 Fractal Nature of Granular Mixtures of Different Concretes Formulated with Different Methods of Formulation

Authors: Fatima Achouri, Kaddour Chouicha, Abdelwahab Khatir

Abstract:

It is clear that quality concrete must be made with selected materials chosen in optimum proportions so that, after placement, a minimum of voids remains in the material produced. The different formulation methods in use are based, for the most part, on a granular curve that describes an 'optimal granularity'. Many authors have engaged in fundamental research on granular arrangements. Comparisons of mathematical models reproducing these granular arrangements with experimental measurements of compactness verify that the minimum porosity P over the granular extent follows a power law. The best compactness in a finite medium is thus obtained with power laws, such as those of Furnas, Fuller or Talbot, each preferring a particular exponent between 0.20 and 0.50. These considerations converge on the assumption that the optimal granularity of Caquot is approximated by a power law. By analogy, it can then be analyzed as a fractal-type granular structure, since the internal-similarity properties that characterize fractal objects are also reflected by a power law. Optimized mixtures may be described as a series of granular classes successively filling the available volume in a regular hierarchical distribution, which would give the mix, by cascading effects, the same structure at different scales. This model is likely appropriate for the entire extent of the size distribution of the components, from correctly deflocculated cement particles (and silica fume) of micrometric dimensions to chippings of sometimes several tens of millimetres. As part of this research, the aim is to illustrate the application of fractal analysis to the characterization of optimized granular concrete mixtures through a so-called fractal dimension: different concretes formulated with different methods were studied, and a fractal structure of their granular mixtures was demonstrated regardless of the formulation method or the type of concrete.
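
The power-law grading curves mentioned above (Fuller/Talbot type) have the cumulative form P(d) = 100 (d/D_max)^n, and the usual fractal reading associates the packing with a fractal dimension D_f = 3 - n. The sketch below illustrates this link on an ideal grading; the exponent and sieve sizes are illustrative, and the D_f = 3 - n relation is the common interpretation rather than a value quoted from the paper.

```python
import numpy as np

def fuller_passing(d, d_max, n=0.5):
    """Cumulative percentage passing a sieve of size d for a power-law grading."""
    return 100.0 * (np.asarray(d, dtype=float) / d_max) ** n

sieves = np.array([0.125, 0.25, 0.5, 1, 2, 4, 8, 16])      # sieve sizes in mm
passing = fuller_passing(sieves, d_max=16, n=0.5)

# Recover the exponent from the grading curve and read off a fractal dimension
n_fit = np.polyfit(np.log(sieves), np.log(passing / 100.0), 1)[0]
print("exponent n ~", round(n_fit, 3), " fractal dimension ~", round(3 - n_fit, 3))
```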

Keywords: concrete formulation, fractal character, granular packing, method of formulation

Procedia PDF Downloads 244
1645 The Intonation of Romanian Greetings: A Sociolinguistics Approach

Authors: Anca-Diana Bibiri, Mihaela Mocanu, Adrian Turculeț

Abstract:

In a language, the inventory of greetings is dynamic, with frequent inputs and outputs, although this is hardly noticed by the speakers. In this register, there are a number of constant, conservative elements that survive different language models (among them, the classic formulae: bună ziua! (good afternoon!), bună seara! (good evening!), noapte bună! (good night!), la revedere! (goodbye!)) and a number of items that fail to pass the test of time, according to language use at a given moment (ciao!, pa!, bai!). The source of innovation depends both on internal factors (contraction, conversion, combination of classic greeting formulae) and on external ones (borrowings and calques). The use of some formulae at once imposes their frequency and eliminates the use of others. This paper presents a sociolinguistic approach to contemporary Romanian greetings, based on prosodic surveys in two research projects: AMPRom and SoRoEs. The Romanian language presents a rich inventory of questions (especially partial interrogative questions/WH-Q) which are used as greetings, alone or, more commonly, accompanying a proper greeting. The representative example of the typical formulae is Ce mai faci? (How are you?), which, unlike its English counterpart How do you do?, has not become a stereotype, but retains an obvious emotional impact, while serving as a sociolinguistic group marker. The analyzed corpus consists of structures containing greetings recorded in the main Romanian cultural (urban) centers. From the methodological point of view, the acoustic analysis of the recorded data is performed using software tools (GoldWave, Praat), identifying intonation patterns related to three sociolinguistic variables: age, sex and level of education. The intonation patterns of the analyzed statements are at the interface between partial questions and typical greetings.

Keywords: acoustic analysis, greetings, Romanian language, sociolinguistics

Procedia PDF Downloads 327
1644 Dynamic Programming Based Algorithm for the Unit Commitment of the Transmission-Constrained Multi-Site Combined Heat and Power System

Authors: A. Rong, P. B. Luh, R. Lahdelma

Abstract:

High penetration of intermittent renewable energy sources (RES), such as solar power and wind power, into the energy system has caused temporal and spatial imbalances between electric power supply and demand in some countries and regions. This brings about the critical need to coordinate power production and power exchange between different regions. As compared with power-only systems, combined heat and power (CHP) systems can provide additional flexibility for utilizing RES by exploiting the interdependence of power and heat production in the CHP plant. In a CHP system, power production can be influenced by adjusting the heat production level, and electric power can be used to satisfy heat demand via an electric boiler or heat pump in conjunction with heat storage, which is much cheaper than electric storage. This paper addresses multi-site CHP systems without considering RES, which lays the foundation for handling the penetration of RES. The problem under study is the unit commitment (UC) of transmission-constrained multi-site CHP systems. We solve the problem by combining linear relaxation of ON/OFF states with sequential dynamic programming (DP) techniques, where relaxed states are used to reduce the dimension of the UC problem and DP is used to improve the solution quality. Numerical results for daily scheduling with realistic models and data show that the DP-based algorithm is from a few to a few hundred times faster than CPLEX (standard commercial optimization software) with good solution accuracy (less than 1% relative gap from the optimal solution on average).
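
The sequential DP idea can be illustrated on a single unit with ON/OFF states, a start-up cost and an unserved-energy penalty, as in the sketch below. All numbers are invented, and the transmission constraints, heat-power coupling and multi-site structure of the actual model are deliberately left out.

```python
# Minimal DP over hourly ON/OFF states of one unit (illustration only)
demand = [30, 50, 80, 60]                 # MW in each hour
capacity, run_cost = 100, 20.0            # MW, cost per MWh while running
start_cost, penalty = 500.0, 1000.0       # start-up cost, unserved-energy penalty per MWh

def stage_cost(state, load):
    served = min(load, capacity) if state == 1 else 0
    return run_cost * served + penalty * (load - served)

INF = float("inf")
best = {0: 0.0, 1: INF}                   # the unit starts OFF
for load in demand:
    new_best = {0: INF, 1: INF}
    for prev, prev_cost in best.items():
        for state in (0, 1):
            transition = start_cost if (prev == 0 and state == 1) else 0.0
            total = prev_cost + transition + stage_cost(state, load)
            new_best[state] = min(new_best[state], total)
    best = new_best
print("minimum total cost:", min(best.values()))   # 500 + 20*(30+50+80+60) = 4900
```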

Keywords: dynamic programming, multi-site combined heat and power system, relaxed states, transmission-constrained generation unit commitment

Procedia PDF Downloads 357
1643 Influence of Hearing Aids on Non-Medically Treatable Deafness

Authors: Niragira Donatien

Abstract:

The progress of technology creates new expectations for patients, and the world of deafness is no exception. In recent years, there have been considerable advances in technologies aimed at assisting failing hearing. According to the usual medical vocabulary, hearing aids are actually orthotics: they do not replace an organ but compensate for a functional impairment. Hearing amplification is useful for a large number of people with hearing loss. Hearing aids restore speech audibility; however, their benefits vary depending on the quality of residual hearing. The hearing aid is not a "cure" for deafness, and it cannot correct all affected hearing abilities; it should be considered an aid to communication. In deciding who the best candidates for hearing aids are, the urge to judge from the audiogram alone should be resisted, as audiometry only indicates the ability to detect non-verbal sounds. To prevent hearing aids from ending up in the drawer, it is important to ensure that the patient's disability situations justify the use of this type of orthosis. During pre-fitting counselling, which is crucial, the person with hearing loss must be informed of the advantages and disadvantages of amplification in his or her case. Their expectations must be realistic, and they need to be aware that the adaptation process requires a good deal of patience and perseverance. They should also be informed about the various models and types of hearing aids, including all the aesthetic, functional, and financial considerations. If the person's motivation "survives" pre-fitting counselling, we are in the presence of a good candidate for amplification. Beyond this, hearing aids raise other questions: should one or both ears be fitted? In short, the results found in this study show that hearing aids significantly improve the quality of audibility in patients, which is why this technology must be made accessible everywhere in the world and why we want to progress with it.

Keywords: audiology, influence, hearing, medically, treatable

Procedia PDF Downloads 44
1642 Political Deprivations, Political Risk and the Extent of Skilled Labor Migration from Pakistan: Finding of a Time-Series Analysis

Authors: Syed Toqueer Akhter, Hussain Hamid

Abstract:

Over the last few decades, an upward trend has been observed in labor migration from Pakistan. The emigrants are not just economically motivated; they are also in search of a safe living environment in more developed countries in Europe, North America and the Middle East. The opportunity cost of migration comes in the form of brain drain, that is, the loss of qualified and skilled human capital. Throughout the history of Pakistan, situations of political instability have emerged, ranging from violations of political rights and political disappearances to political assassinations. Providing security to citizens is a major issue in Pakistan due to the increase in crime and terrorist activities. The aim of the study is to test the impact of political instability, appearing in the form of political terror, violation of political rights and restricted civil liberty, on the skilled migration of labor. Three proxies are used to measure political instability: the political terror scale (a scale of 1-5 capturing the political terror and violence that a country encounters in a particular year), political rights (a rating of 1-7 describing the ability of the people to participate without restraint in the political process) and civil liberty (a rating of 1-7, where civil liberty is defined as freedom of expression and rights without government intervention). Using time series data from 1980-2011, distributed lag models were used for estimation, because migration is not a one-time process: previous events and earlier migration can lead to more migration. Our research clearly shows that political instability, appearing in the form of political terror, political rights and civil liberty, is significant in explaining the extent of skilled migration from Pakistan.
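
A finite distributed-lag regression of the kind estimated here can be sketched on synthetic data as below; the two-lag structure, the political-terror proxy and all coefficients are illustrative assumptions, not the study's series or results.

```python
import numpy as np

rng = np.random.default_rng(0)
T, lags = 32, 2                                   # yearly observations, 2 lags (assumption)
terror = rng.uniform(1, 5, T)                     # political terror scale proxy
migration = 10 + 3 * terror                       # contemporaneous effect
for k in range(1, lags + 1):
    migration[k:] += 1.5 * terror[:-k]            # lagged effects
migration += rng.normal(0, 1, T)

# Build the regressor matrix [1, x_t, x_{t-1}, x_{t-2}] and fit by ordinary least squares
rows = np.arange(lags, T)
X = np.column_stack([np.ones(T - lags)] + [terror[rows - k] for k in range(lags + 1)])
beta, *_ = np.linalg.lstsq(X, migration[lags:], rcond=None)
print("intercept and lag coefficients:", np.round(beta, 2))
```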

Keywords: skilled labor migration, political terror, political rights, civil liberty, distributed lag model

Procedia PDF Downloads 1013
1641 Memory Based Reinforcement Learning with Transformers for Long Horizon Timescales and Continuous Action Spaces

Authors: Shweta Singh, Sudaman Katti

Abstract:

The most well-known sequence models make use of complex recurrent neural networks in an encoder-decoder configuration. The model used in this research makes use of a transformer, which is based purely on a self-attention mechanism, without relying on recurrence at all. More specifically, encoders and decoders which make use of self-attention and operate on a memory are used. In this research work, results for various 3D visual and non-visual reinforcement learning tasks designed in the Unity software were obtained. Convolutional neural networks, more specifically the Nature CNN architecture, are used for input processing in visual tasks, and a comparison with the standard long short-term memory (LSTM) architecture is performed both for visual tasks based on CNNs and for non-visual tasks based on coordinate inputs. This research work combines the transformer architecture with the proximal policy optimization technique popularly used in reinforcement learning for stability and better policy updates during training, especially for the continuous action spaces used in this work. Certain tasks in this paper are long-horizon tasks that carry on for a longer duration and require extensive use of memory-based functionalities, such as the storage of experiences and choosing appropriate actions based on recall. The transformer, which makes use of memory and a self-attention mechanism in an encoder-decoder configuration, proved to have better performance than the LSTM in terms of exploration and rewards achieved. Such memory-based architectures can be used extensively in the fields of cognitive robotics and reinforcement learning.
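
At the core of the transformer used here is scaled dot-product self-attention, Attention(Q, K, V) = softmax(QK^T / sqrt(d_k)) V. A minimal numpy sketch is given below; the memory size and embeddings are placeholders, and the full memory-augmented encoder-decoder and the PPO training loop are not reproduced.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))   # stable softmax
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# A "memory" of 6 past observation embeddings attended to by 2 query steps
rng = np.random.default_rng(0)
memory = rng.normal(size=(6, 8))
queries = rng.normal(size=(2, 8))
print(scaled_dot_product_attention(queries, memory, memory).shape)   # (2, 8)
```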

Keywords: convolutional neural networks, reinforcement learning, self-attention, transformers, unity

Procedia PDF Downloads 119
1640 Disturbed Cellular Iron Metabolism Genes in Neurodevelopmental Disorders is Different from Neurodegenerative Disorders

Authors: O. H. Gebril, N. A. Meguid

Abstract:

Background: Iron has recently been a focus of interest as a main exacerbating factor for oxidative stress in the central nervous system, and a link to various neurological disorders is suspected. Many studies using various techniques have shown evidence of disturbed iron-related proteins in the cell in human and animal models of neurodegenerative disorders. Linkage to significant pathological changes, e.g. apoptosis and cell signaling, has also been evidenced. On the other hand, the role of iron in neurodevelopmental disorders is still unclear. With the increasing prevalence of autism worldwide, changes in iron parameters and iron stores have been documented in many studies. This study examines the haemochromatosis HFE gene polymorphisms (p.H63D and p.C282Y) and the ferroportin gene (SLC40A1) Q248H polymorphism in children with autism and controls. Materials and Methods: Whole-genome DNA was extracted; p.H63D and p.C282Y genotyping was performed using specific sequence amplification followed by restriction enzyme digestion on a sample of autism patients (25 cases) and twenty controls. Results: The p.H63D variant was seen more often than p.C282Y in both the autism and control samples, and no significant association between the p.H63D or p.C282Y polymorphisms and autism was revealed. No association with the Q248H polymorphism was evidenced either. Conclusion: The study results do not support a role for cellular iron gene polymorphisms as risk factors for neurodevelopmental disorders and, in turn, highlight the specificity of cellular iron-related pathways in neurodegeneration. These results call for further gene expression studies to elucidate the main pathophysiological pathways that are disturbed in autism and other neurodevelopmental disorders.

Keywords: iron, neurodevelopmental, oxidative stress, haemochromatosis, ferroportin, genes

Procedia PDF Downloads 353
1639 Development of Energy Benchmarks Using Mandatory Energy and Emissions Reporting Data: Ontario Post-Secondary Residences

Authors: C. Xavier Mendieta, J. J McArthur

Abstract:

Governments are playing an increasingly active role in reducing carbon emissions, and a key strategy has been the introduction of mandatory energy disclosure policies. These policies have resulted in a significant amount of publicly available data, providing researchers with a unique opportunity to develop location-specific energy and carbon emission benchmarks from this data set, which can then be used to develop building archetypes and to inform urban energy models. This study presents the development of such a benchmark using the public reporting data. The data from Ontario's Ministry of Energy for post-secondary educational institutions are used to develop a series of building-archetype dynamic loads and energy benchmarks to fill a gap in the currently available building database. This paper presents the development of a benchmark for college and university residences within ASHRAE climate zone 6 areas in Ontario using the mandatory disclosure energy and greenhouse gas emissions data. The methodology presented includes data cleaning, statistical analysis, and benchmark development, and lessons learned from this investigation are presented and discussed to inform the development of future energy benchmarks from this larger data set. The key findings from this initial benchmarking study are: (1) the importance of careful data screening and outlier identification to develop a valid dataset; (2) the key features used to develop a model of the data are building age, size, and occupancy schedules, and these can be used to estimate energy consumption; and (3) policy changes affecting primary energy generation significantly affected greenhouse gas emissions, and consideration of these factors was critical to evaluating the validity of the reported data.
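
Finding (1), screening outliers before fitting a benchmark on age and size, can be sketched as below. The IQR rule, the synthetic energy-use-intensity data and the linear model are illustrative stand-ins for the Ministry of Energy dataset and the archetype models actually developed.

```python
import numpy as np

rng = np.random.default_rng(2)
floor_area = rng.uniform(2_000, 30_000, 120)          # m2
age = rng.uniform(5, 60, 120)                         # years
eui = 250 + 1.5 * age - 0.002 * floor_area + rng.normal(0, 20, 120)   # kWh/m2/yr
eui[:3] = [2000, -50, 1500]                           # injected reporting errors

# (1) screen outliers with the interquartile-range rule
q1, q3 = np.percentile(eui, [25, 75])
keep = (eui > q1 - 1.5 * (q3 - q1)) & (eui < q3 + 1.5 * (q3 - q1))

# (2) fit a simple benchmark model: EUI ~ age + floor area
X = np.column_stack([np.ones(keep.sum()), age[keep], floor_area[keep]])
coef, *_ = np.linalg.lstsq(X, eui[keep], rcond=None)
print("records kept:", int(keep.sum()), " benchmark coefficients:", np.round(coef, 3))
```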

Keywords: building archetypes, data analysis, energy benchmarks, GHG emissions

Procedia PDF Downloads 293
1638 New Photosensitizers Encapsulated within Arene-Ruthenium Complexes Active in Photodynamic Therapy: Intracellular Signaling and Evaluation in Colorectal Cancer Models

Authors: Suzan Ghaddar, Aline Pinon, Manuel Gallardo-villagran, Mona Diab-assaf, Bruno Therrien, Bertrand Liagre

Abstract:

Colorectal cancer (CRC) is the third most common cancer and exhibits a consistently rising incidence worldwide. Despite notable advancements in CRC treatment, frequent side effects and the development of therapy resistance persistently challenge current approaches. Consequently, innovations in focal therapies remain imperative to enhance patients' overall quality of life. Photodynamic therapy (PDT) emerges as a promising treatment modality, clinically used for the treatment of various cancer types. It relies on the use of photosensitive molecules called photosensitizers (PS), which are photoactivated after accumulation in cancer cells, to induce the production of reactive oxygen species (ROS) that cause cancer cell death. Among commonly used metal-based drugs in cancer therapy, ruthenium (Ru) possesses favorable attributes that demonstrate its selectivity towards cancer cells and render it suitable for anti-cancer drug design. In vitro studies using distinct arene-Ru complexes encapsulating porphin PS are conducted on the human HCT116 and HT-29 colorectal cancer cell lines. These studies encompass the evaluation of the antiproliferative effect, ROS production, apoptosis, cell cycle progression, molecular localization, and protein expression. Preliminary results indicated that these complexes exert significant photocytotoxicity on the studied colorectal cancer cell lines, making them promising candidates as anti-cancer agents.

Keywords: colorectal cancer, photodynamic therapy, photosensitizers, arene-ruthenium complexes, apoptosis

Procedia PDF Downloads 79
1637 Automated Detection of Targets and Retrieve the Corresponding Analytics Using Augmented Reality

Authors: Suvarna Kumar Gogula, Sandhya Devi Gogula, P. Chanakya

Abstract:

Augmented reality is defined as the overlay of digital or computer-generated information, such as images, audio, video, and 3D models, onto the real-time environment. Augmented reality can be thought of as a blend between the completely synthetic and the completely real. Augmented reality provides scope in a wide range of industries, such as manufacturing, retail, gaming, advertisement, and tourism, and brings out new dimensions in the modern digital world. As it overlays content blended with the real world, it enhances the users' knowledge. In this application, we integrated augmented reality with data analytics and with the cloud, so that the virtual content is generated on the basis of the data present in the database; we used marker-based augmented reality, where every marker is stored in the database with a corresponding unique ID. This application can be used in a wide range of industries for different business processes, but in this paper, we mainly focus on the marketing industry, helping customers gain knowledge about the products on the market, in particular their prices, customer feedback, quality, and other benefits. This application also focuses on providing better market-strategy information for marketing managers, who obtain data about stock, sales, customer response to the product, etc. In this paper, we also include reports on the feedback received from different people after the demonstration and, finally, we present the future scope of augmented reality in different business processes through integration with new technologies such as the cloud, big data, and artificial intelligence.

Keywords: augmented reality, data analytics, catch room, marketing and sales

Procedia PDF Downloads 223
1636 Hexane Extract of Thymus serpyllum L.: GC-MS Profile, Antioxidant Potential and Anticancer Impact on HepG2 (Liver Carcinoma) Cell Line

Authors: Salma Baig, Bakrudeen Ali Ahmad, Ainnul Hamidah Syahadah Azizan, Hapipah Mohd Ali, Elham Rouhollahi, Mahmood Ameen Abdulla

Abstract:

Free radical damage induced by reactive oxygen species (ROS) contributes to the etiology of many chronic diseases, cancer being one of them. Recent studies have been successful with ROS-targeted therapies via antioxidants using mouse models in cancer therapeutics. The present study was designed to scrutinize the anticancer and antioxidant activities of five different extracts of Thymus serpyllum in MDA-MB-231, MCF-7, HepG2, HCT-116, PC3, and A549 cell lines. Identification of the phytochemicals present in the most active extract of Thymus serpyllum was conducted using gas chromatography coupled with mass spectrometry, and antioxidant activity was measured using the DPPH radical scavenging and FRAP assays. The anticancer impact of the extract, in terms of IC50, was evaluated using the MTT cell viability assay. Results revealed that the hexane extract showed the best anticancer activity in HepG2 (liver carcinoma cell line) with an IC50 value of 23 ± 0.14 µg/ml, followed by 25 µg/ml in HCT-116 (colon cancer cell line), 30 µg/ml in MCF-7 (breast cancer cell line), 35 µg/ml in MDA-MB-231 (breast cancer cell line), 57 µg/ml in PC3 (prostate cancer cell line) and 60 µg/ml in A549 (lung carcinoma cell line). The GC-MS profile of the hexane extract showed the presence of 31 compounds, with carvacrol, thymol and thymoquinone being the major compounds. Phenolics such as vitamin E, as well as terpinen-4-ol, borneol and phytol, were also identified. Hence, here we present the first report on the cytotoxicity of the hexane extract of Thymus serpyllum in the HepG2 cell line, with robust anticancer activity and an IC50 of 23 ± 0.14 µg/ml.

Keywords: Thymus serpyllum L., hexane extract, GC-MS profile, antioxidant activity, anticancer activity, HepG2 cell line

Procedia PDF Downloads 501
1635 The Implementation of Educational Partnerships for Undergraduate Students at Yogyakarta State University

Authors: Broto Seno

Abstract:

This study aims to describe and examine in greater depth the implementation of educational partnerships for undergraduate students at Yogyakarta State University (YSU), focusing on educational partnerships abroad. The study used a descriptive qualitative approach. The study subjects consisted of a vice-rector, two educational partnership staff members, four vice-deans, nine undergraduate students and three foreign students. Data were collected through interviews and document review, and the validity of the data sources was tested using triangulation. Data were analyzed using the Miles and Huberman flow model, namely data reduction, data display, and conclusion drawing. The results of this study showed that the implementation of educational partnerships abroad for undergraduate students at YSU meets six of the nine indicators of successful strategic partnerships. The six indicators are: long-term orientation, strategic character, mutual trust, sustainable competitive advantages, mutual benefit for all the partners, and a separate and positive impact. The indicators not yet achieved are cooperative development, success, and world-class/best practice. These results were obtained from the discussion of the four problem formulations, namely: 1) The implementation and development of educational partnerships abroad has been running well enough, but is not yet maximized. 2) The benefits of implementing educational partnerships abroad are providing learning experiences for students, providing comparative experience for each faculty, and improving the network of educational partnerships for YSU on its way toward becoming a world-class university. 3) The sustainability of educational partnerships abroad is pursued through a development strategy of improved partnership management. 4) The supporting factors for educational partnerships abroad are the support of YSU, YSU's partners and society; the inhibiting factor is management that is not yet running optimally.

Keywords: partnership, education, YSU, institutions and faculties

Procedia PDF Downloads 321
1634 Evaluation of Microstructure, Mechanical and Abrasive Wear Response of in situ TiC Particles Reinforced Zinc Aluminum Matrix Alloy Composites

Authors: Mohammad M. Khan, Pankaj Agarwal

Abstract:

The present investigation deals with the microstructures, mechanical properties and detailed wear characteristics of in situ TiC particle reinforced zinc aluminum-based metal matrix composites. The composites have been synthesized by the liquid metallurgy route using the vortex technique. The composite was found to be harder than the matrix alloy due to the high hardness of the dispersoid particles therein. The composite was also lower in ultimate tensile strength and ductility compared to the matrix alloy, which could be attributed to the use of a coarser dispersoid and larger interparticle spacing. A reasonably uniform distribution of the dispersoid phase in the alloy matrix and good interfacial bonding between the dispersoid and the matrix were observed. The composite exhibited a predominantly brittle mode of fracture with microcracking in the dispersoid phase, indicating effective transfer of load from the matrix to the dispersoid particles. To study the wear behavior of the samples, three different types of tests were performed, namely: (i) sliding wear tests using a pin-on-disc machine under dry conditions, (ii) high-stress (two-body) abrasive wear tests using different combinations of abrasive media and specimen surfaces under varying abrasive size, traversal distance and load, and (iii) low-stress (three-body) abrasion tests using a rubber wheel abrasion tester at various loads and traversal distances using different abrasive media. In the sliding wear tests, significantly lower wear rates were observed for the base alloy than for the composites. This has been attributed to the poor room-temperature strength resulting from the increased microcracking tendency of the composite over the matrix alloy. Wear surfaces of the composite revealed the presence of fragmented dispersoid particles and microcracking, whereas the wear surface of the matrix alloy was observed to be smooth with shallow grooves. During high-stress abrasion, the presence of the reinforcement offered increased resistance to the destructive action of the abrasive particles, although the microcracking tendency was also enhanced because of the reinforcement in the matrix. The negative effect of the microcracking tendency was outweighed by the abrasion resistance of the dispersoid; as a result, the composite attained better wear resistance than the matrix alloy. The wear rate increased with load and abrasive size due to the larger depth of cut made by the abrasive medium. The wear surfaces revealed fine grooves and damaged reinforcement particles, while the subsurface regions revealed limited plastic deformation and microcracking and fracturing of the dispersoid phase. During low-stress abrasion, the composite experienced a significantly lower wear rate than the matrix alloy irrespective of the test conditions. This could be explained by the wear resistance offered by the hard dispersoid phase, which protects the softer matrix against the destructive action of the abrasive medium. Abraded surfaces of the composite showed protrusion of the dispersoid phase. The subsurface regions of the composites exhibited decohesion of the dispersoid phase along with its microcracking and limited plastic deformation in the vicinity of the abraded surfaces.

Keywords: abrasive wear, liquid metallurgy, metal matrix composite, SEM

Procedia PDF Downloads 143
1633 Customer Churn Prediction by Using Four Machine Learning Algorithms Integrating Features Selection and Normalization in the Telecom Sector

Authors: Alanoud Moraya Aldalan, Abdulaziz Almaleh

Abstract:

A crucial component of maintaining a customer-oriented business, as in the telecom industry, is understanding the reasons and factors that lead to customer churn. Competition between telecom companies has greatly increased in recent years, and it has become more important to understand customers' needs in this strong market, especially the needs of those who are looking to change their service providers. Churn prediction is therefore now a mandatory requirement for retaining those customers, and machine learning can be utilized to accomplish this. Churn prediction has become a very important topic for machine learning classification in the telecommunications industry. Understanding the factors of customer churn and how customers behave is very important for building an effective churn prediction model. This paper aims to predict churn and identify the factors behind customers' churn based on their past service usage history. Toward this objective, the study makes use of feature selection, normalization, and feature engineering. The study then compared the performance of four different machine learning algorithms on the Orange dataset: logistic regression, random forest, decision tree, and gradient boosting. Performance was evaluated using the F1 score and ROC-AUC. Comparing the results of this study with existing models shows that it produces better results. The results showed that gradient boosting with the feature selection technique outperformed the others in this study, achieving a 99% F1-score and 99% AUC, and all other experiments achieved good results as well.
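
A minimal sketch of the reported pipeline, normalisation, feature selection, gradient boosting, and F1/AUC evaluation, is given below using scikit-learn on a synthetic imbalanced dataset; the Orange dataset itself and the tuned settings of the study are not reproduced.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score, roc_auc_score

# Synthetic, imbalanced stand-in for a churn dataset
X, y = make_classification(n_samples=3000, n_features=20, n_informative=6,
                           weights=[0.8, 0.2], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

pipe = Pipeline([("scale", StandardScaler()),
                 ("select", SelectKBest(f_classif, k=10)),
                 ("gbm", GradientBoostingClassifier(random_state=0))]).fit(X_tr, y_tr)

pred = pipe.predict(X_te)
proba = pipe.predict_proba(X_te)[:, 1]
print("F1:", round(f1_score(y_te, pred), 3), " AUC:", round(roc_auc_score(y_te, proba), 3))
```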

Keywords: machine learning, gradient boosting, logistic regression, churn, random forest, decision tree, ROC, AUC, F1-score

Procedia PDF Downloads 125
1632 Permeability Prediction Based on Hydraulic Flow Unit Identification and Artificial Neural Networks

Authors: Emad A. Mohammed

Abstract:

The concept of hydraulic flow units (HFU) has been used for decades in the petroleum industry to improve the prediction of permeability. This concept is strongly related to the flow zone indicator (FZI), which is a function of the reservoir rock quality index (RQI). Both indices are based on the porosity and permeability of reservoir core samples. It is assumed that core samples with similar FZI values belong to the same HFU. Thus, after dividing the porosity-permeability data based on the HFU, transformations can be applied in order to estimate the permeability from the porosity. The conventional practice is to use the power-law transformation with conventional HFUs, where the percentage of error is considerably high. In this paper, a neural network technique is employed as a soft-computing transformation method to predict permeability instead of the power-law method, in order to avoid this higher percentage of error. This technique is based on HFU identification, where the method of Amaefule et al. (1993) is utilized. In this regard, the Kozeny-Carman (K-C) model and the modified K-C model by Hasan and Hossain (2011) are employed. A comparison is made between the two transformation techniques for the two porosity-permeability models. Results show that the modified K-C model helps in getting better results, with a lower percentage of error in predicting permeability. The results also show that the use of artificial intelligence techniques gives more accurate predictions than the power-law method. This study was conducted on a heterogeneous, complex carbonate reservoir in Oman. Data were collected from seven wells to obtain the permeability correlations for the whole field. The findings of this study will help in getting better estimations of the permeability of a complex reservoir.
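
The HFU quantities referred to above are conventionally defined as RQI = 0.0314 sqrt(k/phi) with k in mD, phi_z = phi/(1 - phi) and FZI = RQI/phi_z, with samples of similar FZI grouped into one unit. The sketch below computes these and bins log(FZI) into units; the core values and bin edges are illustrative, and the neural-network regression step is not reproduced.

```python
import numpy as np

def flow_zone_indicator(perm_md, phi):
    """RQI = 0.0314*sqrt(k/phi) with k in mD, phi_z = phi/(1-phi), FZI = RQI/phi_z."""
    perm_md, phi = np.asarray(perm_md, float), np.asarray(phi, float)
    rqi = 0.0314 * np.sqrt(perm_md / phi)
    return rqi / (phi / (1.0 - phi))

# Group core samples into hydraulic flow units by binning log10(FZI)
perm = np.array([0.5, 2.0, 15.0, 120.0, 300.0])      # permeability, mD
phi = np.array([0.08, 0.12, 0.18, 0.22, 0.25])       # porosity, fraction
fzi = flow_zone_indicator(perm, phi)
hfu = np.digitize(np.log10(fzi), bins=[-0.5, 0.0, 0.5])
print("FZI:", np.round(fzi, 2), " HFU labels:", hfu)
```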

Keywords: permeability, hydraulic flow units, artificial intelligence, correlation

Procedia PDF Downloads 123
1631 A Pilot Study on Integration of Simulation in the Nursing Educational Program: Hybrid Simulation

Authors: Vesile Unver, Tulay Basak, Hatice Ayhan, Ilknur Cinar, Emine Iyigun, Nuran Tosun

Abstract:

The aim of this study is to analyze the effects of the hybrid simulation. In this simulation, types standardized patients and task trainers are employed simultaneously. For instance, in order to teach the IV activities standardized patients and IV arm models are used. The study was designed as a quasi-experimental research. Before the implementation an ethical permission was taken from the local ethical commission and administrative permission was granted from the nursing school. The universe of the study included second-grade nursing students (n=77). The participants were selected through simple random sample technique and total of 39 nursing students were included. The views of the participants were collected through a feedback form with 12 items. The form was developed by the authors and “Patient intervention self-confidence/competence scale”. Participants reported advantages of the hybrid simulation practice. Such advantages include the following: developing connections between the simulated scenario and real life situations in clinical conditions; recognition of the need for learning more about clinical practice. They all stated that the implementation was very useful for them. They also added three major gains; improvement of critical thinking skills (94.7%) and the skill of making decisions (97.3%); and feeling as if a nurse (92.1%). In regard to the mean scores of the participants in the patient intervention self-confidence/competence scale, it was found that the total mean score for the scale was 75.23±7.76. The findings obtained in the study suggest that the hybrid simulation has positive effects on the integration of theoretical and practical activities before clinical activities for the nursing students.

Keywords: hybrid simulation, clinical practice, nursing education, nursing students

Procedia PDF Downloads 274
1630 Statistical Model of Water Quality in Estero El Macho, Machala-El Oro

Authors: Rafael Zhindon Almeida

Abstract:

Surface water quality is an important concern for the evaluation and prediction of water quality conditions. The objective of this study is to develop a statistical model that can accurately predict the water quality of the El Macho estuary in the city of Machala, El Oro province. The methodology employed in this study is of a basic type that involves a thorough search for theoretical foundations to improve the understanding of statistical modeling for water quality analysis. The research design is correlational, using a multivariate statistical model involving multiple linear regression and principal component analysis. The results indicate that water quality parameters such as fecal coliforms, biochemical oxygen demand, chemical oxygen demand, iron and dissolved oxygen exceed the allowable limits. The water of the El Macho estuary is determined to be below the required water quality criteria. The multiple linear regression model, based on chemical oxygen demand and total dissolved solids, explains 99.9% of the variance of the dependent variable. In addition, principal component analysis shows that the model has an explanatory power of 86.242%. The study successfully developed a statistical model to evaluate the water quality of the El Macho estuary. The estuary did not meet the water quality criteria, with several parameters exceeding the allowable limits. The multiple linear regression model and principal component analysis provide valuable information on the relationship between the various water quality parameters. The findings of the study emphasize the need for immediate action to improve the water quality of the El Macho estuary to ensure the preservation and protection of this valuable natural resource.
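
The two statistical steps described, a multiple linear regression of a quality measure on chemical oxygen demand and total dissolved solids, and a principal component analysis of the parameter matrix, can be sketched as below with scikit-learn; the synthetic values only mimic the structure of the monitoring data and are not the El Macho measurements.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
cod = rng.uniform(20, 300, 60)                      # chemical oxygen demand, mg/L
tds = rng.uniform(100, 1500, 60)                    # total dissolved solids, mg/L
do_ = rng.uniform(1, 8, 60)                         # dissolved oxygen, mg/L
quality_index = 0.2 * cod + 0.05 * tds + rng.normal(0, 2, 60)   # synthetic response

X = np.column_stack([cod, tds])
reg = LinearRegression().fit(X, quality_index)
print("R^2 of MLR on COD + TDS:", round(reg.score(X, quality_index), 3))

params = StandardScaler().fit_transform(np.column_stack([cod, tds, do_, quality_index]))
pca = PCA(n_components=2).fit(params)
print("variance explained by two components:", np.round(pca.explained_variance_ratio_, 3))
```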

Keywords: statistical modeling, water quality, multiple linear regression, principal components, statistical models

Procedia PDF Downloads 79
1629 Gas Chromatographic: Mass Spectroscopic Analysis of Citrus reticulata Fruit Peel, Zingiber officinale Rhizome, and Sesamum indicum Seed Ethanolic Extracts Possessing Antioxidant Activity and Lipid Profile Effects

Authors: Samar Saadeldin Abdelmotalab Omer, Ikram Mohamed Eltayeb Elsiddig, Saad Mohammed Hussein Ayoub

Abstract:

A variety of herbal medicinal plants are known to confer beneficial effects with regard to the modification of cardiovascular risk factors. The anti-hypercholesterolaemic and antioxidant activities of the crude ethanolic extracts of Citrus reticulata fruit peel, Zingiber officinale rhizome and Sesamum indicum seed have been demonstrated. These plants are assumed to possess biologically active principles, which impart their pharmacological activities. GC-MS analysis of the ethanolic extracts was carried out to identify the active principles and their percentages of occurrence in the analytes. Analysis of the extracts was carried out using a GC-MS QP type Shimadzu 2010 equipped with an RTX-50 capillary column (Restek; length 30 m, diameter 0.25 mm, and film thickness 0.25 mm). Helium was used as the carrier gas, the temperature was programmed at 200°C for 5 minutes at a rate of 15 ml/minute, and the extracts were injected using split injection mode. The identification of the different components was achieved from their mass spectra and retention times, compared with those in the NIST library. The results revealed the presence of 80 compounds in the Sudanese locally grown C. reticulata fruit peel extract, most of which were monoterpenoid compounds, including limonene (3.03%), alpha- and gamma-terpinenes (2.61%), linalool (1.38%) and citral (1.72%), which are known to have profound antioxidant effects. The sesquiterpenoids humulene (0.26%) and caryophyllene (1.97%) were also identified, the latter known to have profound anti-anxiety and anti-depressant activity in addition to beneficial effects on lipid regulation. The analysis of the oily and water-soluble portions of the locally grown S. indicum seed extract revealed the presence of a total of 64 compounds, with a considerably high percentage of the mono-unsaturated fatty acid ester methyl oleate (66.99%), in addition to methyl stearate (9.35%) and methyl palmitate (15.71%), in the oil portion, whereas plant sterols, including gamma-sitosterol (13.5%), fucosterol (2.11%) and stigmasterol (1.95%), in addition to gamma-tocopherol (1.16%), were detected in the water-soluble portion of the extract. The latter indicate various principles known to have valuable pharmacological benefits, including antioxidant activities and beneficial effects on intestinal cholesterol absorption and the regulation of serum cholesterol levels. Z. officinale rhizome extract analysis revealed the presence of 93 compounds, the most abundant being alpha-zingiberene (16.5%), gingerol (9.25%), alpha-sesquiphellandrene (8.3%), zingerone (6.78%), beta-bisabolene (4.19%), alpha-farnesene (3.56%), ar-curcumene (3.29%), gamma-elemene (1.25%) and a variety of other compounds. The presence of these active principles is reflected in the activity of the extracts; activity could be assigned to a single component or to a combination of two or more extract components. GC-MS analysis thus confirmed the occurrence of compounds known to possess antioxidant activity and lipid profile effects.

Keywords: gas chromatography, indicum, officinale, reticulata

Procedia PDF Downloads 358
1628 Optimal Continuous Scheduled Time for a Cumulative Damage System with Age-Dependent Imperfect Maintenance

Authors: Chin-Chih Chang

Abstract:

Many manufacturing systems suffer failures due to complex degradation processes and various environmental conditions such as random shocks. Consider an operating system that is subject to random shocks and works at random times on successive jobs. Because successive jobs often result in production losses and performance deterioration, it is better to perform maintenance or replacement at a planned time. A preventive replacement (PR) policy is presented in which the system is replaced before failure at a continuously scheduled time T. In such a policy, the failure characteristics of the system are modeled as follows. Each job causes a random amount of additive damage to the system, and the system fails when the cumulative damage exceeds a failure threshold. Suppose that the deteriorating system suffers one of two types of shocks with age-dependent probabilities: a type-I (minor) shock is rectified by a minimal repair, whereas a type-II (catastrophic) shock causes the system to fail. A corrective replacement (CR) is performed immediately when the system fails. In summary, a generalized maintenance model for scheduling the replacement plan of an operating system is presented: PR is carried out at time T, whereas CR is carried out when a type-II shock occurs or the total damage exceeds the failure level. The main objective is to determine the optimal continuous schedule time for preventive replacement by minimizing the mean cost rate function. The existence and uniqueness of the optimal replacement policy are derived analytically. It can be seen that the present model is a generalization of previous models, and that the policy with preventive replacement outperforms the one without preventive replacement.
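
As a numerical illustration of minimizing a mean cost rate over the preventive replacement time T, the sketch below uses the classic age-replacement cost rate with a Weibull time to failure as a simplified stand-in for the cumulative damage model described above; the costs, distribution, and parameter values are assumptions for illustration, not the paper's analytical derivation.

```python
# Minimal sketch (simplified stand-in, not the paper's model): numerically
# minimizing a mean cost rate over the preventive replacement time T, using
# the classic age-replacement formulation with a Weibull time to failure.
import numpy as np

c_p, c_f = 1.0, 10.0      # assumed preventive vs corrective replacement costs
beta, eta = 2.5, 100.0    # assumed Weibull shape and scale of the time to failure

def failure_cdf(t):
    return 1.0 - np.exp(-(t / eta) ** beta)

def mean_cost_rate(T):
    # expected cycle cost divided by expected cycle length
    ts = np.linspace(0.0, T, 2000)
    survival = 1.0 - failure_cdf(ts)
    # trapezoidal integral of the survival function over [0, T]
    expected_cycle_length = float(np.sum((survival[:-1] + survival[1:]) / 2.0 * np.diff(ts)))
    expected_cycle_cost = c_p * (1.0 - failure_cdf(T)) + c_f * failure_cdf(T)
    return expected_cycle_cost / expected_cycle_length

# Grid search for the optimal continuous schedule time T*
grid = np.linspace(5.0, 300.0, 1000)
costs = [mean_cost_rate(T) for T in grid]
T_star = grid[int(np.argmin(costs))]
print(f"optimal T ~ {T_star:.1f}, minimum mean cost rate ~ {min(costs):.4f}")
```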

Keywords: preventive replacement, working time, cumulative damage model, minimal repair, imperfect maintenance, optimization

Procedia PDF Downloads 347
1627 Cost Benefit Analysis: Evaluation among the Millimetre Wavebands and SHF Bands of Small Cell 5G Networks

Authors: Emanuel Teixeira, Anderson Ramos, Marisa Lourenço, Fernando J. Velez, Jon M. Peha

Abstract:

This article discusses cost-benefit analysis aspects of the millimetre wavebands (mmWaves) and the Super High Frequency (SHF) band. The decay of the carrier-to-noise-plus-interference ratio with coverage distance is assessed by considering two different path loss models: the two-slope urban micro Line-of-Sight (UMiLoS) model for the SHF band and the modified Friis propagation model for frequencies above 24 GHz. The equivalent supported throughput is estimated at the 5.62, 28, 38, 60 and 73 GHz frequency bands, and the influence of the carrier-to-noise-plus-interference ratio on the radio and network optimization process is explored. Mostly owing to the attenuation behaviour of the two-slope propagation model for the SHF band, the supported throughput at this band is higher than at the millimetre wavebands only for the longest cell lengths. The cost-benefit analysis of these pico-cellular networks was carried out for regular cellular topologies, considering unlicensed spectrum. For the shortest distances, an optimum of the revenue in percentage terms can be distinguished at cell lengths R ≈ 10 m for the millimetre wavebands, while for the longest distances an optimum of the revenue can be observed at R ≈ 550 m for the 5.62 GHz band. For the 5.62 GHz band, the profit is slightly lower than for the millimetre wavebands at the shortest values of R, and starts to increase for cell lengths approximately equal to the ratio between the break-point distance and the co-channel reuse factor, achieving a maximum at values of R approximately equal to 550 m.
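
As a simplified numerical illustration of the two path loss regimes compared above, the sketch below evaluates a free-space (Friis-type) path loss at the millimetre-wave carriers and a generic two-slope model at 5.62 GHz over several cell lengths; the break-point distance, the slope exponents, and the omission of shadowing and antenna gains are illustrative assumptions, not the exact models used in the paper.

```python
# Minimal sketch (simplified, not the paper's exact models): free-space path
# loss at the mmWave carriers versus a generic two-slope model at 5.62 GHz.
import numpy as np

C = 3.0e8  # speed of light, m/s

def friis_pl_db(d_m, f_hz):
    """Free-space (Friis-type) path loss in dB."""
    return 20.0 * np.log10(4.0 * np.pi * d_m * f_hz / C)

def two_slope_pl_db(d_m, f_hz, d_break=100.0):
    """Two-slope model: exponent 2 before the (assumed) break point, 4 after."""
    if d_m <= d_break:
        return friis_pl_db(d_m, f_hz)
    return friis_pl_db(d_break, f_hz) + 40.0 * np.log10(d_m / d_break)

for d in (10.0, 100.0, 550.0):
    shf = two_slope_pl_db(d, 5.62e9)
    mmw = " ".join(f"{f} GHz: {friis_pl_db(d, f * 1e9):6.1f} dB" for f in (28, 38, 60, 73))
    print(f"R = {d:5.0f} m | 5.62 GHz two-slope: {shf:6.1f} dB | {mmw}")
```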

Keywords: millimetre wavebands, SHF band, SINR, cost benefit analysis, 5G

Procedia PDF Downloads 133
1626 Nuclear Powered UAV for Surveillance and Aerial Photography

Authors: Rajasekar Elangopandian, Anand Shanmugam

Abstract:

Nowadays, unmanned aerial vehicles (UAVs) play a vital role in surveillance. Beyond surveillance, UAVs are also used for aerial photography, disaster management, and the observation of earth behaviour. Nuclear-powered vehicles offer great support in reducing maintenance and fuel requirements. Design considerations are very important for the UAV manufacturing industry and for research and development agencies. The proposed design features a pentagon-shaped fuselage with black rubber-coated paint in order to evade enemy radar and other detection. The pentagon-shaped fuselage provides ample space to house the mini nuclear reactor, and its material is carbon-carbon fibre, designed using the COMSOL and HyperMesh 14.2 software. The weight consideration therefore produces a positive result for productivity. The walls of the fuselage are coated with lead and a protective shield. A double layer of W/Bi sheet is proposed for radiation protection in the energy range of 70 keV to 90 keV; the designed W/Bi sheet is only 0.14 mm thick and 36% lighter. The properties of the fillers were determined from zeta potential and particle size measurements. Radiation exposure can be attenuated in three ways: minimizing exposure time, maximizing distance from the radiation source, and shielding the whole vehicle. The onboard reactor is switched on when the UAV starts its cruise, and the moderators and control rods can be inserted automatically by newly developed software. The heat generated by the reactor is used to run a mini turbine fixed inside the UAV, with a natural rubber composite shaft radiation shield. The cooling system operates in two modes, liquid-cooled and air-cooled. The liquid coolants for heat regeneration are ordinary water, liquid sodium, and helium, and the walls are made of regenerative and radiation-protective material. Other components, such as the camera and the arms bay, are located at the bottom of the UAV and are specially made to avoid radiation exposure; they are coated with lead (Pb) and natural rubber composite material. This technique provides long range and endurance for extended flight missions until parts or components need to be changed. The UAV has the special advantage of 'land on string', meaning it can land on an electric line to charge its automated electronics. The fuel is enriched uranium (< 5% U-235) contained in hundreds of fuel pins. This technique provides extended duty for surveillance and aerial photography. Landing of the vehicle is easy to operate, and takeoff is likewise easier than with any other currently available mechanism. This UAV offers advanced technology for surveillance, target detection, and target engagement.
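
As a back-of-the-envelope illustration of the three attenuation levers mentioned above (exposure time, distance, and shielding), the sketch below combines a linear time dependence, the inverse-square law for distance, and exponential shielding attenuation; the dose rate and attenuation coefficient are hypothetical placeholder values, not measured reactor or W/Bi shield data.

```python
# Minimal sketch of the three attenuation levers (time, distance, shielding):
# dose grows linearly with time, falls with the inverse square of distance,
# and decays exponentially with shield thickness. Values below are
# hypothetical placeholders, not reactor or W/Bi shield data.
import math

DOSE_RATE_AT_1M = 50.0   # hypothetical unshielded dose rate at 1 m (arbitrary units/h)
MU_PER_MM = 4.0          # hypothetical linear attenuation coefficient of the shield, 1/mm

def dose(time_h, distance_m, shield_mm):
    inverse_square = (1.0 / distance_m) ** 2
    shielding = math.exp(-MU_PER_MM * shield_mm)
    return DOSE_RATE_AT_1M * time_h * inverse_square * shielding

print(dose(time_h=1.0, distance_m=1.0, shield_mm=0.0))    # unshielded baseline
print(dose(time_h=1.0, distance_m=2.0, shield_mm=0.14))   # farther away and shielded
```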

Keywords: mini turbine, liquid coolant, heat regeneration, radiation shielding, eternal flight mission, land on string

Procedia PDF Downloads 404
1625 Numerical Analysis of Stainless Steel Beam to Column Joints with Bolted Flush End Plates

Authors: Takwiir Tahriim Khan, Tausif Khalid, Mohammad Redwan Ahamed, Md Soebur Rahman

Abstract:

The behaviour of the connections at joints has a significant impact on the safe and cost-effective design of steel structures. Generally, the end plates are welded to the ends of the beams and bolted to the columns. Thus, the moment is transferred at the interface, which is a critical segment of the connection. 3-D finite element models (FEM) have been developed using ABAQUS 2017 software to predict the yield capacity of the end plate connections. The parameters used in this study are the depth, width, and thickness of the end plate, the dimensions of the bolts, and the sectional and material properties of the beams and columns. The influence of the width, depth, and thickness of the end plate on yield capacity was investigated through parametric studies. The results showed that for plate thickness increasing from 0.3 inch to 0.8 inch in increments of 0.1 inch, the yield capacity increased by 2.85% on average; for the end plate depth decreasing from 13 inches to 11 inches, the yield capacity increased by 25.4%; and for the end plate width decreasing from 6.5 inches to 5.75 inches, the yield capacity increased by 35.4%. Variation in yield capacity was also found when changing the beam and column sections. In addition, the numerical results showed good agreement with published experimental results, with an average variation of less than 8.3% in yield capacity. The study therefore allows for a more effective combination of beam, column, and end plate dimensions.
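
As a small illustration of the bookkeeping behind such a parametric study, the sketch below computes the average percentage change in yield capacity across a thickness sweep; the yield capacity values are hypothetical placeholders standing in for the ABAQUS results, not the study's data.

```python
# Minimal sketch (placeholder values, not ABAQUS output): average percentage
# change in yield capacity over a parametric sweep of end plate thickness.
thickness_in = [0.3, 0.4, 0.5, 0.6, 0.7, 0.8]                # end plate thickness, inch
yield_capacity = [100.0, 103.0, 106.0, 109.2, 112.3, 115.4]  # hypothetical yield capacities

# Percentage change of each run relative to the previous thickness step
step_changes = [
    100.0 * (b - a) / a
    for a, b in zip(yield_capacity, yield_capacity[1:])
]
avg_change = sum(step_changes) / len(step_changes)

for t, chg in zip(thickness_in[1:], step_changes):
    print(f"thickness {t:.1f} in: +{chg:.2f} % vs previous step")
print(f"average increase per 0.1 inch of thickness: {avg_change:.2f} %")
```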

Keywords: steel beam-column joints, finite element analysis, yield moment capacity, parametric study, ABAQUS, bolted joints, flush end plates, moment vs rotation curves

Procedia PDF Downloads 99