Search results for: cost-reflective network pricing method
20493 Meta-Instruction Theory in Mathematics Education and Critique of Bloom’s Theory
Authors: Abdollah Aliesmaeili
Abstract:
The purpose of this research is to present a different perspective on the basic math teaching method called meta-instruction, which reverses the learning path. Meta-instruction is a method of teaching in which the teaching trajectory starts from brain education and moves into learning. This research focuses on the behavior of the mind during learning. In this method, students are not instructed in mathematics, but they are educated. Another goal of the research is to criticize Bloom's classification in the cognitive domain and reverse it, because it cannot meet the educational and instructional needs of the new generation, and to substitute math education for math teaching. This is an indirect method of teaching. The research method is a longitudinal study over four years. Statistical samples included students aged 6 to 11. The research focuses on improving the mental abilities of children to explore mathematical rules and operations through play alone, with eight measurements (two examinations each year). The results showed that there is a significant difference between groups in remembering, understanding, and applying. Moreover, educating math is more effective than instructing in overall learning abilities.
Keywords: applying, Bloom's taxonomy, brain education, mathematics teaching method, meta-instruction, remembering, starmath method, understanding
Procedia PDF Downloads 232
20492 Multi-Labeled Aromatic Medicinal Plant Image Classification Using Deep Learning
Authors: Tsega Asresa, Getahun Tigistu, Melaku Bayih
Abstract:
Computer vision is a subfield of artificial intelligence that allows computers and systems to extract meaning from digital images and video. It is used in a wide range of fields of study, including self-driving cars, video surveillance, medical diagnosis, manufacturing, law, agriculture, quality control, health care, facial recognition, and military applications. Aromatic medicinal plants are botanical raw materials used in cosmetics, medicines, health foods, essential oils, decoration, cleaning, and other natural health products for therapeutic and aromatic culinary purposes. These plants and their products not only serve as a valuable source of income for farmers and entrepreneurs but are also exported, earning valuable foreign currency. In Ethiopia, there is a lack of technologies for the classification and identification of aromatic medicinal plant parts and the disease types cured by aromatic medicinal plants. Farmers, industry personnel, academicians, and pharmacists find it difficult to identify plant parts and the disease types cured by plants before ingredient extraction in the laboratory. Manual plant identification is a time-consuming, labor-intensive, and lengthy process. Only a few studies have been conducted in the area to address these challenges. One way to overcome these problems is to develop a deep learning model for efficient identification of aromatic medicinal plant parts with their corresponding disease type. The objective of the proposed study is to identify aromatic medicinal plant parts and classify their disease type using computer vision technology. Therefore, this research initiated a model for the classification of aromatic medicinal plant parts and their disease type by exploring computer vision technology. Morphological characteristics are still the most important tools for the identification of plants. Leaves are the most widely used parts of plants, besides roots, flowers, fruits, and latex. For this study, the researchers used RGB leaf images with a size of 128×128×3. The researchers trained five cutting-edge models: convolutional neural network, Inception V3, Residual Neural Network, Mobile Network, and Visual Geometry Group. These models were chosen after a comprehensive review of the best-performing models. An 80/20 percentage split is used to evaluate the models, and classification metrics are used to compare them. The pre-trained Inception V3 model performs best, with training and validation accuracy of 99.8% and 98.7%, respectively.
Keywords: aromatic medicinal plant, computer vision, convolutional neural network, deep learning, plant classification, residual neural network
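A minimal transfer-learning sketch in Keras with a pre-trained Inception V3 backbone on 128×128×3 leaf images, in the spirit of the abstract; the class count and data pipeline are illustrative assumptions, not the authors' exact setup.

```python
# Hypothetical sketch: Inception V3 backbone fine-tuned as a classifier
# for plant-part/disease labels; NUM_CLASSES and the datasets are assumed.
import tensorflow as tf

NUM_CLASSES = 10  # assumed number of plant-part/disease-type labels

base = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=(128, 128, 3))
base.trainable = False  # freeze the pre-trained feature extractor

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(train_ds, validation_data=val_ds, epochs=20)  # 80/20 split
```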
Procedia PDF Downloads 187
20491 Effect of Type of Pile and Its Installation Method on Pile Bearing Capacity by Physical Modelling in Frustum Confining Vessel
Authors: Seyed Abolhasan Naeini, M. Mortezaee
Abstract:
Various factors, such as the method of installation, the pile type, the pile material, and the pile shape, can affect the final bearing capacity of a pile executed in the soil; among them, the method of installation is of special importance. Physical modeling is among the best options for the laboratory study of pile behavior. Therefore, the current paper first presents and reviews the frustum confining vessel (FCV) as a suitable tool for the physical modeling of deep foundations. Then, by describing the loading tests of two open-ended and closed-ended steel piles, each of which has been performed with two methods, "with displacement" and "without displacement", the effect of end conditions and installation method on the final bearing capacity of the pile is investigated. The soil used in the current paper is silty sand of Firoozkooh. The results of the experiments show that, in general, the without-displacement installation method yields a larger bearing capacity for both piles, and for a given installation method, the closed-ended pile shows a slightly higher bearing capacity.
Keywords: physical modeling, frustum confining vessel, pile, bearing capacity, installation method
Procedia PDF Downloads 153
20490 Application of Artificial Neural Network in Initiating Cleaning Of Photovoltaic Solar Panels
Authors: Mohamed Mokhtar, Mostafa F. Shaaban
Abstract:
Among the challenges facing solar photovoltaic (PV) systems in the United Arab Emirates (UAE), dust accumulation on solar panels is considered the most severe problem facing the growth of solar power plants. The accumulation of dust on the solar panels significantly degrades their output. Hence, solar PV panels have to be cleaned manually or using costly automated cleaning methods. This paper focuses on initiating cleaning actions only when required, to reduce maintenance costs. The cleaning actions are triggered only when the dust level exceeds a threshold value. The amount of dust accumulated on the PV panels is estimated using an artificial neural network (ANN). Experiments are conducted to collect the required data, which are used to train the ANN model. The trained model is then fed the output power from the solar panels, the ambient temperature, and the solar irradiance, and thus it can estimate the amount of dust accumulated on the panels under these conditions. The model was tested on different case studies to confirm its accuracy.
Keywords: machine learning, dust, PV panels, renewable energy
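An illustrative sketch (not the authors' exact model) of the described pipeline: a small neural network maps panel output power, ambient temperature, and irradiance to an estimated dust level, and a cleaning action fires above a threshold. The training data and threshold below are synthetic assumptions.

```python
# Hedged sketch of dust estimation with an MLP; inputs, targets, and the
# cleaning threshold are placeholder assumptions for illustration only.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Columns: output power (W), ambient temperature (C), irradiance (W/m^2)
X = rng.uniform([50, 15, 200], [300, 45, 1000], size=(500, 3))
y = rng.uniform(0, 1, size=500)  # synthetic normalized dust level

ann = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=2000, random_state=0)
ann.fit(X, y)

DUST_THRESHOLD = 0.6  # assumed trigger level
sample = np.array([[180.0, 35.0, 850.0]])
if ann.predict(sample)[0] > DUST_THRESHOLD:
    print("Trigger cleaning action")
```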
Procedia PDF Downloads 144
20489 Seismic Fragility Functions of RC Moment Frames Using Incremental Dynamic Analyses
Authors: Seung-Won Lee, JongSoo Lee, Won-Jik Yang, Hyung-Joon Kim
Abstract:
A capacity spectrum method (CSM), one of the methodologies used to evaluate the seismic fragility of building structures, has long been recognized as the most convenient method, even though it contains several limitations in predicting the seismic response of structures of interest. This paper proposes a procedure to estimate seismic fragility curves using an incremental dynamic analysis (IDA) rather than a CSM. To achieve the research purpose, this study compares the seismic fragility curves of a 5-story reinforced concrete (RC) moment frame obtained from both methods, an IDA method and a CSM. Both seismic fragility curves are similar in the slight and moderate damage states, whereas the fragility curve obtained from the IDA method presents less variation (or uncertainty) in the extensive and complete damage states. This is due to the fact that the IDA method can capture the structural response beyond yielding more properly than the CSM and can directly calculate higher mode effects. From these observations, the CSM could overestimate the seismic vulnerability of the studied structure in the extensive or complete damage states.
Keywords: seismic fragility curve, incremental dynamic analysis, capacity spectrum method, reinforced concrete moment frame
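A minimal sketch, under assumed synthetic data, of how a lognormal fragility curve is typically fitted from IDA results: for each ground motion, the intensity measure (IM) at which a damage state is first exceeded is recorded, and the fragility is P(exceed | IM) = Φ(ln(IM/θ)/β).

```python
# Illustrative fragility fit from IDA capacities; IM values are assumed.
import numpy as np
from scipy.stats import norm

# Assumed IM values (e.g., Sa in g) at which IDA runs reach a damage state
im_at_damage = np.array([0.42, 0.55, 0.38, 0.61, 0.47, 0.52, 0.44, 0.58])

theta = np.exp(np.mean(np.log(im_at_damage)))  # median capacity
beta = np.std(np.log(im_at_damage), ddof=1)    # lognormal dispersion

def fragility(im):
    """Probability of exceeding the damage state at intensity im."""
    return norm.cdf(np.log(im / theta) / beta)

print(f"theta={theta:.3f} g, beta={beta:.3f}, P(0.5 g)={fragility(0.5):.2f}")
```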
Procedia PDF Downloads 422
20488 Calibration of Residential Buildings Energy Simulations Using Real Data from an Extensive in situ Sensor Network – A Study of Energy Performance Gap
Authors: Mathieu Bourdeau, Philippe Basset, Julien Waeytens, Elyes Nefzaoui
Abstract:
As residential buildings account for a third of the overall energy consumption and greenhouse gas emissions in Europe, building energy modeling is an essential tool for reaching energy efficiency goals. In the energy modeling process, calibration is a mandatory step to obtain accurate and reliable energy simulations. Nevertheless, the comparison between simulation results and the actual building energy behavior often highlights a significant performance gap. The literature discusses different origins of energy performance gaps, from building design to building operation. The description of building operation in energy models, especially energy usages and users' behavior, plays an important role in the reliability of simulations but is also the most accessible target for post-occupancy energy management and optimization. Therefore, the present study aims to discuss results on the calibration of residential building energy models using real operation data. Data are collected through a sensor network of more than 180 sensors and advanced energy meters deployed in three collective residential buildings undergoing major retrofit actions. The sensor network is implemented at building scale and in an eight-apartment sample. Data are collected for over a year and a half and cover building energy behavior (thermal and electricity), indoor environment, inhabitants' comfort, occupancy, occupants' behavior and energy uses, and local weather. Building energy simulations are performed using a physics-based building energy modeling software (Pleiades software), where the buildings' features are implemented according to the buildings' thermal regulation code compliance study and the retrofit project technical files. Sensitivity analyses are performed to highlight the most energy-driving building features for each end-use. These features are then compared with the collected post-occupancy data. Energy-driving features are progressively replaced with field data for a step-by-step calibration of the energy model. The results of this study provide an analysis of the energy performance gap on an existing residential case study under deep retrofit actions. It highlights the impact of the different building features on the energy behavior and the performance gap in this context, such as temperature setpoints, indoor occupancy, the building envelope properties, but also domestic hot water usage and heat gains from electric appliances. The benefits of inputting field data from an extensive instrumentation campaign instead of standardized scenarios are also described. Finally, the exhaustive instrumentation solution provides useful insights on the needs, advantages, and shortcomings of the implemented sensor network for its replicability on a larger scale and for different use cases.
Keywords: calibration, building energy modeling, performance gap, sensor network
Procedia PDF Downloads 160
20487 Relay Mining: Verifiable Multi-Tenant Distributed Rate Limiting
Authors: Daniel Olshansky, Ramiro Rodríguez Colmeiro
Abstract:
Relay Mining presents a scalable solution employing probabilistic mechanisms and crypto-economic incentives to estimate RPC volume usage, facilitating decentralized multi-tenant rate limiting. Network traffic from individual applications can be concurrently serviced by multiple RPC service providers, with costs, rewards, and rate limiting governed by a native cryptocurrency on a distributed ledger. Building upon established research in token bucket algorithms and distributed rate-limiting penalty models, our approach harnesses a feedback loop control mechanism to adjust the difficulty of mining relay rewards, dynamically scaling with network usage growth. By leveraging crypto-economic incentives, we reduce coordination overhead costs and introduce a mechanism for providing RPC services that are both geopolitically and geographically distributed.
Keywords: remote procedure call, crypto-economic, commit-reveal, decentralization, scalability, blockchain, rate limiting, token bucket
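The abstract builds on token bucket rate limiting; below is a minimal, generic token bucket sketch of that standard algorithm, not the paper's distributed, crypto-economic variant. The rate and capacity values are placeholders.

```python
# Classic token bucket rate limiter; parameters are illustrative.
import time

class TokenBucket:
    def __init__(self, rate: float, capacity: float):
        self.rate = rate          # tokens added per second
        self.capacity = capacity  # maximum burst size
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False  # request exceeds the current rate limit

bucket = TokenBucket(rate=100.0, capacity=200.0)  # e.g., 100 relays/s per app
print(bucket.allow())  # True while tokens remain
```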
Procedia PDF Downloads 54
20486 Facilitating Factors for the Success of Mobile Service Providers in Bangkok Metropolitan
Authors: Yananda Siraphatthada
Abstract:
The objectives of this research were to study the level of the influencing factors (leadership, supply chain management, innovation, competitive advantage, and business success) and the factors affecting the business success of mobile phone system service providers in Bangkok Metropolitan. This research used both a quantitative and a qualitative approach. For the quantitative approach, questionnaires were used to collect data from 331 mobile service shop managers franchised by AIS, Dtac, and TrueMove. The shop managers were selected by stratified random sampling and proportionally allocated into subgroups according to the number of providers in each network. For the qualitative approach, in-depth interviews were conducted with six mobile service providers/managers of Telewiz, Dtac, and TrueMove shops to find agreement or disagreement, examined with the content analysis method. Descriptive statistics, including frequency, percentage, mean, and standard deviation, were employed; the Structural Equation Model (SEM) was also used as a tool for data analysis. The content analysis method was applied to identify key patterns emerging from the interview responses. The two data sets were brought together for comparing and contrasting the findings, providing triangulation to enrich the interpretation of results. The analysis revealed that the influencing factors (leadership, innovation management, supply chain management, and business competitiveness) had an impact at a great level, while innovation and the financial and non-financial business success of the mobile phone system service providers in Bangkok Metropolitan were at the highest level. Moreover, the business influencing factors and competitive advantages in the business of mobile system service providers (leadership, supply chain management, innovation management, business advantages, and business success) had statistical significance at the .01 level, which corresponded with the data from the interviews.
Keywords: mobile service providers, facilitating factors, Bangkok Metropolitan, business success
Procedia PDF Downloads 348
20485 Approximations of Fractional Derivatives and Its Applications in Solving Non-Linear Fractional Variational Problems
Authors: Harendra Singh, Rajesh Pandey
Abstract:
The paper presents a numerical method based on an operational matrix of integration and the Rayleigh-Ritz method for the solution of a class of non-linear fractional variational problems (NLFVPs). Chebyshev polynomials of the first kind are used for the construction of the operational matrix. Using the operational matrix and the Rayleigh-Ritz method, the NLFVP is converted into a system of non-linear algebraic equations; solving these equations yields approximate solutions of the NLFVPs. A convergence analysis of the proposed method is provided. Numerical experiments are carried out to show the applicability of the proposed method. The obtained numerical results are compared with the exact solution and with solutions obtained using Chebyshev polynomials of the third kind. The results are further shown graphically for the different fractional orders involved in the problems.
Keywords: non-linear fractional variational problems, Rayleigh-Ritz method, convergence analysis, error analysis
Procedia PDF Downloads 298
20484 Use of Artificial Intelligence Based Models to Estimate the Use of a Spectral Band in Cognitive Radio
Authors: Danilo López, Edwin Rivas, Fernando Pedraza
Abstract:
Currently, one of the major challenges in wireless networks is the optimal use of the radio spectrum, which is managed inefficiently. One solution to this problem converges on the use of Cognitive Radio (CR), which makes it possible for secondary users to use the available licensed spectrum well above the usage levels currently detected, thus allowing the opportunistic use of the channel in the absence of primary users (PUs). This article presents the results found when estimating, or predicting, the future use of a spectral transmission band (from the perspective of the PU) for a channel with chaotic arrival behavior. The time series prediction method used to model the PU is ANFIS (Adaptive Neuro-Fuzzy Inference System). The results obtained were compared with those delivered by the RNA (Artificial Neural Network) algorithm. The results show better performance in the characterization (modeling and prediction) with the ANFIS methodology.
Keywords: ANFIS, cognitive radio, prediction primary user, RNA
Procedia PDF Downloads 421
20483 An Approximation Method for Exact Boundary Controllability of the Euler-Bernoulli Beam Equation
Authors: A. Khernane, N. Khelil, L. Djerou
Abstract:
The aim of this work is to study the numerical implementation of the Hilbert uniqueness method for the exact boundary controllability of the Euler-Bernoulli beam equation. This study may be difficult, depending on the problem under consideration (geometry, control, and dimension) and the numerical method used. Knowledge of the asymptotic behaviour of the control governing the system at time T may be useful for its calculation; this idea is developed in the present study. As a first step, we characterize the solution by a minimization principle; secondly, we propose a method for its resolution to approximate the control steering the considered system to rest at time T.
Keywords: boundary control, exact controllability, finite difference methods, functional optimization
Procedia PDF Downloads 347
20482 Time Series Modelling for Forecasting Wheat Production and Consumption of South Africa in Time of War
Authors: Yiseyon Hosu, Joseph Akande
Abstract:
Wheat has been one of the most important staple food grains for humans for centuries and is largely consumed in South Africa. It has a special place in the South African economy because of its significance for food security, trade, and industry. This paper modelled and forecast the production and consumption of wheat in South Africa in the time of COVID-19 and the ongoing Russia-Ukraine war, using annual time series data from 1940–2021 based on ARIMA models. Both the averaged forecast and the selected models' forecasts indicate the possibility of an increase in production. The minimum and maximum growth in production is projected to be between 3 million and 10 million tons, respectively. However, the models also forecast a possible depression of consumption in South Africa. Although COVID-19 and the war between Ukraine and Russia, two major producers and exporters of global wheat, are currently affecting price volatility, wheat production in South Africa is expected to increase, meet consumption demand, and provide an opportunity for increased exports relative to domestic consumption. The forecasting of the production and consumption behaviour of major crops plays an important role in food and nutrition security; these findings can assist policymakers and provide them with insights into the production and pricing policy of wheat in South Africa.
Keywords: ARIMA, food security, price volatility, staple food, South Africa
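A minimal ARIMA forecasting sketch with statsmodels, using a synthetic annual production series in place of the authors' 1940–2021 data; the (p, d, q) order is an assumption, not the paper's selected model.

```python
# Hedged ARIMA sketch on synthetic annual wheat production (million tons).
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

years = pd.date_range("1940", "2021", freq="YS")
rng = np.random.default_rng(1)
production = pd.Series(
    1.5 + 0.02 * np.arange(len(years)) + rng.normal(0, 0.3, len(years)),
    index=years)  # synthetic trend + noise

model = ARIMA(production, order=(1, 1, 1)).fit()  # assumed order
forecast = model.forecast(steps=5)  # five-year-ahead projection
print(forecast)
```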
Procedia PDF Downloads 102
20481 Online Battery Equivalent Circuit Model Estimation on Continuous-Time Domain Using Linear Integral Filter Method
Authors: Cheng Zhang, James Marco, Walid Allafi, Truong Q. Dinh, W. D. Widanage
Abstract:
Equivalent circuit models (ECMs) are widely used in battery management systems in electric vehicles and other battery energy storage systems. The battery dynamics and the model parameters vary under different working conditions, such as different temperature and state of charge (SOC) levels, and therefore online parameter identification can improve the modelling accuracy. This paper presents a way of online ECM parameter identification using a continuous time (CT) estimation method. The CT estimation method has several advantages over discrete time (DT) estimation methods for ECM parameter identification due to the widely separated battery dynamic modes and fast sampling. The presented method can be used for online SOC estimation. Test data are collected using a lithium ion cell, and the experimental results show that the presented CT method achieves better modelling accuracy compared with the conventional DT recursive least square method. The effectiveness of the presented method for online SOC estimation is also verified on test data.
Keywords: electric circuit model, continuous time domain estimation, linear integral filter method, parameter and SOC estimation, recursive least square
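For context, here is a generic recursive least squares (RLS) sketch identifying the discrete-time parameters of a first-order ECM regression, v_k = a·v_{k-1} + b0·i_k + b1·i_{k-1} + c. This is the conventional DT baseline the paper compares against, not its continuous-time linear integral filter method; the voltage and current samples are assumed values.

```python
# Generic RLS with forgetting factor for DT ECM parameter identification.
import numpy as np

def rls_update(theta, P, phi, y, lam=0.999):
    """One RLS step: regressor phi, measurement y, forgetting factor lam."""
    K = P @ phi / (lam + phi @ P @ phi)    # gain vector
    theta = theta + K * (y - phi @ theta)  # parameter update
    P = (P - np.outer(K, phi @ P)) / lam   # covariance update
    return theta, P

theta = np.zeros(4)   # [a, b0, b1, c]
P = np.eye(4) * 1e3   # large initial covariance

# Assumed measured terminal voltage (V) and current (A) sequences
v = np.array([3.70, 3.69, 3.68, 3.68, 3.67])
i = np.array([0.0, 1.0, 1.0, 0.5, 0.5])
for k in range(1, len(v)):
    phi = np.array([v[k - 1], i[k], i[k - 1], 1.0])
    theta, P = rls_update(theta, P, phi, v[k])
print("identified parameters:", theta)
```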
Procedia PDF Downloads 383
20480 An Evaluation of the Lae City Road Network Improvement Project
Authors: Murray Matarab Konzang
Abstract:
The Lae Port Development Project, the Four Lane Highway, and other developments in the extraction industry with direct road links to Lae City are predicted to have a significant impact on its road network system. This paper evaluates the Lae road improvement program, with forecasts on planning, economics, and the installation of bypasses to ease congestion, provide an effective and convenient transport service for bulk goods, and reduce travel time. A land-use transportation study and plans for a local area traffic management scheme will be considered. City roads face increased traffic volumes, inadequate pavement widths, and poor transport plans and facilities to meet this transportation demand. Lae also has a drainage system that might not withstand a 100-year flood. Proper evaluation, planning, design, and intersection analysis are needed to evaluate the road network system and thus recommend improvements and estimate future growth. Repetitive, cyclic loading by heavy commercial vehicles with different axle configurations acts on the flexible pavement, weakening and tearing the pavement surface so that small cracks occur; rainwater seeps through and, over time, creates potholes. Effective planning starts from experimental research and appropriate design standards to enable firm embankments, proper drains, and quality pavement material. This paper addresses traffic problems as well as road pavement, the capacities of intersections, and pedestrian flow during peak hours. The outcome of this research will be to identify heavily trafficked road sections and recommend treatments to reduce traffic congestion, together with road classification and proposals for bypass routes and improvements. The first part of this study describes transport and traffic-related problems within the city; the second identifies the challenges imposed by traffic and road-related problems; the third recommends solutions after analyzing traffic data that indicate the current capacities of road intersections; finally, treatments for improvement and future growth are recommended.
Keywords: Lae, road network, highway, vehicle traffic, planning
Procedia PDF Downloads 358
20479 Characteristics of Business Models of Industrial-Internet-of-Things Platforms
Authors: Peter Kress, Alexander Pflaum, Ulrich Loewen
Abstract:
The number of Internet-of-Things (IoT) platforms is steadily increasing across various industries, especially for smart factories, smart homes, and smart mobility. In the manufacturing industry, too, the number of Industrial-IoT platforms is growing. IT players and start-ups, and increasingly also established industry players and small and medium-sized enterprises, introduce offerings for the connection of industrial equipment on platforms, enabled by advanced information and communication technology. Besides the offered functionalities, the established ecosystem of partners around a platform is one of the key differentiators for generating a competitive advantage. The key question is how platform operators design the business model around their platform to attract a high number of customers and partners to co-create value for the entire ecosystem. The present research tries to answer this question by determining the key characteristics of the business models of successful platforms in the manufacturing industry. To achieve that, the authors selected an explorative qualitative research approach and created an inductive comparative case study. The authors generated valuable descriptive insights into the business model elements (e.g., value proposition, pricing model, or partnering model) of various established platforms. Furthermore, patterns across the various cases were identified to derive propositions for the successful design of business models of platforms in the manufacturing industry.
Keywords: industrial-internet-of-things, business models, platforms, ecosystems, case study
Procedia PDF Downloads 243
20478 Physics Informed Deep Residual Networks Based Type-A Aortic Dissection Prediction
Abstract:
Purpose: Acute Type A aortic dissection is well known for its extremely high mortality rate. A highly accurate and cost-effective non-invasive predictor is critically needed so that patients can be treated at an earlier stage. Although various CFD approaches have been tried to establish prediction frameworks, they are sensitive to uncertainty in both image segmentation and boundary conditions. Requirements for tedious pre-processing and demanding calibration procedures further compound the issue, hampering their clinical applicability. Using the latest physics-informed deep learning methods to establish an accurate and cost-effective predictor framework is among the main goals for better Type A aortic dissection treatment. Methods: By training a novel physics-informed deep residual network, with non-invasive 4D MRI displacement vectors as inputs, the trained model can cost-effectively calculate all of these biomarkers: aortic blood pressure, wall shear stress (WSS), and oscillatory shear index (OSI), which are used to predict potential Type A aortic dissection and thereby avoid high-mortality events down the road. Results: The proposed deep learning method has been successfully trained and tested with both a synthetic 3D aneurysm dataset and a clinical dataset in the aortic dissection context using the Google Colab environment. In both cases, the model has generated aortic blood pressure, WSS, and OSI results matching the patients' expected health status. Conclusion: The proposed novel physics-informed deep residual network shows great potential for creating a cost-effective, non-invasive predictor framework. An additional physics-based de-noising algorithm will be added to make the model more robust to clinical data noise. Further studies will be conducted in collaboration with large institutions such as the Cleveland Clinic, with more clinical samples, to further improve the model's clinical applicability.
Keywords: type-a aortic dissection, deep residual networks, blood flow modeling, data-driven modeling, non-invasive diagnostics, deep learning, artificial intelligence
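A minimal PyTorch sketch of a residual network in the spirit of the abstract: an MLP with skip connections mapping MRI displacement vectors to hemodynamic outputs (pressure, WSS, OSI). Layer sizes and the physics loss are illustrative assumptions, not the authors' architecture.

```python
# Hedged sketch of a residual MLP for hemodynamic biomarker regression.
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    def __init__(self, width: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(width, width), nn.Tanh(),
            nn.Linear(width, width))

    def forward(self, x):
        return torch.tanh(x + self.net(x))  # skip connection

class HemodynamicsNet(nn.Module):
    def __init__(self, in_dim=3, width=64, out_dim=3, depth=4):
        super().__init__()
        self.inp = nn.Linear(in_dim, width)
        self.blocks = nn.Sequential(*[ResidualBlock(width) for _ in range(depth)])
        self.out = nn.Linear(width, out_dim)  # [pressure, WSS, OSI]

    def forward(self, x):
        return self.out(self.blocks(torch.tanh(self.inp(x))))

model = HemodynamicsNet()
pred = model(torch.randn(8, 3))  # a batch of displacement vectors
print(pred.shape)                # torch.Size([8, 3])
# A physics-informed loss would add residuals of the governing flow
# equations to the usual data-misfit term.
```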
Procedia PDF Downloads 89
20477 Numerical Investigation of Embankment Settlement Improved by Method of Preloading by Vertical Drains
Authors: Seyed Abolhasan Naeini, Saeideh Mohammadi
Abstract:
Time-dependent settlement due to loading on soft saturated soils produces many problems, such as high consolidation settlements and low consolidation rates. Moreover, the long-term consolidation settlement of soft soil underlying an embankment leads to unpredicted settlements and cracks on the soil surface. The preloading method is an effective improvement method for solving this problem, and using vertical drains with preloading is an effective way of improving soft soils. Applying the deep soil mixing method is another effective improvement method for soft soils. There are few studies on using the two methods of preloading and deep soil mixing simultaneously. In this paper, the concurrent effect of preloading with vertical drains and deep soil mixing is investigated through a finite element code, Plaxis2D. The influence of parameters such as deep soil mixing column spacing and the existence of and distance between vertical drains on the settlement and stability factor of safety of an embankment on soft soil is investigated in this research.
Keywords: preloading, soft soil, vertical drains, deep soil mixing, consolidation settlement
Procedia PDF Downloads 216
20476 Decision Support System for Fetus Status Evaluation Using Cardiotocograms
Authors: Oyebade K. Oyedotun
Abstract:
The cardiotocogram is a technical recording of the heartbeat rate and uterine contractions of a fetus during pregnancy. During pregnancy, several complications can occur to both the mother and the fetus; hence, it is crucial that medical experts have technical means to check the health of the mother and especially the fetus. It is very important that the fetus develops as expected in stages during the pregnancy period; however, monitoring the health status of the fetus is not easily achieved, as the fetus is not wholly physically available to medical experts for inspection. Hence, doctors have to resort to other tests that can give an indication of the status of the fetus. One such diagnostic test is to obtain cardiotocograms of the fetus. From the analysis of the cardiotocograms, medical experts can determine the status of the fetus and therefore the necessary medical interventions. Generally, medical experts classify examined cardiotocograms as 'normal', 'suspect', or 'pathological'. This work presents an artificial-neural-network-based decision support system which can filter cardiotocogram data, producing the corresponding statuses of the fetuses. The capability of artificial neural networks to explore the cardiotocogram data and learn features that distinguish one class from the others has been exploited in this research. Feedforward and radial basis neural networks were trained on a publicly available database to classify the processed cardiotocogram data into one of the three classes: 'normal', 'suspect', or 'pathological'. Classification accuracies of 87.8% and 89.2% were achieved during the test phase for the trained feedforward and radial basis neural networks, respectively. It is hoped that, while the system described in this work may not be a complete replacement for a medical expert in fetus status evaluation, it can significantly reinforce the confidence in medical diagnoses reached by experts.
Keywords: decision support, cardiotocogram, classification, neural networks
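A hedged sketch of the feedforward-network route described above, using scikit-learn's MLPClassifier on placeholder features; the feature count, class balance, and network size are assumptions, and the study additionally used a radial basis network.

```python
# Illustrative three-class CTG classifier on synthetic stand-in data.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(2126, 21))    # 21 CTG-derived features (assumed)
y = rng.integers(0, 3, size=2126)  # 0=normal, 1=suspect, 2=pathological

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_train)

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
clf.fit(scaler.transform(X_train), y_train)
print("test accuracy:", clf.score(scaler.transform(X_test), y_test))
```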
Procedia PDF Downloads 332
20475 Optimal Design of the Power Generation Network in California: Moving towards 100% Renewable Electricity by 2045
Authors: Wennan Long, Yuhao Nie, Yunan Li, Adam Brandt
Abstract:
To fight against climate change, the California government issued Senate Bill No. 100 (SB-100) in September 2018, which aims at achieving a target of 100% renewable electricity by the end of 2045. A capacity expansion problem is solved in this case study using a binary quadratic programming model. The optimal locations and capacities of the potential renewable power plants (i.e., solar, wind, biomass, geothermal, and hydropower), the phase-out schedule of existing fossil-based (natural gas) power plants, and the transmission of electricity across the entire network are determined with the minimal total annualized cost measured by net present value (NPV). The results show that the renewable electricity contribution could increase to 85.9% by 2030 and reach 100% by 2035. Fossil-based power plants would be totally phased out around 2035, and solar and wind would become the dominant renewable energy resources in the California electricity mix.
Keywords: 100% renewable electricity, California, capacity expansion, mixed integer non-linear programming
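A toy capacity-expansion linear program illustrating the flavor of this kind of optimization; the actual study uses a binary quadratic program over many sites and years, while the costs, capacity factors, and bounds below are assumptions.

```python
# Minimize build cost subject to serving an average demand; illustrative only.
import numpy as np
from scipy.optimize import linprog

# Annualized cost per MW built: [solar, wind, geothermal]
cost = np.array([60_000.0, 75_000.0, 90_000.0])
cf = np.array([0.25, 0.35, 0.80])  # assumed average capacity factors
demand_mw = 30_000.0               # average load to be served

# Minimize cost @ x  subject to  cf @ x >= demand  (x = MW of each type)
res = linprog(c=cost, A_ub=[-cf], b_ub=[-demand_mw],
              bounds=[(0, 50_000)] * 3, method="highs")
print("built capacity (MW):", res.x.round(1))
```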
Procedia PDF Downloads 171
20474 A Hybrid Normalized Gradient Correlation Based Thermal Image Registration for Morphoea
Authors: L. I. Izhar, T. Stathaki, K. Howell
Abstract:
The analysis and interpretation of thermograms have been increasingly employed in the diagnosis and monitoring of diseases, thanks to their non-invasive, non-harmful nature and low cost. In this paper, a novel system is proposed to improve the diagnosis and monitoring of the morphoea skin disorder based on integration with the published lines of Blaschko. In the proposed system, image registration based on global and local registration methods is found indispensable. For the global registration approach, this paper presents a modified normalized gradient cross-correlation (NGC) method to reduce large geometrical differences between two multimodal images, which are represented by smooth gray edge maps. This method is improved further by incorporating an iterative normalized cross-correlation coefficient (NCC) method. It is found that by replacing the final registration part of the NGC method, where translational differences are solved in the spatial Fourier domain, with the NCC method performed in the spatial domain, the performance and robustness of the NGC method can be greatly improved. It is shown in this paper that the hybrid NGC method not only outperforms the phase correlation (PC) method but also reduces the misregistration due to translation suffered by the modified NGC method alone for thermograms with an ill-defined jawline. This also demonstrates that by using the gradients of the gray edge maps and a hybrid technique, the performance of PC-based image registration can be greatly improved.
Keywords: Blaschko's lines, image registration, morphoea, thermal imaging
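A small sketch of the normalized cross-correlation (NCC) coefficient, the building block the hybrid method iterates on; real use would operate on gradient edge maps rather than the raw synthetic intensities used here.

```python
# NCC between two same-size images; inputs are synthetic placeholders.
import numpy as np

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation coefficient of two same-size images."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = np.roll(img, shift=3, axis=1)  # simulated translation
print(ncc(img, img), ncc(img, shifted))  # 1.0 vs. a lower score
```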
Procedia PDF Downloads 310
20473 Historical Hashtags: An Investigation of the #CometLanding Tweets
Authors: Noor Farizah Ibrahim, Christopher Durugbo
Abstract:
This study aims to investigate how the Twittersphere reacted during the recent historical event of a robotic landing on a comet. The news concerns Philae, a robotic lander from the European Space Agency (ESA), which successfully made the first-ever rendezvous and touchdown of its kind on a comet nucleus on November 12, 2014. In order to understand how Twitter is practically used in spreading messages on historical events, we conducted an analysis of one week of tweet feeds containing the #CometLanding hashtag. We studied the trends of tweets, the diffusion of the information, and the characteristics of the social network created. The results indicated that the use of Twitter as a platform enables online communities to engage in and spread the historical event through the social media network (e.g., tweets, retweets, mentions, and replies). In addition, it was found that comprehensible and understandable hashtags could influence users to follow the same tweet stream, compared with other laborious hashtags which were difficult for users in online communities to understand.
Keywords: diffusion of information, hashtag, social media, Twitter
Procedia PDF Downloads 325
20472 Comparison of Allowable Stress Method and Time History Response Analysis for Seismic Design of Buildings
Authors: Sayuri Inoue, Naohiro Nakamura, Tsubasa Hamada
Abstract:
The seismic design methods for buildings are classified into two types: static design and dynamic design. Static design applies a static force as the seismic force and is a relatively simple design method created based on the experience of seismic motion over the past 100 years. At present, static design is used for most Japanese buildings. Dynamic design mainly refers to time history response analysis; it is a comparatively difficult design method in which the earthquake motion assumed for the building model is input and the response is examined. Currently, it is only used for skyscrapers and specific buildings. Under the present design standard in Japan, either the static or the dynamic design method may be used for medium- and high-rise buildings. However, when such buildings are actually designed by the two methods, the relatively simple static design method satisfies the criteria, while the somewhat more difficult dynamic design method often does not. This is because the dynamic design method was built with the intention of designing super high-rise buildings; in short, higher safety is required compared with general buildings, and the criteria become stricter. The authors consider applying the dynamic design method to general buildings that have so far been designed by the static design method. The reason is that applying the dynamic design method is reasonable for buildings outside the conventional standard structural forms, such as design-emphasized buildings. For this purpose, it is important to compare the design results when the criteria of both design methods are placed side by side. In this study, we performed time history response analysis on medium-rise buildings that were actually designed with the allowable stress method. A quantitative comparison between static design and dynamic design was conducted, and the characteristics of both design methods were examined.
Keywords: buildings, seismic design, allowable stress design, time history response analysis, Japanese seismic code
Procedia PDF Downloads 155
20471 Microgrid Design Under Optimal Control With Batch Reinforcement Learning
Authors: Valentin Père, Mathieu Milhé, Fabien Baillon, Jean-Louis Dirion
Abstract:
Microgrids offer potential solutions to meet the need for local grid stability and increase the autonomy of isolated networks through the integration of intermittent renewable energy production and storage facilities. In such a context, sizing production and storage for a given network is a complex task, highly dependent on input data such as the power load profile and renewable resource availability. This work aims at developing an operating cost computation methodology for different microgrid designs based on the use of deep reinforcement learning (RL) algorithms to tackle the optimal operation problem in stochastic environments. RL is a data-based sequential decision control method based on Markov decision processes that enables the consideration of random variables for control at a chosen time scale. Agents trained via RL constitute a promising class of Energy Management Systems (EMS) for the operation of microgrids with energy storage. Microgrid sizing (or design) is generally performed by minimizing investment costs and the operational costs arising from the EMS behavior. The latter might include economic aspects (power purchase, facility aging), social aspects (load curtailment), and ecological aspects (carbon emissions). Sizing variables are related to major constraints on the optimal operation of the network by the EMS. In this work, an islanded-mode microgrid is considered. Renewable generation is done with photovoltaic panels; an electrochemical battery ensures short-term electricity storage. The controllable unit is a hydrogen tank that is used as a long-term storage unit. The proposed approach focuses on the transfer of agent learning for the near-optimal operating cost approximation with deep RL for each microgrid size. Like most data-based algorithms, the training step in RL requires substantial computation time. The objective of this work is thus to study the potential of Batch-Constrained Q-learning (BCQ) for the optimal sizing of microgrids, and especially to reduce the computation time of operating cost estimation across several microgrid configurations. BCQ is an offline RL algorithm that is known to be data efficient and can learn better policies than online RL algorithms from the same buffer. The general idea is to use the learned policies of agents trained in similar environments to constitute a buffer. The latter is used to train BCQ, so that agent learning can be performed without updates during interaction sampling. A comparison between online RL and the presented method is performed based on the score per environment and on the computation time.
Keywords: batch-constrained reinforcement learning, control, design, optimal
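A toy illustration of offline (batch) Q-learning from a fixed transition buffer, the general idea behind BCQ-style training; the actual BCQ algorithm additionally constrains chosen actions to those seen in the buffer, and the state/action spaces here are placeholder assumptions.

```python
# Tabular offline Q-learning over a fixed buffer; all quantities synthetic.
import numpy as np

n_states, n_actions, gamma, alpha = 5, 2, 0.95, 0.1
Q = np.zeros((n_states, n_actions))

rng = np.random.default_rng(0)
# Fixed buffer of (state, action, reward, next_state) collected beforehand,
# e.g., by EMS agents trained in similar microgrid environments.
buffer = [(rng.integers(n_states), rng.integers(n_actions),
           rng.normal(), rng.integers(n_states)) for _ in range(2000)]

for _ in range(50):  # sweep the buffer; no new environment interaction
    for s, a, r, s2 in buffer:
        target = r + gamma * Q[s2].max()
        Q[s, a] += alpha * (target - Q[s, a])

policy = Q.argmax(axis=1)  # greedy dispatch policy per state
print(policy)
```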
Procedia PDF Downloads 123
20470 Second Order Analysis of Frames Using Modified Newmark Method
Authors: Seyed Amin Vakili, Sahar Sadat Vakili, Seyed Ehsan Vakili, Nader Abdoli Yazdi
Abstract:
The main purpose of this paper is to present the Modified Newmark Method as a method of non-linear frame analysis that considers the effect of the axial load (second-order analysis). The discussion is restricted to plane frameworks with a constant cross-section for each element. In addition, it is assumed that the frames are prevented from out-of-plane deflection. This part of the investigation is performed to generalize the established method to assemblage structures such as frameworks. As explained, the governing differential equations are non-linear and cannot be formulated easily due to the unknown axial loads of the struts in the frame. Under the assumption of constant axial load, the governing equations become linear in most methods. Since the modeling and solution of the non-linear form of the governing equations are cumbersome, the linear form of the equations is used in the established method. However, because the method is able to reconsider minor parameters omitted from the model during the solution procedure, the axial load in the elements can be computed at each iteration stage and applied in the next stage. Therefore, the ability of the method to provide an accurate approach to the solution of non-linear equations is demonstrated again in this paper.
Keywords: nonlinear, stability, buckling, modified newmark method
Procedia PDF Downloads 426
20469 Game-Theory-Based on Downlink Spectrum Allocation in Two-Tier Networks
Authors: Yu Zhang, Ye Tian, Fang Ye, Yixuan Kang
Abstract:
The capacity of conventional cellular networks has reached its upper bound, and this can be well handled by introducing femtocells, which are low-cost and easy to deploy. The spectrum interference issue becomes more critical in pace with the increasing growth of value-added multimedia services in two-tier cellular networks. Spectrum allocation is one of the effective methods in interference mitigation technology. This paper proposes a game-theory-based OFDMA downlink spectrum allocation scheme aimed at reducing co-channel interference in two-tier femtocell networks. The framework is formulated as a non-cooperative game, wherein the femto base stations are players and the available frequency channels are strategies. The scheme takes full account of competitive behavior and fairness among stations. In addition, the utility function essentially reflects the interference from the standpoint of channels. This work focuses on co-channel interference and puts forward a negative-logarithm interference function of the distance weight ratio, aiming at suppressing co-channel interference in the same layer of the network. This scenario is more suitable for actual network deployment, and the system possesses high robustness. According to the proposed mechanism, interference exists only when players employ the same channel for data communication. This paper focuses on implementing spectrum allocation in a distributed fashion. Numerical results show that the signal to interference and noise ratio can be obviously improved through the spectrum allocation scheme and that users' downlink quality of service can be satisfied. Besides, the average spectrum efficiency in the cellular network can be significantly improved, as the simulation results show.
Keywords: femtocell networks, game theory, interference mitigation, spectrum allocation
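A schematic best-response iteration for distributed channel selection: each femto base station greedily picks the channel minimizing the distance-weighted interference it sees from co-channel stations, stopping at a Nash equilibrium. The inverse-distance utility below is an illustrative stand-in for the paper's negative-logarithm function, and all positions are synthetic.

```python
# Toy non-cooperative channel selection via best-response dynamics.
import numpy as np

rng = np.random.default_rng(0)
n_stations, n_channels = 6, 3
pos = rng.uniform(0, 100, size=(n_stations, 2))      # station coordinates
dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=2) + np.eye(n_stations)
channel = rng.integers(n_channels, size=n_stations)  # initial assignment

for _ in range(20):                                   # best-response rounds
    changed = False
    for s in range(n_stations):
        cost = np.zeros(n_channels)
        for c in range(n_channels):
            others = (channel == c) & (np.arange(n_stations) != s)
            cost[c] = np.sum(1.0 / dist[s, others])   # closer co-channel -> worse
        best = int(cost.argmin())
        if best != channel[s]:
            channel[s], changed = best, True
    if not changed:                                   # Nash equilibrium reached
        break
print("channel assignment:", channel)
```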
Procedia PDF Downloads 156
20468 Reliability-Based Method for Assessing Liquefaction Potential of Soils
Authors: Mehran Naghizaderokni, Asscar Janalizadechobbasty
Abstract:
This paper explores a probabilistic method for assessing the liquefaction potential of sandy soils. The current simplified methods for assessing soil liquefaction potential use a deterministic safety factor in order to determine whether liquefaction will occur or not. However, these methods are unable to determine the liquefaction probability related to a safety factor. A solution to this problem can be found through reliability analysis. This paper presents a reliability analysis method based on a popular deterministic liquefaction analysis method. The proposed probabilistic method is formulated based on the results of reliability analyses of 190 field records and observations of soil performance against liquefaction. The results of the present study show that a safety factor greater or smaller than 1 does not in itself indicate safety or liquefaction; to assess the probability of liquefaction, a reliability-based analysis should be used. This reliability method uses the empirical acceleration attenuation law in the Chalos area to derive the probability density distribution function and the statistics of the earthquake-induced cyclic shear stress ratio (CSR). The CSR and CRR statistics are used together with the first-order second-moment method to calculate the relation between the liquefaction probability, the safety factor, and the reliability index. Based on the proposed method, the liquefaction probability related to a safety factor can be easily calculated, and the influence of several soil parameters on the liquefaction probability can be quantitatively evaluated.
Keywords: liquefaction, reliability analysis, chalos area, civil and structural engineering
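A minimal first-order second-moment (FOSM) sketch relating the safety factor to a reliability index and liquefaction probability, assuming a linear limit state g = CRR - CSR with independent normal variables; the CSR/CRR statistics below are assumed values, not the study's field data.

```python
# FOSM reliability index and liquefaction probability; inputs are assumed.
import numpy as np
from scipy.stats import norm

mu_crr, cov_crr = 0.30, 0.25  # cyclic resistance ratio: mean, coeff. of variation
mu_csr, cov_csr = 0.22, 0.20  # cyclic stress ratio: mean, coeff. of variation

sigma_crr = mu_crr * cov_crr
sigma_csr = mu_csr * cov_csr

fs = mu_crr / mu_csr                                       # deterministic safety factor
beta = (mu_crr - mu_csr) / np.hypot(sigma_crr, sigma_csr)  # reliability index
p_liq = norm.cdf(-beta)                                    # liquefaction probability

print(f"FS={fs:.2f}, beta={beta:.2f}, P(liquefaction)={p_liq:.1%}")
```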
Procedia PDF Downloads 470
20467 Parallelizing the Hybrid Pseudo-Spectral Time Domain/Finite Difference Time Domain Algorithms for the Large-Scale Electromagnetic Simulations Using Message Passing Interface Library
Authors: Donggun Lee, Q-Han Park
Abstract:
Due to its coarse grid, the Pseudo-Spectral Time Domain (PSTD) method has advantages over the Finite Difference Time Domain (FDTD) method in terms of memory requirement and operation time. However, since its parallelization efficiency is much lower than that of FDTD, PSTD is not a useful method for large-scale electromagnetic simulations on a parallel platform. In this paper, we propose a parallelization technique for the hybrid PSTD-FDTD (HPF) method, which simultaneously possesses the efficient parallelizability of FDTD and the speed and low memory requirement of PSTD. The parallelization cost of the HPF method is exactly the same as that of parallel FDTD, but it still occupies much less memory space and has a faster operation speed than parallel FDTD. Experiments on distributed memory systems have shown that the parallel HPF method saves up to 96% of the operation time and reduces the memory requirement by 84%. Also, by combining the OpenMP library with the MPI library, we further reduced the operation time of the parallel HPF method by 50%.
Keywords: FDTD, hybrid, MPI, OpenMP, PSTD, parallelization
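A schematic mpi4py halo exchange of the kind a domain-decomposed time-domain solver needs: each rank holds a slab of the grid and swaps boundary layers with its neighbors every time step. The grid contents, sizes, and number of steps are placeholders, and this is a generic pattern rather than the authors' implementation.

```python
# 1D slab decomposition with ghost-cell exchange; run e.g. with
#   mpiexec -n 4 python halo.py
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

n = 100                                  # interior cells per rank (assumed)
field = np.full(n + 2, float(rank))      # slab with one ghost cell per side
left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

for step in range(10):
    # Send rightmost interior cell right, receive left ghost from the left
    comm.Sendrecv(field[n:n + 1], dest=right,
                  recvbuf=field[0:1], source=left)
    # Send leftmost interior cell left, receive right ghost from the right
    comm.Sendrecv(field[1:2], dest=left,
                  recvbuf=field[n + 1:n + 2], source=right)
    # ... local FDTD/PSTD update on field[1:n+1] would go here ...
```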
Procedia PDF Downloads 148
20466 An ANOVA-based Sequential Forward Channel Selection Framework for Brain-Computer Interface Application based on EEG Signals Driven by Motor Imagery
Authors: Forouzan Salehi Fergeni
Abstract:
A brain-computer interface (BCI) system converts a person's movement intentions into commands for action using brain signals such as the electroencephalogram (EEG). When left- or right-hand motions are imagined, different patterns of brain activity appear, which can be employed as BCI signals for control. To improve BCI systems, effective and accurate techniques for increasing the classification precision of motor imagery (MI) based on EEG are greatly needed. Subject dependency and non-stationarity are two characteristics of EEG signals, so EEG signals must be effectively processed before being used in BCI applications. In the present study, after applying an 8-30 Hz band-pass filter, a common average reference (CAR) spatial filter is applied for denoising, and then an analysis-of-variance method is used to select the more appropriate and informative channels from a large set of channels. After ordering the channels based on their efficiency, sequential forward channel selection is employed to choose just a few reliable ones. Features from the time and wavelet domains are extracted and shortlisted with the help of a statistical technique, namely the t-test. Finally, the selected features are classified with different machine learning and neural network classifiers, namely k-nearest neighbor, probabilistic neural network, support vector machine, extreme learning machine, decision tree, multi-layer perceptron, and linear discriminant analysis, in order to compare their performance in this application. Using a ten-fold cross-validation approach, tests are performed on a motor imagery dataset from BCI Competition III. The outcomes demonstrate that the SVM classifier achieved the greatest classification accuracy of 97% compared with the other available approaches. The overall findings confirm that the suggested framework is reliable and computationally effective for the construction of BCI systems and surpasses the existing methods.
Keywords: brain-computer interface, channel selection, motor imagery, support-vector-machine
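A hedged sketch of ANOVA-based channel ranking followed by sequential forward selection, mirroring the pipeline described above; the data shapes, one-feature-per-channel simplification, and classifier settings are assumptions.

```python
# ANOVA F-score channel ranking + greedy forward selection with an SVM.
import numpy as np
from sklearn.feature_selection import f_classif
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_trials, n_channels = 200, 22
X = rng.normal(size=(n_trials, n_channels))  # one feature per channel (e.g., band power)
y = rng.integers(0, 2, size=n_trials)        # left vs. right hand imagery

f_scores, _ = f_classif(X, y)                # ANOVA F-value per channel
ranked = np.argsort(f_scores)[::-1]          # most informative first

selected, best = [], 0.0
for ch in ranked[:10]:                       # sequential forward selection
    trial = selected + [ch]
    score = cross_val_score(SVC(), X[:, trial], y, cv=10).mean()
    if score > best:                         # keep the channel if CV improves
        selected, best = trial, score
print("selected channels:", selected, "CV accuracy:", round(best, 3))
```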
Procedia PDF Downloads 51
20465 The Use of Fractional Brownian Motion in the Generation of Bed Topography for Bodies of Water Coupled with the Lattice Boltzmann Method
Authors: Elysia Barker, Jian Guo Zhou, Ling Qian, Steve Decent
Abstract:
A method of modelling topography used in the simulation of riverbeds is proposed in this paper, which removes the need for datapoints and measurements of physical terrain. While complex scans of the contours of a surface can be achieved with other methods, this requires specialised tools, which the proposed method overcomes by using fractional Brownian motion (FBM) as a basis to estimate the real surface within a 15% margin of error while attempting to optimise algorithmic efficiency. This removes the need for complex, expensive equipment and reduces resources spent modelling bed topography. This method also accounts for the change in topography over time due to erosion, sediment transport, and other external factors which could affect the topography of the ground by updating its parameters and generating a new bed. The lattice Boltzmann method (LBM) is used to simulate both stationary and steady flow cases in a side-by-side comparison over the generated bed topography using the proposed method and a test case taken from an external source. The method, if successful, will be incorporated into the current LBM program used in the testing phase, which will allow an automatic generation of topography for the given situation in future research, removing the need for bed data to be specified.
Keywords: bed topography, FBM, LBM, shallow water, simulations
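A compact sketch of one standard way to synthesize FBM-like terrain, random midpoint displacement over a 1D bed profile; this is a generic algorithm rather than the paper's specific generator, and the Hurst exponent and amplitude are illustrative assumptions.

```python
# 1D FBM-like bed profile via random midpoint displacement.
import numpy as np

def fbm_profile(n_levels: int = 8, hurst: float = 0.7, amp: float = 1.0,
                seed: int = 0) -> np.ndarray:
    """Return a bed elevation profile with 2**n_levels + 1 points."""
    rng = np.random.default_rng(seed)
    n = 2 ** n_levels
    z = np.zeros(n + 1)
    scale = amp
    step = n
    while step > 1:
        half = step // 2
        # Displace midpoints between the current anchor points
        for i in range(half, n, step):
            z[i] = 0.5 * (z[i - half] + z[i + half]) + rng.normal(0, scale)
        scale *= 0.5 ** hurst  # roughness decays with the Hurst exponent
        step = half
    return z

bed = fbm_profile()
print(bed.min(), bed.max())  # elevation range of the synthetic bed
```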
Procedia PDF Downloads 98
20464 Kernel Parallelization Equation for Identifying Structures under Unknown and Periodic Loads
Authors: Seyed Sadegh Naseralavi
Abstract:
This paper presents a kernel parallelization equation for damage identification in structures under unknown periodic excitations. Herein, the dynamic differential equation of motion of the structure is viewed as a mapping from displacements to external forces. Utilizing this viewpoint, a new method for damage detection in structures under periodic loads is presented. The developed method requires only two periods of the load and detects the damage without identifying the input loads. The method is based on the fact that structural displacements under free and forced vibrations are associated with two parallel subspaces in the displacement space. Building on this concept, the kernel parallelization equation (KPE) is derived for damage detection under unknown periodic loads. The method is verified on a case study under periodic loads.
Keywords: Kernel, unknown periodic load, damage detection, Kernel parallelization equation
Procedia PDF Downloads 284