Search results for: circuit models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 7432

5662 A Hierarchical Bayesian Calibration of Data-Driven Models for Composite Laminate Consolidation

Authors: Nikolaos Papadimas, Joanna Bennett, Amir Sakhaei, Timothy Dodwell

Abstract:

Composite modeling of consolidation processes plays an important role in process and part design by indicating the formation of possible unwanted defects prior to expensive iterative experimental trial-and-development programs. Composite materials in their uncured state display complex constitutive behavior, which has received much academic interest and for which different models have been proposed. Modeling and statistical errors which arise from this fitting will propagate through any simulation in which the material model is used. A general hyperelastic polynomial representation was proposed, which can be readily implemented in various nonlinear finite element packages; in our case, FEniCS was chosen. The coefficients are assumed uncertain, and therefore the distribution of parameters is learned using Markov Chain Monte Carlo (MCMC) methods. In engineering, the approach often followed is to select a single set of model parameters which, on average, best fits a set of experiments. There are good statistical reasons why this is not a rigorous approach to take. To overcome these challenges, a hierarchical Bayesian framework was proposed in which the population distribution of model parameters is inferred from an ensemble of experimental tests. The resulting sampled distribution of hyperparameters is approximated using Maximum Entropy methods so that the distribution can be readily sampled when embedded within a stochastic finite element simulation. The methodology is validated and demonstrated on a set of consolidation experiments of AS4/8852 with various stacking sequences. The resulting distributions are then applied to stochastic finite element simulations of the consolidation of curved parts, leading to a distribution of possible model outputs. With this, the paper, as far as the authors are aware, represents the first stochastic finite element implementation in composite process modelling.
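As an illustration of the MCMC parameter inference the abstract describes, here is a minimal random-walk Metropolis sketch in Python. The data, prior, and likelihood below are toy stand-ins, not the paper's hyperelastic model or its FEniCS implementation:

```python
import math
import random

def metropolis_hastings(log_post, x0, n_steps, step=0.5, seed=0):
    """Random-walk Metropolis sampler for a 1-D log-posterior."""
    rng = random.Random(seed)
    x, samples = x0, []
    lp = log_post(x)
    for _ in range(n_steps):
        xp = x + rng.gauss(0.0, step)       # propose a nearby parameter value
        lpp = log_post(xp)
        if math.log(rng.random()) < lpp - lp:  # accept with prob min(1, ratio)
            x, lp = xp, lpp
        samples.append(x)
    return samples

# Toy posterior: Gaussian likelihood (variance 0.1) around illustrative
# stiffness-like observations, with a flat prior.
data = [2.1, 1.9, 2.3, 2.0, 2.2]
def log_post(theta):
    return -0.5 * sum((d - theta) ** 2 for d in data) / 0.1

samples = metropolis_hastings(log_post, x0=0.0, n_steps=5000)
posterior_mean = sum(samples[1000:]) / len(samples[1000:])  # discard burn-in
```

In a hierarchical setting, the same machinery would be run over per-experiment parameters plus shared population-level hyperparameters.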

Keywords: data-driven, material consolidation, stochastic finite elements, surrogate models

Procedia PDF Downloads 145
5661 Optimum Design of Alkali Activated Slag Concretes for Low Chloride Ion Permeability and Water Absorption Capacity

Authors: Müzeyyen Balçikanli, Erdoğan Özbay, Hakan Tacettin Türker, Okan Karahan, Cengiz Duran Atiş

Abstract:

In this research, the effects of curing time (TC), curing temperature (CT), sodium concentration (SC), and silicate modulus (SM) on the compressive strength, chloride ion permeability, and water absorption capacity of alkali-activated slag (AAS) concretes were investigated. The best possible combination of CT, TC, SC, and SM was determined for maximizing the compressive strength while minimizing the chloride ion permeability and water absorption capacity of AAS concretes. An experimental program was conducted by using the central composite design method. The alkali solution-to-slag ratio was kept constant at 0.53 in all mixtures. The effects of the independent parameters on the measured properties (dependent parameters) were characterized and analyzed by using statistically significant quadratic regression models. The proposed regression models are valid for AAS concretes with SC from 0.1% to 7.5%, SM from 0.4 to 3.2, CT from 20 °C to 94 °C, and TC from 1.2 hours to 25 hours. The results of the tests and analysis indicate that the most effective parameter for the compressive strength, chloride ion permeability, and water absorption capacity is the sodium concentration.
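The quadratic regression models fitted from a central composite design can be sketched with an ordinary least-squares fit. Below is a one-factor Python illustration (the temperature/strength numbers are made up, not the paper's data):

```python
def fit_quadratic(xs, ys):
    """Least-squares fit of y = b0 + b1*x + b2*x^2 via normal equations."""
    rows = [[1.0, x, x * x] for x in xs]   # design matrix columns: 1, x, x^2
    n = 3
    A = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    b = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(n)]
    # Gaussian elimination with partial pivoting on the 3x3 system.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coef = [0.0] * n
    for r in range(n - 1, -1, -1):
        coef[r] = (b[r] - sum(A[r][c] * coef[c] for c in range(r + 1, n))) / A[r][r]
    return coef

# Illustrative: compressive strength vs curing temperature (made-up values).
temps = [20, 40, 60, 80, 94]
strengths = [30.0, 44.0, 52.0, 54.0, 51.0]  # rises, then levels off
b0, b1, b2 = fit_quadratic(temps, strengths)
```

A negative b2 captures the concave "rises then levels off" response surface typical of such designs; the full study fits a four-factor quadratic with interaction terms.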

Keywords: alkali activation, slag, rapid chloride permeability, water absorption capacity

Procedia PDF Downloads 311
5660 Recognition of Gene Names from Gene Pathway Figures Using Siamese Network

Authors: Muhammad Azam, Micheal Olaolu Arowolo, Fei He, Mihail Popescu, Dong Xu

Abstract:

The number of biological papers is growing quickly, which means that the number of biological pathway figures in those papers is also increasing quickly. Each pathway figure shows extensive biological information, like the names of genes and how the genes are related. However, manually annotating pathway figures takes a lot of time and work. Even though using advanced image understanding models could speed up the process of curation, these models still need to be made more accurate. To improve gene name recognition from pathway figures, we applied a Siamese network to map image segments to a library of pictures containing known genes, in a similar way to person recognition from photos in many photo applications. We used a triplet loss function and a triplet spatial pyramid pooling network (TSPP-Net), which combines the triplet convolutional neural network and spatial pyramid pooling. We compared VGG19 and VGG16 as the Siamese network backbone. VGG16 achieved better performance, with an accuracy of 93%, which is much higher than OCR results.
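The triplet loss used here penalizes an anchor embedding that sits closer to a different-class example than to a same-class one. A minimal Python sketch, with toy 3-D vectors standing in for the CNN embeddings of gene-name image crops:

```python
import math

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss: push the positive closer to the anchor
    than the negative by at least `margin` in embedding space."""
    d_pos = euclidean(anchor, positive)
    d_neg = euclidean(anchor, negative)
    return max(0.0, d_pos - d_neg + margin)

# Toy embeddings (illustrative values, not real network outputs).
anchor   = [0.0, 1.0, 0.0]
positive = [0.1, 0.9, 0.0]   # crop of the same gene name
negative = [1.0, 0.0, 1.0]   # crop of a different gene name
loss = triplet_loss(anchor, positive, negative)  # 0.0: margin satisfied
```

During training, the loss is summed over mined triplets and backpropagated through the shared (Siamese) backbone.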

Keywords: biological pathway, image understanding, gene name recognition, object detection, Siamese network, VGG

Procedia PDF Downloads 291
5659 A Comparison of Convolutional Neural Network Architectures for the Classification of Alzheimer’s Disease Patients Using MRI Scans

Authors: Tomas Premoli, Sareh Rowlands

Abstract:

In this study, we investigate the impact of various convolutional neural network (CNN) architectures on the accuracy of diagnosing Alzheimer’s disease (AD) using patient MRI scans. Alzheimer’s disease is a debilitating neurodegenerative disorder that affects millions worldwide. Early, accurate, and non-invasive diagnostic methods are required for providing optimal care and symptom management. Deep learning techniques, particularly CNNs, have shown great promise in enhancing this diagnostic process. We aim to contribute to the ongoing research in this field by comparing the effectiveness of different CNN architectures and providing insights for future studies. Our methodology involved preprocessing MRI data, implementing multiple CNN architectures, and evaluating the performance of each model. We employed intensity normalization, linear registration, and skull stripping for our preprocessing. The selected architectures included VGG, ResNet, and DenseNet models, all implemented using the Keras library. We employed transfer learning and trained models from scratch to compare their effectiveness. Our findings demonstrated significant differences in performance among the tested architectures, with DenseNet201 achieving the highest accuracy of 86.4%. Transfer learning proved to be helpful in improving model performance. We also identified potential areas for future research, such as experimenting with other architectures, optimizing hyperparameters, and employing fine-tuning strategies. By providing a comprehensive analysis of the selected CNN architectures, we offer a solid foundation for future research in Alzheimer’s disease diagnosis using deep learning techniques. Our study highlights the potential of CNNs as a valuable diagnostic tool and emphasizes the importance of ongoing research to develop more accurate and effective models.
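The intensity normalization mentioned in the preprocessing pipeline can be illustrated with a simple z-score rescaling. This is a generic sketch of the technique, not the authors' exact pipeline; real MRI workflows typically mask out background voxels first:

```python
import statistics

def z_normalize(voxels):
    """Z-score intensity normalization: rescale voxel intensities to
    zero mean and unit variance so scans from different scanners and
    sessions become comparable inputs to a CNN."""
    mu = statistics.fmean(voxels)
    sigma = statistics.pstdev(voxels)
    if sigma == 0:
        return [0.0 for _ in voxels]   # constant image: return all zeros
    return [(v - mu) / sigma for v in voxels]

scan = [10.0, 12.0, 8.0, 14.0, 6.0]   # illustrative raw intensities
norm = z_normalize(scan)
```

After normalization (together with linear registration and skull stripping), each architecture sees inputs on the same intensity scale, which isolates the architectural comparison itself.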

Keywords: Alzheimer’s disease, convolutional neural networks, deep learning, medical imaging, MRI

Procedia PDF Downloads 73
5658 Performance Improvement of Information System of a Banking System Based on Integrated Resilience Engineering Design

Authors: S. H. Iranmanesh, L. Aliabadi, A. Mollajan

Abstract:

Integrated resilience engineering (IRE) is capable of returning banking systems to the normal state under extreme economic conditions. In this study, the information system of a large bank (with several branches) is assessed and optimized under severe economic conditions. Data envelopment analysis (DEA) models are employed to achieve the objective of this study. Nine IRE factors are considered to be the outputs, and a dummy variable is defined as the input of the DEA models. A standard questionnaire is designed and distributed among executive managers, who are considered the decision-making units (DMUs). Reliability and validity of the questionnaire are examined based on Cronbach's alpha and a t-test. The most appropriate DEA model is determined based on average efficiency and a normality test. It is shown that the proposed integrated design provides higher efficiency than the conventional RE design. Results of sensitivity and perturbation analysis indicate that self-organization, fault tolerance, and reporting culture together account for about 50 percent of the total weight.
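The questionnaire reliability check via Cronbach's alpha can be computed directly. A small Python sketch with made-up responses (not the study's data):

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency:
    alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores)),
    where `items` is a list of item-score lists, one per questionnaire
    item, all over the same respondents."""
    k = len(items)
    item_vars = sum(statistics.pvariance(scores) for scores in items)
    totals = [sum(col) for col in zip(*items)]   # each respondent's total
    total_var = statistics.pvariance(totals)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Illustrative 3-item questionnaire answered by 5 managers (made-up data).
responses = [
    [4, 5, 3, 4, 2],
    [4, 4, 3, 5, 2],
    [5, 5, 2, 4, 3],
]
alpha = cronbach_alpha(responses)   # ~0.89: acceptable internal consistency
```

Values above roughly 0.7 are conventionally taken as acceptable reliability before feeding the questionnaire scores into the DEA models.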

Keywords: banking system, Data Envelopment Analysis (DEA), Integrated Resilience Engineering (IRE), performance evaluation, perturbation analysis

Procedia PDF Downloads 188
5657 Exploration and Evaluation of the Effect of Multiple Countermeasures on Road Safety

Authors: Atheer Al-Nuaimi, Harry Evdorides

Abstract:

Every day, many people die or are disabled or injured on roads around the world, which necessitates more specific treatments for transportation safety issues. The International Road Assessment Programme (iRAP) model is one of the comprehensive road safety models, accounting for many factors that affect road safety in a cost-effective way in low- and middle-income countries. In the iRAP model, road safety is divided into five star ratings, from 1 star (the lowest level) to 5 stars (the highest level). These star ratings are based on a star rating score, which is calculated by the iRAP methodology from road attributes, traffic volumes, and operating speeds. The outcome of the iRAP methodology is a set of treatments that can be used to improve road safety and reduce the numbers of fatalities and serious injuries (FSI). These countermeasures can be applied separately, as a single countermeasure, or combined as multiple countermeasures at a location. There is general agreement that the effectiveness of a countermeasure is liable to consistent losses when it is used in combination with other countermeasures; that is, crash reduction estimates of individual countermeasures cannot simply be added together. The iRAP methodology therefore uses multiple-countermeasure adjustment factors to predict reductions in the effectiveness of road safety countermeasures when more than one countermeasure is chosen. A multiple-countermeasure correction factor is calculated for every 100-meter segment and for every accident type. However, limitations of this approach include a probable over-estimation of the predicted crash reduction. This study aims to adjust this correction factor by developing new models to calculate the effect of using multiple countermeasures on the number of fatalities for a location or an entire road. Regression models were used to establish relationships between crash frequencies and the factors that affect their rates.
Multiple linear regression, negative binomial regression, and Poisson regression techniques were used to develop models that can address the effectiveness of using multiple countermeasures. Analyses conducted using the R Project for Statistical Computing showed that a model developed with the negative binomial regression technique gives more reliable predictions of the number of fatalities after the implementation of multiple road safety countermeasures than the iRAP model. The results also showed that the negative binomial regression approach gives more precise results than the multiple linear and Poisson regression techniques because of overdispersion and standard error issues.
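The overdispersion that favors negative binomial over Poisson regression can be checked with a variance-to-mean ratio. A Python sketch on made-up crash counts (illustrative only):

```python
import statistics

def dispersion_ratio(counts):
    """Variance-to-mean ratio of crash counts. A Poisson model assumes
    the ratio is ~1; values well above 1 indicate overdispersion, for
    which a negative binomial model is usually preferred."""
    return statistics.pvariance(counts) / statistics.fmean(counts)

# Illustrative yearly fatality counts for ten road segments (made-up data).
crashes = [0, 1, 0, 2, 7, 0, 1, 9, 0, 0]
ratio = dispersion_ratio(crashes)
model = "negative binomial" if ratio > 1.5 else "Poisson"
```

Crash data are typically overdispersed like this (a few high-crash segments among many zero-crash ones), which is why the negative binomial fit in the study outperformed the Poisson one.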

Keywords: international road assessment program, negative binomial, road multiple countermeasures, road safety

Procedia PDF Downloads 240
5656 What 4th-Year Primary-School Students are Thinking: A Paper Airplane Problem

Authors: Neslihan Şahin Çelik, Ali Eraslan

Abstract:

In recent years, mathematics educators have frequently stressed the necessity of instructing students about models and modeling approaches that encompass cognitive and metacognitive thought processes, starting from the first years of school and continuing on through the years of higher education. The purpose of this study is to examine the thought processes of 4th-grade primary school students in their modeling activities and to explore the difficulties encountered in these processes, if any. The study, of qualitative design, was conducted in the 2015-2016 academic year at a public state-school located in a central city in the Black Sea Region of Turkey. A preliminary study was first implemented with designated 4th grade students, after which the criterion sampling method was used to select three students that would be recruited into the focus group. The focus group that was thus formed was asked to work on the model eliciting activity of the Paper Airplane Problem and the entire process was recorded on video. The Paper Airplane Problem required the students to determine the winner with respect to: (a) the plane that stays in the air for the longest time; (b) the plane that travels the greatest distance in a straight-line path; and (c) the overall winner for the contest. A written transcript was made of the video recording, after which the recording and the students' worksheets were analyzed using the Blum and Ferri modeling cycle. The results of the study revealed that the students tested the hypotheses related to daily life that they had set up, generated ideas of their own, verified their models by making connections with real life, and tried to make their models generalizable. On the other hand, the students had some difficulties in terms of their interpretation of the table of data and their ways of operating on the data during the modeling processes.

Keywords: primary school students, model eliciting activity, mathematical modeling, modeling process, paper airplane problem

Procedia PDF Downloads 358
5655 Creativity and Innovation in Postgraduate Supervision

Authors: Rajendra Chetty

Abstract:

The paper aims to address two aspects of postgraduate studies: interdisciplinary research and creative models of supervision. Interdisciplinary research can be viewed as a key imperative to solve complex problems. While excellent research requires a context of disciplinary strength, the cutting edge is often found at the intersection between disciplines. Interdisciplinary research foregrounds a team approach, and information, methodologies, designs, and theories from different disciplines are integrated to advance fundamental understanding or to solve problems whose solutions are beyond the scope of a single discipline. Our aim should also be to generate research that transcends the original disciplines, i.e., transdisciplinary research. Complexity is characteristic of the knowledge economy; hence, postgraduate research and engaged scholarship should be viewed by universities as primary vehicles through which knowledge can be generated to have a meaningful impact on society. There are far too many ‘ordinary’ studies that fall into the realm of credentialism and certification as opposed to significant studies that generate new knowledge and provide a trajectory for further academic discourse. Secondly, the paper will look at models of supervision that are different to the dominant ‘apprentice’ or individual approach. A reflective practitioner approach is used to discuss a range of supervision models that resonate well with the principles of interdisciplinarity, growth in the postgraduate sector, and a commitment to engaged scholarship. The global demand for postgraduate education has resulted in increased intake and new demands on institutions' limited supervision capacity. Team supervision lodged within large-scale research projects, working with a cohort of students within a research theme, the journal-article route of doctoral studies, and the professional PhD are some of the models that provide an alternative to the traditional approach.
International cooperation should be encouraged in the production of high-impact research and institutions should be committed to stimulating international linkages which would result in co-supervision and mobility of postgraduate students and global significance of postgraduate research. International linkages are also valuable in increasing the capacity for supervision at new and developing universities. Innovative co-supervision and joint-degree options with global partners should be explored within strategic planning for innovative postgraduate programmes. Co-supervision of PhD students is probably the strongest driver (besides funding) for collaborative research as it provides the glue of shared interest, advantage and commitment between supervisors. The students’ field serves and informs the co-supervisors own research agendas and helps to shape over-arching research themes through shared research findings.

Keywords: interdisciplinarity, internationalisation, postgraduate, supervision

Procedia PDF Downloads 238
5654 Long Short-Term Memory Stream Cruise Control Method for Automated Drift Detection and Adaptation

Authors: Mohammad Abu-Shaira, Weishi Shi

Abstract:

Adaptive learning, a commonly employed solution to drift, involves updating predictive models online during their operation to react to concept drifts, thereby serving as a critical component and natural extension for online learning systems that learn incrementally from each example. This paper introduces LSTM-SCCM (Long Short-Term Memory Stream Cruise Control Method), a drift adaptation-as-a-service framework for online learning. LSTM-SCCM automates drift adaptation through prompt detection, drift magnitude quantification, dynamic hyperparameter tuning, short-term optimization and model recalibration for immediate adjustments, and, when necessary, long-term model recalibration to ensure deeper enhancements in model performance. LSTM-SCCM is incorporated into a suite of cutting-edge online regression models, assessing their performance across various types of concept drift using diverse datasets with varying characteristics. The findings demonstrate that LSTM-SCCM represents a notable advancement in both model performance and efficacy in handling concept drift occurrences. LSTM-SCCM stands out as the sole framework adept at effectively tackling concept drifts within regression scenarios. Its proactive approach to drift adaptation distinguishes it from conventional reactive methods, which typically rely on retraining after significant degradation to model performance caused by drifts. Additionally, LSTM-SCCM employs an in-memory approach combined with the Self-Adjusting Memory (SAM) architecture to enhance real-time processing and adaptability. The framework incorporates variable thresholding techniques and does not assume any particular data distribution, making it an ideal choice for managing high-dimensional datasets and efficiently handling large-scale data.
Our experiments, which include abrupt, incremental, and gradual drifts across both low- and high-dimensional datasets with varying noise levels, and applied to four state-of-the-art online regression models, demonstrate that LSTM-SCCM is versatile and effective, rendering it a valuable solution for online regression models to address concept drift.
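The detection stage can be illustrated with a simple windowed error monitor. This is a generic stand-in for drift detection, not the actual LSTM-SCCM algorithm (window size and threshold are illustrative choices):

```python
import statistics

def detect_drift(errors, window=20, threshold=2.0):
    """Flag concept drift when the mean error over the most recent
    window exceeds the reference (earliest) window's mean by more than
    `threshold` reference standard deviations."""
    ref, recent = errors[:window], errors[-window:]
    mu, sigma = statistics.fmean(ref), statistics.pstdev(ref)
    if sigma == 0:
        return statistics.fmean(recent) != mu
    return (statistics.fmean(recent) - mu) / sigma > threshold

stable = [0.10, 0.12, 0.11, 0.09, 0.10] * 8                  # steady error stream
drifted = stable[:20] + [0.50, 0.55, 0.48, 0.52, 0.51] * 4   # abrupt error jump
```

A framework like the one described would follow a positive detection with magnitude quantification and short- or long-term recalibration rather than simply raising a flag.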

Keywords: automated drift detection and adaptation, concept drift, hyperparameters optimization, online and adaptive learning, regression

Procedia PDF Downloads 11
5653 Transformation of Industrial Policy towards Industry 4.0 and Its Impact on Firms' Competition

Authors: Arūnas Burinskas

Abstract:

Europe is on the threshold of a new industrial revolution called Industry 4.0. Many believe that it will increase the flexibility of production, the mass adaptation of products to consumers, and the speed of service; it will also improve product quality and dramatically increase productivity. However, all the expected benefits of Industry 4.0 come with inevitable changes and challenges. One of them is the inevitable transformation of current competition and business models. This article examines the possible results of a competitive conversion from the classic Bertrand and Cournot models to a qualitatively new competition based on innovation. The ability to deliver a new product quickly, and the possibility of producing individual designs (through flexible and quickly configurable factories) by reducing equipment failures and increasing process automation and control, are highly important. This study shows that the ongoing transformation of the competition model is changing the game. This, together with the creation of complex value networks, requires huge investments, which makes it particularly difficult for small and medium-sized enterprises. In addition, the ongoing digitalization of data raises new concerns regarding legal obligations, intellectual property, and security.
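For reference, the classic Cournot quantity competition that the article takes as its baseline has a closed-form equilibrium under linear demand. A Python sketch with illustrative numbers:

```python
def cournot_equilibrium(a, b, c1, c2):
    """Cournot duopoly with inverse demand P = a - b*(q1 + q2) and
    constant marginal costs c1, c2. Solving the two best-response
    conditions q_i = (a - c_i - b*q_j) / (2b) simultaneously gives
    the closed-form equilibrium quantities."""
    q1 = (a - 2 * c1 + c2) / (3 * b)
    q2 = (a - 2 * c2 + c1) / (3 * b)
    price = a - b * (q1 + q2)
    return q1, q2, price

# Symmetric example: demand intercept 120, slope 1, both marginal costs 30.
q1, q2, p = cournot_equilibrium(120, 1, 30, 30)   # each firm produces 30
```

Under innovation-based competition, by contrast, payoffs depend on speed to market and customization capability rather than on such static quantity choices, which is exactly the shift the article discusses.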

Keywords: Bertrand and Cournot Competition, competition model, industry 4.0, industrial organisation, monopolistic competition

Procedia PDF Downloads 138
5652 Automated Transformation of 3D Point Cloud to BIM Model: Leveraging Algorithmic Modeling for Efficient Reconstruction

Authors: Radul Shishkov, Orlin Davchev

Abstract:

The digital era has revolutionized architectural practices, with building information modeling (BIM) emerging as a pivotal tool for architects, engineers, and construction professionals. However, the transition from traditional methods to BIM-centric approaches poses significant challenges, particularly in the context of existing structures. This research introduces a technical approach to bridge this gap through the development of algorithms that facilitate the automated transformation of 3D point cloud data into detailed BIM models. The core of this research lies in the application of algorithmic modeling and computational design methods to interpret and reconstruct point cloud data (a collection of data points in space, typically produced by 3D scanners) into comprehensive BIM models. This process involves complex stages of data cleaning, feature extraction, and geometric reconstruction, which are traditionally time-consuming and prone to human error. By automating these stages, our approach significantly enhances the efficiency and accuracy of creating BIM models for existing buildings. The proposed algorithms are designed to identify key architectural elements within point clouds, such as walls, windows, doors, and other structural components, and to translate these elements into their corresponding BIM representations. This includes the integration of parametric modeling techniques to ensure that the generated BIM models are not only geometrically accurate but also embedded with essential architectural and structural information. Our methodology has been tested on several real-world case studies, demonstrating its capability to handle diverse architectural styles and complexities. The results showcase a substantial reduction in time and resources required for BIM model generation while maintaining high levels of accuracy and detail.
This research contributes significantly to the field of architectural technology by providing a scalable and efficient solution for the integration of existing structures into the BIM framework. It paves the way for more seamless and integrated workflows in renovation and heritage conservation projects, where the accuracy of existing conditions plays a critical role. The implications of this study extend beyond architectural practices, offering potential benefits in urban planning, facility management, and historic preservation.
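One common building block for extracting walls from point clouds is a RANSAC-style plane search. The toy Python sketch below finds the best-supported vertical plane x = c in a synthetic cloud; it is a generic illustration of the technique, not the authors' algorithm:

```python
import random

def ransac_vertical_wall(points, tol=0.05, iters=200, seed=1):
    """Toy RANSAC: hypothesize vertical planes x = c through sampled
    points and keep the one supported by the most inliers, as a
    stand-in for wall extraction from a scanned point cloud."""
    rng = random.Random(seed)
    best_c, best_inliers = None, 0
    for _ in range(iters):
        c = rng.choice(points)[0]                      # plane through a sample
        inliers = sum(1 for p in points if abs(p[0] - c) < tol)
        if inliers > best_inliers:
            best_c, best_inliers = c, inliers
    return best_c, best_inliers

# Synthetic scan: 50 wall points near the plane x = 2.0, plus 20 noise points.
rng = random.Random(0)
wall = [(2.0 + rng.gauss(0, 0.01), rng.random() * 3, rng.random() * 2.5)
        for _ in range(50)]
noise = [(rng.random() * 5, rng.random() * 3, rng.random() * 2.5)
         for _ in range(20)]
c, n_inliers = ransac_vertical_wall(wall + noise)
```

A production pipeline would fit arbitrarily oriented planes, segment the inliers into wall/floor/ceiling candidates, and only then emit parametric BIM elements.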

Keywords: BIM, 3D point cloud, algorithmic modeling, computational design, architectural reconstruction

Procedia PDF Downloads 63
5651 Prediction of Gully Erosion with Stochastic Modeling by using Geographic Information System and Remote Sensing Data in North of Iran

Authors: Reza Zakerinejad

Abstract:

Gully erosion is a serious problem that threatens the sustainability of agricultural areas, rangelands, and water resources in a large part of Iran. This type of water erosion is the main source of sedimentation in many catchment areas in the north of Iran. Since many national assessment approaches have applied only qualitative models, the aim of this study is to predict the spatial distribution of gully erosion processes by means of detailed terrain analysis and GIS-based logistic regression in the loess deposits of a case study area in Golestan Province. In this study, a DEM with 25-meter resolution derived from ASTER data was used. Landsat ETM data were used to map land use. The TreeNet model, a stochastic modeling approach, was applied to predict the areas susceptible to gully erosion, with the data split into learning and test subsets for training and evaluation. GIS and satellite image analysis techniques were applied to derive the input information for these stochastic models. The result of this study is a highly accurate map of gully erosion potential.
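The GIS-based logistic regression mentioned here maps terrain attributes to a probability of gully presence. A minimal gradient-descent sketch in Python, with made-up features (slope and contributing area rescaled to [0, 1]) standing in for the real terrain derivatives:

```python
import math

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Plain stochastic-gradient-descent logistic regression predicting
    gully presence (1) / absence (0) from terrain attribute vectors."""
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))             # sigmoid probability
            err = p - yi                               # gradient of log-loss
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

def predict(w, b, xi):
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Toy training cells: [slope, contributing_area], both rescaled to [0, 1].
X = [[0.9, 0.8], [0.8, 0.9], [0.7, 0.7], [0.2, 0.1], [0.1, 0.3], [0.3, 0.2]]
y = [1, 1, 1, 0, 0, 0]   # 1 = gully observed, 0 = no gully
w, b = train_logistic(X, y)
```

Applied cell by cell over a DEM-derived attribute grid, such predicted probabilities form the susceptibility map the abstract describes.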

Keywords: TreeNet model, terrain analysis, Golestan Province, Iran

Procedia PDF Downloads 535
5650 Day of the Week Patterns and the Financial Trends' Role: Evidence from the Greek Stock Market during the Euro Era

Authors: Nikolaos Konstantopoulos, Aristeidis Samitas, Vasileiou Evangelos

Abstract:

The purpose of this study is to examine whether financial trends influence not only stock markets' returns but also their anomalies. We chose to study the day-of-the-week (DOW) effect in the Greek stock market during the Euro period (2002-12), because during this specific period there were no significant structural changes and there were long-term financial trends. Moreover, in order to avoid possible methodological counterarguments that usually arise in the literature, we applied several linear (OLS) and nonlinear (GARCH-family) models to our sample until we reached the conclusion that the TGARCH model fits our sample better than any other. Our results suggest that in the Greek stock market there is a long-term predisposition for positive/negative returns depending on the weekday. However, the statistical significance is influenced by the financial trend. This influence may be the reason why there are conflicting findings in the literature over time. Finally, we combine the DOW empirical findings from 1985-2012, and we may assume that in the Greek case there is a tendency for a long-lived turn-of-the-week effect.
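The first step in any DOW analysis is simply to group daily returns by weekday and compare the means. A Python sketch on made-up returns (the full study layers OLS and GARCH-family significance tests on top of this):

```python
import statistics
from collections import defaultdict

def day_of_week_means(returns):
    """Group (weekday, return) pairs by weekday and compute the mean
    return per day: the classic first check for a DOW effect."""
    by_day = defaultdict(list)
    for day, r in returns:
        by_day[day].append(r)
    return {day: statistics.fmean(rs) for day, rs in by_day.items()}

# Illustrative (made-up) daily returns in percent.
sample = [("Mon", -0.4), ("Tue", 0.1), ("Wed", 0.2), ("Thu", 0.1),
          ("Fri", 0.5), ("Mon", -0.2), ("Tue", 0.0), ("Wed", 0.3),
          ("Thu", 0.2), ("Fri", 0.3)]
means = day_of_week_means(sample)   # e.g. negative Mondays, positive Fridays
```

Whether such differences are statistically significant, and whether the significance survives across bull and bear sub-periods, is exactly the trend-dependence question the study addresses.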

Keywords: day of the week effect, GARCH family models, Athens stock exchange, economic growth, crisis

Procedia PDF Downloads 410
5649 Importance of New Policies of Process Management for Internet of Things Based on Forensic Investigation

Authors: Venkata Venugopal Rao Gudlur

Abstract:

The proposed policies, referred to as “SOP”, for Internet of Things (IoT)-based forensic investigation of process management are the latest development to save time and provide quick solutions for investigators. The forensic investigation process has been developed over many years and has, from time to time, supplied the required information, but with no policies governing the investigation processes. This research shows that current IoT-based forensic investigation of process management is increasingly connected to devices, which makes such policies necessary. All future development in real-time information gathering and monitoring evolves with smart sensor-based technologies connected directly to the IoT. This paper presents a conceptual framework for process management. Smart devices are leading the way in terms of the automated forensic models and frameworks established by different scholars. These models and frameworks were mostly focused on offering a roadmap for performing forensic operations, with no policies in place. The initiatives proposed here would bring a tremendous benefit to process management and to IoT forensic investigators by establishing policies. With such policies, the forensic investigation process may provide more security and reduce data losses and vulnerabilities.

Keywords: Internet of Things, process management, forensic investigation, M2M framework

Procedia PDF Downloads 102
5648 Thermal Properties of Polyhedral Oligomeric Silsesquioxanes/Polyimide Nanocomposite

Authors: Seyfullah Madakbas, Hatice Birtane, Memet Vezir Kahraman

Abstract:

In this study, we aimed to synthesize and characterize a polyhedral oligomeric silsesquioxane-containing polyimide nanocomposite. Polyimide nanocomposites have been widely used in fuel-cell membranes, solar cells, gas filtration, sensors, aerospace components, and printed circuit boards. First, polyamic acid was synthesized and characterized by Fourier transform infrared (FTIR) spectroscopy. Then, the polyhedral oligomeric silsesquioxane-containing polyimide nanocomposite was prepared by the thermal imidization method. The obtained polyimide nanocomposite was characterized by FTIR spectroscopy, scanning electron microscopy, thermal gravimetric analysis, and differential scanning calorimetry. The thermal stability of the polyimide nanocomposite was evaluated by thermal gravimetric analysis and differential scanning calorimetry. The surface morphology of the composite samples was investigated by scanning electron microscopy. The obtained results prove that the polyhedral oligomeric silsesquioxane-containing polyimide nanocomposite was successfully prepared. The obtained nanocomposite can be used in many industries, such as electronics, automotive, and aerospace.

Keywords: polyimide, nanocomposite, polyhedral oligomeric silsesquioxanes

Procedia PDF Downloads 179
5647 Microstrip Bandpass Filter with Wide Stopband and High Out-of-Band Rejection Based on Inter-Digital Capacitor

Authors: Mohamad Farhat, Bal Virdee

Abstract:

This paper presents a compact microstrip bandpass filter exhibiting a very wide stopband and high selectivity. The filter comprises asymmetric resonator structures, which are interconnected by an inter-digital capacitor to enable the realization of a wide bandwidth with a high rejection level. High selectivity is obtained by optimizing the parameters of the inter-digital capacitor. The filter has high out-of-band rejection (> 30 dB), less than 0.6 dB of insertion loss, a spurious-free response up to 5.5 GHz, and about 18 dB of return loss. The full-wave electromagnetic simulator ADS™ (MoM) was used to analyze and optimize the prototype bandpass filter. The proposed technique was verified practically to validate the design methodology. The experimental results of the prototype circuit are presented, and good agreement was obtained with the simulation results. The dimensions of the proposed filter are 32 x 24 mm². The filter's characteristics and compact size make it suitable for wireless communication systems.
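The insertion-loss and return-loss figures quoted above are standard dB conversions of S-parameter magnitudes. A small Python sketch (the linear magnitudes below are illustrative values consistent with the quoted dB figures, not measurements from the paper):

```python
import math

def to_db(mag):
    """Convert a linear S-parameter magnitude to decibels (20*log10)."""
    return 20.0 * math.log10(mag)

def insertion_loss_db(s21_mag):
    """Insertion loss: positive-valued transmission loss through the filter."""
    return -to_db(s21_mag)

def return_loss_db(s11_mag):
    """Return loss: how much incident power is NOT reflected at the port."""
    return -to_db(s11_mag)

# Illustrative passband magnitudes:
# |S21| = 0.933 -> ~0.6 dB insertion loss; |S11| = 0.126 -> ~18 dB return loss.
il = insertion_loss_db(0.933)
rl = return_loss_db(0.126)
```

A low insertion loss with a high return loss in the passband, together with > 30 dB rejection in the stopband, is what characterizes the filter's selectivity.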

Keywords: asymmetric resonator, bandpass filter, microstrip, spurious suppression, ultra-wide stop band

Procedia PDF Downloads 189
5646 Sustainability in Community-Based Forestry Management: A Case from Nepal

Authors: Tanka Nath Dahal

Abstract:

Community-based forestry is seen as a promising instrument for sustainable forest management (SFM) through the purposeful involvement of local communities. Globally, forest area managed by local communities is on the rise. However, transferring management responsibilities to forest users alone cannot guarantee the sustainability of forest management. A monitoring tool, that allows the local communities to track the progress of forest management towards the goal of sustainability, is essential. A case study, including six forest user groups (FUGs), two from each three community-based forestry models—community forestry (CF), buffer zone community forestry (BZCF), and collaborative forest management (CFM) representing three different physiographic regions, was conducted in Nepal. The study explores which community-based forest management model (CF, BZCF or CFM) is doing well in terms of sustainable forest management. The study assesses the overall performance of the three models towards SFM using locally developed criteria (four), indicators (26) and verifiers (60). This paper attempts to quantify the sustainability of the models using sustainability index for individual criteria (SIIC), and overall sustainability index (OSI). In addition, rating to the criteria and scoring of the verifiers by the FUGs were done. Among the four criteria, the FUGs ascribed the highest weightage to institutional framework and governance criterion; followed by economic and social benefits, forest management practices, and extent of forest resources. Similarly, the SIIC was found to be the highest for the institutional framework and governance criterion. The average values of OSI for CFM, CF, and BZCF were 0.48, 0.51 and 0.60 respectively; suggesting that buffer zone community forestry is the more sustainable model among the three. The study also suggested that the SIIC and OSI help local communities to quantify the overall progress of their forestry practices towards sustainability. 
The indices provided a clear picture of forest management practices, indicating the direction in which they are heading in terms of sustainability, and informed the users of issues requiring attention to enhance the sustainability of their forests.
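
As a rough illustration of how the two indices could combine, the sketch below computes a per-criterion index as the mean of normalized verifier scores and the OSI as a criteria-weighted sum. The weights, scores, and the averaging rule itself are illustrative assumptions, not the paper's exact formulas.

```python
# Hypothetical sketch of SIIC/OSI computation, assuming verifier scores
# normalized to [0, 1] and criteria weights summing to 1. All numbers are
# made up for illustration; they are not the study's data.

def sustainability_index(scores):
    """SIIC for one criterion: mean of its normalized verifier scores."""
    return sum(scores) / len(scores)

def overall_sustainability_index(criteria_scores, weights):
    """OSI: weighted sum of per-criterion indices."""
    assert abs(sum(weights) - 1.0) < 1e-9, "criteria weights must sum to 1"
    siic = [sustainability_index(s) for s in criteria_scores]
    return sum(w * s for w, s in zip(weights, siic)), siic

# Example: four criteria with illustrative weights favouring governance.
weights = [0.35, 0.25, 0.22, 0.18]  # governance, benefits, practices, resources
criteria_scores = [
    [0.7, 0.8, 0.6],   # institutional framework and governance
    [0.5, 0.6],        # economic and social benefits
    [0.6, 0.5, 0.55],  # forest management practices
    [0.4, 0.5],        # extent of forest resources
]
osi, siic = overall_sustainability_index(criteria_scores, weights)
```

An OSI near 1 would indicate management close to the locally defined sustainability ideal, mirroring how the 0.48-0.60 values above rank the three models.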

Keywords: community forestry, collaborative management, overall sustainability, sustainability index for individual criteria

Procedia PDF Downloads 248
5645 Subjectivity in Miracle Aesthetic Clinic Ambient Media Advertisement

Authors: Wegig Muwonugroho

Abstract:

Subjectivity in advertisement is a 'power' possessed by advertisements to construct trends, concepts, truths, and ideologies through the subconscious mind. Advertisements, in performing their function as message conveyors, use visual representation to suggest an ideal to the public. Ambient media is an advertising medium that makes the best use of the environment in which the advertisement is located. Miracle Aesthetic Clinic (Miracle) popularizes the visual representation of its ambient media advertisement through the omission of the faces of the two female mannequins that serve as its ambient media models. Usually, the face of a model in an advertisement is an image commodity with selling value; however, the faces of the ambient media models in the Miracle advertisement campaign are pressed against a table and a wall. This concealment of the face creates not only a paradox of subjectivity but also a plurality of meaning. This research applies the critical discourse analysis method to analyze subjectivity and thereby gain insight into the ambient media's meaning. First, in the stage of textual analysis, the attributes placed on the female mannequins imply that the models denote modern women whose identities are bound to their social milieus. The communication signs constructed are those of women who lose their subjectivity and 'feel embarrassed' to show their faces in public because of pimples. Second, the stage of analysis of discourse practice points out that ambient media, as a communication medium, has been comprehensively responded to by the targeted audiences. Ambient media plays the role of an actor because of its eye-catching setting, taking up space in areas where the public wanders. Indeed, when the public realizes that the ambient media models are motionless, unlike humans, a stronger relation appears, marked by several responses from the targeted audiences. 
Third, in the stage of analysis of social practice, soap operas and celebrity gossip shows on television constitute a dominant discourse influencing the advertisement's meaning. The subjectivity of the Miracle advertisement corners women through the absence of women's participation in public space, the representation of women in isolation, and the portrayal of women as anxious about their social standing when their faces suffer from pimples. The ambient media campaign of Miracle is quite successful in constructing a new trend discourse of facial beauty that is not limited to benchmarks of conventional beauty virtues, but holds that the idea of beauty can be presented by visualizing 'when a woman doesn't look good'.

Keywords: ambient media, advertisement, subjectivity, power

Procedia PDF Downloads 321
5644 Modeling of Timing in a Cyber Conflict to Inform Critical Infrastructure Defense

Authors: Brian Connett, Bryan O'Halloran

Abstract:

System assets within critical infrastructures once seemed safe from exploitation or attack by nefarious cyberspace actors. Now, critical infrastructure is a target, and the resources to exploit its cyber-physical systems exist. These resources are characterized in terms of patience, stealth, replicability, and extraordinary robustness. System owners are obligated to maintain a high level of protection measures. The difficulty lies in knowing when to fortify a critical infrastructure against an impending attack. Models currently exist that demonstrate the value of knowing the attacker's capabilities in the cyber realm and the strength of the target. The shortcoming of these models is that they are not designed to respond to the inherently fast timing of an attack, an impetus that can be derived from open-source reporting, common knowledge of exploits, and the physical architecture of the infrastructure. A useful model will inform system owners how to align infrastructure architecture in a manner that is responsive to the capability, willingness, and timing of the attacker. This research group has used an existing theoretical model to estimate parameters and, through analysis, to develop a decision tool for would-be target owners. The continuation of the research develops this model further by estimating its variable parameters. Understanding these parameter estimates will uniquely position the decision maker to posture, having revealed an attacker's vulnerabilities, persistence, and stealth. This research explores different approaches to improving current attacker-defender models that focus on cyber threats. An existing foundational model takes the point of view of an attacker who must decide what cyber resource to use and when to use it to exploit a system vulnerability. It is valuable for estimating the model's parameters and, through analysis, for developing a decision tool for would-be target owners.
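
The attacker's use-now versus wait decision at the heart of such timing models can be caricatured as an expected-value comparison. The sketch below is a hypothetical simplification, not the cited model: `stakes`, `stealth`, `persistence`, and `future_value` are invented parameters introduced only for illustration.

```python
# Hypothetical sketch: an attacker holds a cyber resource with stealth s
# (chance it survives a use undetected) and persistence p (chance it survives
# waiting unpatched). Fire when current stakes beat the value of holding.
# This is an illustration in the spirit of attacker-timing models, not a
# faithful reproduction of the model the abstract builds on.

def expected_value_use_now(stakes, stealth, future_value):
    """Gain the current stakes; keep the resource only if it stays stealthy."""
    return stakes + stealth * future_value

def expected_value_wait(persistence, future_value):
    """Gain nothing now; keep the resource only if it stays unpatched."""
    return persistence * future_value

def should_use_now(stakes, stealth, persistence, future_value):
    return (expected_value_use_now(stakes, stealth, future_value)
            >= expected_value_wait(persistence, future_value))

# High stakes favour immediate use even for a low-stealth resource.
decision_high = should_use_now(stakes=10.0, stealth=0.2,
                               persistence=0.9, future_value=8.0)
# Low stakes with a persistent but fragile resource favour waiting.
decision_low = should_use_now(stakes=0.5, stealth=0.2,
                              persistence=0.9, future_value=8.0)
```

A defender-side decision tool would invert this logic: estimating the attacker's stealth and persistence parameters indicates when fortification is most urgent.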

Keywords: critical infrastructure, cyber physical systems, modeling, exploitation

Procedia PDF Downloads 192
5643 Study of Electrical Properties of An-Fl Based Organic Semiconducting Thin Film

Authors: A.G. S. Aldajani, N. Smida, M. G. Althobaiti, B. Zaidi

Abstract:

In order to exploit the good electrical properties of anthracene and the excellent properties of fluorescein, a new hybrid material (An-Fl) has been synthesized. Current-voltage measurements were performed on a new single-layer ITO/An-Fl/Al device of typically 100 nm thickness. A typical diode behavior is observed, with a turn-on voltage of 4.4 V, a dynamic resistance of 74.07 kΩ, and a rectification ratio of 2.02 due to unbalanced transport. The results also show that the current-voltage characteristics present three different power-law regimes (J ~ Vᵐ), for which the conduction mechanism is well described by the space-charge-limited current (SCLC) mechanism, with a charge carrier mobility of 2.38×10⁻⁵ cm²V⁻¹s⁻¹. Moreover, the electrical transport properties of this device were investigated in a frequency-dependent study over the range 50 Hz–1.4 MHz for different applied biases (from 0 to 6 V). At lower frequencies, the σdc values increase with rising bias voltage, supporting the picture that mobile ions can hop successfully to their nearest vacant sites. From the σac and impedance measurements, the equivalent electrical circuit is established, where the conduction process is consistent with an exponential trap distribution caused by structural defects and/or chemical impurities.
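
The power-law exponent m in J ~ Vᵐ is typically extracted by a linear fit in log-log space. The sketch below does this on synthetic data following the trap-free SCLC slope m = 2 (the Mott-Gurney regime); the data values are illustrative, not the paper's measurements.

```python
import math

# Sketch: extract the power-law exponent m in J = k * V**m by least-squares
# regression on log-transformed data. Synthetic values only.

def fit_power_law(V, J):
    """Return (m, log_k) for J = k * V**m via linear regression on logs."""
    x = [math.log(v) for v in V]
    y = [math.log(j) for j in J]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    m = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    return m, my - m * mx

# Synthetic SCLC-like regime: J proportional to V**2.
V = [1.0, 2.0, 4.0, 8.0]
J = [1e-6 * v ** 2 for v in V]
m, _ = fit_power_law(V, J)
```

In practice the three regimes the abstract mentions would each be fitted separately, with the crossover voltages marking the changes in conduction mechanism.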

Keywords: semiconducting polymer, conductivity, SCLC, impedance spectroscopy

Procedia PDF Downloads 178
5642 Machine Learning Data Architecture

Authors: Neerav Kumar, Naumaan Nayyar, Sharath Kashyap

Abstract:

Most companies see an increase in the adoption of machine learning (ML) applications across internal and external-facing use cases. ML applications vend output in either batch or real-time patterns. A complete batch ML pipeline architecture comprises data sourcing, feature engineering, model training, model deployment, and vending of model output into a data store for downstream applications. Due to unclear role expectations, we have observed that scientists specializing in building and optimizing models invest significant effort in building the other components of the architecture, which we do not believe is the best use of scientists' bandwidth. We propose a system architecture, created using AWS services, that brings industry best practices to managing the workflow and simplifies the process of model deployment and end-to-end data integration for an ML application. This narrows the scope of scientists' work to model building and refinement, while specialized data engineers take over deployment, pipeline orchestration, data quality, the data permission system, etc. The pipeline infrastructure is built and deployed as code (using Terraform, CDK, CloudFormation, etc.), which makes it easy to replicate and/or extend the architecture to other models used in an organization.
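
The batch stages listed above can be sketched as a minimal, framework-agnostic pipeline runner. The stage names and toy payloads below are illustrative only and do not correspond to any actual AWS service API.

```python
# Minimal sketch of the batch pipeline shape the abstract describes:
# sourcing -> feature engineering -> training -> vending into a store.
# Each stage is a function from payload to payload; real systems would
# swap in orchestrated jobs. All names and data here are invented.

def run_pipeline(stages, payload):
    """Run the payload through each named stage in order."""
    for name, fn in stages:
        payload = fn(payload)
    return payload

stages = [
    ("source_data",       lambda p: {**p, "rows": [1, 2, 3]}),
    ("engineer_features", lambda p: {**p, "features": [r * 2 for r in p["rows"]]}),
    ("train_model",       lambda p: {**p, "model": sum(p["features"]) / len(p["features"])}),
    ("vend_output",       lambda p: {**p, "stored": True}),
]
result = run_pipeline(stages, {})
```

Keeping each stage behind a uniform interface is what lets data engineers own the orchestration while scientists only touch the training stage.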

Keywords: data pipeline, machine learning, AWS, architecture, batch machine learning

Procedia PDF Downloads 63
5641 DISGAN: Efficient Generative Adversarial Network-Based Method for Cyber-Intrusion Detection

Authors: Hongyu Chen, Li Jiang

Abstract:

Ubiquitous anomalies constantly endanger the security of our systems. They may bring irreversible damage to a system and cause leakage of private data. Thus, it is of vital importance to detect these anomalies promptly. Traditional supervised methods such as Decision Trees and Support Vector Machines (SVM) are used to classify normality and abnormality. However, in some cases the abnormal statuses are far rarer than the normal statuses, which leads to decision bias in these methods. The generative adversarial network (GAN) has been proposed to handle this case. With its strong generative ability, it only needs to learn the distribution of normal statuses, and it identifies abnormal statuses through the gap between them and the learned distribution. Nevertheless, existing GAN-based models are not suitable for processing data with discrete values, leading to an immense degradation of detection performance. To cope with discrete features, in this paper we propose an efficient GAN-based model with a specifically designed loss function. Experimental results show that our model outperforms state-of-the-art models on discrete datasets and remarkably reduces the overhead.

Keywords: GAN, discrete feature, Wasserstein distance, multiple intermediate layers

Procedia PDF Downloads 129
5640 Micromechanical Modelling of Ductile Damage with a Cohesive-Volumetric Approach

Authors: Noe Brice Nkoumbou Kaptchouang, Pierre-Guy Vincent, Yann Monerie

Abstract:

The present work addresses the modelling and simulation of crack initiation and propagation in ductile materials, which fail by void nucleation, growth, and coalescence. One current research framework for crack propagation is the cohesive-volumetric approach, where crack growth is modelled as the decohesion of two surfaces in a continuum material. In this framework, the material behavior is characterized by two constitutive relations: a volumetric constitutive law relating stress and strain, and a traction-separation law across a two-dimensional surface embedded in the three-dimensional continuum. Several cohesive models have been proposed for the simulation of crack growth in brittle materials. On the other hand, the application of cohesive models to crack growth in ductile materials is still a relatively open field. One idea developed in the literature is to identify the traction-separation law for a ductile material from the behavior of a continuously deforming unit cell failing by void growth and coalescence. Following this method, the present study proposes a semi-analytical cohesive model for ductile materials based on a micromechanical approach. The strain localization band prior to ductile failure is modelled as a cohesive band, and the Gurson-Tvergaard-Needleman (GTN) plasticity model is used to describe the behavior of the cohesive band and to derive a corresponding traction-separation law. The numerical implementation of the model uses the non-smooth contact dynamics (NSCD) method, where cohesive models are introduced as mixed boundary conditions between volumetric finite elements. The approach is applied to the simulation of crack growth in nuclear ferritic steel. The model provides an alternative way to simulate crack propagation, combining the numerical efficiency of a cohesive model with a traction-separation law derived directly from a porous continuum model.
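
For reference, a minimal sketch of the GTN yield potential that governs the cohesive band is given below, using the standard Tvergaard parameters q1 and q2. The stress values are illustrative, not the study's material data.

```python
import math

# Sketch of the GTN yield function: Phi < 0 is elastic, Phi = 0 is yield.
#   Phi = (sigma_eq/sigma_y)**2
#       + 2*q1*f*cosh(1.5*q2*sigma_m/sigma_y) - 1 - (q1*f)**2
# sigma_eq: von Mises stress, sigma_m: mean stress, f: void volume fraction.

def gtn_yield(sigma_eq, sigma_m, sigma_y, f, q1=1.5, q2=1.0):
    return ((sigma_eq / sigma_y) ** 2
            + 2.0 * q1 * f * math.cosh(1.5 * q2 * sigma_m / sigma_y)
            - 1.0 - (q1 * f) ** 2)

# With zero porosity the criterion reduces to von Mises yield:
phi_dense = gtn_yield(sigma_eq=300.0, sigma_m=100.0, sigma_y=300.0, f=0.0)
# Porosity softens the material: the same stress state lies outside yield.
phi_porous = gtn_yield(sigma_eq=300.0, sigma_m=100.0, sigma_y=300.0, f=0.05)
```

The cosh coupling to the mean stress is what makes void growth, and hence the derived traction-separation law, sensitive to stress triaxiality in the localization band.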

Keywords: ductile failure, cohesive model, GTN model, numerical simulation

Procedia PDF Downloads 149
5639 Considering Climate Change in Food Security: A Sociological Study Investigating the Modern Agricultural Practices and Food Security in Bangladesh

Authors: Hosen Tilat Mahal, Monir Hossain

Abstract:

Despite being a food-sufficient country after revolutionary changes in agricultural inputs, Bangladesh still experiences food insecurity and undernutrition. This study examines the association between agricultural practices (as social practices) and food security, concentrating on the potential impact of sociodemographic factors and climate change. Using data from the 2012 Bangladesh Integrated Household Survey (BIHS), this study shows how modified agricultural practices are strongly associated with climate change, and how different sociodemographic factors (land ownership, religion, gender, education, and occupation) subsequently affect the status of food security in Bangladesh. We used linear and logistic regression models to analyze the association between modified agricultural practices and food security. The findings indicate that socioeconomic statuses are significant predictors of agricultural practices in a society like Bangladesh and shape food security at the household level. Moreover, climate change is adversely impacting even the association between modified agricultural practices and food security. We conclude that agricultural practices must take climate change into account while boosting food security. Therefore, future research should integrate climate change into agriculture and food-related mitigation and resiliency models.
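
The logistic part of such an analysis can be illustrated with a toy model of household food security. The coefficients and covariates below are invented for illustration and are not the BIHS estimates.

```python
import math

# Toy logistic model: probability a household is food secure as a function
# of a binary "modified practices" indicator and a climate-exposure covariate.
# All coefficients are hypothetical, not fitted values from the study.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def predict(beta, x):
    """P(food secure) = sigmoid(b0 + b1*practices + b2*climate_exposure)."""
    return sigmoid(beta[0] + sum(b * xi for b, xi in zip(beta[1:], x)))

beta = [-0.5, 1.2, -0.8]               # hypothetical intercept and slopes
p_modified = predict(beta, [1, 0.3])   # modified practices, mild exposure
p_baseline = predict(beta, [0, 0.3])   # traditional practices, same exposure
```

The abstract's climate-moderation finding would correspond to an interaction term between the practices indicator and the exposure covariate, shrinking `b1` as exposure grows.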

Keywords: food security, agricultural productivity, climate change, Bangladesh

Procedia PDF Downloads 123
5638 Systematic Study of Structure Property Relationship in Highly Crosslinked Elastomers

Authors: Natarajan Ramasamy, Gurulingamurthy Haralur, Ramesh Nivarthu, Nikhil Kumar Singha

Abstract:

Elastomers are polymeric materials with backbone architectures ranging from linear to dendrimeric structures and a wide variety of monomeric repeat units. These elastomers are strongly viscous and only weakly elastic when not cross-linked. Once crosslinked, however, their properties can range from highly flexible to highly stiff, depending on the extent of crosslinking. Lightly cross-linked systems are well studied and reported. Understanding the nature of highly cross-linked rubber in terms of chemical structure and architecture is critical for a variety of applications. One of the critical parameters is crosslink density. In the current work, we have studied the highly cross-linked state of linear, lightly branched, and star-shaped branched elastomers and determined the crosslink density using different models. Changes in hardness, shifts in Tg, changes in modulus, and swelling behavior were measured experimentally as a function of the extent of curing. These properties were analyzed using various models to determine the crosslink density. We used hardness measurements to examine cure time, and the relationship between hardness and the extent of curing was determined. It is well known that micromechanical transitions such as Tg and storage modulus are related to the extent of crosslinking. The Tg of the elastomer in different crosslinked states was determined by DMA, and from the plateau modulus the crosslink density was estimated using Nielsen's model. For lightly crosslinked systems, the crosslink density is usually estimated from the equilibrium swelling ratio in a solvent using the Flory-Rehner model. For highly crosslinked systems, however, the Flory-Rehner model is not valid because of the short chain lengths. Therefore, models based on the assumption of a non-Gaussian polymer chain are used to estimate the crosslink density: 1) the Helmis-Heinrich-Straube (HHS) model, 2) the Gloria M. Gusler and Yoram Cohen model, and 3) the Barbara D. Barr-Howell and Nikolaos A. Peppas model. 
In this work, correction factors to the existing models are determined, and on that basis the structure-property relationship of highly crosslinked elastomers is studied.
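
For the lightly crosslinked baseline, the classical Flory-Rehner estimate can be sketched as follows. The swelling inputs are illustrative, and this Gaussian-chain expression is exactly the regime the abstract notes breaks down for highly crosslinked networks.

```python
import math

# Sketch of the Flory-Rehner estimate of crosslink density from equilibrium
# swelling (valid only for lightly crosslinked, Gaussian networks):
#   n = -[ln(1 - v2) + v2 + chi*v2**2] / [V1 * (v2**(1/3) - v2/2)]
# v2: polymer volume fraction at equilibrium swelling
# chi: polymer-solvent interaction parameter
# V1: solvent molar volume (cm^3/mol)

def flory_rehner_crosslink_density(v2, chi, V1):
    num = -(math.log(1.0 - v2) + v2 + chi * v2 ** 2)
    den = V1 * (v2 ** (1.0 / 3.0) - v2 / 2.0)
    return num / den  # mol of network chains per cm^3

# Illustrative numbers, e.g. a rubber swollen in a toluene-like solvent.
n = flory_rehner_crosslink_density(v2=0.25, chi=0.39, V1=106.3)
```

The non-Gaussian models the abstract lists replace the denominator's Gaussian elasticity term with finite-extensibility corrections, which is why their estimates diverge from Flory-Rehner at high crosslink density.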

Keywords: dynamic mechanical analysis, glass transition temperature, parts per hundred grams of rubber, crosslink density, number of networks per unit volume of elastomer

Procedia PDF Downloads 165
5637 Seismic Fragility Assessment of Continuous Integral Bridge Frames with Variable Expansion Joint Clearances

Authors: P. Mounnarath, U. Schmitz, Ch. Zhang

Abstract:

Fragility analysis has become an effective tool for the seismic vulnerability assessment of civil structures over the last several years. The design of expansion joints according to various bridge design codes is largely inconsistent, and only a few studies have focused on this problem so far. In this study, the influence of the expansion joint clearances between the girder ends and the abutment backwalls on the seismic fragility assessment of continuous integral bridge frames is investigated. The gaps (60 mm, 150 mm, 250 mm, and 350 mm) are designed following two different bridge design code specifications, namely Caltrans and Eurocode 8-2. Five bridge models are analyzed and compared. The first bridge model serves as a reference; it uses three-dimensional reinforced concrete fiber beam-column elements with simplified supports at both ends of the girder. The other four models also employ reinforced concrete fiber beam-column elements but include the abutment backfill stiffness and the four different gap values. A nonlinear time history analysis is performed, with artificial ground motion sets having peak ground accelerations (PGAs) ranging from 0.1 g to 1.0 g in increments of 0.05 g taken as input. Soil-structure interaction and P-Δ effects are also included in the analysis. Component fragility curves, in terms of the curvature ductility demand-to-capacity ratio of the piers and the displacement demand-to-capacity ratio of the abutment sliding bearings, are established and compared. The system fragility curves are then obtained by combining the component fragility curves. Our results show that, in the component fragility analysis, the reference bridge model exhibits severe vulnerability compared to the other, more sophisticated bridge models for all damage states. 
In the system fragility analysis, the reference curves show a smaller damage probability in the lower PGA ranges for the first three damage states, but then exhibit a higher fragility than the other curves at larger PGA levels. In the fourth damage state, the reference curve has the smallest vulnerability. In both the component and the system fragility analyses, the same trend is found: bridge models with smaller clearances exhibit a smaller fragility than those with larger openings. However, the bridge model with the maximum clearance still induces the minimum pounding force effect.
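
Fragility curves of the kind combined here are conventionally expressed as lognormal functions of PGA. The sketch below evaluates one over the study's 0.1 g-1.0 g grid with an illustrative median and dispersion, not the study's fitted values.

```python
import math

# Sketch of a lognormal fragility curve:
#   P(damage state exceeded | PGA) = Phi( ln(pga / median) / beta )
# median: PGA at 50% exceedance probability; beta: lognormal dispersion.
# Both parameter values below are illustrative placeholders.

def fragility(pga, median, beta):
    z = math.log(pga / median) / beta
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Evaluate over the PGA grid used in the study (0.1 g to 1.0 g, step 0.05 g).
pgas = [0.1 + 0.05 * i for i in range(19)]
curve = [fragility(a, median=0.45, beta=0.5) for a in pgas]
```

Comparing such curves across the four gap values amounts to comparing their medians and dispersions: a larger median shifts the curve right, i.e. toward lower fragility.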

Keywords: expansion joint clearance, fiber beam-column element, fragility assessment, time history analysis

Procedia PDF Downloads 435
5636 Predictive Maintenance of Electrical Induction Motors Using Machine Learning

Authors: Muhammad Bilal, Adil Ahmed

Abstract:

This study proposes an approach to the predictive maintenance of electrical induction motors utilizing machine learning algorithms. On the basis of temperature data obtained from sensors placed on the motor, the goal is to predict motor failures. The proposed models are trained to identify whether a motor is defective or not using machine learning algorithms such as Support Vector Machines (SVM) and K-Nearest Neighbors (KNN). According to a thorough study of the literature, earlier research has used motor current signature analysis (MCSA) and vibration data to forecast motor failures. The temperature signal methodology, which has clear cost-effectiveness advantages over the conventional MCSA and vibration analysis methods, is the main subject of this research. The acquired results demonstrate the successful categorization of defective motors using the suggested machine learning models, emphasizing the applicability and effectiveness of the temperature-based predictive maintenance strategy.
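
The classification step can be illustrated with a toy k-nearest-neighbours rule on one-dimensional temperature features. The data and labels below are invented for illustration; the study applies SVM and KNN to real sensor data.

```python
# Toy KNN fault classifier on scalar temperature features. Labels:
# 1 = defective, 0 = healthy. All training data here is made up; a real
# system would use multi-feature sensor windows and a library classifier.

def knn_predict(train, query, k=3):
    """Majority vote among the k training points nearest to the query."""
    nearest = sorted(train, key=lambda tl: abs(tl[0] - query))[:k]
    votes = sum(label for _, label in nearest)
    return 1 if votes * 2 > k else 0

# (temperature in deg C, label): defective motors run hotter in this toy set.
train = [(62, 0), (65, 0), (68, 0), (70, 0), (88, 1), (92, 1), (95, 1)]
pred_hot = knn_predict(train, 93)    # near the defective cluster
pred_cool = knn_predict(train, 64)   # near the healthy cluster
```

The appeal of the temperature methodology is visible even in this caricature: a single cheap sensor channel already separates the two classes.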

Keywords: predictive maintenance, electrical induction motors, machine learning, temperature signal methodology, motor failures

Procedia PDF Downloads 117
5635 High-Quality Flavor of Black Belly Pork under Lightning Corona Discharge Using Tesla Coil for High Voltage Education

Authors: Kyung-Hoon Jang, Jae-Hyo Park, Kwang-Yeop Jang, Dongjin Kim

Abstract:

The Tesla coil is an electrical resonant transformer circuit designed by the inventor Nikola Tesla in 1891. It is used to produce high-voltage, low-current, high-frequency alternating current electricity. Tesla experimented with a number of different configurations consisting of two, or sometimes three, coupled resonant electric circuits. This paper focuses on development and high-voltage education, applying a Tesla coil to cuisine for high-quality flavor and taste conditioning under 50 kV corona discharge. The results revealed that black belly pork roasts faster under the Tesla coil than with conventional methods such as a hot grill or steel plate, depending on the applied voltage level and duration. In addition, carbohydrate and crude protein content increased, whereas sodium and saccharides significantly decreased after the lightning surge from the Tesla coil. This idea will be useful in high-voltage education and high-voltage applications.

Keywords: corona discharge, Tesla coil, high voltage application, high voltage education

Procedia PDF Downloads 328
5634 Optimal Economic Restructuring Aimed at an Optimal Increase in GDP Constrained by a Decrease in Energy Consumption and CO2 Emissions

Authors: Alexander Vaninsky

Abstract:

The objective of this paper is to find a way of economic restructuring, that is, a change in the shares of sectoral gross outputs, resulting in the maximum possible increase in gross domestic product (GDP) combined with decreases in energy consumption and CO2 emissions. It uses an input-output model for GDP and factorial models for energy consumption and CO2 emissions to determine the projections of the gradient of GDP, and of the antigradients of energy consumption and CO2 emissions, onto a subspace formed by the structure-related variables. Since the gradient (antigradient) provides the direction of steepest increase (decrease) of the objective function, and the projections retain this property for the functions' restriction to the subspace, each of the three directional vectors solves a particular problem of optimal structural change. In the next step, a type of factor analysis is applied to find a convex combination of the projected gradient and antigradients having the maximum possible positive correlation with each of the three. This convex combination provides the desired direction of structural change. The national economy of the United States is used as an example application.
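
Because sectoral shares sum to one, admissible structural changes lie in the zero-sum subspace. A minimal sketch of the projection and a convex combination of the three directional vectors follows; equal weights stand in for the paper's correlation-maximizing choice, and the gradient values are made up.

```python
# Sketch: project the GDP gradient and the energy/CO2 antigradients onto the
# subspace {x : sum(x) = 0} (changes in shares must sum to zero), then blend
# them with a convex combination. Equal weights are an illustrative stand-in
# for the correlation-maximizing weights chosen in the paper; the three
# example vectors are invented.

def project_zero_sum(v):
    """Project v onto the hyperplane of zero-sum share changes."""
    mean = sum(v) / len(v)
    return [vi - mean for vi in v]

def convex_combination(vectors, weights):
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return [sum(w * v[i] for w, v in zip(weights, vectors))
            for i in range(len(vectors[0]))]

grad_gdp    = project_zero_sum([0.9, 0.4, 0.2])    # GDP gradient
anti_energy = project_zero_sum([-0.1, 0.6, -0.3])  # energy antigradient
anti_co2    = project_zero_sum([-0.2, 0.5, -0.1])  # CO2 antigradient
direction = convex_combination([grad_gdp, anti_energy, anti_co2],
                               [1 / 3, 1 / 3, 1 / 3])
```

The resulting `direction` is itself zero-sum, so moving the sectoral shares along it preserves their unit total while trading off the three objectives.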

Keywords: economic restructuring, input-output analysis, Divisia index, factorial decomposition, E3 models

Procedia PDF Downloads 314
5633 Adaptive Backstepping Control of Uncertain Nonlinear Systems with Input Backlash

Authors: Ali Anwar, Hu Qinglei, Li Bo, Muhammad Taha Ali

Abstract:

In this paper, a generic model of perturbed nonlinear systems affected by a hard backlash nonlinearity at the input is considered. The nonlinearity is modelled by a dynamic differential equation, which captures its shape more precisely than existing linear models and is compatible with nonlinear design techniques such as backstepping. Moreover, a novel backstepping-based nonlinear control law is designed that explicitly incorporates a continuous-time adaptive backlash inverse model. This provides significant flexibility to control engineers, who can use the estimated backlash spacing value specified for actuators such as gears in the adaptive backlash inverse model during control design. The law ensures not only global stability but also stringent transient performance with the desired precision. It is also robust to external disturbances, whose bounds are taken as unknown, and it traverses the backlash spacing efficiently even with underestimated information about its actual value. The continuous-time backlash inverse model is distinguished in the sense that other models are either discrete-time or involve complex computations. Furthermore, numerical simulations are presented that illustrate not only the effectiveness of the proposed control law but also its comparison with PID and other backstepping controllers.
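
For intuition, the classical backlash behavior being inverted can be sketched as a discrete-time operator with half-spacing B and slope m. The paper's model is a continuous-time differential equation; the operator and parameter values below are illustrative only.

```python
# Sketch of a classical backlash operator: the output tracks m*(v - B) or
# m*(v + B) while the drive side is engaged, and holds its value while the
# input reverses inside the gap of width 2*B. Parameters are illustrative.

def backlash(v_seq, B, m, u0=0.0):
    """Apply the backlash operator to an input sequence v_seq."""
    u, out = u0, []
    for v in v_seq:
        if m * (v - B) > u:        # pushing on the right edge of the gap
            u = m * (v - B)
        elif m * (v + B) < u:      # pushing on the left edge of the gap
            u = m * (v + B)
        # otherwise: inside the backlash gap, output holds its last value
        out.append(u)
    return out

# Ramp up then down: on reversal the output holds until the gap is crossed.
v = [0.0, 0.5, 1.0, 1.5, 1.0, 0.5, 0.0]
u = backlash(v, B=0.25, m=1.0)
```

The dead interval on reversal is precisely what the adaptive backlash inverse pre-distorts the control signal to traverse quickly.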

Keywords: adaptive control, hysteresis, backlash inverse, nonlinear system, robust control, backstepping

Procedia PDF Downloads 460