Search results for: explicit algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 4073

2303 The Impact of Technology on Handicapped and Disability

Authors: George Kamil Kamal Abdelnor

Abstract:

Every major educational institution has incorporated diversity, equity, and inclusion (DEI) principles into its administrative, hiring, and pedagogical practices. Yet these DEI principles rarely incorporate explicit language or critical thinking about disability. Despite the fact that, according to the World Health Organization, one in five people worldwide is disabled, making disabled people the largest minority group in the world, disability remains the neglected stepchild of DEI. Drawing on disability studies and crip theory frameworks, the underlying causes of this exclusion of disability from DEI are examined, including stigma, shame, invisible disabilities, institutionalization/segregation/separation from family, and competing models and definitions of disability. This paper explores both the ideological and practical shifts necessary to include disability in university DEI initiatives. It offers positive examples as well as conceptual frameworks, such as 'diversability', for doing so. Using Georgetown University’s 2020-2022 DEI initiatives as a case study, this paper describes how curricular infusion, accessibility, identity, community, and diversity administration infused one university’s DEI initiatives with concrete disability-inclusive measures. It concludes with a consideration of how the very framework of DEI itself might be challenged and transformed if disability were included.

Keywords: cognitive disability, cognitive diversity, disability, higher education disability, Standardized Index of Diversity of Disability (SIDD), differential and diversity in disability, 60+ population diversity, equity, inclusion, crip theory, accessibility

Procedia PDF Downloads 46
2302 Numerical Investigation of Fiber-Reinforced Polymer (FRP) Panels Resistance to Blast Loads

Authors: Sameh Ahmed, Khaled Galal

Abstract:

Fiber-reinforced polymer (FRP) sandwich panels are increasingly making their way into structural engineering applications, one of which is blast mitigation. This is attributed to the ability of FRP to absorb a considerable amount of energy relative to its low density. In this study, FRP sandwich panels are studied numerically using the explicit finite element code ANSYS AUTODYN. The numerical model is then validated against experimental field tests from the literature. The inner core configurations studied in those field tests were formed from different orientations of a honeycomb shape; the numerical study conducted here proposes a new core configuration, formulated from a combination of woven and honeycomb shapes. Throughout this study, two performance parameters are considered: the amount of energy absorbed by the panels and the peak deformation of the panels. A parametric study was then conducted with further variations of the studied parameters to examine the enhancement of the panels' performance. The numerical results show good agreement with the experimental measurements. Furthermore, the analyses reveal that using the proposed core configuration markedly enhances the FRP panels’ behavior when subjected to blast loads.

Keywords: blast load, fiber reinforced polymers, finite element modeling, sandwich panels

Procedia PDF Downloads 313
2301 Automatic Multi-Label Image Annotation System Guided by Firefly Algorithm and Bayesian Method

Authors: Saad M. Darwish, Mohamed A. El-Iskandarani, Guitar M. Shawkat

Abstract:

Nowadays, the amount of available multimedia data is continuously on the rise, and finding a required image is a challenging task for an ordinary user. Content-based image retrieval (CBIR) computes relevance based on the visual similarity of low-level image features such as color and texture. However, there is a gap between low-level visual features and the semantic meanings required by applications. The typical way to bridge this semantic gap is through automatic image annotation (AIA), which extracts semantic features using machine learning techniques. In this paper, a multi-label image annotation system guided by the Firefly algorithm and a Bayesian method is proposed. First, images are segmented using maximum intra-cluster variance and the Firefly algorithm, a swarm-based approach with high convergence speed and low computational cost that searches for optimal multiple thresholds. Feature extraction techniques based on color features and region properties are applied to obtain representative features. The images are then annotated using a translation model based on the Net Bayes system, which is efficient for multi-label learning, offering high precision at low complexity. Experiments are performed on the Corel database. The results show that the proposed system outperforms traditional ones for automatic image annotation and retrieval.
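The Firefly segmentation step rests on a standard swarm optimizer. As an illustrative sketch only (not the authors' segmentation pipeline), a minimal one-dimensional Firefly minimizer might look like the following; the parameter values (beta0, gamma, alpha) are assumed defaults, not taken from the paper:

```python
import math
import random

def firefly_minimize(f, bounds, n=15, iters=60, beta0=1.0, gamma=1.0,
                     alpha=0.2, seed=0):
    """Minimal 1-D firefly optimizer: dimmer fireflies move toward
    brighter (lower-objective) ones with distance-damped attraction
    plus a small, decaying random walk."""
    rng = random.Random(seed)
    lo, hi = bounds
    xs = [rng.uniform(lo, hi) for _ in range(n)]
    for _ in range(iters):
        vals = [f(x) for x in xs]  # brightness = negative objective
        for i in range(n):
            for j in range(n):
                if vals[j] < vals[i]:  # j is brighter than i
                    r = abs(xs[i] - xs[j])
                    beta = beta0 * math.exp(-gamma * r * r)
                    xs[i] += beta * (xs[j] - xs[i]) + alpha * (rng.random() - 0.5)
                    xs[i] = min(max(xs[i], lo), hi)  # clamp to bounds
        alpha *= 0.97  # cool the random walk over time
    return min(xs, key=f)
```

In multilevel thresholding, the objective would score a vector of candidate thresholds (e.g. by intra-cluster variance) rather than a scalar, but the movement rule is the same.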

Keywords: feature extraction, feature selection, image annotation, classification

Procedia PDF Downloads 588
2300 An Ecosystem Approach to Natural Resource Management: Case Study of the Topčiderska River, Serbia

Authors: Katarina Lazarević, Mirjana Todosijević, Tijana Vulević, Natalija Momirović, Ranka Erić

Abstract:

Due to increasing demand, climate change, and world population growth, natural resources are being exploited rapidly. One of the most important natural resources is soil, which is susceptible to degradation; erosion, one form of land degradation, is among the most pressing global environmental problems. Ecosystem services are often defined as the benefits that nature provides to humankind. Soil, as the foundation of basic ecosystem functions, provides benefits to people: erosion control, water infiltration, food, fuel, and fibers. This research uses the ecosystem approach as a strategy for natural resource management, to promote sustainability and conservation. The research was done on the Topčiderska River basin (Belgrade, Serbia). The InVEST Sediment Delivery Ratio (SDR) model was used to quantify erosion intensity, producing a spatially distributed output map of overland sediment generation and delivery to the stream. InVEST SDR is a spatially explicit model using a method based on the concept of hydrological connectivity and the (R)USLE model. This, combined with socio-economic, legal, and policy analysis, gives decision-makers a full set of information to help them successfully manage and deliver sustainable ecosystems.
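The (R)USLE core that InVEST SDR builds on is a simple multiplicative formula. A minimal sketch, with hypothetical factor values and without the connectivity-based delivery-ratio computation the model actually performs per pixel, is:

```python
def rusle_soil_loss(R, K, LS, C, P):
    """(R)USLE average annual soil loss A = R * K * LS * C * P
    (e.g. t/ha/yr, given factors in consistent units):
    R rainfall erosivity, K soil erodibility, LS slope length/steepness,
    C cover management, P support practice."""
    return R * K * LS * C * P

def sediment_export(A, sdr):
    """Sediment delivered to the stream: soil loss scaled by a
    sediment delivery ratio in [0, 1], as in InVEST SDR."""
    return A * sdr

# Hypothetical factor values for one pixel (illustration only):
A = rusle_soil_loss(R=100.0, K=0.3, LS=1.2, C=0.5, P=1.0)   # 18 t/ha/yr
exported = sediment_export(A, sdr=0.25)                      # 4.5 t/ha/yr
```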

Keywords: ecosystem services, InVEST model, soil erosion, sustainability

Procedia PDF Downloads 145
2299 Model Predictive Control with Unscented Kalman Filter for Nonlinear Implicit Systems

Authors: Takashi Shimizu, Tomoaki Hashimoto

Abstract:

The class of implicit systems is a more general class than that of explicit systems. To establish a control method for this generalized class, we adopt the model predictive control method, a form of optimal feedback control whose performance index has a moving initial time and terminal time. However, model predictive control is inapplicable to systems whose state variables are not all exactly known, that is, systems with limited measurable states. In fact, the state variables of a system are usually measured through outputs, so only limited parts of them can be used directly, and the output signals are disturbed by process and sensor noise. It is therefore important to establish a state estimation method for nonlinear implicit systems that takes process and sensor noise into consideration. To this end, we apply the model predictive control method and the unscented Kalman filter to the optimization and estimation problems of nonlinear implicit systems, respectively. The objective of this study is to establish model predictive control with an unscented Kalman filter for nonlinear implicit systems.
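The unscented Kalman filter avoids linearization by propagating a small set of sigma points through the nonlinear dynamics. A minimal sketch of sigma-point generation, assuming a diagonal covariance for brevity (the general case uses a matrix square root), is:

```python
import math

def sigma_points(mean, sqrt_cov_diag, kappa=0.0):
    """2n+1 sigma points and weights for an n-dimensional Gaussian,
    assuming a diagonal covariance whose per-axis square roots are given."""
    n = len(mean)
    scale = math.sqrt(n + kappa)
    pts = [list(mean)]  # central point
    for i in range(n):
        up, down = list(mean), list(mean)
        up[i] += scale * sqrt_cov_diag[i]
        down[i] -= scale * sqrt_cov_diag[i]
        pts += [up, down]
    weights = [kappa / (n + kappa)] + [1.0 / (2.0 * (n + kappa))] * (2 * n)
    return pts, weights
```

Each point is pushed through the (implicit) dynamics, and the weighted sample mean and covariance of the transformed points give the predicted state statistics.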

Keywords: optimal control, nonlinear systems, state estimation, Kalman filter

Procedia PDF Downloads 206
2298 Critical Discourse Analysis of Political TV Talk Show of Pakistani Media

Authors: Sumaira Saleem, Sajjad Hussain, Asma Kashif Shahzad, Hina Shaheen

Abstract:

This study aims at exploring the relationship between language and ideology and how such relationships are represented in the analysis of spoken texts, following Van Dijk’s socio-cognitive model (2002). The study attempts to show that political talk shows broadcast by private TV channels work as apparatuses of ideology and store meanings that are not always obvious to viewers. The analysis concerns the situation created by Arslan Iftikhar, the son of ex-Chief Justice of Pakistan Iftikhar Muhammad Chaudry, and PTI chief Imran Khan. Arslan Iftikhar submitted an application against Imran Khan claiming that he is ineligible to be a member of the Parliament of Pakistan, and demanded the documents that Imran Khan had submitted to the Election Commission of Pakistan at the time of the election. Murad Ali from PTI also submitted an application to the Election Commission of Pakistan against PM Nawaz Sharif for copies of his documents. The study also suggests that these talk shows mystify the agency of processes by using various strategies. In other words, critical text analyses reveal how these choices enable speakers to manipulate the realizations of agency and power in the representation of action, producing particular meanings that are not always explicit to all viewers.

Keywords: ECP, CDA, socio cognitive model, ideology, TV channels, power

Procedia PDF Downloads 741
2297 Newly-Rediscovered Manuscripts Talking about Seventeenth-Century French Harpsichord Pedagogy

Authors: David Chung

Abstract:

The development of seventeenth-century French harpsichord music is enigmatic in several respects. Although little is known about the formation of this style before 1650 (we have names of composers, but no surviving music), the style had attained a high degree of refinement and sophistication in the music of the earliest known masters (e.g. Chambonnières, Louis Couperin, and D’Anglebert). How seventeenth-century musicians acquired the skills of their art remains largely steeped in mystery, as the earliest major treatise on French keyboard pedagogy was not published until 1702, by Saint Lambert. This study fills this lacuna by surveying some twenty recently rediscovered manuscripts, which offer ample material for revisiting key issues pertaining to seventeenth-century harpsichord pedagogy. By analyzing the musical contents, the verbal information, and the explicit notation (such as written-out ornaments and rhythmic effects), this study provides a rich picture of the process of learning at the time, with engaging details of performance nuance often lacking in tutors and treatises. Of even greater significance, the fact that creative skills (such as continuo and ornamentation) were taught alongside fundamental knowledge (solfèges, note values, etc.) at the earliest stage of learning offers a fresh challenge for modern pedagogues to rethink how harpsichord pedagogy might be revamped to cater for our own pedagogical and aesthetic needs.

Keywords: French, harpsichord, pedagogy, seventeenth century

Procedia PDF Downloads 261
2296 Detection of Curvilinear Structure via Recursive Anisotropic Diffusion

Authors: Sardorbek Numonov, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Dongeun Choi, Byung-Woo Hong

Abstract:

The detection of curvilinear structures often plays an important role in image analysis. In particular, localizing the fissures in chest CT imagery is a crucial step in the diagnosis of chronic respiratory diseases: the lung is divided into five lobes by fissures that are characterized by linear features in appearance. However, these characteristic linear features are often subtle, due to the high intensity variability, pathological deformation, or image noise involved in the imaging procedure, which leads to uncertainty in the quantification of anatomical or functional properties of the lung. It is thus desirable to enhance the linear features present in chest CT images so that the delineation of the lobes becomes more distinct. We propose a recursive diffusion process that prefers coherent features based on an anisotropic analysis of the structure tensor. The local image features associated with certain scales and directions can be characterized by the eigenanalysis of the structure tensor, which is often regularized via isotropic diffusion filters. However, the isotropic diffusion filters involved in the computation of the structure tensor generally blur geometrically significant structure, degrading the discriminative power of the features. It is therefore necessary to take the local scale and direction of the features into consideration when computing the structure tensor. We apply an anisotropic diffusion, in consideration of the scale and direction of the features, in the computation of the structure tensor; its eigenanalysis then provides the geometrical structure of the features, which determines the shape of the anisotropic diffusion kernel.
The recursive application of the anisotropic diffusion, with a kernel whose shape is derived from the structure tensor, leads to an anisotropic scale-space in which the geometrical features are preserved via the eigenanalysis of the structure tensor computed from the diffused image. This recursive interaction between the anisotropic diffusion based on geometry-driven kernels and the computation of the structure tensor that determines the shape of the diffusion kernels yields a scale-space in which the geometrical properties of the image structure are effectively characterized. We apply our recursive anisotropic diffusion algorithm to the detection of curvilinear structure in chest CT imagery, where the fissures present curvilinear features and define the boundaries of the lobes. Our algorithm is shown to yield precise detection of the fissures while overcoming the subtlety of the characteristic linear features. Quantitative evaluation demonstrates the robustness and effectiveness of the proposed algorithm for the detection of fissures in chest CT in terms of false positive and true positive measures, and the receiver operating characteristic curves indicate the potential of our algorithm as a segmentation tool in the clinical environment. This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by the IITP (Institute for Information and Communications Technology Promotion).
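The eigenanalysis of a 2x2 structure tensor has a closed form, which is all the directional analysis above requires per pixel. As a hedged sketch (the gradient samples of a local window are passed in directly; a real implementation computes them from the image and regularizes the tensor by diffusion):

```python
import math

def structure_tensor(Ix, Iy):
    """Average the gradient outer products over a local window,
    given the window's x- and y-gradient samples as two lists."""
    m = len(Ix)
    Jxx = sum(gx * gx for gx in Ix) / m
    Jxy = sum(gx * gy for gx, gy in zip(Ix, Iy)) / m
    Jyy = sum(gy * gy for gy in Iy) / m
    return Jxx, Jxy, Jyy

def tensor_eigenvalues(Jxx, Jxy, Jyy):
    """Closed-form eigenvalues of the symmetric 2x2 tensor, largest first."""
    half_trace = (Jxx + Jyy) / 2.0
    d = math.hypot((Jxx - Jyy) / 2.0, Jxy)
    return half_trace + d, half_trace - d

def coherence(l1, l2):
    """1 for a perfectly line-like neighborhood, 0 for an isotropic one."""
    return 0.0 if l1 + l2 == 0 else ((l1 - l2) / (l1 + l2)) ** 2
```

A line-like (fissure-like) neighborhood has one dominant eigenvalue, so the coherence measure is what a diffusion kernel would be elongated along.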

Keywords: anisotropic diffusion, chest CT imagery, chronic respiratory disease, curvilinear structure, fissure detection, structure tensor

Procedia PDF Downloads 235
2295 Enhanced Planar Pattern Tracking for an Outdoor Augmented Reality System

Authors: L. Yu, W. K. Li, S. K. Ong, A. Y. C. Nee

Abstract:

In this paper, a scalable augmented reality framework for handheld devices is presented. The framework is enabled by a server-client data communication structure, in which the search for tracking targets among a database of images is performed on the server side, while pixel-wise 3D tracking is performed on the client side, in this case a handheld mobile device. Image search on the server side adopts a residual-enhanced image descriptor representation that gives the framework its scalability. The tracking algorithm on the client side is based on a gravity-aligned feature descriptor, which takes advantage of a sensor-equipped mobile device, and an optimized intensity-based image alignment approach that ensures the accuracy of 3D tracking. Automatic content streaming is achieved by using a key-frame selection algorithm, client working-phase monitoring, and standardized rules for content communication between the server and client. A recognition accuracy test performed on a standard dataset shows that the method adopted in the presented framework outperforms the bag-of-words (BoW) method used in some previous systems. Experimental tests conducted on a set of video sequences indicate real-time performance of the tracking system, with a frame rate of 15-30 frames per second. The framework is shown to be functional in practical situations with a demonstration application on a campus walk-around.

Keywords: augmented reality framework, server-client model, vision-based tracking, image search

Procedia PDF Downloads 279
2294 Fast Approximate Bayesian Contextual Cold Start Learning (FAB-COST)

Authors: Jack R. McKenzie, Peter A. Appleby, Thomas House, Neil Walton

Abstract:

Cold start is a notoriously difficult problem that can occur in recommendation systems; it arises when there is insufficient information to draw inferences for users or items. To address this challenge, a contextual bandit algorithm – the Fast Approximate Bayesian Contextual Cold Start Learning algorithm (FAB-COST) – is proposed, designed to provide improved accuracy compared to the traditionally used Laplace approximation in the logistic contextual bandit, while controlling both algorithmic complexity and computational cost. To this end, FAB-COST uses a combination of two moment-projection variational methods: Expectation Propagation (EP), which performs well at the cold start but becomes slow as the amount of data increases; and Assumed Density Filtering (ADF), whose computational cost grows more slowly with data size but which requires more data to obtain an acceptable level of accuracy. By switching from EP to ADF when the dataset becomes large, the algorithm exploits their complementary strengths. The empirical justification for FAB-COST is presented and systematically compared to other approaches on simulated data. In a benchmark against the Laplace approximation on real data consisting of over 670,000 impressions from autotrader.co.uk, FAB-COST demonstrates at one point an increase of over 16% in user clicks. On the basis of these results, it is argued that FAB-COST is likely to be an attractive approach to cold-start recommendation systems in a variety of contexts.
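FAB-COST's EP and ADF updates are beyond a short sketch, but the bandit setting it addresses is commonly illustrated with Thompson sampling (named in the keywords). A minimal non-contextual Beta-Bernoulli version, which is not the authors' algorithm and is offered only to show the posterior-sampling decision rule, is:

```python
import random

def thompson_select(successes, failures, rng=None):
    """Thompson sampling for Bernoulli arms: draw one sample from each
    arm's Beta(successes + 1, failures + 1) posterior and play the argmax.
    Uninformative Beta(1, 1) priors are assumed."""
    rng = rng or random
    draws = [rng.betavariate(s + 1, f + 1)
             for s, f in zip(successes, failures)]
    return max(range(len(draws)), key=draws.__getitem__)
```

In the contextual logistic case, the Beta posterior is replaced by an approximate Gaussian posterior over weights (Laplace, EP, or ADF), which is where FAB-COST's switching strategy applies.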

Keywords: cold-start learning, expectation propagation, multi-armed bandits, Thompson Sampling, variational inference

Procedia PDF Downloads 111
2293 Seismic Performance of Benchmark Building Installed with Semi-Active Dampers

Authors: B. R. Raut

Abstract:

The seismic performance of a 20-storey benchmark building with semi-active dampers is investigated under various earthquake ground motions. Semi-Active Variable Friction Dampers (SAVFD) and magnetorheological (MR) dampers are used in this study. A recently proposed predictive control algorithm is employed for the SAVFD, and a simple mechanical model based on a Bouc-Wen element with a clipped-optimal control algorithm is employed for the MR damper. A parametric study is carried out to ascertain the optimum parameters of the semi-active controllers, which yield the minimum performance indices of the controlled benchmark building. The effectiveness of the dampers is studied in terms of the reduction in structural responses and performance criteria. To minimize the cost of the dampers, the optimal placement of dampers, rather than providing dampers at all floors, is also investigated. The semi-active dampers installed in the benchmark building effectively reduce the earthquake-induced responses. A smaller number of dampers at appropriate locations also provides a comparable response of the benchmark building, thereby reducing the cost of dampers significantly. The effectiveness of the two semi-active devices in mitigating seismic responses is cross-compared: the majority of the performance criteria for the MR dampers are lower than those for the SAVFD. Thus, the performance of the MR dampers is far better than that of the SAVFD in reducing displacement, drift, acceleration, and base shear of mid- to high-rise buildings against seismic forces.

Keywords: benchmark building, control strategy, input excitation, MR dampers, peak response, semi-active variable friction dampers

Procedia PDF Downloads 289
2292 Interpretation of the Russia-Ukraine 2022 War via N-Gram Analysis

Authors: Elcin Timur Cakmak, Ayse Oguzlar

Abstract:

This study presents the results of analyzing, by bigram and trigram methods, tweets sent by Twitter users about the Russia-Ukraine war. On February 24, 2022, Russian President Vladimir Putin declared a military operation against Ukraine, and all eyes turned to this war. Many people living in Russia and Ukraine reacted to this war, protested, and expressed their deep concern, feeling that the safety of their families and their futures was at stake. Most people, especially those living in Russia and Ukraine, express their views on the war in different ways, and the most popular way to do so is through social media. Many people prefer to convey their feelings using Twitter, one of the most frequently used social media tools. Since the beginning of the war, thousands of tweets about it have been posted from many countries of the world. These tweets, accumulated in data sources, are extracted through the Twitter API and analyzed with the Python programming language. The aim of the study is to find the word sequences in these tweets by the n-gram method, which is widely used in computational linguistics and natural language processing. The tweet language used in the study is English. The dataset consists of data obtained from Twitter between February 24, 2022, and April 24, 2022. Tweets carrying the hashtags #ukraine, #russia, #war, #putin, and #zelensky together were captured as raw data, and the remaining tweets were included in the analysis stage after cleaning in a preprocessing stage. In the data analysis part, sentiments were extracted to determine what people post about the war on Twitter; negative messages make up the majority of all tweets, at 63.6%. Furthermore, the most frequently used bigram and trigram word groups are found.
The most frequently used word groups are “he, is”, “I, do”, and “I, am” for bigrams, and “I, do, not”, “I, am, not”, and “I, can, not” for trigrams. In the machine learning phase, classification accuracy is measured with the Classification and Regression Trees (CART) and naïve Bayes (NB) algorithms, applied separately to bigrams and trigrams. For bigrams, the highest accuracy and F-measure values are obtained with the NB algorithm, and the highest precision and recall values with the CART algorithm. For trigrams, the highest accuracy, precision, and F-measure values are achieved by the CART algorithm, and the highest recall by NB.
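Frequency counts of the kind reported above take only a few lines. This is a generic sketch assuming simple lowercase whitespace tokenization, not the study's tweet-specific preprocessing:

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-token sequences of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def top_ngrams(texts, n, k=3):
    """The k most frequent n-grams across a corpus of raw texts
    (lowercased, whitespace-tokenized)."""
    counts = Counter()
    for text in texts:
        counts.update(ngrams(text.lower().split(), n))
    return counts.most_common(k)
```

For example, `top_ngrams(corpus, 2)` returns the most frequent bigrams with their counts, and `n=3` gives trigrams.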

Keywords: classification algorithms, machine learning, sentiment analysis, Twitter

Procedia PDF Downloads 79
2291 Innovative Ideas through Collaboration with Potential Users

Authors: Martin Hewing, Katharina Hölzle

Abstract:

Organizations increasingly use environmental stimuli and ideas from users within participatory innovation processes in order to tap new sources of knowledge. The research presented in this article focuses on users who shape the distant edges of markets and currently do not use products and services from a domain: so-called potential users. These users at the peripheries are perceived to contribute more novel information, as they better reflect shifts in needs and behavior than current users in the core market. Their contributions in collaborative and creative problem-solving processes, and how they generate ideas for discontinuous innovations, are of particular interest. With an experimental design, we compare ideas from potential and current users and analyze the effects of cognitive distance in collaboration and the utilization of explicit and tacit knowledge. We find that potential users generate more original ideas, particularly when they collaborate with someone experienced within the domain. Their ideas are most obviously characterized by an increased level of surprise and unusualness compared to dominant designs, which is rooted in contexts and does not require technological leaps. Collaboration with potential users can therefore open new ways to leverage technological competences. Furthermore, the cross-fertilization arising from cognitive distance between a potential and a current user is asymmetric, due to differences in the nature of their utilized knowledge and personal objectives. This paper discusses implications for innovation research and the management of early innovation processes.

Keywords: user collaboration, co-creation, discontinuous innovation, innovation research

Procedia PDF Downloads 509
2290 Unified Coordinate System Approach for Swarm Search Algorithms in Global Information Deficit Environments

Authors: Rohit Dey, Sailendra Karra

Abstract:

This paper aims at solving the problem of multi-target search in a Global Positioning System (GPS)-denied environment using swarm robots with limited sensing and communication abilities. Typically, existing swarm-based search algorithms rely on the presence of a global coordinate system (i.e., GPS) shared by the entire swarm, which limits their application in real-world scenarios. Robots in a swarm need to share information about their locations and the signals received from targets in order to decide their future course of action, but this information is only meaningful when they all share the same coordinate frame. This paper addresses that issue by eliminating any dependency of the search algorithm on a predetermined global coordinate frame: the relative coordinate frames of individual robots are unified whenever the robots come within communication range, making the system more robust in real scenarios. Our algorithm assumes that all robots in the swarm are equipped with range and bearing sensors and have limited sensing range and communication abilities. Initially, every robot maintains its own relative coordinate frame and follows a Lévy-walk random exploration until it comes within range of other robots. When two or more robots are within communication range, they share sensor information and their locations with respect to their coordinate frames, on the basis of which their coordinate frames are unified. They can then share information about the areas already explored, about the surroundings, and about target signals from their locations, to make decisions about their future movement based on the search algorithm.
During exploration there can be several small groups of robots, each with its own coordinate system, but eventually all robots are expected to come under one global coordinate frame in which they can communicate information on the exploration area following swarm search techniques. Using the proposed method, swarm-based search algorithms can work in a real-world scenario without GPS and without any initial information about the size and shape of the environment. Initial simulation results show that our modified Particle Swarm Optimization (PSO), run without global information, still achieves results comparable to basic PSO working with GPS. In the full paper, we plan to compare different strategies for unifying the coordinate system and to implement them on other bio-inspired algorithms, to work in GPS-denied environments.
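The frame-unification step can be sketched in 2-D with mutual range-and-bearing readings. The conventions below (bearings measured in each robot's own body frame, relative to its heading) are assumptions for illustration, not the paper's exact formulation:

```python
import math

def relative_pose(r, bearing_ab, bearing_ba):
    """Pose of robot B in robot A's frame from mutual readings:
    A sees B at range r, bearing bearing_ab (in A's body frame);
    B sees A at the same range, bearing bearing_ba (in B's body frame)."""
    x = r * math.cos(bearing_ab)
    y = r * math.sin(bearing_ab)
    heading = bearing_ab + math.pi - bearing_ba  # B's heading in A's frame
    return x, y, heading

def to_frame_a(pose_b_in_a, point_in_b):
    """Re-express a point given in B's frame in A's frame
    (2-D rigid transform: rotate by B's heading, then translate)."""
    x, y, th = pose_b_in_a
    px, py = point_in_b
    return (x + px * math.cos(th) - py * math.sin(th),
            y + px * math.sin(th) + py * math.cos(th))
```

Once B's pose is known in A's frame, every landmark or explored cell B has recorded can be mapped through `to_frame_a`, merging the two maps into one frame.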

Keywords: bio-inspired search algorithms, decentralized control, GPS denied environment, swarm robotics, target searching, unifying coordinate systems

Procedia PDF Downloads 141
2289 The Role of Knowledge Management in Global Software Engineering

Authors: Samina Khalid, Tehmina Khalil, Smeea Arshad

Abstract:

Knowledge management is an essential ingredient of successful coordination in globally distributed software engineering. Various frameworks, knowledge management systems (KMSs), and tools have been proposed to foster coordination and communication between virtual teams, but practical implementations of these solutions are hard to find, as organizations face many challenges in implementing a knowledge management system. For this purpose, a literature review is first conducted to investigate the challenges that keep organizations from implementing a KMS; taking these challenges into account, the need for an integrated solution, in the form of a standardized KMS that can easily store tacit and explicit knowledge, is traced, in order to facilitate coordination and collaboration among virtual teams. The literature review shows that knowledge is a complex concept with profound meanings, and one of the most important resources contributing to the competitive advantage of an organization. In order to meet the challenges caused by improperly managed project knowledge among virtual teams in global software engineering (GSE), we suggest making use of the cloud computing model. In this research, a distributed architecture to support KM storage is proposed: a conceptual framework of KM as a service in the cloud. The conceptual framework of KM is embedded into this architecture to store project-related knowledge for future use.

Keywords: knowledge management, global software development, global software engineering

Procedia PDF Downloads 528
2288 Markowitz and Implementation of a Multi-Objective Evolutionary Technique Applied to the Colombia Stock Exchange (2009-2015)

Authors: Feijoo E. Colomine Duran, Carlos E. Peñaloza Corredor

Abstract:

Modeling the selection of components for a financial investment (a portfolio) raises a variety of problems that can be addressed with optimization techniques under evolutionary schemes. Characteristically, the problem of selecting investment components involves a dichotomous relationship between two opposing elements: portfolio performance and the risk incurred in choosing it. Markowitz modeled this relationship as a mean (performance)-variance (risk) problem: maximize performance while minimizing risk. This research comprises the study and implementation of multi-objective evolutionary techniques to solve such problems, taking as its experimental framework equities traded on the Colombia Stock Exchange between 2009 and 2015. Three multi-objective evolutionary algorithms, namely the Non-dominated Sorting Genetic Algorithm II (NSGA-II), the Strength Pareto Evolutionary Algorithm 2 (SPEA2), and Indicator-Based Selection in Multiobjective Search (IBEA), were compared using two well-known performance measures, the hypervolume indicator and the R2 indicator, together with a nonparametric statistical analysis using the Wilcoxon rank-sum test. The comparative analysis also includes an evaluation, via the Sharpe ratio, of the financial efficiency of the investment portfolio chosen by each algorithm. It is shown that the portfolios provided by these algorithms are very well positioned relative to the different stock indices published by the Colombia Stock Exchange.
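The mean-variance objectives and the Sharpe ratio used in the comparison can be written down directly. This is a generic illustration using sample statistics and a zero risk-free rate by default, not the paper's evaluation code:

```python
import math

def portfolio_stats(weights, mean_returns, cov):
    """Markowitz objectives for one portfolio: expected return w'mu
    (to maximize) and variance w'Cov w (to minimize), on plain lists."""
    n = len(weights)
    mu = sum(w * m for w, m in zip(weights, mean_returns))
    var = sum(weights[i] * cov[i][j] * weights[j]
              for i in range(n) for j in range(n))
    return mu, var

def sharpe_ratio(returns, risk_free=0.0):
    """Sharpe ratio: mean excess return over its sample standard
    deviation (ddof = 1); no annualization applied."""
    excess = [r - risk_free for r in returns]
    mean = sum(excess) / len(excess)
    var = sum((e - mean) ** 2 for e in excess) / (len(excess) - 1)
    return mean / math.sqrt(var)
```

An evolutionary algorithm such as NSGA-II would evolve the `weights` vector, using the `(mu, var)` pair as the two objectives, and the Sharpe ratio would then score the selected portfolios out of sample.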

Keywords: finance, optimization, portfolio, Markowitz, evolutionary algorithms

Procedia PDF Downloads 308
2287 Women's Sexual Experience in Pakistan: Associations of Patriarchy and Psychological Distress

Authors: Sana Tahir, Haya Fatimah

Abstract:

Sexuality is a social construct considered one of the most confidential of personal affairs, and women tend to refrain from sexually explicit behavior more than men. Patriarchy has a strong influence on the expression of female sexuality: while women's sexual experiences are suppressed, men are entitled to pleasure themselves according to their desires. The purpose of this study is to explore how the internalization of patriarchy affects women's sexuality and, similarly, how women's sexuality is associated with psychological distress. The sample consisted of 100 married women (ages 20-40). Participants were selected through a combination of convenience and snowball sampling. Women were asked to provide data regarding patriarchal beliefs, sexual awareness, and DAS (depression, anxiety, and stress). Pearson product-moment correlation analysis was conducted to examine the nature of the relationships between patriarchal beliefs, sexual awareness, and psychological distress in married women. There is a significant negative relationship between sexual awareness and patriarchal beliefs (r=-.391, p<.001). Sexual awareness is also significantly related to depression, anxiety, and stress (r=-.359, p<.001; r=.301, p=.002; r=-.221, p=.027). The results reveal that women with strong patriarchal beliefs have less sexual awareness in terms of sexual consciousness, sexual monitoring, sexual assertiveness, and sexual appeal consciousness. Similarly, women with strong patriarchal beliefs and less sexual awareness have higher levels of depression, anxiety, and stress.
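
The reported coefficients come from Pearson product-moment correlations, which follow directly from the definition; the scores below are hypothetical, not the study's data.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

# Hypothetical scores: stronger patriarchal beliefs, lower sexual awareness.
beliefs   = [10, 12, 15, 18, 20, 25, 28, 30]
awareness = [40, 38, 35, 30, 28, 22, 20, 15]
print(round(pearson_r(beliefs, awareness), 3))
```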

Keywords: female sexuality, patriarchy, psychological distress, sexual awareness

Procedia PDF Downloads 305
2286 Moving on or Deciding to Let Go: The Effects of Emotional and Decisional Forgiveness on Intentional Forgetting

Authors: Saima Noreen, Malcolm D. MacLeod

Abstract:

Different types of forgiveness (emotional and decisional) have been shown to have differential effects on incidental forgetting of information related to a prior transgression. The present study explored the extent to which emotional and decisional forgiveness also influence intentional forgetting; that is, the extent to which forgetting occurs following an explicit instruction to forget. Using the list-method directed forgetting (LMDF) paradigm, 236 participants were presented with a hypothetical transgression and then assigned to an emotional forgiveness, a decisional forgiveness, or a no-forgiveness manipulation. Participants were then presented with two word lists, each comprising transgression-relevant and transgression-irrelevant words. Following the presentation of the first list, participants were told either to remember or to forget the previously learned list of words. Participants in the emotional forgiveness condition remembered fewer relevant and more irrelevant transgression-related words, while the opposite was true for both the decisional forgiveness and no-forgiveness conditions. Furthermore, when directed to forget the words in List 1, participants in the decisional and no-forgiveness conditions were less able to forget relevant transgression-related words than participants in the emotional forgiveness condition. This study suggests that emotional forgiveness plays a pivotal role in the intentional forgetting of transgression-related information. The potential implications of these findings for coping with unpleasant incidents are considered.

Keywords: decisional forgiveness, directed forgetting, emotional forgiveness, executive control, forgiveness

Procedia PDF Downloads 237
2285 Extended Kalman Filter and Markov Chain Monte Carlo Method for Uncertainty Estimation: Application to X-Ray Fluorescence Machine Calibration and Metal Testing

Authors: S. Bouhouche, R. Drai, J. Bast

Abstract:

This paper is concerned with a method for evaluating the uncertainty of steel sample content determined using the X-ray fluorescence (XRF) method. The considered method of analysis is a comparative technique based on XRF; the calibration step assumes adequate knowledge of the chemical composition of the analyzed metallic sample. This work proposes a new combined approach using the Kalman filter and Markov chain Monte Carlo (MCMC) for uncertainty estimation of steel content analysis. The Kalman filter algorithm is extended to the identification of a model of the chemical analysis process using the main factors affecting the analysis results; in this case, the estimated states reduce to the model parameters. MCMC is a stochastic method that computes the statistical properties of the considered states, such as the probability density function (PDF), from the initial state and the target distribution using a Monte Carlo simulation algorithm. The conventional approach is based on linear correlation; uncertainty budgets are established for the steel Mn (wt%), Cr (wt%), Ni (wt%), and Mo (wt%) contents, respectively. A comparative study between the conventional procedure and the proposed method is given. This kind of approach can be applied to construct an accurate computing procedure for uncertainty measurement.
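
A minimal sketch of the two ingredients, assuming a toy scalar setting (a constant Mn content observed through Gaussian measurement noise, with invented numbers): a scalar Kalman filter recursively updates the estimate, and a random-walk Metropolis sampler draws from the same posterior under a flat prior. The paper's full model identification is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical XRF readings of a constant Mn content (wt%) with known noise.
true_mn, meas_var = 1.25, 0.04
z = true_mn + rng.normal(0, np.sqrt(meas_var), size=50)

# Scalar Kalman filter treating the unknown content as a static state.
x_hat, P = 0.0, 1.0            # initial estimate and its variance
for zk in z:
    K = P / (P + meas_var)     # Kalman gain
    x_hat += K * (zk - x_hat)  # measurement update
    P *= (1 - K)

# Random-walk Metropolis sampling of the same posterior (flat prior assumed).
def log_lik(m):
    return -0.5 * np.sum((z - m) ** 2) / meas_var

samples, m = [], x_hat
for _ in range(5000):
    prop = m + rng.normal(0, 0.05)
    if np.log(rng.uniform()) < log_lik(prop) - log_lik(m):
        m = prop
    samples.append(m)

print(round(x_hat, 3), round(float(np.mean(samples[1000:])), 3))
```

The spread of the MCMC samples then provides the uncertainty (e.g. a credible interval) around the filtered estimate.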

Keywords: Kalman filter, Markov chain Monte Carlo, x-ray fluorescence calibration and testing, steel content measurement, uncertainty measurement

Procedia PDF Downloads 289
2284 Artificial Neural Network in Ultra-High Precision Grinding of Borosilicate-Crown Glass

Authors: Goodness Onwuka, Khaled Abou-El-Hossein

Abstract:

Borosilicate-crown (BK7) glass has found broad application in the optics and automotive industries, and the growing demand for nanometric surface finishes makes them a necessity in such applications. It has therefore become paramount to optimize the parameters influencing the surface roughness of this precision lens material. The research was carried out on a 4-axis Nanoform 250 precision lathe with an ultra-high precision grinding spindle. The experiment varied the machining parameters of feed rate, wheel speed, and depth of cut at three levels in different combinations using a Box-Behnken design of experiments, and the resulting surface roughness values were measured with a Taylor Hobson Dimension XL optical profiler. An acoustic emission monitoring technique was applied at a high sampling rate to monitor the machining process, while further signal processing and feature extraction methods were implemented to generate the input to a neural network algorithm. This paper highlights the training and development of a back-propagation neural network prediction algorithm through careful selection of parameters, and the results show better prediction accuracy when compared to a previously developed response surface model with very similar machining parameters. Hence, artificial neural network algorithms provide better surface roughness prediction accuracy in the ultra-high precision grinding of BK7 glass.
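
The back-propagation step can be sketched with a small numpy network trained on synthetic data; the roughness function, layer sizes, and learning rate below are illustrative assumptions, not the paper's calibrated model or its acoustic emission features.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical smooth mapping from normalized (feed rate, wheel speed,
# depth of cut) to roughness; not measured BK7 data.
X = rng.uniform(0, 1, size=(200, 3))
y = (0.3 * X[:, 0] + 0.2 * X[:, 1] ** 2 + 0.5 * X[:, 0] * X[:, 2])[:, None]

# One hidden tanh layer, trained by plain full-batch backpropagation.
W1, b1 = rng.normal(0, 0.5, (3, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 0.5, (8, 1)), np.zeros(1)
lr = 0.2
for _ in range(5000):
    H = np.tanh(X @ W1 + b1)           # forward pass
    pred = H @ W2 + b2
    err = pred - y                     # dLoss/dpred for 0.5 * MSE
    gW2 = H.T @ err / len(X)
    gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H ** 2)   # backprop through tanh
    gW1 = X.T @ dH / len(X)
    gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
print(round(mse, 4))
```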

Keywords: acoustic emission technique, artificial neural network, surface roughness, ultra-high precision grinding

Procedia PDF Downloads 305
2283 Combination of Geological, Geophysical and Reservoir Engineering Analyses in Field Development: A Case Study

Authors: Atif Zafar, Fan Haijun

Abstract:

A sequence of different reservoir engineering methods and tools for reservoir characterization and field development is presented in this paper, using real data from the Jin Gas Field of the L-Basin of Pakistan. The basic concept behind this work is to highlight the importance of well test analysis in a broader sense (i.e., reservoir characterization and field development) rather than merely for determining permeability and skin parameters. Well test analysis is normally relied on to some extent for reservoir characterization, but for field development planning, specifically for locating new development wells, it has become a forgotten tool. This paper describes the successful implementation of well test analysis in the Jin Gas Field, where the main uncertainties were identified during the initial stage of field development, when the location of a new development well had been marked solely on the basis of geological and geophysical (G&G) data. The seismic interpretation could not detect a boundary (fault, sub-seismic fault, or heterogeneity) near the main and only producing well of the field, whereas the results of the model from the well test analysis played a crucial role in proposing the location of the second well of the newly discovered field. The results of the different well test analysis methods are also integrated with, and supported by, other reservoir engineering tools, i.e., the material balance method and the volumetric method. In this way, a comprehensive workflow and algorithm is obtained for integrating well test analyses with geological and geophysical analyses for reservoir characterization and field development. On this basis, it was demonstrated that the proposed location of the new development well was not justified and that the well should be placed elsewhere, with the southern direction ruled out.
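
Of the supporting tools, the material balance method for a volumetric dry-gas reservoir reduces to a straight line of p/Z against cumulative production, whose intercept on the production axis is the gas initially in place. The numbers below are invented for illustration, not Jin Gas Field data.

```python
import numpy as np

# Hypothetical p/Z material-balance data for a volumetric dry-gas reservoir:
# reservoir pressure over Z-factor versus cumulative gas produced.
gp  = np.array([0.0, 2.0, 4.0, 6.0, 8.0])            # cumulative prod., Bscf
p_z = np.array([4000., 3700., 3400., 3100., 2800.])  # p/Z, psia

# p/Z declines linearly with Gp; the Gp-axis intercept is the gas in place G.
slope, intercept = np.polyfit(gp, p_z, 1)
G = -intercept / slope      # p/Z = 0  =>  Gp = G
print(round(G, 1))          # original gas in place, Bscf  -> 26.7
```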

Keywords: field development plan, reservoir characterization, reservoir engineering, well test analysis

Procedia PDF Downloads 370
2282 Efficient Principal Components Estimation of Large Factor Models

Authors: Rachida Ouysse

Abstract:

This paper proposes a constrained principal components (CnPC) estimator for efficient estimation of large-dimensional factor models when errors are cross-sectionally correlated and the number of cross-sections (N) may be larger than the number of observations (T). Although the principal components (PC) method is consistent for any path of the panel dimensions, it is inefficient because the errors are treated as homoskedastic and uncorrelated. The new CnPC exploits the assumption of bounded cross-sectional dependence, which defines Chamberlain and Rothschild's (1983) approximate factor structure, as an explicit constraint and solves a constrained PC problem. The CnPC method is computationally equivalent to the PC method applied to a regularized form of the data covariance matrix. Unlike maximum-likelihood-type methods, the CnPC method does not require inverting a large covariance matrix and is thus valid for panels with N ≥ T. The paper derives a convergence rate and an asymptotic normality result for the CnPC estimators of the common factors. We provide feasible estimators and show in a simulation study that they are more accurate than the PC estimator, especially for panels with N larger than T, and than the generalized PC-type estimators, especially for panels with N almost as large as T.
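
The computational point that PC estimation stays feasible for N ≥ T can be sketched by extracting factors from the T × T matrix XX'/N; this is the plain PC step only, without the paper's cross-sectional-dependence constraint.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate an approximate factor model X = F L' + e with N > T.
T, N, r = 60, 120, 2
F = rng.normal(size=(T, r))
L = rng.normal(size=(N, r))
X = F @ L.T + rng.normal(scale=0.5, size=(T, N))

# PC factor estimates from the T x T matrix XX'/N, which avoids forming or
# inverting any N x N covariance and so remains usable when N >= T.
S = X @ X.T / N
vals, vecs = np.linalg.eigh(S)               # eigenvalues in ascending order
F_hat = vecs[:, ::-1][:, :r] * np.sqrt(T)    # normalization F'F / T = I

# Fit: R^2 of the true factors regressed on the estimates (factors are only
# identified up to rotation, so compare spans rather than columns).
B, *_ = np.linalg.lstsq(F_hat, F, rcond=None)
r2 = 1 - (F - F_hat @ B).var() / F.var()
print(round(float(r2), 3))
```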

Keywords: high dimensionality, unknown factors, principal components, cross-sectional correlation, shrinkage regression, regularization, pseudo-out-of-sample forecasting

Procedia PDF Downloads 153
2281 Optimization by Means of Genetic Algorithm of the Equivalent Electrical Circuit Model of Different Order for Li-ion Battery Pack

Authors: V. Pizarro-Carmona, S. Castano-Solis, M. Cortés-Carmona, J. Fraile-Ardanuy, D. Jimenez-Bermejo

Abstract:

The purpose of this article is to optimize Equivalent Electrical Circuit Models (EECMs) of different orders to obtain greater precision in the modeling of Li-ion battery packs. The optimization considers circuits based on 1RC, 2RC, and 3RC networks, with a dependent voltage source and a series resistor. The parameters are obtained experimentally using tests in the time domain and in the frequency domain. Due to the highly nonlinear behavior of the battery pack, a genetic algorithm (GA) was used to solve for and optimize the parameters of each EECM considered (1RC, 2RC, and 3RC). The objective of the estimation is to minimize the mean square error between the impedance measured on the real battery pack and that generated by simulating the different proposed circuit models. The results have been verified by comparing the Nyquist plots of the estimated complex impedance of the pack. As a result of the optimization, the 2RC and 3RC circuit alternatives are considered viable representations of the battery behavior. These battery pack models are experimentally validated using a hardware-in-the-loop (HIL) simulation platform that reproduces the well-known New York City Cycle (NYCC) and Federal Test Procedure (FTP) driving cycles for electric vehicles. The results show that GA optimization yields EECMs with 2RC or 3RC networks that represent the dynamic behavior of a battery pack in vehicular applications with high precision.
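
A sketch of the fitting problem for the 2RC case, assuming a minimal hand-rolled real-coded GA (tournament selection, blend crossover, decaying Gaussian mutation, elitism) and synthetic impedance data; the "true" parameter values and bounds are invented, not measured pack data.

```python
import numpy as np

rng = np.random.default_rng(4)

def z_2rc(p, w):
    """Impedance of a series resistor plus two parallel RC branches."""
    rs, r1, c1, r2, c2 = p
    return rs + r1 / (1 + 1j * w * r1 * c1) + r2 / (1 + 1j * w * r2 * c2)

# Synthetic 'measured' spectrum from hypothetical pack parameters (ohm, F).
w = np.logspace(-2, 3, 60)                         # rad/s
true = np.array([0.010, 0.020, 5.0, 0.015, 50.0])
z_meas = z_2rc(true, w)

def err(p):
    """Mean square error between candidate and measured impedance."""
    return float(np.mean(np.abs(z_2rc(p, w) - z_meas) ** 2))

lo = np.array([0.0, 0.0, 0.1, 0.0, 1.0])
hi = np.array([0.05, 0.05, 20.0, 0.05, 200.0])
pop = rng.uniform(lo, hi, size=(60, 5))
for gen in range(200):
    fit = np.array([err(p) for p in pop])
    elite = pop[np.argmin(fit)].copy()
    i, j = rng.integers(0, 60, (2, 60))            # tournament selection
    parents = np.where((fit[i] < fit[j])[:, None], pop[i], pop[j])
    a = rng.uniform(size=(60, 1))                  # blend crossover
    children = a * parents + (1 - a) * parents[rng.permutation(60)]
    sigma = 0.002 + 0.1 * (1 - gen / 200)          # decaying mutation
    children += rng.normal(0, sigma, children.shape) * (hi - lo)
    pop = np.clip(children, lo, hi)
    pop[0] = elite                                 # elitism

best = min(pop, key=err)
print(np.round(best, 4))
```

Plotting `z_2rc(best, w)` against `z_meas` in the complex plane reproduces the Nyquist comparison described in the abstract.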

Keywords: Li-ion battery pack modeling, optimization, EECM, GA, electric vehicle applications

Procedia PDF Downloads 129
2280 High-Resolution Spatiotemporal Retrievals of Aerosol Optical Depth from Geostationary Satellite Using SARA Algorithm

Authors: Muhammad Bilal, Zhongfeng Qiu

Abstract:

Aerosols, suspended particles in the atmosphere, play an important role in the earth's energy budget, climate change, degradation of atmospheric visibility, urban air quality, and human health. To fully understand aerosol effects, retrieval of aerosol optical properties such as aerosol optical depth (AOD) at high spatiotemporal resolution is required. Therefore, in the present study, hourly AOD observations at 500 m resolution were retrieved from the Geostationary Ocean Color Imager (GOCI) using the simplified aerosol retrieval algorithm (SARA) over the urban area of Beijing for the year 2016. The SARA requires top-of-atmosphere (TOA) reflectance, solar and sensor geometry information, and surface reflectance observations to retrieve an accurate AOD. For validation of the GOCI-retrieved AOD, AOD measurements were obtained from the Aerosol Robotic Network (AERONET) version 3 level 2.0 (cloud-screened and quality-assured) data. The errors and uncertainties were reported using the root mean square error (RMSE), relative percent mean error (RPME), and the expected error (EE = ±(0.05 + 0.15 × AOD)). Results showed that the high-spatiotemporal-resolution GOCI AOD observations were well correlated with the AERONET AOD measurements, with a correlation coefficient (R) of 0.92, an RMSE of 0.07, and an RPME of 5%, and 90% of the observations fell within the EE. The results suggest that the SARA is robust and able to retrieve high-resolution spatiotemporal AOD observations over urban areas from a geostationary satellite.
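
The validation statistics quoted above (RMSE, RPME, and the fraction of retrievals within the expected-error envelope) can be computed as follows; the matched AOD pairs are fabricated for illustration, and RPME is taken here as the mean absolute relative error in percent, which may differ from the paper's exact definition.

```python
import numpy as np

# Hypothetical matched pairs of AERONET and GOCI-retrieved AOD.
aeronet = np.array([0.12, 0.30, 0.45, 0.60, 0.85, 1.10])
goci    = np.array([0.10, 0.33, 0.41, 0.66, 0.80, 1.18])

rmse = float(np.sqrt(np.mean((goci - aeronet) ** 2)))
rpme = float(np.mean(np.abs(goci - aeronet) / aeronet) * 100)

# Expected-error envelope EE = +/-(0.05 + 0.15 * AOD), evaluated at AERONET AOD.
ee = 0.05 + 0.15 * aeronet
within = float(np.mean(np.abs(goci - aeronet) <= ee) * 100)
print(round(rmse, 3), round(rpme, 1), within)   # -> 0.051 9.8 100.0
```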

Keywords: AERONET, AOD, SARA, GOCI, Beijing

Procedia PDF Downloads 177
2279 Control of Base Isolated Benchmark using Combined Control Strategy with Fuzzy Algorithm Subjected to Near-Field Earthquakes

Authors: Hashem Shariatmadar, Mozhgansadat Momtazdargahi

Abstract:

The purpose of structural control against earthquakes is to dissipate the earthquake input energy and reduce the plastic deformation of structural members. Methods for controlling structural response to earthquakes can be classified as active, semi-active, passive, and hybrid. In this paper, two different combined control systems are used: the first comprises a base isolator and multiple tuned mass dampers (BI & MTMD), and the second combines a hybrid base isolator with multiple tuned mass dampers (HBI & MTMD), both applied to an eight-story isolated benchmark steel structure. The active control force of the hybrid isolator is estimated by fuzzy logic algorithms. The influence of the combined systems on the responses of the benchmark structure under two near-field earthquake records (Newhall and El Centro) is evaluated by nonlinear dynamic time-history analysis. Combined control systems consisting of passive or active systems installed in parallel with base-isolation bearings can significantly reduce the response quantities (relative and absolute displacement) of base-isolated structures. Therefore, in the design and control of irregular isolated structures using the proposed control systems, structural demands (relative and absolute displacement, etc.) must be considered separately in each direction.
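
A much-reduced passive illustration of why a tuned damper helps near resonance, assuming a single-degree-of-freedom structure, Den Hartog tuning, and invented values (this is not the eight-story benchmark, and the fuzzy controller is not modeled):

```python
import numpy as np

def peak_drift(with_tmd, t_end=30.0, dt=0.001):
    """Peak displacement of an SDOF structure under resonant harmonic
    ground acceleration, with or without a tuned mass damper."""
    m1, k1, c1 = 1.0, 100.0, 0.4        # structure: wn = 10 rad/s, 2% damping
    mu = 0.05                            # damper mass ratio
    m2 = mu * m1
    w2 = 10.0 / (1 + mu)                 # Den Hartog frequency tuning
    k2 = m2 * w2 ** 2
    c2 = 2 * 0.13 * m2 * w2              # near-optimal damper damping
    x1 = v1 = x2 = v2 = 0.0
    peak = 0.0
    for n in range(int(t_end / dt)):
        ag = 0.3 * np.sin(10.0 * n * dt)         # resonant ground excitation
        f12 = (k2 * (x2 - x1) + c2 * (v2 - v1)) if with_tmd else 0.0
        a1 = (-k1 * x1 - c1 * v1 + f12) / m1 - ag
        v1 += a1 * dt
        x1 += v1 * dt                            # semi-implicit Euler step
        if with_tmd:
            a2 = (-k2 * (x2 - x1) - c2 * (v2 - v1)) / m2 - ag
            v2 += a2 * dt
            x2 += v2 * dt
        peak = max(peak, abs(x1))
    return peak

print(round(peak_drift(False), 4), round(peak_drift(True), 4))
```

The damper-equipped case shows a clearly smaller peak drift, the effect the MTMD arrays above exploit at several tuned frequencies.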

Keywords: base-isolated benchmark structure, multi-tuned mass dampers, hybrid isolators, near-field earthquake, fuzzy algorithm

Procedia PDF Downloads 308
2278 Concept for Determining the Focus of Technology Monitoring Activities

Authors: Guenther Schuh, Christina Koenig, Nico Schoen, Markus Wellensiek

Abstract:

Identification and selection of appropriate product and manufacturing technologies are key factors for the competitiveness and market success of technology-based companies. Therefore, many companies perform technology intelligence (TI) activities to ensure the identification of evolving technologies at the right time. Technology monitoring is one of the three basic activities of TI, besides scanning and scouting. As technological progress accelerates, more and more technologies are being developed. Against the background of limited resources, it is therefore necessary to focus TI activities. In this paper, we propose a concept for defining appropriate search fields for technology monitoring. This limitation of the search space leads to more concentrated monitoring activities. The concept is introduced and demonstrated through an anonymized case study conducted within an industry project at the Fraunhofer Institute for Production Technology. The described concept provides a customized monitoring approach, which is suitable for use in technology-oriented companies, especially those that have not yet defined an explicit technology strategy. It is shown in this paper that the definition of search fields and search tasks is a suitable method to define topics of interest and thus to direct monitoring activities. Current as well as planned product, production, and material technologies, together with existing skills, capabilities, and resources, form the basis of the described derivation of relevant search areas. To further improve the concept of technology monitoring, the proposed concept should be extended in future research, e.g., by the definition of relevant monitoring parameters.

Keywords: monitoring radar, search field, technology intelligence, technology monitoring

Procedia PDF Downloads 478
2277 Optimization of the Performance of a Solar Concentrator System with a Cavity Receiver Using the Genetic Algorithm

Authors: Foozhan Gharehkhani

Abstract:

The use of solar energy as a sustainable renewable energy source has gained significant attention in recent years. Solar concentrating systems (CSP), which direct solar radiation onto a receiver, are an effective means of producing high-temperature thermal energy. Cavity receivers, known for their high thermal efficiency and reduced heat losses, are particularly noteworthy in these systems, and optimizing their design can enhance energy efficiency and reduce costs. This study leverages the genetic algorithm, a powerful optimization tool inspired by natural evolution, to optimize the performance of a solar concentrator system with a cavity receiver, aiming for a more efficient and cost-effective design. The analyzed system consists of a solar concentrator and a cavity receiver. The concentrator was designed as a parabolic dish, and the receiver had a cylindrical cavity with a helical structure. The primary parameters were defined as the cavity diameter (D), the receiver height (h), and the helical pipe diameter (d). Initially, the system was optimized to achieve the maximum heat flux, and the optimal parameter values along with the maximum heat flux were obtained. Subsequently, a multi-objective optimization approach was applied, aiming to maximize the heat flux while minimizing the system construction cost. The optimization process was implemented in MATLAB using the genetic algorithm. The results revealed that the optimal dimensions of the receiver, namely the cavity diameter (D), receiver height (h), and helical pipe diameter (d), were 0.142 m, 0.1385 m, and 0.011 m, respectively, corresponding to improvements of 3% in the cavity diameter, 8% in the height, and 5% in the helical pipe diameter. The primary focus of this research was accurate thermal modeling of the solar collection system.
The simulations and the obtained results demonstrated that the optimization maximized the system's thermal performance and raised its energy efficiency to a desirable level. Moreover, this study successfully modeled and controlled effective temperature variations at different angles of solar irradiation, highlighting significant improvements in system efficiency. The significance of this research lies in leveraging solar energy, one of the most prominent renewable energy sources, to play a key role in replacing fossil fuels. Considering the environmental and economic challenges associated with the excessive use of fossil resources, such as increased greenhouse gas emissions, environmental degradation, and the depletion of fossil energy reserves, developing renewable energy technologies has become a vital priority. Among these, solar concentrating systems, capable of achieving high temperatures, are particularly important for industrial and heating applications. This research aims to optimize the performance of such systems through precise design and simulation, making a significant contribution to the advancement of advanced technologies and the efficient utilization of solar energy in Iran, thereby addressing the country's future energy needs effectively.
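
The parameter search can be sketched with a simple (mu + lambda) evolution strategy over (D, h, d); the flux and cost functions below are toy stand-ins centered on the optimum reported above, not the paper's thermal model.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy stand-ins for the objectives (not the paper's thermal model): flux
# peaks near the reported optimum, while cost grows with receiver size.
opt = np.array([0.142, 0.1385, 0.011])           # (D, h, d) in metres
def flux(p):
    return float(np.exp(-np.sum(((p - opt) / opt) ** 2)))
def cost(p):
    return float(50 * p[0] + 40 * p[1] + 200 * p[2])
def fitness(p):                                   # weighted-sum scalarization
    return flux(p) - 0.05 * cost(p)

# (mu + lambda) evolution strategy over (D, h, d) within design bounds.
lo = np.array([0.05, 0.05, 0.005])
hi = np.array([0.30, 0.30, 0.030])
pop = rng.uniform(lo, hi, size=(20, 3))
for _ in range(150):
    kids = pop[rng.integers(0, 20, 60)] + rng.normal(0, 0.005, (60, 3))
    allp = np.clip(np.vstack([pop, kids]), lo, hi)
    allf = np.array([fitness(p) for p in allp])
    pop = allp[np.argsort(allf)[::-1][:20]]       # keep the 20 fittest

best = pop[0]
print(best.round(4))
```

The cost weight pulls the optimum slightly below the pure-flux maximum, which is the essence of the multi-objective trade-off described above.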

Keywords: cavity receiver, genetic algorithm, optimization, solar concentrator system performance

Procedia PDF Downloads 15
2276 Parallel Self Organizing Neural Network Based Estimation of Archie’s Parameters and Water Saturation in Sandstone Reservoir

Authors: G. M. Hamada, A. A. Al-Gathe, A. M. Al-Khudafi

Abstract:

Determination of water saturation in sandstone is vital for determining the initial oil or gas in place in reservoir rocks. Water saturation determination from electrical measurements is based mainly on Archie's formula; consequently, the accuracy of Archie's parameters strongly affects water saturation values. Archie's parameters a, m, and n are determined by three techniques: the conventional technique, Core Archie-Parameter Estimation (CAPE), and the 3-D technique. This work introduces a hybrid parallel self-organizing neural network (PSONN) system targeting accepted values of Archie's parameters and, consequently, reliable water saturation values. The work examines these three Archie's parameter determination techniques and the calculation of water saturation from each. Using the same data, a hybrid PSONN algorithm is then used to estimate Archie's parameters and predict water saturation. The results show that the estimated Archie's parameters m, a, and n are highly acceptable under statistical analysis, indicating that the PSONN model has lower statistical error and a higher correlation coefficient. This study was conducted using a large number of measurement points for 144 core plugs from a sandstone reservoir. The PSONN algorithm can provide reliable water saturation values, and it can supplement or even replace the conventional techniques for determining Archie's parameters and thereby calculating water saturation profiles.
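
Once a, m, and n are fixed, water saturation follows directly from Archie's formula; the sketch below uses common textbook parameter values and invented log readings, not the PSONN estimates or the 144-core-plug data.

```python
import numpy as np

def water_saturation(rt, rw, phi, a=1.0, m=2.0, n=2.0):
    """Archie's equation: Sw = ((a * Rw) / (phi**m * Rt)) ** (1/n).
    Defaults are common textbook values, not this study's estimates."""
    return ((a * rw) / (phi ** m * rt)) ** (1.0 / n)

# Hypothetical log readings for a few sandstone intervals.
rt  = np.array([20.0, 8.0, 4.0])     # true resistivity, ohm-m
phi = np.array([0.18, 0.22, 0.25])   # porosity, fraction
sw = water_saturation(rt, rw=0.05, phi=phi)
print(sw.round(3))                   # -> [0.278 0.359 0.447]
```

The sensitivity to a, m, and n is clear from the exponents, which is why their accurate estimation matters so much.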

Keywords: water saturation, Archie’s parameters, artificial intelligence, PSONN, sandstone reservoir

Procedia PDF Downloads 133
2275 Development of Equivalent Inelastic Springs to Model C-Devices

Authors: Oday Al-Mamoori, J. Enrique Martinez-Rueda

Abstract:

'C' shape yielding devices (C-devices) are effective tools for introducing supplemental sources of energy dissipation by hysteresis. Studies have shown that C-devices made of mild steel can be successfully applied as integral parts of seismic retrofitting schemes. However, explicit modelling of these devices can become cumbersome, expensive and time consuming. The device under study in this article has been previously used in non-invasive dissipative bracing for seismic retrofitting. The device is cut from a mild steel plate and has an overall shape that resembles that of a rectangular portal frame with circular interior corner transitions to avoid stress concentration and to control the extension of the dissipative region of the device. A number of inelastic finite element (FE) analyses using either inelastic 2D plane stress elements or inelastic fibre frame elements are reported and used to calibrate a 1D equivalent inelastic spring model that effectively reproduces the cyclic response of the device. The more elaborate FE model accounts for the frictional forces developed between the steel plate and the bolts used to connect the C-device to structural members. FE results also allow the visualization of the inelastic regions of the device where energy dissipation is expected to occur. FE analysis results are in a good agreement with experimental observations.
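
The target of such a calibration, a 1-D inelastic spring with a hysteretic force-displacement rule, can be sketched with an elastic-perfectly-plastic model; the stiffness and yield force below are arbitrary, and a real C-device calibration would fit the FE-derived cyclic response instead.

```python
import numpy as np

def hysteresis(path, k=10.0, fy=1.0):
    """Force history of an elastic-perfectly-plastic spring along a
    displacement path; x_p tracks the accumulated plastic slip."""
    forces, x_p = [], 0.0
    for x in path:
        f = k * (x - x_p)            # trial elastic force
        if abs(f) > fy:              # yielding: cap force, update slip
            f = np.sign(f) * fy
            x_p = x - f / k
        forces.append(f)
    return np.array(forces)

# One reversed cycle: push past yield, reverse past yield, return to zero.
x = np.concatenate([np.linspace(0.0, 0.3, 31),
                    np.linspace(0.3, -0.3, 61),
                    np.linspace(-0.3, 0.0, 31)])
f = hysteresis(x)

# Dissipated energy = area enclosed by the force-displacement loop.
energy = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))
print(round(float(f.max()), 3), round(energy, 3))
```

The positive enclosed area is the supplemental energy dissipation by hysteresis that the abstract describes.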

Keywords: C-device, equivalent nonlinear spring, FE analyses, reversed cyclic tests

Procedia PDF Downloads 153
2274 Flood Planning Based on Risk Optimization: A Case Study in Phan-Calo River Basin in Vinh Phuc Province, Vietnam

Authors: Nguyen Quang Kim, Nguyen Thu Hien, Nguyen Thien Dung

Abstract:

Flood disasters are increasing worldwide in both frequency and magnitude. Every year in Vietnam, floods cause great damage to people and property, as well as environmental degradation. Vietnam's flood risk management policy is currently being updated, and the planning of flood mitigation strategies is under review to decide how to achieve sustainable flood risk reduction. This paper discusses a basic approach in which flood protection measures are chosen by minimizing the present value of expected monetary expenses: the total residual risk plus the costs of the flood control measures. This approach is proposed and demonstrated in a case study of flood risk management in Vinh Phuc province, Vietnam. The research also proposes a framework for finding the optimal protection level and the optimal flood control measures. It provides an explicit economic basis for flood risk management plans and for the interacting effects of flood damage reduction options. The results of the case study are presented and discussed; they illustrate a process that helps decision makers choose among flood risk reduction investment options.
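
The risk-optimization idea, choosing the protection level that minimizes construction cost plus the present value of expected residual damage, can be sketched with invented numbers (costs, damages, discount rate, and horizon are all illustrative):

```python
import numpy as np

# Protection against floods up to a return period T costs more as T grows,
# while expected annual damage (residual risk) falls. Illustrative values only.
T_levels = np.array([10, 25, 50, 100, 200])        # design return periods, yr
capex    = np.array([5.0, 9.0, 14.0, 22.0, 35.0])  # construction cost, $M
damage_if_exceeded = 60.0                          # damage per exceedance, $M

rate, horizon = 0.05, 50                           # discount rate, years
# Present value of expected annual damage: exceedance probability 1/T per year.
annuity = (1 - (1 + rate) ** -horizon) / rate
pv_risk = damage_if_exceeded / T_levels * annuity

total = capex + pv_risk                            # objective to minimize
best = int(np.argmin(total))
print(T_levels[best], total.round(1))
```

Under these numbers the optimum is an interior protection level: raising the standard further costs more than the residual risk it removes.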

Keywords: drainage plan, flood planning, flood risk, residual risk, risk optimization

Procedia PDF Downloads 249