Search results for: deductive reasoning algorithm

1612 EhfadHaya (SaveLife) / AateHayah (GiveLife) Blood Donor Website

Authors: Sameer Muhammad Aslam, Nura Said Mohsin Al-Saifi

Abstract:

This research describes the process of creating a blood donation website for Oman. Blood donation is a widespread, crucial, ongoing process, so it is important that this website is easy to use. Several automated blood management systems are available, but none provides an effective algorithm that takes into account variables such as frequency of donation, donation date, and gender. In Oman, the Ministry of Health maintains a blood bank and keeps donors informed about the need for blood through a website. It also informs donors and the wider public where and when the next blood donation event will take place. The website's main goals are to educate the community about the benefits of blood donation, to manage donor and recipient documentation, and to encourage voluntary blood donation by providing easy access to information about blood types and blood distribution in various hospitals in Oman, based on hospital needs.

Keywords: Oman, blood bank, blood donors, donor website

Procedia PDF Downloads 194
1611 Isogeometric Topology Optimization in Cracked Structures Design

Authors: Dongkyu Lee, Thanh Banh Thien, Soomi Shin

Abstract:

In the present study, isogeometric topology optimization is proposed for cracked structures, using Solid Isotropic Material with Penalization (SIMP) as the design model. Design density variables defined in the design space are used to approximate the element analysis density through bivariate B-spline basis functions. The topology optimization problem of minimizing structural compliance is formulated and solved by an alternating active-phase algorithm of Gauss-Seidel type, serving as an optimality-criteria optimization model. Stiffness and adjoint sensitivity formulations linked to the strain energy of the cracked structure are proposed in terms of the design density variables. Numerical examples demonstrate the interaction of topology optimization with the design of cracked structures.
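
A minimal Python sketch of the two SIMP ingredients named above: the penalized material interpolation and an optimality-criteria density update. The function names, default exponents, and bisection tolerances are illustrative assumptions rather than the authors' implementation, and the finite-element compliance analysis that supplies the sensitivities is assumed to exist elsewhere.

```python
import numpy as np

def simp_young_modulus(rho, E0=1.0, Emin=1e-9, p=3.0):
    """SIMP interpolation: the exponent p penalizes intermediate densities
    so the optimizer is pushed toward 0/1 (void/solid) designs."""
    return Emin + rho**p * (E0 - Emin)

def oc_update(rho, dc, dv, vol_frac, move=0.2):
    """Standard optimality-criteria density update. dc is the (negative)
    compliance sensitivity from the FE analysis, dv the volume sensitivity;
    a bisection on the Lagrange multiplier enforces the volume constraint."""
    l1, l2 = 1e-9, 1e9
    while (l2 - l1) / (l1 + l2) > 1e-4:
        lmid = 0.5 * (l1 + l2)
        rho_new = np.clip(rho * np.sqrt(-dc / (dv * lmid)),
                          np.maximum(rho - move, 0.0),
                          np.minimum(rho + move, 1.0))
        if rho_new.mean() > vol_frac:
            l1 = lmid
        else:
            l2 = lmid
    return rho_new
```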

Keywords: topology optimization, isogeometric, NURBS, design

Procedia PDF Downloads 468
1610 Applying Sequential Pattern Mining to Generate Block for Scheduling Problems

Authors: Meng-Hui Chen, Chen-Yu Kao, Chia-Yu Hsu, Pei-Chann Chang

Abstract:

The main idea in this paper is to use sequential pattern mining to find information that helps locate high-performance solutions. This information is combined into units called blocks. Using the blocks to generate artificial chromosomes (ACs) improves the structure of solutions. Estimation of Distribution Algorithms (EDAs) are adapted to solve the combinatorial problems. Although many such approaches can be applied to this problem, only some of them actually enhance efficiency. Generating ACs from the mined patterns and combining them with EDAs increases solution diversity. Experimental results show that the proposed algorithm performs better on permutation flow-shop problems.
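
The sketch below illustrates the general idea under stated assumptions: frequent adjacent job pairs mined from elite permutations serve as "blocks" that seed artificial chromosomes. The pair-counting stand-in for full sequential pattern mining, and all names and thresholds, are illustrative rather than the authors' method.

```python
from collections import Counter
import random

def mine_blocks(elite_perms, min_support=0.3):
    """Count adjacent job pairs across elite permutations and keep the
    frequent ones as 'blocks' (a minimal stand-in for sequential
    pattern mining)."""
    pairs = Counter()
    for perm in elite_perms:
        pairs.update(zip(perm, perm[1:]))
    threshold = min_support * len(elite_perms)
    return [pair for pair, count in pairs.items() if count >= threshold]

def artificial_chromosome(n_jobs, blocks):
    """Seed a permutation with mined blocks, then fill the rest randomly."""
    perm, used = [], set()
    for a, b in blocks:
        if a not in used and b not in used:
            perm += [a, b]
            used |= {a, b}
    rest = [j for j in range(n_jobs) if j not in used]
    random.shuffle(rest)
    return perm + rest
```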

Keywords: combinatorial problems, sequential pattern mining, estimation of distribution algorithms, artificial chromosomes

Procedia PDF Downloads 579
1609 Optimal Injected Current Control for Shunt Active Power Filter Using Artificial Intelligence

Authors: Brahim Berbaoui

Abstract:

In this paper, a new particle swarm optimization (PSO) based method is proposed for the implementation of optimal harmonic power flow in power systems. In this approach, the proportional-integral (PI) controller for the reference compensating currents of the active power filter is tuned so as to minimize the total harmonic distortion (THD). The simulation results show that the new control method using the PSO approach is not only easy to implement but also very effective in reducing the unwanted harmonics and compensating reactive power. The studies carried out were accomplished using the MATLAB Simulink Power System Toolbox.
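
A compact global-best PSO loop of the kind such gain tuning typically relies on is sketched below. The cost function, particle counts, and gain bounds are placeholders (the actual THD evaluation would come from the Simulink filter model), not details taken from the paper.

```python
import numpy as np

def pso(cost, bounds, n_particles=20, iters=50, w=0.7, c1=1.5, c2=1.5):
    """Minimal global-best PSO; `cost` maps a candidate (Kp, Ki) pair
    to the THD measured from a filter simulation (assumed supplied)."""
    dim = len(bounds)
    lo = np.array([b[0] for b in bounds]); hi = np.array([b[1] for b in bounds])
    x = lo + np.random.rand(n_particles, dim) * (hi - lo)
    v = np.zeros_like(x)
    pbest = x.copy(); pbest_f = np.array([cost(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = np.random.rand(2, n_particles, dim)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([cost(p) for p in x])
        better = f < pbest_f
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, pbest_f.min()

# Usage sketch: thd_of runs the filter simulation and returns the THD.
# best_gains, best_thd = pso(thd_of, bounds=[(0.0, 10.0), (0.0, 500.0)])
```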

Keywords: shunt active power filter, power quality, current control, proportional integral controller, particle swarm optimization

Procedia PDF Downloads 594
1608 Analysis of Nonlinear and Non-Stationary Signal to Extract the Features Using Hilbert Huang Transform

Authors: A. N. Paithane, D. S. Bormane, S. D. Shirbahadurkar

Abstract:

Emotion recognition is an important research topic in the field of human-computer interaction. A novel technique for feature extraction (FE) is presented here, and a new method based on the Hilbert-Huang transform (HHT) is used for human emotion recognition. This method is suitable for analyzing nonlinear and non-stationary signals. Each signal is decomposed into intrinsic mode functions (IMFs) using empirical mode decomposition (EMD). These functions are used to extract features through fission and fusion processes. The decomposition technique we adopt adaptively decomposes the signals. In this perspective, we report the potential usefulness of EMD-based techniques. We evaluated the algorithm on the manually annotated Augsburg University Database.
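
As one hedged illustration of the fission/fusion step, the sketch below computes Hilbert-spectrum statistics per IMF ("fission") and concatenates them into a single feature vector ("fusion"). It assumes the IMFs were already produced by an EMD routine, and the particular statistics chosen are an assumption, not the paper's exact feature set.

```python
import numpy as np
from scipy.signal import hilbert

def hht_features(imfs, fs):
    """Per-IMF features from the Hilbert spectrum: mean and standard
    deviation of instantaneous amplitude and frequency, concatenated
    across all IMFs into one feature vector."""
    feats = []
    for imf in imfs:
        analytic = hilbert(imf)                       # analytic signal
        amp = np.abs(analytic)                        # instantaneous amplitude
        phase = np.unwrap(np.angle(analytic))
        inst_freq = np.diff(phase) * fs / (2.0 * np.pi)  # instantaneous frequency
        feats += [amp.mean(), amp.std(), inst_freq.mean(), inst_freq.std()]
    return np.array(feats)
```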

Keywords: intrinsic mode function (IMF), Hilbert-Huang transform (HHT), empirical mode decomposition (EMD), emotion detection, electrocardiogram (ECG)

Procedia PDF Downloads 561
1607 Impairments Correction of Six-Port Based Millimeter-Wave Radar

Authors: Dan Ohev Zion, Alon Cohen

Abstract:

In recent years, the presence of short-range millimeter-wave radar in civil applications has increased significantly. Autonomous driving, security, 3D imaging, and high data rate communication systems are a few examples. The next challenge is integration inside small form-factor devices, such as smartphones (e.g., for gesture recognition). The main challenge is the implementation of a truly low-power, low-complexity, high-resolution radar. The most popular approach is the Frequency Modulated Continuous Wave (FMCW) radar, with an analog multiplication front-end. In this paper, we present an approach for adaptive estimation and correction of the impairments of such a front-end, specifically implemented using the Six-Port Device (SPD) as the multiplier element. The proposed algorithm was simulated and implemented on a 60 GHz radar lab prototype.

Keywords: radar, FMCW Radar, IQ mismatch, six port

Procedia PDF Downloads 135
1606 New Gas Geothermometers for the Prediction of Subsurface Geothermal Temperatures: An Optimized Application of Artificial Neural Networks and Geochemometric Analysis

Authors: Edgar Santoyo, Daniel Perez-Zarate, Agustin Acevedo, Lorena Diaz-Gonzalez, Mirna Guevara

Abstract:

Four new gas geothermometers have been derived from a multivariate geochemometric analysis of a geothermal fluid chemistry database; two of them use the natural logarithm of CO₂ and H₂S concentrations (mmol/mol), respectively, and the other two use the natural logarithm of the H₂S/H₂ and CO₂/H₂ ratios. As a strict compilation criterion, the database was created with gas-phase compositions of fluids and bottomhole temperatures (BHTM) measured in producing wells. The calibration of the geothermometers was based on the geochemical relationship existing between the gas-phase composition of well discharges and the equilibrium temperatures measured at bottomhole conditions. Multivariate statistical analysis together with the use of artificial neural networks (ANN) was successfully applied to correlate the gas-phase compositions and the BHTM. The predicted or simulated bottomhole temperatures (BHTANN), defined as output neurons or simulation targets, were statistically compared with the measured temperatures (BHTM). The coefficients of the new geothermometers were obtained from an optimized self-adjusting training algorithm applied to approximately 2,080 ANN architectures with 15,000 simulation iterations each. The self-adjusting training algorithm used the well-known Levenberg-Marquardt model, which was used to calculate: (i) the number of neurons of the hidden layer; (ii) the training factor and the training patterns of the ANN; (iii) the linear correlation coefficient, R; (iv) the synaptic weighting coefficients; and (v) the statistical parameter, Root Mean Squared Error (RMSE), to evaluate the prediction performance between the BHTM and the simulated BHTANN. The prediction performance of the new gas geothermometers, together with predictions inferred from sixteen well-known gas geothermometers developed previously, was statistically evaluated using an external database to avoid bias. The statistical evaluation was performed through the analysis of the lowest RMSE values computed among the predictions of all the gas geothermometers. The new gas geothermometers developed in this work have been successfully used for predicting subsurface temperatures in high-temperature geothermal systems of Mexico (e.g., Los Azufres, Mich., Los Humeros, Pue., and Cerro Prieto, B.C.) as well as in a blind geothermal system (known as Acoculco, Puebla). The latest results of the gas geothermometers (inferred from gas-phase compositions of soil-gas bubble emissions) compare well with the temperatures measured in two wells of the blind geothermal system of Acoculco, Puebla (México). Details of this new development are outlined in the present research work. Acknowledgements: The authors acknowledge the funding received from the CeMIE-Geo P09 project (SENER-CONACyT).
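
A hedged sketch of the regression step follows. Scikit-learn's L-BFGS solver stands in for the authors' Levenberg-Marquardt training, the data are synthetic stand-ins for the gas-phase database, and the feature layout is only assumed from the abstract.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

# X: ln gas-phase features, e.g. [ln CO2, ln H2S, ln(H2S/H2), ln(CO2/H2)]
# y: measured bottomhole temperatures BHT_M (deg C). Synthetic stand-in here.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
y = 250 + 20 * X[:, 0] - 15 * X[:, 2] + rng.normal(0, 5, 500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
ann = MLPRegressor(hidden_layer_sizes=(8,), solver='lbfgs',
                   max_iter=5000, random_state=0).fit(X_tr, y_tr)

# RMSE between simulated BHT_ANN and measured BHT_M on held-out wells
rmse = np.sqrt(np.mean((ann.predict(X_te) - y_te) ** 2))
print(f"RMSE: {rmse:.1f} °C")
```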

Keywords: artificial intelligence, gas geochemistry, geochemometrics, geothermal energy

Procedia PDF Downloads 326
1605 Automatic Algorithm for Processing and Analysis of Images from the Comet Assay

Authors: Yeimy L. Quintana, Juan G. Zuluaga, Sandra S. Arango

Abstract:

The comet assay is an electrophoresis-based method used to measure DNA damage in cells; it has produced important results in identifying substances that pose a potential risk to the human population, including innumerable physical, chemical, and biological agents. The technique produces comet-like images in which the tail corresponds to damaged DNA fragments. One of the main problems is that the image has uneven luminosity caused by the fluorescence microscope, so it requires processing to condition it, to determine how many usable comets there are per sample, and finally to perform the measurements and determine the percentage of DNA damage. In this paper, we propose the design and implementation of software using the Image Processing Toolbox of MATLAB that automates the image processing. The software selects the optimal comets and measures the parameters needed to quantify the damage.
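
Although the paper uses MATLAB's Image Processing Toolbox, two of the conditioning steps can be sketched language-agnostically; below is a minimal Python version of background-illumination correction and blob extraction, with all thresholds and sizes as illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, label

def correct_illumination(img, sigma=50):
    """Remove the slowly varying background typical of fluorescence images
    by subtracting a heavily blurred copy of the image."""
    background = gaussian_filter(img.astype(float), sigma)
    return img - background

def find_comets(img, thresh_quantile=0.95, min_area=200):
    """Threshold the corrected image and keep connected blobs large enough
    to be comets; % DNA damage would then be tail/total intensity per blob."""
    corrected = correct_illumination(img)
    mask = corrected > np.quantile(corrected, thresh_quantile)
    labels, n = label(mask)
    return [np.where(labels == i) for i in range(1, n + 1)
            if (labels == i).sum() >= min_area]
```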

Keywords: artificial vision, comet assay, DNA damage, image processing

Procedia PDF Downloads 280
1604 Automatic Moment-Based Texture Segmentation

Authors: Tudor Barbu

Abstract:

An automatic moment-based texture segmentation approach is proposed in this paper. First, we review related work in this computer vision domain. Our texture feature extraction, the first part of the texture recognition process, produces a set of moment-based feature vectors. For each image pixel, a texture feature vector is computed as a sequence of area moments. Second, an automatic pixel classification approach is proposed. The feature vectors are clustered using an unsupervised classification algorithm, with the optimal number of clusters determined by a measure based on validation indexes. From the resulting pixel classes, the desired texture regions of the image are easily determined.
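
A sketch of both stages under stated assumptions: raw area moments of a pixel-centered window as the features, K-means as the unsupervised classifier, and the silhouette score standing in for the paper's (unspecified) validation indexes.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score

def area_moments(window, orders=((0, 0), (1, 0), (0, 1), (1, 1), (2, 0), (0, 2))):
    """Geometric (area) moments m_pq of a local window around a pixel."""
    h, w = window.shape
    y, x = np.mgrid[0:h, 0:w]
    return np.array([(window * x**p * y**q).sum() for p, q in orders])

def segment(features, k_max=6):
    """Cluster per-pixel moment vectors; choose the cluster count k
    that maximizes a validation index (silhouette score here)."""
    best_k, best_s, best_labels = None, -1.0, None
    for k in range(2, k_max + 1):
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(features)
        s = silhouette_score(features, labels)
        if s > best_s:
            best_k, best_s, best_labels = k, s, labels
    return best_k, best_labels
```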

Keywords: image segmentation, moment-based, texture analysis, automatic classification, validation indexes

Procedia PDF Downloads 397
1603 A Mixed Method Approach for Modeling Entry Capacity at Rotary Intersections

Authors: Antonio Pratelli, Lorenzo Brocchini, Reginald Roy Souleyrette

Abstract:

A rotary is a traffic circle intersection where vehicles entering from branches give priority to circulating flow. Vehicles entering the intersection from converging roads move around the central island and weave out of the circle into their desired exiting branch. This creates merging and diverging conflicts between any entry and its successive exit, i.e., a section. Therefore, rotary capacity models are usually based on the weaving of the different movements in any section of the circle, and the maximum rate of flow is then related to each weaving section of the rotary. Nevertheless, the single-section capacity value does not lead to the typical performance characteristics of the intersection, such as the entry average delay, which is directly linked to its level of service. From another point of view, modern roundabout capacity models are based on limiting the flow entering from a single entrance according to the amount of flow circulating in front of the entrance itself; such models generally also lead to a performance evaluation. This paper aims to incorporate a modern roundabout capacity model into an old rotary capacity method, to obtain from the latter the single-entry capacity and ultimately the related performance indicators. Put simply, the main objective is to calculate the average delay of each roundabout entrance in order to apply the most common Highway Capacity Manual (HCM) criteria. The paper is organized as follows: first, the rotary and roundabout capacity models are sketched, and a brief introduction to the model combination technique is given with some practical instances. The next section summarizes the old TRRL rotary capacity model and the recent HCM-7th modern roundabout capacity model. Then, the two models are combined through an iteration-based algorithm, purpose-built and linked to the concept of roundabout total capacity, i.e., the value reached under a traffic flow pattern leading to the simultaneous congestion of all roundabout entrances. The solution is the average delay for each entrance of the rotary, from which its level of service is estimated. In view of further experimental applications, at this research stage a collection of existing rotary intersections operating under the priority-to-circle rule has already been started, both in the US and in Italy. The rotaries were selected by direct inspection of aerial photos through a map viewer, namely Google Earth. Each instance has been recorded by location, urban or rural setting, and its main geometric patterns. Finally, concluding remarks are drawn, and further research developments are discussed.
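
The sketch below shows the shape of such an iteration, with loud caveats: the exponential capacity coefficients are illustrative rather than the calibrated HCM-7th values, the circulating-flow pattern is a crude assumption, and only the standard HCM control-delay formula and level-of-service thresholds are taken as given.

```python
import numpy as np

def entry_capacity(v_c, A=1380.0, B=1.02e-3):
    """Exponential entry-capacity model in HCM form, c = A*exp(-B*v_c),
    as a function of the circulating flow v_c (veh/h). Coefficients here
    are illustrative, not the calibrated HCM-7th values."""
    return A * np.exp(-B * v_c)

def control_delay(v, c, T=0.25):
    """HCM control delay (s/veh) for entry demand v, capacity c, period T (h)."""
    x = v / c
    return (3600.0 / c + 900.0 * T * ((x - 1.0)
            + np.sqrt((x - 1.0) ** 2 + (3600.0 / c) * x / (450.0 * T)))
            + 5.0 * min(x, 1.0))

def los(delay):
    """Map average control delay to an HCM level-of-service letter."""
    for grade, limit in zip("ABCDE", (10, 15, 25, 35, 50)):
        if delay <= limit:
            return grade
    return "F"

# Iterate served entry flows and circulating flows to a fixed point.
demand = np.array([600.0, 450.0, 500.0, 400.0])  # veh/h per entry
turn = 0.5          # assumed share of entered traffic passing the next entry
served = demand.copy()
for _ in range(30):
    v_circ = turn * np.roll(served, 1)           # crude circulating-flow pattern
    cap = entry_capacity(v_circ)
    served = np.minimum(demand, cap)
delays = [control_delay(v, c) for v, c in zip(demand, cap)]
print([f"{d:.1f}s ({los(d)})" for d in delays])
```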

Keywords: mixed methods, old rotary and modern roundabout capacity models, total capacity algorithm, level of service estimation

Procedia PDF Downloads 61
1602 MSG Image Encryption Based on AES and RSA Algorithms "MSG Image Security"

Authors: Boukhatem Mohammed Belkaid, Lahdir Mourad

Abstract:

In this paper, we propose a new encryption system for securing meteorological images from Meteosat Second Generation (MSG), which generates 12 images every 15 minutes. The hybrid encryption scheme is based on the AES and RSA algorithms to provide three security services: authentication, integrity, and confidentiality. Confidentiality is ensured by AES, and authenticity is ensured by the RSA algorithm. Integrity is assured by a basic function of the correlation between adjacent pixels. Our system generates a unique password every 15 minutes that is used to encrypt each frame of the MSG meteorological data, strengthening and ensuring its security. Several metrics have been used for the various tests of our analysis. For the integrity test, we observed the effectiveness of our system and how the cryptographic fingerprint changes at reception if a change affects the image in the transmission channel.
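
A minimal sketch of the hybrid pattern using Python's `cryptography` package: a fresh AES session key per frame (the paper renews its password every 15 minutes), wrapped with RSA for the receiver. The abstract does not specify modes or key sizes, so AES-256-CBC with PKCS7 padding and RSA-OAEP are assumptions made for illustration.

```python
import os
from cryptography.hazmat.primitives import hashes, padding as sym_padding
from cryptography.hazmat.primitives.asymmetric import rsa, padding as rsa_padding
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def encrypt_frame(frame_bytes, rsa_public_key):
    """Hybrid scheme: AES encrypts the frame, RSA wraps the AES key."""
    session_key, iv = os.urandom(32), os.urandom(16)
    padder = sym_padding.PKCS7(128).padder()
    padded = padder.update(frame_bytes) + padder.finalize()
    enc = Cipher(algorithms.AES(session_key), modes.CBC(iv)).encryptor()
    ciphertext = enc.update(padded) + enc.finalize()
    wrapped_key = rsa_public_key.encrypt(
        session_key,
        rsa_padding.OAEP(mgf=rsa_padding.MGF1(algorithm=hashes.SHA256()),
                         algorithm=hashes.SHA256(), label=None))
    return wrapped_key, iv, ciphertext

# Receiver side would unwrap the AES key with the RSA private key, then decrypt.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
wrapped, iv, ct = encrypt_frame(b"\x00" * 1024, private_key.public_key())
```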

Keywords: AES, RSA, integrity, confidentiality, authentication, satellite MSG, encryption, decryption, key, correlation

Procedia PDF Downloads 362
1601 Complementing Assessment Processes with Standardized Tests: A Work in Progress

Authors: Amparo Camacho

Abstract:

ABET accredited programs must assess the development of student learning outcomes (SOs) in engineering programs. Different institutions implement different strategies for this assessment, and they are usually designed “in house.” This paper presents a proposal for including standardized tests to complement the ABET assessment model in an engineering college made up of six distinct engineering programs. The engineering college formulated a model of quality assurance in education to be implemented throughout the six engineering programs to regularly assess and evaluate the achievement of SOs in each program offered. The model uses diverse techniques and sources of data to assess student performance and to implement actions of improvement based on the results of this assessment. The model is called “Assessment Process Model” and it includes SOs A through K, as defined by ABET. SOs can be divided into two categories: “hard skills” and “professional skills” (soft skills). The first includes abilities, such as: applying knowledge of mathematics, science, and engineering and designing and conducting experiments, as well as analyzing and interpreting data. The second category, “professional skills”, includes communicating effectively, and understanding professional and ethical responsibility. Within the Assessment Process Model, various tools were used to assess SOs, related to both “hard” as well as “soft” skills. The assessment tools designed included: rubrics, surveys, questionnaires, and portfolios. In addition to these instruments, the Engineering College decided to use tools that systematically gather consistent quantitative data. For this reason, an in-house exam was designed and implemented, based on the curriculum of each program. Even though this exam was administered during various academic periods, it is not currently considered standardized. In 2017, the Engineering College included three standardized tests: one to assess mathematical and scientific reasoning and two more to assess reading and writing abilities. With these exams, the college hopes to obtain complementary information that can help better measure the development of both hard and soft skills of students in the different engineering programs. In the first semester of 2017, the three exams were given to three sample groups of students from the six different engineering programs. Students in the sample groups were drawn from the first, fifth, and tenth semester cohorts. At the time of submission of this paper, the engineering college has descriptive statistical data and is working with various statisticians to have a more in-depth and detailed analysis of the sample group of students’ achievement on the three exams. The overall objective of including standardized exams in the assessment model is to identify more precisely the least developed SOs in order to define and implement the educational strategies necessary for students to achieve them in each engineering program.

Keywords: assessment, hard skills, soft skills, standardized tests

Procedia PDF Downloads 266
1600 Economic Decision Making under Cognitive Load: The Role of Numeracy and Financial Literacy

Authors: Vânia Costa, Nuno De Sá Teixeira, Ana C. Santos, Eduardo Santos

Abstract:

Financial literacy and numeracy have been regarded as paramount for rational household decision making in the increasing complexity of financial markets. However, financial decisions are often made under sub-optimal circumstances, including cognitive overload. The present study aims to clarify how financial literacy and numeracy, taken as relevant expert knowledge for financial decision-making, modulate possible effects of cognitive load. Participants were required to choose between a sure loss and a gamble pertaining to a financial investment, either with or without a competing memory task. Two experiments were conducted, varying only the content of the competing task. In the first, the financial choice task was made while maintaining in working memory a list of five random letters. In the second, cognitive load was based upon the retention of six random digits. In both experiments, one of the items in the list had to be recalled given its serial position. Outcomes of the first experiment revealed no significant main effect or interactions involving the cognitive load manipulation and numeracy and financial literacy skills, strongly suggesting that retaining a list of random letters did not interfere with the cognitive abilities required for financial decision making. Conversely, in the second experiment, a significant interaction between the competing memory task and level of financial literacy (but not numeracy) was found for the frequency of choice of the gambling option. Overall, and in the control condition, both participants with high financial literacy and those with high numeracy were more prone to choose the gambling option. However, when under cognitive load, participants with high financial literacy were as likely as their less financially literate counterparts to choose the gambling option. This outcome is interpreted as evidence that financial literacy prevents intuitive risk-aversion reasoning only under highly favourable conditions, as is the case when no other task is competing for cognitive resources. In contrast, participants with higher levels of numeracy were consistently more prone to choose the gambling option in both experimental conditions. These results are discussed in the light of the opposition between classical dual-process theories and fuzzy-trace theories for intuitive decision making, suggesting that while some instances of expertise (such as numeracy) are prone to support easily accessible gist representations, other expert skills (such as financial literacy) depend upon deliberative processes. It is furthermore suggested that this dissociation between types of expert knowledge might depend on the degree to which they are generalizable across disparate settings. Finally, applied implications of the present study are discussed with a focus on how it informs financial regulators and the importance and limits of promoting financial literacy and general numeracy.

Keywords: decision making, cognitive load, financial literacy, numeracy

Procedia PDF Downloads 154
1599 Multiple Relaxation Times in the Gibbs Ensemble Monte Carlo Simulation of Phase Separation

Authors: Bina Kumari, Subir K. Sarkar, Pradipta Bandyopadhyay

Abstract:

The autocorrelation function of the density fluctuation is studied in each of the two phases in a Gibbs Ensemble Monte Carlo (GEMC) simulation of the problem of phase separation for a square well potential with various values of its range. We find that the normalized autocorrelation function is described very well as a linear combination of an exponential function with a time scale τ₂ and a stretched exponential function with a time scale τ₁ and an exponent α. Dependence of (α, τ₁, τ₂) on the parameters of the GEMC algorithm and the range of the square well potential is investigated and interpreted. We also analyse the issue of how to choose the parameters of the GEMC simulation optimally.
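
A hedged sketch of that fitting step with SciPy: the model is the stated linear combination of a stretched exponential and a plain exponential, while the lag grid, noise, and starting guesses are synthetic placeholders for the GEMC autocorrelation data.

```python
import numpy as np
from scipy.optimize import curve_fit

def model(t, a, tau1, tau2, alpha):
    """Normalized autocorrelation: a stretched exponential (tau1, alpha)
    plus a plain exponential (tau2), weights summing to one."""
    return a * np.exp(-(t / tau1) ** alpha) + (1.0 - a) * np.exp(-t / tau2)

# t, acf: lag times and the measured normalized autocorrelation (placeholders).
t = np.linspace(0.0, 100.0, 200)
acf = model(t, 0.6, 12.0, 3.0, 0.7) + 0.01 * np.random.default_rng(1).normal(size=t.size)

popt, _ = curve_fit(model, t, acf, p0=[0.5, 10.0, 1.0, 0.8],
                    bounds=([0, 1e-3, 1e-3, 0.1], [1, 1e3, 1e3, 2.0]))
a, tau1, tau2, alpha = popt
```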

Keywords: autocorrelation function, density fluctuation, GEMC, simulation

Procedia PDF Downloads 168
1598 Comparative Study between Classical P-Q Method and Modern Fuzzy Controller Method to Improve the Power Quality of an Electrical Network

Authors: A. Morsli, A. Tlemçani, N. Ould Cherchali, M. S. Boucherit

Abstract:

This article presents two methods for compensating the harmonics generated by a nonlinear load. The first is the classic p-q method. The second is a modern artificial-intelligence controller based on fuzzy logic. Both methods are applied to a shunt active power filter (sAPF) based on a three-phase five-level NPC voltage converter. The reference harmonic currents are calculated with the p-q algorithm, and the pulses are generated with intersective PWM. For flexibility and dynamics, we use fuzzy logic. The results clearly show that the total harmonic distortion obtained with the fuzzy logic controller is lower than with the p-q method.
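
For reference, the p-q calculation itself can be sketched compactly. Sign conventions for q vary across the literature, and the moving-average low-pass filter here is a placeholder, so treat this as an illustrative outline rather than the paper's implementation.

```python
import numpy as np

def clarke(x_abc):
    """Power-invariant Clarke transform, abc -> alpha-beta (rows: alpha, beta)."""
    T = np.sqrt(2.0 / 3.0) * np.array([[1.0, -0.5, -0.5],
                                       [0.0, np.sqrt(3) / 2, -np.sqrt(3) / 2]])
    return T @ x_abc

def pq_reference_currents(v_abc, i_abc, lowpass):
    """p-q theory: the oscillating part of the instantaneous real power p
    (plus all of q) defines the harmonic currents the sAPF must inject."""
    v_ab = clarke(v_abc)                    # shape (2, N)
    i_ab = clarke(i_abc)
    p = v_ab[0] * i_ab[0] + v_ab[1] * i_ab[1]
    q = v_ab[0] * i_ab[1] - v_ab[1] * i_ab[0]   # one common q convention
    p_osc = p - lowpass(p)                  # remove the dc (fundamental) part
    den = v_ab[0] ** 2 + v_ab[1] ** 2
    i_ref_alpha = (v_ab[0] * p_osc - v_ab[1] * q) / den
    i_ref_beta = (v_ab[1] * p_osc + v_ab[0] * q) / den
    return i_ref_alpha, i_ref_beta

# e.g. lowpass = lambda x: np.convolve(x, np.ones(200) / 200, mode='same')
```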

Keywords: fuzzy logic controller, P-Q method, pulse width modulation (PWM), shunt active power filter (sAPF), total harmonic distortion (THD)

Procedia PDF Downloads 528
1597 Using Geographic Information Systems in the Desertification Risk’s Cartography: Case South of the Aurès Region, Algeria

Authors: Benmessaoud Hassen

Abstract:

The desertification sensitivity map of the southern Aurès region was produced by crossing four thematic layers that influence the desertification process. The approach is inspired by MEDALUS (Mediterranean Desertification and Land Use), which uses qualitative indices to delineate environmental zones sensitive to desertification. Cartographic information on vegetation, climate, soil, and socioeconomic state was derived from cartographic data converted to numerical form, then entered, structured, and managed by an algorithm dedicated to a geographic information system. Each layer is divided into three or four classes, and the geometric mean of the four layers yields the sensitivity classes (ISD) of the different mapped environments.
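
Under the MEDALUS convention, the sensitivity index is the geometric mean of the layer quality scores. A minimal raster sketch follows, with the class thresholds given only as illustrative ESA-style values, not the ones used in this study.

```python
import numpy as np

def sensitivity_index(vegetation, climate, soil, socioeconomic):
    """MEDALUS-style sensitivity: geometric mean of the four quality layers,
    each a raster of per-pixel class scores (e.g. 1.0 = best, 2.0 = worst)."""
    stack = np.stack([vegetation, climate, soil, socioeconomic])
    return stack.prod(axis=0) ** 0.25

def classify(isd, bins=(1.17, 1.22, 1.26, 1.32, 1.37, 1.41)):
    """Map the index raster to ordinal sensitivity classes
    (ESA-style thresholds, shown here for illustration only)."""
    return np.digitize(isd, bins)
```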

Keywords: information systems, thematic layers, the sensitivity to the desertification map, concept MEDALUS, South of Aurès

Procedia PDF Downloads 406
1596 Research on Fuzzy Test Framework Based on Concolic Execution

Authors: Xiong Xie, Yuhang Chen

Abstract:

Vulnerability discovery is a significant current research field. In this paper, a fuzz testing framework based on concolic execution is proposed. Fuzz testing and symbolic execution are widely used in vulnerability discovery, but each has its own advantages and disadvantages. During the path generation stage, a generation-based path traversal algorithm is used to obtain more accurate paths. During the constraint solving stage, dynamic concolic execution is used to avoid path explosion. If there is an external call, concolic execution based on function summaries is used. Experiments show that the framework can effectively improve the ability to trigger vulnerabilities and increase code coverage.

Keywords: concolic execution, constraint solving, fuzzy test, vulnerability discovery

Procedia PDF Downloads 206
1595 An Efficient Approach for Speed up Non-Negative Matrix Factorization for High Dimensional Data

Authors: Bharat Singh, Om Prakash Vyas

Abstract:

Nowadays, applications dealing with high-dimensional data are tremendously common in popular areas, and various approaches for handling such data have been developed by researchers over the last few decades. One of the problems with NMF approaches is that random initialization cannot provide global optimization within a limited number of iterations, only local optimization. For this reason, we propose a new approach that chooses the initial values of the decomposition to tackle the issue of computational expense. We devised an algorithm for initializing the values of the decomposed matrices based on PSO (Particle Swarm Optimization). Through the experimental results, we show that the proposed method converges very fast in comparison to other low-rank approximation techniques such as simple multiplicative NMF and ACLS.
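
A hedged sketch of the pipeline: standard Lee-Seung multiplicative updates started from a swarm-selected initialization. The best-of-random-candidates initializer below is a simplified stand-in for the paper's full PSO search.

```python
import numpy as np

def nmf_multiplicative(V, W, H, iters=200, eps=1e-9):
    """Standard multiplicative updates (Lee-Seung, Frobenius norm),
    starting from the supplied W, H instead of fresh random values."""
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + eps)
        W *= (V @ H.T) / (W @ H @ H.T + eps)
    return W, H

def rmse(V, W, H):
    return np.sqrt(np.mean((V - W @ H) ** 2))

def pso_init(V, rank, n_particles=10, seed=0):
    """Evaluate a small swarm of random (W, H) candidates and keep the best
    as the starting point (a simplified stand-in for full PSO)."""
    rng = np.random.default_rng(seed)
    best, best_err = None, np.inf
    for _ in range(n_particles):
        W = rng.random((V.shape[0], rank))
        H = rng.random((rank, V.shape[1]))
        err = rmse(V, W, H)
        if err < best_err:
            best, best_err = (W, H), err
    return best
```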

Keywords: ALS, NMF, high dimensional data, RMSE

Procedia PDF Downloads 326
1594 Cognition of Driving Context for Driving Assistance

Authors: Manolo Dulva Hina, Clement Thierry, Assia Soukane, Amar Ramdane-Cherif

Abstract:

In this paper, we present our innovative way of determining the driving context for a driving assistance system. We invoke the fusion of all parameters that describe the context of the environment, the vehicle, and the driver to obtain the driving context. We created a training set that stores driving situation patterns, which the system consults to determine the driving situation. A machine-learning algorithm predicts the driving situation. The driving situation is an input to the fission process that yields the action to be implemented when the driver needs to be informed or assisted in the given driving situation. The action may be directed towards the driver, the vehicle, or both. This is ongoing work whose goal is to offer an alternative driving assistance system for safe, green, and comfortable driving. Here, ontologies are used for knowledge representation.

Keywords: cognitive driving, intelligent transportation system, multimodal system, ontology, machine learning

Procedia PDF Downloads 343
1593 Self-Organization-Based Approach for Embedded Real-Time System Design

Authors: S. S. Bendib, L. W. Mouss, S. Kalla

Abstract:

This paper proposes a self-organization-based approach for real-time systems design. The addressed issue is the mapping of an application onto an architecture of heterogeneous processors while optimizing both makespan and reliability. Since this problem is NP-hard, a heuristic algorithm is used to obtain approximate solutions efficiently. The proposed approach takes into consideration the quality as well as the diversity of solutions. Indeed, an alternating treatment of the two objectives produces solutions of good quality, while a self-organization approach based on the neighborhood structure reorganizes solutions and consequently enhances their diversity. The produced solutions make different compromises between makespan and reliability, giving users the possibility of selecting the solution suited to their needs.

Keywords: embedded real-time systems design, makespan, reliability, self-organization, compromises

Procedia PDF Downloads 116
1592 Monitoring of Forest Cover Dynamics in the High Atlas of Morocco (Zaouit Ahansal) Using Remote Sensing Techniques and GIS

Authors: Abdelaziz Moujane, Abedelali Boulli, Abdellah Ouigmane

Abstract:

The present work focuses on the assessment of forest landscape changes in the region of Zaouit Ahansal using multitemporal satellite images at high spatial resolution. Several remote sensing methods were applied, namely the supervised classification algorithm and NDVI, which were combined in a GIS environment to quantify the extent and change in density of forest stands (holm oak, juniper, thuya, Aleppo pine, crops, and others). The results obtained showed that the forest of Zaouit Ahansal has undergone significant degradation, resulting in a decrease in the area of juniper, cedar, and zeen oak, as well as an increase in the area of bare soil and agricultural land. The remote sensing data provided satisfactory results for identifying and quantifying changes in forest cover. In addition, this study could serve as a reference for the development of management strategies and restoration programs.
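
NDVI, the index named above, is a one-line computation per pixel. The sketch below also shows a simple two-date differencing mask of the kind used to flag degradation, with the 0.2 threshold as an arbitrary illustrative value.

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index from NIR and red band rasters."""
    return (nir - red) / (nir + red + eps)

def change_mask(ndvi_t1, ndvi_t2, threshold=0.2):
    """Flag pixels whose NDVI dropped by more than `threshold` between the
    two acquisition dates as candidate degradation/deforestation."""
    return (ndvi_t1 - ndvi_t2) > threshold
```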

Keywords: remote sensing, GIS, satellite image, NDVI, deforestation, Zaouit Ahansal

Procedia PDF Downloads 135
1591 On the Question of Ideology: Criticism of the Enlightenment Approach and Theory of Ideology as Objective Force in Gramsci and Althusser

Authors: Edoardo Schinco

Abstract:

Studying the Marxist intellectual tradition, it is possible to verify that there have been numerous cases of philosophical regression, in which the important achievements of detailed studies have been replaced by naïve ideas and earlier misunderstandings; one of the most important examples of this tendency relates to the question of ideology. According to a common Enlightenment approach, ideology is essentially not a reality, i.e., a factor capable of having an effect on reality itself; in other words, ideology is a mere error without specific historical meaning, due only to the ignorance or inability of subjects to understand the truth. From this point of view, the consequent and immediate practice against every form of ideology is rational dialogue, reasoning based on common sense, in order to dispel the obscurity of ignorance through the light of pure reason. The limits of this philosophical orientation are, however, both theoretical and practical: on the one hand, the Enlightenment criticism of ideology is not a historicist thought, since it cannot grasp the inner connection that ties a historical context and its peculiar ideology together; on the other hand, when the Enlightenment approach fails to release people from their illusions (e.g., when the ideology persists despite the explanation of its illusoriness), it usually becomes a racist or elitist thought. Unlike this first conception of ideology, Gramsci attempts to recover Marx's original thought and to valorize its dialectical methodology with respect to the reality of ideology. As Marx suggests, ideology (in the negative sense) is surely an error, a misleading knowledge that aims to defend the current state of things and to conceal social, political, or moral contradictions; but that is precisely why the ideological error is not accidental: every ideology is mediately rooted in a particular material context, from which it takes its reason for being. Gramsci avoids, however, any mechanistic interpretation of Marx, and for this reason he underlines the dialectical relation that exists between the material base and the ideological superstructure; in this way, a specific ideology is not only a passive product of the base but also an active factor that reacts on the base itself and modifies it. Therefore, there is a considerable revaluation of ideology's role in the maintenance of the status quo, and the consequent thematization of both ideology as an objective force, active in history, and ideology as the cultural hegemony of the ruling class over subordinate groups. Among the Marxists, the French philosopher Louis Althusser also contributed to this crucial question; as a follower of Gramsci's thought, he developed the idea of ideology as an objective force through the notions of the Repressive State Apparatus (RSA) and the Ideological State Apparatuses (ISAs). In addition, his philosophy is characterized by the presence of structuralist elements, which must be studied, since they deeply change the theoretical foundation of his Marxist thought.

Keywords: Althusser, enlightenment, Gramsci, ideology

Procedia PDF Downloads 175
1590 Algorithm and Software Based on Multilayer Perceptron Neural Networks for Estimating Channel Use in the Spectral Decision Stage in Cognitive Radio Networks

Authors: Danilo López, Johana Hernández, Edwin Rivas

Abstract:

The use of the Multilayer Perceptron Neural Network (MLPNN) technique is presented to estimate the future state of use of a licensed channel by primary users (PUs). This is useful at the spectral decision stage in cognitive radio networks (CRN) to determine, approximately, at which future time instants secondary users (SUs) may opportunistically use the spectral bandwidth to send data through the primary wireless network. To validate the results, sequences of channel occupancy data were generated by simulation. The results show that the prediction percentage is greater than 60% in some of the tests carried out.
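
A self-contained sketch of the formulation under stated assumptions: past occupancy windows feed a small MLP classifier that predicts the next channel state. The synthetic "sticky" on/off sequence stands in for the simulated PU traces, and the window width and layer sizes are illustrative.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

def windows(occupancy, width=8):
    """Turn a binary occupancy sequence into (past window -> next state) pairs."""
    X = np.array([occupancy[i:i + width] for i in range(len(occupancy) - width)])
    y = np.array(occupancy[width:])
    return X, y

# Simulated PU activity with temporal structure (stand-in for real traces).
rng = np.random.default_rng(0)
seq = [1]
for _ in range(2000):
    stay = 0.85 if seq[-1] else 0.7          # sticky on/off behaviour
    seq.append(seq[-1] if rng.random() < stay else 1 - seq[-1])

X, y = windows(seq)
mlp = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
mlp.fit(X[:1500], y[:1500])
print("held-out accuracy:", mlp.score(X[1500:], y[1500:]))
```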

Keywords: cognitive radio, neural network, prediction, primary user

Procedia PDF Downloads 347
1589 Deconstructing and Reconstructing the Definition of Inhuman Treatment in International Law

Authors: Sonia Boulos

Abstract:

The prohibition on ‘inhuman treatment’ constitutes one of the central tenets of modern international human rights law. It is incorporated in principal international human rights instruments including Article 5 of the Universal Declaration of Human Rights, and Article 7 of the International Covenant on Civil and Political Rights. However, in the absence of any legislative definition of the term ‘inhuman’, its interpretation becomes challenging. The aim of this article is to critically analyze the interpretation of the term ‘inhuman’ in international human rights law and to suggest a new approach to construct its meaning. The article is composed of two central parts. The first part is a critical appraisal of the interpretation of the term ‘inhuman’ by supra-national human rights law institutions. It highlights the failure of supra-national institutions to provide an independent definition for the term ‘inhuman’. In fact, those institutions consistently fail to distinguish the term ‘inhuman’ from its other kin terms, i.e. ‘cruel’ and ‘degrading.’ Very often, they refer to these three prohibitions as ‘CIDT’, as if they were one collective. They were primarily preoccupied with distinguishing ‘CIDT’ from ‘torture.’ By blurring the conceptual differences between these three terms, supra-national institutions supplemented them with a long list of specific and purely descriptive subsidiary rules. In most cases, those subsidiary rules were announced in the absence of sufficient legal reasoning explaining how they were derived from abstract and evaluative standards embodied in the prohibitions collectively referred to as ‘CIDT.’ By opting for this approach, supra-national institutions have created the risk of the development of an incoherent body of jurisprudence on those terms at the international level. They also have failed to provide guidance for domestic courts on how to enforce these prohibitions. While blurring the differences between the terms ‘cruel,’ ‘inhuman,’ and ‘degrading’ has consequences for the three, the term ‘inhuman’ remains the most impoverished one. It is easy to link the term ‘cruel’ to the clause on ‘cruel and unusual punishment’ originating from the English Bill of Rights of 1689. It is also easy to see that the term ‘degrading’ reflects a dignitarian ideal. However, when we turn to the term ‘inhuman’, we are left without any interpretative clue. The second part of the article suggests that the ordinary meaning of the word ‘inhuman’ should be our first clue. However, regaining the conceptual independence of the term ‘inhuman’ requires more than a mere reflection on the word-meaning of the term. Thus, the second part introduces philosophical concepts related to the understanding of what it means to be human. It focuses on ‘the capabilities approach’ and the notion of ‘human functioning’, introduced by Amartya Sen and further explored by Martha Nussbaum. Nussbaum’s work on the basic human capabilities is particularly helpful or even vital for understanding the moral and legal substance of the prohibition on ‘inhuman’ treatment.

Keywords: inhuman treatment, capabilities approach, human functioning, supra-national institutions

Procedia PDF Downloads 257
1588 Metabolic Predictive Model for PMV Control Based on Deep Learning

Authors: Eunji Choi, Borang Park, Youngjae Choi, Jinwoo Moon

Abstract:

In this study, a predictive model for estimating the metabolic rate (MET) of the human body was developed for the optimal control of the indoor thermal environment. Human body images for indoor activities and human body joint coordinate values were collected as the data sets used in the predictive model. A deep learning algorithm was used in the initial model, and its numbers of hidden layers and hidden neurons were optimized. Lastly, the model's prediction performance was analyzed after the model was trained on the collected data. In conclusion, the possibility of MET prediction was confirmed, and future work was proposed: developing more varied data and refining the predictive model.

Keywords: deep learning, indoor quality, metabolism, predictive model

Procedia PDF Downloads 238
1587 Noise Detection Algorithm for Skin Disease Image Identification

Authors: Minakshi Mainaji Sonawane, Bharti W. Gawali, Sudhir Mendhekar, Ramesh R. Manza

Abstract:

People's lives and health are severely impacted by skin diseases. This study proposes an effective method for identifying different forms of skin disease. Image denoising is a technique for improving image quality after it has been corrupted by noise. The proposed technique is based on the wavelet transform, which is well suited to analyzing the image because of its ability to split the image into sub-bands; these sub-bands are used to estimate the noise level in the noisy image. According to the experimental results, the proposed method yields the best values of MSE, PSNR, and entropy for the denoised images. Moreover, by using different types of wavelet filters, the proposed approach obtains the best results of 23.13, 20.08, and 50.7 for MSE, PSNR, and entropy, respectively, in the image denoising process.
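
A minimal PyWavelets sketch of the sub-band pipeline follows: estimate the noise level from the finest diagonal band, soft-threshold the detail coefficients, reconstruct, and score with PSNR. The db4 wavelet, two decomposition levels, and the universal threshold are illustrative choices, not necessarily the paper's.

```python
import numpy as np
import pywt

def wavelet_denoise(img, wavelet='db4', level=2):
    """2-D DWT, soft-threshold the detail sub-bands with the universal
    threshold (noise sigma estimated from the finest diagonal band),
    then reconstruct."""
    coeffs = pywt.wavedec2(img.astype(float), wavelet, level=level)
    approx, details = coeffs[0], coeffs[1:]
    sigma = np.median(np.abs(details[-1][-1])) / 0.6745   # robust noise estimate
    thr = sigma * np.sqrt(2.0 * np.log(img.size))         # universal threshold
    details = [tuple(pywt.threshold(d, thr, mode='soft') for d in lvl)
               for lvl in details]
    return pywt.waverec2([approx] + details, wavelet)

def psnr(clean, denoised, peak=255.0):
    mse = np.mean((clean.astype(float) - denoised) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)
```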

Keywords: MSE, PSNR, entropy, Gaussian filter, DWT

Procedia PDF Downloads 197
1586 Updating Stochastic Hosting Capacity Algorithm for Voltage Optimization Programs and Interconnect Standards

Authors: Nicholas Burica, Nina Selak

Abstract:

The ADHCAT (Automated Distribution Hosting Capacity Assessment Tool) was designed to run hosting capacity analysis on the ComEd system via stochastic DER (Distributed Energy Resource) placement across multiple power flow simulations checked against a set of violation criteria. The violation criteria in the initial version of the tool captured only a limited set of the issues that individual departments design against for DER interconnections. Enhancements were made to the tool to align it further with individual department violation and operating criteria, and new modules were added for future load profile analysis. A reporting engine was created for future analytical use based on the simulations and observations in the tool.

Keywords: distributed energy resources, hosting capacity, interconnect, voltage optimization

Procedia PDF Downloads 164
1585 Femtocell Stationed Flawless Handover in High Agility Trains

Authors: S. Dhivya, M. Abirami, M. Farjana Parveen, M. Keerthiga

Abstract:

The development of high-speed railways makes people's lives more and more convenient; meanwhile, handover is the major problem in high-speed railway communication services. To overcome this drawback, the architecture of Long-Term Evolution (LTE) femtocell networks is used to improve network performance; the deployment of femtocells is key to addressing the bandwidth limitation and coverage issues of conventional mobile network systems. To increase handover performance, this paper proposes a multiple input multiple output (MIMO) assisted handoff (MAHO) algorithm, a technique used in mobile telecommunications to transfer a mobile phone to a new radio channel with stronger signal strength and improved channel quality.

Keywords: flawless handover, high-speed train, home evolved Node B, LTE, mobile femtocell, RSS

Procedia PDF Downloads 454
1584 Fault Location Identification in High Voltage Transmission Lines

Authors: Khaled M. El Naggar

Abstract:

This paper introduces a digital method for fault section identification in transmission lines. The method uses a digitized set of the measured short-circuit current to locate faults in electrical power systems. The digitized current is used to construct an overdetermined system of equations. The problem is then formulated and solved using the proposed digital optimization technique to find the fault distance. The proposed optimization methodology is an application of the simulated annealing optimization technique. The method is tested on a practical case study for evaluation. The accurate results obtained show that the algorithm can be used as a powerful tool in the area of power system protection.
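
A generic simulated-annealing loop of the kind the method applies is sketched below. The `residual` callable, neighborhood scale, and cooling schedule are placeholders, since the paper's actual equation set built from the digitized current samples is not given in the abstract.

```python
import math
import random

def simulated_annealing(residual, lo, hi, t0=1.0, cooling=0.995, iters=5000):
    """Minimize the residual of the overdetermined fault equations over the
    fault distance d in [lo, hi]; residual(d) is the mismatch built from
    the digitized short-circuit current samples (assumed supplied)."""
    d = random.uniform(lo, hi)
    e = residual(d)
    best_d, best_e, t = d, e, t0
    for _ in range(iters):
        cand = min(hi, max(lo, d + random.gauss(0.0, 0.05 * (hi - lo))))
        ec = residual(cand)
        # accept improvements always, uphill moves with Boltzmann probability
        if ec < e or random.random() < math.exp(-(ec - e) / t):
            d, e = cand, ec
            if e < best_e:
                best_d, best_e = d, e
        t *= cooling
    return best_d
```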

Keywords: optimization, estimation, faults, measurement, high voltage, simulated annealing

Procedia PDF Downloads 377
1583 Topology-Based Character Recognition Method for Coin Date Detection

Authors: Xingyu Pan, Laure Tougne

Abstract:

For recognizing coins, the engraved release date is important information for precisely identifying the monetary type. However, reading characters on coins faces many more obstacles than traditional character recognition tasks in other fields, such as reading scanned documents or license plates. To address this challenging issue in a numismatic context, we propose a training-free approach dedicated to the detection and recognition of the release date of the coin. In the first step, the date zone is detected by comparing histogram features; in the second step, a topology-based algorithm is introduced to recognize coin numbers with various font types represented by a binary gradient map. Our method obtained a recognition rate of 92% on synthetic data and of 44% on real noised data.

Keywords: coin, detection, character recognition, topology

Procedia PDF Downloads 238