Search results for: explicit representation of solutions
5150 A Fundamental Functional Equation for Lie Algebras
Authors: Ih-Ching Hsu
Abstract:
Inspired by the so-called Jacobi identity (x y) z + (y z) x + (z x) y = 0, the following class of functional equations, EQ I: F [F (x, y), z] + F [F (y, z), x] + F [F (z, x), y] = 0, is proposed, researched and generalized. The research methodology begins with classical methods for functional equations and then evolves into the discovery of implicit algebraic structures. One of this paper's major findings is that EQ I, under the two additional conditions F (x, x) = 0 and F (x, y) + F (y, x) = 0, proves to be a fundamental functional equation for Lie algebras. The existence of non-trivial solutions for EQ I can be proven by defining F (p, q) = [p q] = pq − qp, where p and q are quaternions and pq is the quaternion product of p and q. EQ I can be generalized to the following class of functional equations, EQ II: F [G (x, y), z] + F [G (y, z), x] + F [G (z, x), y] = 0. Concluding statement: with a major finding proven and non-trivial solutions derived, this research paper illustrates and provides a new functional equation scheme for studies in two major areas: (1) What underlying algebraic structures can be defined and/or derived from EQ I or EQ II? (2) What conditions can be imposed so that conditional general solutions to EQ I and EQ II can be found, investigated and applied?
Keywords: fundamental functional equation, generalized functional equations, Lie algebras, quaternions
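The quaternion commutator solution mentioned in the abstract can be checked numerically. The sketch below (an illustration, not the paper's proof) verifies that F(p, q) = pq − qp on quaternions satisfies EQ I together with F(x, x) = 0 and antisymmetry:

```python
def qmul(a, b):
    # Hamilton product of quaternions represented as (w, x, y, z) tuples
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def qadd(a, b):
    return tuple(u + v for u, v in zip(a, b))

def F(p, q):
    # The commutator [p q] = pq - qp proposed as a non-trivial solution of EQ I
    return tuple(u - v for u, v in zip(qmul(p, q), qmul(q, p)))

# Arbitrary integer quaternions keep the arithmetic exact
x, y, z = (1, 2, -1, 3), (0, 1, 4, -2), (2, -3, 1, 1)

# EQ I: F[F(x, y), z] + F[F(y, z), x] + F[F(z, x), y] = 0
eq1 = qadd(qadd(F(F(x, y), z), F(F(y, z), x)), F(F(z, x), y))
```

Because quaternion multiplication is associative, the commutator satisfies the Jacobi identity exactly, so `eq1` evaluates to the zero quaternion.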
Procedia PDF Downloads 224
5149 Providing Resilience: An Overview of the Actions in an Elderly Suburban Area in Rio de Janeiro
Authors: Alan Silva, Carla Cipolla
Abstract:
The increase of life expectancy in the world is a current challenge for governments, demanding solutions for elderly people. In this context, service design and age-friendly design appear as approaches to create solutions that favor active aging through social inclusion and better quality of life. In essence, age-friendly design aims to include elderly people in the democratic process of creation in order to strengthen their participation and empowerment through intellectual, social, civic, recreational, cultural and spiritual activities. All of these activities aim to provide resilience to this segment by granting access to the reserves needed for adaptation and growth in the face of life's challenges. Following that approach, this research provides an overview of the actions related to the integration and social qualification of elderly people in a suburban area of Rio de Janeiro. Based on Design Thinking as presented by Brown (2009), this research takes a qualitative-exploratory approach, collecting needs and actions through observation of and interviews about the daily life of individuals in the elderly community, searching for information about the personal capabilities and social integration of the studied population. Subsequently, a critical analysis is done on this overview, pointing out the potentialities and limitations of these actions. At the end of the research, a well-being map of solutions classified as physical, mental and social is created, also indicating which current services are relevant and which activities can be transformed into services for that community. In conclusion, the contribution of this research is the construction of a map of solutions that provides resilience to the studied public and favors the concept of active aging in society.
From this map of solutions, it is possible to identify the resources necessary for the solutions to be operationalized, as well as their journeys with users in the elderly segment.
Keywords: resilience, age-friendly design, service design, active aging
Procedia PDF Downloads 97
5148 Integration of Fuzzy Logic in the Representation of Knowledge: Application in the Building Domain
Authors: Hafida Bouarfa, Mohamed Abed
Abstract:
The main object of our work is the development and validation of a system called Fuzzy Vulnerability. Fuzzy Vulnerability uses a fuzzy representation in order to tolerate imprecision in the description of constructions. In the second phase, we evaluated the similarity between the vulnerability of a new construction and those of the whole set of historical cases. This similarity is evaluated on two levels: 1) individual similarity, based on fuzzy aggregation techniques; 2) global similarity, which uses regular increasing monotone (RIM) linguistic quantifiers to combine the various individual similarities between two constructions. The third phase of the Fuzzy Vulnerability process consists in using the vulnerabilities of historical constructions closely similar to the current construction to deduce its estimated vulnerability. We validated our system using 50 cases. We evaluated the performance of Fuzzy Vulnerability on the basis of two criteria: the precision of the estimates and the tolerance of imprecision throughout the estimation process. The comparison was done with estimates made by laborious and time-consuming models. The results are satisfactory.
Keywords: case based reasoning, fuzzy logic, fuzzy case based reasoning, seismic vulnerability
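The RIM-quantifier-guided global aggregation described above can be sketched as an ordered weighted averaging (OWA) operator whose weights come from a quantifier Q(r) = r^a. The quantifier form and the parameter `a` are illustrative assumptions, not the paper's exact choices:

```python
def owa_rim(similarities, a=2.0):
    # OWA aggregation guided by the RIM quantifier Q(r) = r**a:
    # the weights w_i = Q(i/n) - Q((i-1)/n) are applied to the scores
    # sorted in descending order, and telescope to sum to 1
    n = len(similarities)
    ordered = sorted(similarities, reverse=True)
    weights = [(i / n) ** a - ((i - 1) / n) ** a for i in range((1), n + 1)]
    return sum(w * s for w, s in zip(weights, ordered))
```

With a = 2 the quantifier behaves like "most", pulling the global similarity toward the weaker individual similarities; a = 1 recovers the plain arithmetic mean.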
Procedia PDF Downloads 293
5147 Object-Scene: Deep Convolutional Representation for Scene Classification
Authors: Yanjun Chen, Chuanping Hu, Jie Shao, Lin Mei, Chongyang Zhang
Abstract:
Traditional image classification is based on encoding schemes (e.g. Fisher Vector, Vector of Locally Aggregated Descriptors) built on low-level image features (e.g. SIFT, HoG). Compared to these low-level local features, deep convolutional features obtained at the mid-level layers of convolutional neural networks (CNNs) carry richer information but lack geometric invariance. In scene classification, scenes contain scattered objects differing in size, category, layout, number and so on, so it is crucial to find the distinctive objects in a scene as well as their co-occurrence relationships. In this paper, we propose a method that takes advantage of both deep convolutional features and the traditional encoding scheme while taking object-centric and scene-centric information into consideration. First, to exploit object-centric and scene-centric information, two CNNs trained separately on the ImageNet and Places datasets are used as pre-trained models to extract deep convolutional features at multiple scales, producing dense local activations. By analyzing the performance of the CNNs at multiple scales, it is found that each CNN works better in a different scale range. A scale-wise CNN adaptation is reasonable, since objects in a scene appear at their own specific scales. Second, a Fisher kernel is applied to aggregate a global representation at each scale, and the per-scale representations are then merged into a single vector by a post-processing method called scale-wise normalization. The essence of the Fisher Vector lies in the accumulation of first- and second-order differences; hence, scale-wise normalization followed by average pooling balances the influence of each scale, since a different number of features is extracted at each scale. Third, the Fisher Vector representation based on the deep convolutional features is fed to a linear Support Vector Machine, which is a simple yet efficient way to classify the scene categories.
Experimental results show that scale-specific feature extraction and normalization with CNNs trained on object-centric and scene-centric datasets boost the results from 74.03% up to 79.43% on MIT Indoor67 when only two scales are used (compared to results at a single scale). This result is comparable to state-of-the-art performance, which shows that the representation can be applied to other visual recognition tasks.
Keywords: deep convolutional features, Fisher Vector, multiple scales, scale-specific normalization
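The scale-wise normalization followed by average pooling can be sketched as below; plain lists stand in for the actual per-scale Fisher Vectors, so this is a simplified illustration of the merging step only:

```python
import math

def l2_normalize(vec):
    # Normalize one scale's aggregated vector to unit L2 norm
    norm = math.sqrt(sum(x * x for x in vec))
    return [x / norm for x in vec] if norm > 0 else list(vec)

def merge_scales(per_scale_vectors):
    # Normalize each scale's vector, then average-pool across scales so
    # that no single scale dominates the final representation
    normed = [l2_normalize(v) for v in per_scale_vectors]
    dim = len(normed[0])
    return [sum(v[i] for v in normed) / len(normed) for i in range(dim)]
```

After this merge, each scale contributes a unit-norm vector regardless of how many local activations were extracted at that scale.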
Procedia PDF Downloads 331
5146 Existence Solutions for Three Point Boundary Value Problem for Differential Equations
Authors: Mohamed Houas, Maamar Benbachir
Abstract:
In this paper, under weak assumptions, we study the existence and uniqueness of solutions for a nonlinear fractional boundary value problem. New existence and uniqueness results are established using the Banach contraction principle. Other existence results are obtained using Schaefer's and Krasnoselskii's fixed point theorems. At the end, some illustrative examples are presented.
Keywords: Caputo derivative, boundary value problem, fixed point theorem, local conditions
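The Banach contraction principle invoked here guarantees that iterating a contraction converges to its unique fixed point. A toy numerical illustration of that mechanism (not the paper's fractional operator):

```python
import math

def fixed_point(g, x0, tol=1e-12, max_iter=10000):
    # Picard iteration: for a contraction g, x_{n+1} = g(x_n) converges
    # to the unique fixed point promised by the Banach principle
    x = x0
    for _ in range(max_iter):
        x_next = g(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    return x
```

For example, `fixed_point(math.cos, 1.0)` converges to the unique solution of cos(x) = x near 0.739, and the same limit is reached from any starting point, illustrating uniqueness.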
Procedia PDF Downloads 429
5145 A Critical Discourse Analysis of the Construction of Artists' Reputation by Online Art Magazines
Authors: Thomas Soro, Tim Stott, Brendan O'Rourke
Abstract:
The construction of artistic reputation has been examined within sociology, philosophy, and economics but, barring a few noteworthy exceptions, its discursive aspect has been largely ignored. This is particularly surprising given that contemporary artworks primarily rely on discourse to construct their ontological status. This paper contributes a discourse-analytical perspective to the broad body of literature on artistic reputation by providing an understanding of how it is discursively constructed within the institutional context of online contemporary art magazines. The paper uses corpora compiled from the websites of e-flux and ARTnews, two leading online contemporary art magazines, to examine how these organisations discursively construct the reputation of artists. By constructing word sketches of the term 'artist', the paper identified the most significant modifiers attributed to artists and the most significant verbs that have 'artist' as an object or subject. The most significant results were analysed through concordances and demonstrated a somewhat surprising lack of evaluative representation. To examine this feature more closely, the paper then analysed three announcement texts from e-flux's site and three review texts from ARTnews' site, comparing the use of modifiers and verbs in the representation of artists, artworks, and institutions. The results of this analysis support the corpus findings, suggesting that artists are rarely represented in evaluative terms. Based on the relatively high frequency of evaluation in the representation of artworks and institutions, these results suggest that there may be discursive norms at work in the field of online contemporary art magazines which regulate the use of verbs and modifiers in the evaluation of artists.
Keywords: contemporary art, corpus linguistics, critical discourse analysis, symbolic capital
Procedia PDF Downloads 165
5144 Analysis of Spamming Threats and Some Possible Solutions for Online Social Networking Sites (OSNS)
Authors: Dilip Singh Sisodia, Shrish Verma
Abstract:
Spamming is one of the most common problems on the Internet today, especially on online social networking sites (such as Facebook, Twitter, and Google+). Spam messages waste Internet bandwidth and server storage space. On social networking sites, spammers often disguise themselves by creating fake accounts and hijacking users' accounts for personal gain. They behave like normal users and continually change their spamming strategies. To counter this, most modern spam-filtering solutions are deployed on the receiver side; they are good at filtering spam for end users. In this paper, we present some spamming techniques, their behaviour, and possible solutions. We have analyzed how spammers enter online social networking sites (OSNSs), how they target them, and the techniques they use. The five spamming techniques discussed are clickjacking, socially engineered attacks, cross-site scripting, URL shortening, and drive-by download. We have used the Elgg framework to demonstrate some of these spamming threats and the respective implementation of solutions.
Keywords: online social networking sites, spam, attacks, internet, clickjacking / likejacking, drive-by-download, URL shortening, networking, socially engineered attacks, Elgg framework
Procedia PDF Downloads 348
5143 A Study on Inverse Determination of Impact Force on a Honeycomb Composite Panel
Authors: Hamed Kalhori, Lin Ye
Abstract:
In this study, an inverse method was developed to reconstruct the magnitude and duration of impact forces exerted on a rectangular carbon fibre-epoxy composite honeycomb sandwich panel. The dynamic signals captured by piezoelectric (PZT) sensors installed on the panel remotely from the impact locations were utilized to reconstruct the impact force generated by an instrumented hammer through an extended deconvolution approach. Two discretized forms of the convolution integral are considered: the traditional one with an explicit transfer function and a modified one without an explicit transfer function. Deconvolution, usually applied to reconstruct the time history (e.g. magnitude) of a stochastic force at a defined location, is extended to identify both the location and the magnitude of the impact force among a number of potential impact locations. It is assumed that impact forces are simultaneously exerted at all potential locations but that the magnitude of all forces except one is zero, implying that the impact occurs at only one location. The extended deconvolution is then applied to determine the magnitude as well as the location (among the potential ones), incorporating the linear superposition of responses resulting from an impact at each potential location. The problem can be categorized into under-determined (the number of sensors is less than that of impact locations), even-determined (the number of sensors equals that of impact locations), or over-determined (the number of sensors is greater than that of impact locations) cases. The under-determined case considered here comprises three potential impact locations and one PZT sensor on the rectangular carbon fibre-epoxy composite honeycomb sandwich panel. Assessments are conducted to evaluate the factors affecting the precision of the reconstructed force.
Truncated Singular Value Decomposition (TSVD) and Tikhonov regularization are independently applied to regularize the problem in order to find the most suitable method for this system. The selection of the optimal value of the regularization parameter is investigated through the L-curve and Generalized Cross Validation (GCV) methods. In addition, the effect of different signal window widths on the reconstructed force is examined. It is observed that the impact force generated by the instrumented impact hammer is sensitive to the impact location on the structure, with a shape ranging from a simple half-sine to a complicated one. The accuracy of the reconstructed impact force is evaluated using the correlation coefficient between the reconstructed force and the actual one. Based on this criterion, it is concluded that the forces reconstructed by using the extended deconvolution without an explicit transfer function, together with Tikhonov regularization, match the actual forces well in terms of magnitude and duration.
Keywords: honeycomb composite panel, deconvolution, impact localization, force reconstruction
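The combination of a discretized convolution model and Tikhonov regularization can be sketched on a toy problem: build the Toeplitz transfer matrix H from an impulse response, then recover the force from the response via the regularized normal equations (HᵀH + λ²I) f = Hᵀy. This illustrates the class of method only; the impulse response and force below are made-up, not the paper's panel data:

```python
def conv_matrix(h, n):
    # Forward model of the discretized convolution integral: y = H f,
    # where H is the Toeplitz matrix built from impulse response h
    m = n + len(h) - 1
    return [[h[i - j] if 0 <= i - j < len(h) else 0.0 for j in range(n)]
            for i in range(m)]

def solve(A, b):
    # Gaussian elimination with partial pivoting for small dense systems
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def tikhonov_deconvolve(h, y, n, lam):
    # Minimize ||H f - y||^2 + lam^2 ||f||^2 via the normal equations
    H = conv_matrix(h, n)
    m = len(H)
    HtH = [[sum(H[k][i] * H[k][j] for k in range(m)) + (lam * lam if i == j else 0.0)
            for j in range(n)] for i in range(n)]
    Hty = [sum(H[k][i] * y[k] for k in range(m)) for i in range(n)]
    return solve(HtH, Hty)
```

With noise-free data a tiny λ already yields a near-exact reconstruction; in practice λ trades data fidelity against noise amplification, which is where the L-curve and GCV selection methods come in.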
Procedia PDF Downloads 535
5142 Post Liberal Perspective on Minorities Visibility in Contemporary Visual Culture: The Case of Mizrahi Jews
Authors: Merav Alush Levron, Sivan Rajuan Shtang
Abstract:
From as early as their emergence in Europe and the US, the postmodern and post-colonial paradigms have formed the backbone of the field of visual culture studies. The self-representation project of political minorities is studied, described and explained within the premises and perspectives drawn from these paradigms, addressing the key issue they raised: modernism's crisis of representation. The struggle for self-representation, agency and multicultural visibility sought to challenge the liberal pretense of universality and equality, hitting at its various blind spots on issues such as class, gender, race, sex, and nationality. This struggle yielded subversive identity and hybrid performances, including reclaiming, mimicry and masquerading. These performances sought to defy the uniform, universal self which forms the basis for the liberal, rational, enlightened subject. The argument of this research is that this politics of representation is itself confined within liberal thought. Alongside the contribution of post-colonialism and multiculturalism in undermining oppressive structures of power, generating diversity in cultural visibility, and exposing the failure of liberal colorblindness, this subversion is constituted in the visual field by way of confrontation, flying in the face of the universal law and relying on its ongoing comparison and attribution to this law. Relying on Deleuze and Guattari, this research sets out to draw theoretical and empirical attention to an alternative, post-liberal occurrence which has been taking place in the visual field in parallel to the contra-hegemonic phase and as a product of political reality in the aftermath of the crisis of representation. It is no longer a counter-representation; rather, it is a motion of organic minor desire, progressing in the form of flows and generating what Deleuze and Guattari termed the deterritorialization of social structures.
This discussion focuses on current post-liberal performances of 'Mizrahim' (Jewish Israelis of Arab and Muslim extraction) in the visual field in Israel. In television, video art and photography, these performances challenge the issue of representation and generate a concrete peripheral Mizrahiness, realized in the visual organization of the photographic frame. Mizrahiness then transforms from a 'confrontational' representation into a 'presence' flooding the visual sphere in plain sight, in a process of 'becoming'. The Mizrahi desire is exerted on the planes of sound, spoken language, the body and the space where they appear. It removes from these planes the coding and stratification engendered by European dominance and rational, liberal enlightenment. This stratification, adhering to the hegemonic surface, is flooded not by way of resisting false consciousness or employing hybridity, but by way of the Mizrahi identity's own productive, material, immanent yearning. The Mizrahi desire reverberates with Mizrahi peripheral 'worlds of meaning', in which post-colonial interpretation almost invariably identifies a product and recurrence of internalized oppression rather than a source in itself, an 'offshoot, never a wellspring', as Nissim Mizrachi clarifies in his recent pioneering work. The peripheral Mizrahi performance 'unhooks itself', in Deleuze and Guattari's words, from the point of subjectification and interpretation and does not correspond with the partialness, absence, and split that mark post-colonial identities.
Keywords: desire, minority, Mizrahi Jews, post-colonialism, post-liberalism, visibility, Deleuze and Guattari
Procedia PDF Downloads 324
5141 Cross-Knowledge Graph Relation Completion for Non-Isomorphic Cross-Lingual Entity Alignment
Authors: Yuhong Zhang, Dan Lu, Chenyang Bu, Peipei Li, Kui Yu, Xindong Wu
Abstract:
The Cross-Lingual Entity Alignment (CLEA) task aims to find aligned entities that refer to the same identity in two knowledge graphs (KGs) in different languages. It is an effective way to enhance the performance of data mining for KGs with scarce resources. In real-world applications, the neighborhood structures of the same entity in different KGs tend to be non-isomorphic, which makes entity representations contain diverse semantic information and thus poses a great challenge for CLEA. In this paper, we address this challenge from two perspectives. On the one hand, cross-KG relation completion rules are designed with the alignment constraint of entities and relations to improve the topological isomorphism of the two KGs. On the other hand, a representation method combining isomorphic weights is designed to include more isomorphic semantics for counterpart entities, which benefits CLEA. Experiments show that our model can improve the isomorphism of two KGs and the alignment performance, especially for two non-isomorphic KGs.
Keywords: knowledge graphs, cross-lingual entity alignment, non-isomorphic, relation completion
Procedia PDF Downloads 124
5140 Deep Learning Based Unsupervised Sport Scene Recognition and Highlights Generation
Authors: Ksenia Meshkova
Abstract:
With the increasing amount of multimedia data, it is very important to automate and speed up the process of obtaining metadata. This process involves not just recognition of an object or its movement, but recognition of the entire scene rather than separate frames, with timeline segmentation as a final result. Labeling datasets is time-consuming; besides, attributing characteristics to particular scenes is clearly difficult due to their nature. In this article, we consider the application of autoencoders to unsupervised scene recognition and clustering based on interpretable features, focusing on the particular types of autoencoders relevant to our study. We examine the specificity of deep learning in relation to information theory and rate-distortion theory and describe solutions that address the poor interpretability of deep learning in media content processing. In conclusion, we present the results of a custom framework, based on autoencoders, capable of the scene recognition studied above, with highlight generation produced from this recognition. We do not describe the mathematics of neural networks in detail but clarify the necessary concepts and pay attention to important nuances.
Keywords: neural networks, computer vision, representation learning, autoencoders
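As a minimal illustration of the autoencoder idea behind the framework, a one-weight linear autoencoder trained by gradient descent learns an encoder/decoder pair whose composition reconstructs the input. This toy sketch is far simpler than the autoencoders used in the article, but shows the reconstruction objective at work:

```python
# Toy 1-D linear autoencoder: encoder w_enc compresses x, decoder w_dec
# reconstructs it; gradient descent on squared reconstruction error
# drives the product w_dec * w_enc toward 1 (perfect reconstruction).
data = [0.5, -1.0, 2.0, 1.5]
w_enc, w_dec, lr = 0.3, 0.4, 0.05

for _ in range(300):
    g_enc = sum(-2.0 * (x - w_dec * w_enc * x) * w_dec * x for x in data)
    g_dec = sum(-2.0 * (x - w_dec * w_enc * x) * w_enc * x for x in data)
    w_enc -= lr * g_enc
    w_dec -= lr * g_dec

recon_error = sum((x - w_dec * w_enc * x) ** 2 for x in data)
```

In the real setting, a deep non-linear encoder forces a compressed, interpretable representation; high reconstruction error on a frame can then signal a scene change.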
Procedia PDF Downloads 127
5139 From Shallow Semantic Representation to Deeper One: Verb Decomposition Approach
Authors: Aliaksandr Huminski
Abstract:
Semantic Role Labeling (SRL), as a shallow semantic parsing approach, includes recognizing and labeling the arguments of a verb in a sentence. Verb participants are linked with specific semantic roles (Agent, Patient, Instrument, Location, etc.). Thus, SRL can answer key questions such as 'Who', 'When', 'What', 'Where' in a text, and it is widely applied in dialog systems, question answering, named entity recognition, information retrieval, and other fields of NLP. However, SRL has the following flaw: two sentences with identical (or almost identical) meaning can have different semantic role structures. Consider two sentences: (1) John put butter on the bread. (2) John buttered the bread. The SRL for (1) and (2) will be significantly different. For the verb put in (1) it is [Agent + Patient + Goal], but for the verb butter in (2) it is [Agent + Goal]. This happens because of one of the most interesting and intriguing features of a verb: its ability to capture participants, as in the case of the verb butter, or their features, as in the case of the verb drink, where the participant's feature of being liquid is shared with the verb. This capture looks like a total fusion of meaning and cannot be decomposed directly (in comparison with compound verbs like babysit or breastfeed). From this perspective, SRL looks too shallow to represent semantic structure. If the key point of semantic representation is the opportunity to use it for making inferences and finding hidden reasons, it assumes by default that two different but semantically identical sentences must have the same semantic structure. Otherwise, we will draw different inferences from the same meaning. To overcome the above-mentioned flaw, the following approach is suggested.
Assume that: P is a participant of a relation; F is a feature of a participant; Vcp is a verb that captures a participant; Vcf is a verb that captures a feature of a participant; Vpr is a primitive verb, i.e. a verb that does not capture any participant and represents only a relation. In other words, a primitive verb is a verb whose meaning does not include meanings from its surroundings. Then Vcp and Vcf can be decomposed as Vcp = Vpr + P and Vcf = Vpr + F. If all Vcp and Vcf are represented this way, then primitive verbs Vpr can be considered a canonical form for SRL. As a result, there will be no hidden participants caught by a verb, since all participants will be explicitly unfolded. An obvious example of a Vpr is the verb go, which represents pure movement; in this case, the verb drink can be represented as man-made movement of liquid in a specific direction. Extracting and using primitive verbs for SRL creates a canonical representation that is unique for semantically identical sentences, leading to the unification of semantic representation. In this case, the critical flaw of SRL is resolved.
Keywords: decomposition, labeling, primitive verbs, semantic roles
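The decompositions Vcp = Vpr + P and Vcf = Vpr + F can be sketched as a lexicon-driven "unfolding" step. The lexicon entries and role names below are illustrative assumptions, not the paper's actual resources:

```python
# Hypothetical lexicon mapping capturing verbs to a primitive verb plus
# the captured participant (Vcp = Vpr + P) or feature (Vcf = Vpr + F)
DECOMPOSITION = {
    "butter": {"primitive": "put",  "captured_participant": "butter"},
    "drink":  {"primitive": "move", "captured_feature": "liquid"},
}

def canonical_roles(verb, roles):
    """Unfold a capturing verb into its primitive form with explicit roles."""
    entry = DECOMPOSITION.get(verb)
    if entry is None:
        return verb, dict(roles)          # already primitive, nothing hidden
    roles = dict(roles)
    if "captured_participant" in entry:
        roles["Patient"] = entry["captured_participant"]
    if "captured_feature" in entry:
        roles["PatientFeature"] = entry["captured_feature"]
    return entry["primitive"], roles
```

After unfolding, "John buttered the bread" and "John put butter on the bread" share the same primitive verb and the same [Agent + Patient + Goal] role set, which is exactly the canonical form the approach argues for.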
Procedia PDF Downloads 367
5138 Measuring Text-Based Semantics Relatedness Using WordNet
Authors: Madiha Khan, Sidrah Ramzan, Seemab Khan, Shahzad Hassan, Kamran Saeed
Abstract:
Measuring semantic similarity between texts means calculating semantic relatedness between them using various techniques. Our web application, Measuring Relatedness of Concepts (MRC), allows a user to input two text corpora and get the percentage of semantic similarity between them using WordNet. The application goes through five stages to compute semantic relatedness: preprocessing (extracting keywords from the content), feature extraction (classifying words into parts of speech), synonym extraction (retrieving synonyms for each keyword), similarity measurement (measuring similarity using the keywords and synonyms), and visualization (graphical representation of the similarity measure). Hence, the user can also measure similarity on the basis of features. The end result is a percentage score together with the word(s) that form the basis of the similarity between the two texts, using different tools on the same platform. In future work, we look forward to a 'Web as a live corpus' application that provides a simpler and more user-friendly tool to compare documents and extract useful information.
Keywords: Graphviz representation, semantic relatedness, similarity measurement, WordNet similarity
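The synonym-expansion and similarity-measurement stages can be sketched as below. A toy synonym table stands in for WordNet, and the percentage is a simple Jaccard overlap of the expanded keyword sets; MRC's actual WordNet-based scoring may differ:

```python
# Toy synonym table standing in for WordNet lookups (illustrative only)
SYNONYMS = {"car": {"automobile", "auto"}, "quick": {"fast", "rapid"}}

def expand(keywords):
    # Synonym-extraction stage: widen each keyword set with its synonyms
    expanded = set(keywords)
    for k in keywords:
        expanded |= SYNONYMS.get(k, set())
    return expanded

def similarity_percent(keywords_a, keywords_b):
    # Similarity-measurement stage: Jaccard overlap of expanded sets,
    # reported as a percentage as in the MRC interface
    a, b = expand(keywords_a), expand(keywords_b)
    if not a or not b:
        return 0.0
    return 100.0 * len(a & b) / len(a | b)
```

Thanks to the expansion step, `{"car", "quick"}` and `{"automobile", "fast"}` score well above zero even though the raw keyword sets are disjoint.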
Procedia PDF Downloads 238
5137 A Comparative Analysis Approach Based on Fuzzy AHP, TOPSIS and PROMETHEE for the Selection Problem of GSCM Solutions
Authors: Omar Boutkhoum, Mohamed Hanine, Abdessadek Bendarag
Abstract:
Sustainable economic growth is nowadays driving firms toward the adoption of many green supply chain management (GSCM) solutions. However, the evaluation and selection of these solutions requires very serious decisions and involves complexity owing to the presence of various associated factors. To resolve this problem, a comparative analysis approach based on multi-criteria decision-making methods is proposed for the adequate evaluation of sustainable supply chain management solutions. In this paper, we propose an integrated decision-making model based on FAHP (Fuzzy Analytic Hierarchy Process), TOPSIS (Technique for Order of Preference by Similarity to Ideal Solution), and PROMETHEE (Preference Ranking Organisation METHod for Enrichment Evaluations) to contribute to a better understanding and development of new sustainable strategies for industrial organizations. Owing to the varied importance of the selected criteria, FAHP is used to identify the evaluation criteria and assign an importance weight to each criterion, while the TOPSIS and PROMETHEE methods employ these weighted criteria as inputs to evaluate and rank the alternatives. The main objective is to provide a comparative analysis based on the TOPSIS and PROMETHEE processes to help make sound and reasoned decisions on the GSCM solution selection problem.
Keywords: GSCM solutions, multi-criteria analysis, decision support system, TOPSIS, FAHP, PROMETHEE
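The TOPSIS step that consumes the FAHP weights can be sketched as follows: normalize the decision matrix, weight it, locate the ideal and anti-ideal solutions, and score each alternative by its relative closeness to the ideal. The decision matrix in the test is a made-up example, not the paper's case data:

```python
import math

def topsis(matrix, weights, benefit):
    """Rank alternatives (rows) over criteria (columns).
    benefit[j] is True for criteria to maximize, False for those to minimize."""
    m, n = len(matrix), len(matrix[0])
    # Vector-normalize each criterion column, then apply the criterion weights
    norms = [math.sqrt(sum(matrix[i][j] ** 2 for i in range(m))) for j in range(n)]
    V = [[weights[j] * matrix[i][j] / norms[j] for j in range(n)] for i in range(m)]
    # Ideal and anti-ideal solutions, criterion by criterion
    ideal = [max(V[i][j] for i in range(m)) if benefit[j]
             else min(V[i][j] for i in range(m)) for j in range(n)]
    anti = [min(V[i][j] for i in range(m)) if benefit[j]
            else max(V[i][j] for i in range(m)) for j in range(n)]
    # Relative closeness: d- / (d+ + d-), higher is better
    scores = []
    for i in range(m):
        d_pos = math.sqrt(sum((V[i][j] - ideal[j]) ** 2 for j in range(n)))
        d_neg = math.sqrt(sum((V[i][j] - anti[j]) ** 2 for j in range(n)))
        scores.append(d_neg / (d_pos + d_neg))
    return scores
```

In the integrated model, `weights` would come from the FAHP stage rather than being fixed by hand, and the resulting ranking would be compared against the PROMETHEE ranking.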
Procedia PDF Downloads 163
5136 Biogas Separation, Alcohol Amine Solutions
Authors: Jingxiao Liang, David Rooneyman
Abstract:
Biogas, a valuable renewable energy source, can be produced by anaerobic fermentation of agricultural waste, manure, municipal waste, plant material, sewage, green waste, or food waste. It is composed of methane (CH4) and carbon dioxide (CO2) but also contains significant quantities of undesirable compounds such as hydrogen sulfide (H2S), ammonia (NH3), and siloxanes. Since typical raw biogas contains 25–45% CO2, the requirements for biogas quality depend on its further application, and CO2 should be removed before biogas can be used more efficiently. One of the existing options for biogas separation is based on chemical absorbents, in particular mono-, di- and tri-alcohol amine solutions, which have been applied as highly efficient CO2 capturing agents. The benchmark in this experiment is N-methyldiethanolamine (MDEA) with piperazine (PZ) as an activator; optimization conditions, such as activator percentage and temperature, are obtained from the CO2 absorption isotherm curve. This experiment synthesizes new alcohol amines, using glycidol as one of the reactants, which could have the same CO2 absorption capability as activated MDEA; the results are quite satisfactory.
Keywords: biogas, CO2, MDEA, separation
Procedia PDF Downloads 635
5135 Existence of Positive Solutions for Second-Order Difference Equation with Discrete Boundary Value Problem
Authors: Thanin Sitthiwirattham, Jiraporn Reunsumrit
Abstract:
We study the existence of positive solutions to a three-point difference summation boundary value problem. We show the existence of at least one positive solution when f is either superlinear or sublinear by applying the fixed point theorem due to Krasnoselskii in cones.
Keywords: positive solution, boundary value problem, fixed point theorem, cone
Procedia PDF Downloads 439
5134 Blockchain Solutions for IoT Challenges: Overview
Authors: Amir Ali Fatoorchi
Abstract:
Regardless of the advantages of IoT devices, they have limitations in storage, compute, and security. In recent years, much blockchain-based research on IoT has been published and presented. In this paper, we present the security issues of IoT, which arise at three levels: low, intermediate, and high. We survey and compare blockchain-based solutions for high-level security issues and show how the underlying technology of Bitcoin and Ethereum could solve IoT problems.
Keywords: blockchain, security, data security, IoT
Procedia PDF Downloads 210
5133 Physically Informed Kernels for Wave Loading Prediction
Authors: Daniel James Pitchforth, Timothy James Rogers, Ulf Tyge Tygesen, Elizabeth Jane Cross
Abstract:
Wave loading is a primary cause of fatigue within offshore structures, and its quantification presents a challenging and important subtask within the SHM framework. The accurate representation of physics in such environments is difficult, which has driven the development of data-driven techniques in recent years. Within many industrial applications, empirical laws remain the preferred method of wave loading prediction due to their low computational cost and ease of implementation. This paper aims to develop an approach that combines data-driven Gaussian process models with physical empirical solutions for wave loading, including Morison's equation. The aim is to incorporate physics directly into the covariance function (kernel) of the Gaussian process, enforcing derived behaviours while still allowing enough flexibility to account for phenomena such as vortex shedding, which may not be represented within the empirical laws. The combined approach has a number of advantages, including improved performance over either component used independently, and interpretable hyperparameters.
Keywords: offshore structures, Gaussian processes, physics-informed machine learning, kernel design
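The kernel-level combination of physics and flexibility can be sketched as a sum of covariance functions: a physics-derived component built from a Morison-style feature plus a squared-exponential component for unmodelled effects. The feature map and hyperparameter values below are illustrative assumptions, not the paper's actual kernel:

```python
import math

def k_se(x, y, length=1.0, var=1.0):
    # Flexible squared-exponential component for unmodelled phenomena
    # (e.g. vortex shedding effects absent from the empirical laws)
    return var * math.exp(-0.5 * ((x - y) / length) ** 2)

def k_morison(x, y, var=1.0):
    # Physics-derived component: a linear kernel over a Morison-style
    # drag feature u|u| (illustrative feature map, not the paper's form)
    phi = lambda u: u * abs(u)
    return var * phi(x) * phi(y)

def k_combined(x, y):
    # A sum of valid kernels is itself a valid covariance function,
    # so the GP prior inherits both the physics and the flexibility
    return k_se(x, y) + k_morison(x, y)
```

Because each component keeps its own variance and length-scale hyperparameters, the fitted values indicate how much of the response the physics term explains versus the flexible term, which is the interpretability advantage noted above.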
Procedia PDF Downloads 194
5132 Disability Representation in Children’s Programs: A Critical Analysis of Nickelodeon’s Avatar
Authors: Jasmin Glock
Abstract:
Media plays a significant role in terms of shaping and influencing people’s perception of various themes, including disability. Although recent examples indicate progressive attitudes in society, programs across genres continue to portray disability in a negative and stereotypical way. Such a one-sided or stereotypical portrayal of disabled people can further reinforce their marginalized position by turning them into the other. The common trope of the blind or visually impaired woman, for example, marks the character as particularly vulnerable. These stereotypes are easily absorbed and left unquestioned, especially by younger audiences. As a result, the presentation of disability as problematic or painful can instill a subconscious fear of disability in viewers at a very young age. Now the question arises, how can disability be portrayed to children in a more positive way? This paper focuses on the portrayal of physical disability in children’s programming. Using disabled characters from Nickelodeon’s Avatar: The Last Airbender and Avatar: The Legend of Korra, the paper will show that the chosen animated characters have the potential to challenge and subvert disability-based bias and to contribute to the normalization of disability on screen. Analyzing blind protagonist Toph Beifong, recurring support character and wheelchair user Teo, and villain Ming Hua who has prosthetic limbs, this paper aims at highlighting that these disabled characters are far more than mere stereotyped tokens. Instead, they are crucial to the outcome of the story. They are strong and confident while still being allowed to express their insecurities in certain situations. The paper also focuses on how these characters can make disability issues relatable to disabled and non-disabled young audiences alike and how they can thereby contribute to the reduction of prejudice. 
Finally, they will serve as an example of what inclusive, nuanced, and even empowering disability representation in animated television series can look like.
Keywords: children, disability, representation, television
Procedia PDF Downloads 206
5131 Multimodal Data Fusion Techniques in Audiovisual Speech Recognition
Authors: Hadeer M. Sayed, Hesham E. El Deeb, Shereen A. Taie
Abstract:
In the big data era, we face a diversity of datasets from different sources in different domains that describe a single life event. These datasets consist of multiple modalities, each of which has a different representation, distribution, scale, and density. Multimodal fusion is the concept of integrating information from multiple modalities into a joint representation with the goal of predicting an outcome through a classification or regression task. In this paper, multimodal fusion techniques are classified into two main classes: model-agnostic techniques and model-based approaches. The paper provides a comprehensive study of recent research in each class and outlines the benefits and limitations of each. Furthermore, the audiovisual speech recognition task is presented as a case study of multimodal data fusion approaches, and the open issues arising from the limitations of current studies are discussed. This paper can serve as a practical guide for researchers interested in the fields of multimodal data fusion and audiovisual speech recognition in particular.
Keywords: multimodal data, data fusion, audio-visual speech recognition, neural networks
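The model-agnostic class mentioned in the abstract is commonly split into early (feature-level) and late (decision-level) fusion. A minimal sketch of both, with invented feature sizes standing in for the audio and visual streams and random linear maps standing in for trained per-modality classifiers:

```python
import numpy as np

rng = np.random.default_rng(0)
audio = rng.normal(size=(8, 13))   # e.g. 13 acoustic features per sample
video = rng.normal(size=(8, 20))   # e.g. 20 lip-region features per sample

# Early fusion: concatenate modality features into one joint vector.
early = np.concatenate([audio, video], axis=1)
assert early.shape == (8, 33)

# Late fusion: average per-modality class probabilities.
def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

p_audio = softmax(audio @ rng.normal(size=(13, 4)))   # stand-in classifiers
p_video = softmax(video @ rng.normal(size=(20, 4)))
late = 0.5 * (p_audio + p_video)

assert late.shape == (8, 4)
assert np.allclose(late.sum(axis=1), 1.0)   # still valid probabilities
```

Early fusion lets a downstream model learn cross-modal interactions but must cope with differing scales and densities; late fusion keeps each modality's model independent at the cost of discarding those interactions.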
Procedia PDF Downloads 112
5130 Transient Voltage Distribution on the Single Phase Transmission Line under Short Circuit Fault Effect
Authors: A. Kojah, A. Nacaroğlu
Abstract:
Single phase transmission lines are used to transfer data or energy between two users. Transient conditions such as switching operations and short circuit faults cause fluctuations in the waveform to be transmitted. The spatial voltage distribution on a single phase transmission line may change depending on the position and duration of the short circuit fault in the system. In this paper, the state space representation of the single phase transmission line is given for a short circuit fault and for various types of terminations. Since the transmission line is modeled in the time domain using distributed parametric elements, the mathematical representation of the event is given in state space (time domain) differential equation form. This also makes the problem easier to solve, because of the time- and space-dependent characteristics of the voltage variations on the distributed parametrically modeled transmission line.
Keywords: energy transmission, transient effects, transmission line, transient voltage, RLC short circuit, single phase
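A state space model of this kind can be sketched by lumping the line into cascaded RLC segments; the A and B matrices below follow the standard ladder-network equations (L di_k/dt = v_{k-1} − v_k − R i_k, C dv_k/dt = i_k − i_{k+1} − G v_k), with illustrative, not physically fitted, per-segment parameter values.

```python
import numpy as np

def line_state_space(n_seg, R, L, G, C):
    """State-space (A, B) for a line lumped into n_seg RLC segments.
    States alternate inductor currents and capacitor voltages; the
    single input is the source voltage at the sending end."""
    n = 2 * n_seg
    A = np.zeros((n, n))
    B = np.zeros((n, 1))
    for k in range(n_seg):
        i, v = 2 * k, 2 * k + 1          # current and voltage state indices
        A[i, i] = -R / L                 # series resistance drop
        A[i, v] = -1.0 / L               # this segment's capacitor voltage
        if k > 0:
            A[i, v - 2] = 1.0 / L        # previous segment's voltage
        else:
            B[i, 0] = 1.0 / L            # source drives the first segment
        A[v, v] = -G / C                 # shunt conductance
        A[v, i] = 1.0 / C                # charging current
        if k < n_seg - 1:
            A[v, i + 2] = -1.0 / C       # current drawn by the next segment
    return A, B

A, B = line_state_space(n_seg=10, R=1.0, L=1e-3, G=1e-4, C=1e-6)
assert A.shape == (20, 20) and B.shape == (20, 1)
# A passive (R, G > 0) line is dissipative: all eigenvalues lie strictly
# in the left half-plane, so every transient eventually decays.
assert np.all(np.linalg.eigvals(A).real < 0)
```

Once A and B are assembled, different terminations or fault positions amount to modifying a few rows of A, which is what makes the state space form convenient for simulating the fault scenarios described above.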
Procedia PDF Downloads 223
5129 A Sentence-to-Sentence Relation Network for Recognizing Textual Entailment
Authors: Isaac K. E. Ampomah, Seong-Bae Park, Sang-Jo Lee
Abstract:
Over the past decade, there have been promising developments in Natural Language Processing (NLP), with several investigations of approaches focusing on Recognizing Textual Entailment (RTE). These include models based on lexical similarities, models based on formal reasoning, and, most recently, deep neural models. In this paper, we present a sentence encoding model that exploits sentence-to-sentence relation information for RTE. In terms of sentence modeling, convolutional neural networks (CNNs) and recurrent neural networks (RNNs) adopt different approaches. RNNs are known to be well suited for sequence modeling, whilst a CNN is suited for the extraction of n-gram features through its filters and can learn ranges of relations via the pooling mechanism. We combine the strengths of RNNs and CNNs as stated above to present a unified model for the RTE task. Our model combines relation vectors computed from the phrasal representation of each sentence with the final encoded sentence representations. Firstly, we pass each sentence through a convolutional layer to extract a sequence of higher-level phrase representations for each sentence, from which the first relation vector is computed. Secondly, the phrasal representation of each sentence from the convolutional layer is fed into a Bidirectional Long Short Term Memory (Bi-LSTM) to obtain the final sentence representations, from which a second relation vector is computed. The relation vectors are combined and then used in the same fashion as an attention mechanism over the Bi-LSTM outputs to yield the final sentence representations for classification. Experiments on the Stanford Natural Language Inference (SNLI) corpus suggest that this is a promising technique for RTE.
Keywords: deep neural models, natural language inference, recognizing textual entailment (RTE), sentence-to-sentence relation
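One common way to form a sentence-to-sentence relation vector from two fixed-size encodings is to concatenate their elementwise product (alignment) and absolute difference (contrast). This is a generic sketch of that idea, not the exact operator used in the paper, and the vectors below merely stand in for Bi-LSTM sentence encodings.

```python
import numpy as np

def relation_vector(s1, s2):
    # Elementwise product captures where the two encodings agree;
    # absolute difference captures where they diverge.
    return np.concatenate([s1 * s2, np.abs(s1 - s2)])

rng = np.random.default_rng(1)
premise = rng.normal(size=64)      # stand-in for an encoded premise
hypothesis = rng.normal(size=64)   # stand-in for an encoded hypothesis

r = relation_vector(premise, hypothesis)
assert r.shape == (128,)

# Identical sentences have a zero contrast component.
r_same = relation_vector(premise, premise)
assert np.allclose(r_same[64:], 0.0)
```

A classifier over such a relation vector then predicts entailment, contradiction, or neutrality; in the paper's architecture the relation information additionally reweights the Bi-LSTM outputs, attention-style.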
Procedia PDF Downloads 348
5128 Fundamentals of Islamic Resistive Economy and Practical Solutions: A Study from Perspective of Infallible Imams
Authors: Abolfazl Alishahi Ghalehjoughi
Abstract:
Economic independence and security are top priorities for the Islamic world. The economic dependence of Muslim countries on the economies of non-Muslim imperialist countries results in political and cultural dependencies, and such dependencies jeopardize the noble Islamic culture, because the will of a dependent country to implement the noble teachings of Islam is faced with challenges. Solidarity among Muslim countries in achieving a unified, resistive-economy-based Islamic economic system can improve the ability of the Islamic world to resist and counteract economic shocks produced by imperialists. Islam is the most complete religion in every aspect, from the ideological and epistemological to the legislative and ethical, and the economic aspect is no exception. Islam provides solutions for developing a flourishing economy for the whole Islamic nation. Knowledge of such solutions and identification of mechanisms to operationalise them in Islamic communities can contribute greatly to the establishment of a superior Islamic economy. Encouragement of hard work, achievement and knowledge production, correction of consumption patterns, optimized management of imports and exports, avoidance of Islamically prohibited income, economic discipline and equity, and promotion of interest-free loans and the like are among the most important solutions for realizing such a resistive economy.
Keywords: resistive economy, cultural independence, Islam, solidarity
Procedia PDF Downloads 394
5127 Representation of Woman in Vagina Monologue: A Study of Feminism
Authors: Epata Puji Astuti
Abstract:
The Vagina Monologues is a play written by Eve Ensler, which premiered Off-Broadway in New York in 1996. This play is quite different from other plays, since it talks about the issue of men's oppression of women and is performed in monologue. The vagina is the main symbol discussed in the play. What men did to women's vaginas, and how women view and treat their vaginas, reflects men's attitude toward women. Ensler had interviewed 200 women from various backgrounds to get their stories about the vagina. Ensler also has her own story about the vagina. For the researcher, it is interesting to analyze how Ensler represented women through the symbol of the vagina. What happened to the vagina reflects the reality of what happened to women. How Ensler voices the issues of women, such as love, birth, rape, sex work, sexual harassment, etc., is interesting to analyze. This research tries to reveal how women are represented in the play. To understand the representation of women, the researcher uses feminism theory. The textual analysis method is used to find out how women struggle for their own lives and speak up for themselves. Based on the analysis, it can be concluded that Ensler depicted the vagina not as a dirty thing but as a noble thing that men should honor as they honor women. It reflects that women show their power and resistance toward men's oppression.
Keywords: feminism, vagina, women, violence
Procedia PDF Downloads 141
5126 Lesbian Stereotype Representation in Cinema in Turkey
Authors: Hasan Gürkan, Rengin Ozan
Abstract:
Cinema, as a popular mass media tool, affects society's general perception of sexual identity. By establishing an interactive relationship between cinema and social reality, the study also tries to answer what the importance of lesbian identity in social life is in films in Turkey. This article focuses on the representation of women characters who identify as lesbian in the cinema of Turkey. The study tries to answer three questions: First, how are lesbian characters represented in films in Turkey? Second, what is the reality of lesbian sexual identity in the films? Third, what are the differences and similarities between the lesbian characters in films in Turkey before the 2000s and after the 2000s? The films are analysed through sociological film interpretation in this study. When comparing the films before 2000 and after 2000, it is possible to say that many films have had no lesbian characters at all. Almost all of the films of the 1960s (Haremde Dört Kadın, Ver Elini İstanbul, Dul Bir Kadın, Gramofon Avrat, Lola and Billidikid) only glanced indirectly at lesbian sexual identity. Only in the films Düş Gezginleri, İki Genç Kız and Nar are the women characters (who also identify as lesbian) in the leading roles, with the plots of the films progressing around these characters.
Keywords: cinema in Turkey, lesbian identity, representation, stereotype
Procedia PDF Downloads 340
5125 A Comparison Between Different Discretization Techniques for the Doyle-Fuller-Newman Li+ Battery Model
Authors: Davide Gotti, Milan Prodanovic, Sergio Pinilla, David Muñoz-Torrero
Abstract:
Since its proposal, the Doyle-Fuller-Newman (DFN) lithium-ion battery model has gained popularity in the electrochemical field. In fact, this model provides the user with theoretical support for designing the lithium-ion battery parameters, such as the material particle size or the adjustment direction of the diffusion coefficient. However, the model is mathematically complex, as it is composed of several partial differential equations (PDEs), such as Fick’s law of diffusion and the MacInnes and Ohm’s equations, among other phenomena. Thus, to use the model efficiently in a time-domain simulation environment, the selection of the discretization technique is of pivotal importance. There are several numerical methods available in the literature that can be used to carry out this task. In this study, a comparison between the explicit Euler, Crank-Nicolson, and Chebyshev discretization methods is proposed. These three methods are compared in terms of accuracy, stability, and computational time. Firstly, the explicit Euler discretization technique is analyzed. This method is straightforward to implement and is computationally fast. In this work, the accuracy of the method and its stability properties are shown for the electrolyte diffusion partial differential equation. Subsequently, the Crank-Nicolson method is considered. It represents a combination of the implicit and explicit Euler methods that has the advantage of being second order in time and intrinsically stable, thus overcoming the disadvantages of the simpler explicit Euler method. As shown in the full paper, the Crank-Nicolson method provides accurate results when applied to the DFN model. Its stability does not depend on the integration time step, so it is feasible for both short- and long-term tests. This last remark is particularly important, as this discretization technique allows the user to implement parameter estimation and optimization techniques, such as system or genetic parameter identification methods, using this model. Finally, the Chebyshev discretization technique is implemented in the DFN model. This discretization method features swift convergence properties and, like other spectral methods used to solve differential equations, achieves the same accuracy with a smaller number of discretization nodes. However, as shown in the literature, these methods are not suitable for handling sharp gradients, which are common during the first instants of the charge and discharge phases of the battery. The numerical results obtained and presented in this study aim to provide guidelines on how to select the adequate discretization technique for the DFN model according to the type of application to be performed, highlighting the pros and cons of the three methods. Specifically, the unsuitability of the simple explicit Euler method for long-term tests is presented. Afterwards, the Crank-Nicolson and Chebyshev discretization methods are compared in terms of accuracy and computational time under a wide range of battery operating scenarios. These include both long-term simulations for aging tests and short- and mid-term battery charge/discharge cycles, typically relevant in battery applications such as grid primary frequency and inertia control and electric vehicle braking and acceleration.
Keywords: Doyle-Fuller-Newman battery model, partial differential equations, discretization, numerical methods
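The stability contrast between the explicit Euler and Crank-Nicolson schemes can be reproduced on a 1D diffusion equation of the same type as the electrolyte diffusion PDE mentioned above. The sketch below uses a toy grid with the time step deliberately chosen above the explicit stability limit dx²/(2D); the parameter values are illustrative, not DFN model parameters.

```python
import numpy as np

def diffusion_matrix(n, D, dx):
    # Second-order central-difference Laplacian with zero boundary values.
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = -2.0
        if i > 0:
            A[i, i - 1] = 1.0
        if i < n - 1:
            A[i, i + 1] = 1.0
    return (D / dx**2) * A

def step_explicit_euler(c, A, dt):
    return c + dt * (A @ c)

def step_crank_nicolson(c, A, dt):
    I = np.eye(len(c))
    return np.linalg.solve(I - 0.5 * dt * A, (I + 0.5 * dt * A) @ c)

n, D, dx = 50, 1.0, 1.0 / 51
A = diffusion_matrix(n, D, dx)
c0 = np.exp(-((np.linspace(0, 1, n) - 0.5) ** 2) / 0.01)   # smooth bump

dt = 1.5 * dx**2 / (2 * D)   # above the explicit stability limit dx^2/(2D)
ce, cn = c0.copy(), c0.copy()
for _ in range(500):
    ce = step_explicit_euler(ce, A, dt)
    cn = step_crank_nicolson(cn, A, dt)

# Explicit Euler diverges at this time step; Crank-Nicolson stays bounded.
assert (not np.all(np.isfinite(ce))) or np.max(np.abs(ce)) > 1e3
assert np.max(np.abs(cn)) <= np.linalg.norm(c0)
```

Crank-Nicolson pays for its unconditional stability with a linear solve per step, which is the accuracy-versus-cost trade-off the study quantifies against the spectral Chebyshev alternative.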
Procedia PDF Downloads 25
5124 Mixing Behaviors of Shear-Thinning Fluids in Serpentine-Channel Micromixers
Authors: Rei-Tang Tsai, Chih-Yang Wu, Chia-Yuan Chang, Ming-Ying Kuo
Abstract:
This study aims to investigate the mixing behaviors of deionized (DI) water and carboxymethyl cellulose (CMC) solutions in C-shaped serpentine micromixers over a wide range of flow conditions. The flow of CMC solutions exhibits shear-thinning behavior. Numerical simulations are performed to investigate the effects of the mean flow speed, fluid properties, and geometry parameters on flow and mixing in micromixers with serpentine channels of the same overall channel length. From the results, we can find the following trends. When fluid mixing is dominated by convection, the curvature-induced vortices enhance fluid mixing effectively. The mixing efficiency of a micromixer consisting of semicircular C-shaped repeating units with a smaller center-line radius is better than that of a micromixer consisting of major-segment repeating units with a larger center-line radius. The viscosity of DI water is less than the overall average apparent viscosity of the CMC solutions, so the effect of curvature-induced vortices on fluid mixing in DI water is larger than that in CMC solutions for cases with the same mean flow speed.
Keywords: curved channel, microfluidics, mixing, non-Newtonian fluids, vortex
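The shear-thinning behavior of CMC solutions is commonly described by a power-law (Ostwald-de Waele) model, in which the apparent viscosity falls as the shear rate grows. A minimal sketch, with K and n as illustrative placeholders rather than fitted CMC parameters:

```python
import numpy as np

def apparent_viscosity(shear_rate, K=0.2, n=0.6):
    """Power-law model: eta = K * gamma_dot**(n - 1).
    n < 1 gives shear-thinning behaviour; K (consistency index) and n
    (flow behaviour index) here are illustrative values only."""
    return K * shear_rate ** (n - 1.0)

gamma = np.array([1.0, 10.0, 100.0, 1000.0])   # shear rates, 1/s
eta = apparent_viscosity(gamma)

# Shear-thinning: apparent viscosity decreases as shear rate increases.
assert np.all(np.diff(eta) < 0)
# Each tenfold rise in shear rate scales viscosity by 10**(n-1).
assert np.allclose(eta[1:] / eta[:-1], 10.0 ** (0.6 - 1.0))
```

This monotonic drop in apparent viscosity is why, at the same mean flow speed, the CMC cases in the study see a weaker vortex-mixing effect than DI water, whose viscosity is constant and lower on average.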
Procedia PDF Downloads 441
5123 3D Elasticity Analysis of Laminated Composite Plate Using State Space Method
Authors: Prathmesh Vikas Patil, Yashaswini Lomte Patil
Abstract:
Laminated composite materials have received considerable attention in various engineering applications due to their exceptional strength-to-weight ratio and mechanical properties. The analysis of laminated composite plates in three-dimensional (3D) elasticity is a complex problem, as it requires accounting for the orthotropic, anisotropic nature of the material and the interactions between multiple layers. Conventional approaches, such as the classical plate theory, provide simplified solutions but are limited in performing an exact analysis of the plate. To address this challenge, the state space method emerges as a powerful numerical technique for modeling the behavior of laminated composites in 3D. The state space method involves transforming the governing equations of elasticity into a state space representation, enabling the analysis of complex structural systems in a systematic manner. Here, an effort is made to perform a 3D elasticity analysis of plates with cross-ply and angle-ply laminates using the state space approach. The state space approach is used in this study because it is a mixed formulation technique that gives the displacements and stresses simultaneously, with the same level of accuracy.
Keywords: cross-ply laminates, angle-ply laminates, state space method, three-dimensional elasticity analysis
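The layer-by-layer propagation at the heart of the state space method can be sketched with a transfer-matrix toy: the state vector (displacements and tractions) is propagated across each layer by a matrix exponential, and continuity at interfaces is enforced by multiplying the per-layer propagators. The 2-state system and coefficient matrices below are invented for illustration, not the full 3D elasticity equations.

```python
import numpy as np

def layer_transfer(A, h, terms=30):
    # Matrix exponential exp(A*h) via a truncated Taylor series: propagates
    # the state vector across one layer of thickness h.
    T = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ (A * h) / k
        T = T + term
    return T

# Toy 2-state system (one displacement and one traction component);
# each layer has its own coefficient matrix from its ply stiffness.
A1 = np.array([[0.0, 1.0], [-4.0, 0.0]])   # hypothetical layer 1
A2 = np.array([[0.0, 0.5], [-9.0, 0.0]])   # hypothetical layer 2

# Global transfer matrix: product of per-layer propagators, which enforces
# continuity of the state vector at each interface.
T_global = layer_transfer(A2, 0.3) @ layer_transfer(A1, 0.2)

state_bottom = np.array([1.0, 0.0])
state_top = T_global @ state_bottom
assert state_top.shape == (2,)
# Propagating forwards then backwards recovers the original state.
back = np.linalg.inv(T_global) @ state_top
assert np.allclose(back, state_bottom)
```

In the actual plate problem the same mixed state vector carries both displacements and stresses, which is why the state space formulation delivers both fields simultaneously with equal accuracy.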
Procedia PDF Downloads 111
5122 Viable Use of Natural Extract Solutions from Tuberous and Cereals to Enhance the Synthesis of Activated Carbon-Graphene Composite
Authors: Pamphile Ndagijimana, Xuejiao Liu, Zhiwei Li, Yin Wang
Abstract:
Enhancing the properties of activated carbon is imperative for various applications. Indeed, activated carbon has promising physicochemical properties desired for a considerable number of applications. In this regard, we propose an enhanced, green technology for increasing the efficiency and performance of activated carbon in various applications. The technique rests on the use of natural extract solutions from tubers and cereals. These solutions showed high potential for use in the synthesis of an activated carbon-graphene composite, with only 3 mL required. The liquid extracted from the tuber source was enough to induce precipitation within a fraction of a minute, in contrast to that from the cereal source. Using these extracts, the synthesis of an activated carbon-graphene composite was successful. Different characterization techniques, such as XRD, SEM, FTIR, BET, and Raman spectroscopy, were performed to investigate the composite materials. The results confirmed conjugation between the activated carbon and the graphene material.
Keywords: activated carbon, cereals, extract solution, graphene, tuberous
Procedia PDF Downloads 146
5121 The Use of Sustainability Criteria on Infrastructure Design to Encourage Sustainable Engineering Solutions on Infrastructure Projects
Authors: Shian Saroop, Dhiren Allopi
Abstract:
In order to stay competitive and to meet upcoming stricter environmental regulations and customer requirements, designers have a key role in designing civil infrastructure so that it is environmentally sustainable. There is an urgent need for engineers to apply technologies and methods that deliver better and more sustainable performance of civil infrastructure, as well as a need to establish a standard of measurement for greener infrastructure, rather than merely using traditional solutions. However, there are no systems in place at the design stage that assess the environmental impact of design decisions on township infrastructure projects. This paper identifies alternative eco-efficient civil infrastructure design solutions and develops sustainability criteria and a toolkit to analyse the eco-efficiency of infrastructure projects. The proposed toolkit is aimed at promoting high-performance, eco-efficient, economical and environmentally friendly design decisions on stormwater, roads, water and sanitation in township infrastructure projects. These green solutions would bring a whole new class of eco-friendly solutions to current infrastructure problems, while at the same time adding a fresh perspective to the traditional infrastructure design process. A variety of projects were evaluated using the green infrastructure toolkit, and their results were compared to each other to assess the outcome of using greener infrastructure versus the traditional method of designing infrastructure. The application of ‘green technology’ would ensure a sustainable design of township infrastructure services, helping the design to consider alternative resources, the environmental impacts of design decisions, ecological sensitivity issues, innovation, maintenance and materials at the design stage of a project.
Keywords: eco-efficiency, green infrastructure, infrastructure design, sustainable development
Procedia PDF Downloads 228