Search results for: real rational matrix transfer functions
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 5066

1886 Rapid, Label-Free, Direct Detection and Quantification of Escherichia coli Bacteria Using Nonlinear Acoustic Aptasensor

Authors: Shilpa Khobragade, Carlos Da Silva Granja, Niklas Sandström, Igor Efimov, Victor P. Ostanin, Wouter van der Wijngaart, David Klenerman, Sourav K. Ghosh

Abstract:

Rapid, label-free and direct detection of pathogenic bacteria is critical for the prevention of disease outbreaks. This paper for the first time attempts to probe the nonlinear acoustic response of a quartz crystal resonator (QCR) functionalized with specific DNA aptamers for direct detection and quantification of viable E. coli KCTC 2571 bacteria. DNA aptamers were immobilized, through biotin and streptavidin conjugation, onto the gold surface of the QCR to capture the target bacteria, and detection was accomplished by a shift in amplitude of the peak 3f signal (3 times the drive frequency) upon binding, when driven near the fundamental resonance frequency. The developed nonlinear acoustic aptasensor system demonstrated better reliability than conventional resonance frequency shift and energy dissipation monitoring, which were recorded simultaneously. The sensing system could directly detect 10⁵ cells/mL of the target bacteria within 30 min or less and had high specificity towards E. coli KCTC 2571 bacteria as compared to the same concentration of S. typhi bacteria. Aptasensor response was observed for bacterial suspensions ranging from 10⁵ to 10⁸ cells/mL. Conclusively, this nonlinear acoustic aptasensor is simple to use, gives real-time output, is cost-effective, and has the potential for rapid, specific, label-free direct detection of bacteria.

Keywords: acoustic, aptasensor, detection, nonlinear

Procedia PDF Downloads 544
1885 Development of an Auxetic Tissue Implant

Authors: Sukhwinder K. Bhullar, M. B. G. Jun

Abstract:

Developments in the biomedical industry have demanded biocompatible, high-performance materials that meet higher engineering specifications. The general requirements of such materials are to provide a combination of high stiffness and strength with significant weight savings, resistance to corrosion, chemical resistance, low maintenance, and reduced costs. Auxetic materials, which come under the category of smart materials, offer huge potential through measured enhancements in mechanical properties. Their unique deformation mechanism, which provides cushioning on indentation, automatically adjusts strength and thickness in response to forces, and returns the material to its neutral state once stresses dissipate, makes them good candidates for the biomedical industry. As simple extension and compression of tissues is of fundamental importance in biomechanics, the development and characterization of an auxetic soft tissue implant is studied in this paper. This represents a real-life configuration, where soft tissues such as the meniscus in knee replacement, ligaments, and tendons are often taken as transversely isotropic. Further, the composition of alternating polydisperse blocks of soft and stiff segments, combined with excellent biocompatibility, makes polyurethanes one of the most promising synthetic biomaterials. Hence, an auxetic polyurethane foam was selected, and its functional characterization was performed and compared with conventional polyurethane foam.

Keywords: auxetic materials, deformation mechanism, enhanced mechanical properties, soft tissues

Procedia PDF Downloads 449
1884 Progress in Combining Image Captioning and Visual Question Answering Tasks

Authors: Prathiksha Kamath, Pratibha Jamkhandi, Prateek Ghanti, Priyanshu Gupta, M. Lakshmi Neelima

Abstract:

Combining image captioning and Visual Question Answering (VQA) tasks has emerged as a new and exciting research area. The image captioning task involves generating a textual description that summarizes the content of the image. VQA aims to answer a natural language question about the image. Both tasks involve computer vision and natural language processing (NLP) and require a deep understanding of the content of the image and the semantic relationships within it, as well as the ability to generate a response in natural language. There has been remarkable growth in both tasks with the rapid advancement of deep learning. In this paper, we present a comprehensive review of recent progress in combining image captioning and visual question-answering (VQA) tasks. We first discuss image captioning and VQA individually and then the various ways in which the two tasks can be integrated. We also analyze the challenges associated with these tasks and ways to overcome them. We finally discuss the various datasets and evaluation metrics used in these tasks. The paper concludes with the need for generating captions based on context, and captions that can answer the questions most likely to be asked about the image, so as to aid the VQA task. Overall, this review highlights the significant progress made in combining image captioning and VQA, as well as the ongoing challenges and opportunities for further research in this exciting and rapidly evolving field, which has the potential to improve the performance of real-world applications such as autonomous vehicles, robotics, and image search.

Keywords: image captioning, visual question answering, deep learning, natural language processing

Procedia PDF Downloads 58
1883 Propeller Performance Modeling through a Computational Fluid Dynamics Analysis Method

Authors: Maxime Alex Junior Kuitche, Ruxandra Mihaela Botez, Jean-Christophe Maunand

Abstract:

The evolution of aircraft is closely linked to the study and improvement of propulsion systems. Determining propulsion performance is a real challenge in aircraft modeling and design. In addition to theoretical methodologies, experimental procedures are used to obtain a good estimation of propulsion performance. For piston-propeller propulsion, the propeller requires several experimental tests, which can be extremely demanding in terms of time and money. This paper presents a new procedure to estimate the performance of a propeller from a numerical approach using computational fluid dynamics analysis. The propeller was initially scanned, and then its 3D model was represented using CATIA. A structured mesh and the Shear Stress Transport (SST) k-ω turbulence model were applied to describe accurately the flow pattern around the propeller. The governing partial differential equations were then solved using ANSYS FLUENT software. The method was applied to the UAS-S45's propeller, designed and manufactured by Hydra Technologies in Mexico. An extensive investigation was performed for several flight conditions in terms of altitudes and airspeeds with the aim of determining the thrust coefficient, power coefficient, and efficiency of the propeller. The computational fluid dynamics results were compared with experimental data acquired from wind tunnel tests performed at the LARCASE Price-Paidoussis wind tunnel. The results of this comparison demonstrated that our approach is highly accurate.
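
The thrust coefficient, power coefficient, and propulsive efficiency referred to above follow the standard propeller definitions. A minimal sketch of these definitions is given below; the thrust, power, and flight-condition values are illustrative assumptions, not UAS-S45 data.

```python
# Standard propeller performance definitions (a sketch with assumed values,
# not the paper's CFD or wind-tunnel results).
rho = 1.007        # kg/m^3, air density at an assumed altitude
n = 83.3           # rev/s (about 5000 RPM), assumed rotational speed
D = 0.6604         # m, assumed propeller diameter
V = 25.0           # m/s, assumed freestream airspeed
thrust = 180.0     # N, example thrust value
power = 6000.0     # W, example shaft power value

J = V / (n * D)                      # advance ratio
C_T = thrust / (rho * n**2 * D**4)   # thrust coefficient
C_P = power / (rho * n**3 * D**5)    # power coefficient
eta = J * C_T / C_P                  # propulsive efficiency
print(f"J = {J:.3f}, C_T = {C_T:.4f}, C_P = {C_P:.4f}, eta = {eta:.3f}")
```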

Keywords: CFD analysis, propeller performance, unmanned aerial system propeller, UAS-S45

Procedia PDF Downloads 338
1882 Refined Edge Detection Network

Authors: Omar Elharrouss, Youssef Hmamouche, Assia Kamal Idrissi, Btissam El Khamlichi, Amal El Fallah-Seghrouchni

Abstract:

Edge detection is one of the most challenging tasks in computer vision, due to the complexity of detecting edges or boundaries in real-world images that contain objects of different types and scales, such as trees and buildings, as well as various backgrounds. Edge detection is also a key task for many computer vision applications. Using a set of backbones as well as attention modules, deep-learning-based methods have improved the detection of edges compared with traditional methods such as Sobel and Canny. However, images of complex scenes still represent a challenge for these methods. Also, the edges detected using existing approaches suffer from non-refined results, and the output images contain many erroneous edges. To overcome this, in this paper, using the mechanism of residual learning, a refined edge detection network (RED-Net) is proposed. By maintaining the high resolution of edges during the training process, and conserving the resolution of the edge image through the network stages, we connect the pooling outputs at each stage with the output of the previous layer. Also, after each layer, we use an affine batch normalization layer as an erosion operation for the homogeneous regions in the image. The proposed method is evaluated using the most challenging datasets, including BSDS500, NYUD, and Multicue. The obtained results outperform existing edge detection networks in terms of performance metrics and output image quality.
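
As an illustration of the residual-learning idea described above (not the authors' exact RED-Net architecture), a minimal PyTorch sketch of a stage that adds the previous layer's output back to the processed features and applies an affine batch normalization is shown below; the channel count and input size are assumptions.

```python
import torch
import torch.nn as nn

class ResidualEdgeStage(nn.Module):
    """A single residual stage: conv -> conv -> affine BatchNorm -> skip connection."""
    def __init__(self, channels: int = 64):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.bn = nn.BatchNorm2d(channels, affine=True)  # affine batch normalization
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.conv1(x))
        out = self.bn(self.conv2(out))
        return self.relu(out + x)  # residual connection keeps the stage's resolution

x = torch.randn(1, 64, 128, 128)
print(ResidualEdgeStage()(x).shape)  # torch.Size([1, 64, 128, 128])
```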

Keywords: edge detection, convolutional neural networks, deep learning, scale-representation, backbone

Procedia PDF Downloads 87
1881 Review of Theories and Applications of Genetic Programing in Sediment Yield Modeling

Authors: Adesoji Tunbosun Jaiyeola, Josiah Adeyemo

Abstract:

Sediment yield can be considered the total sediment load that leaves a drainage basin. Knowledge of the quantity of sediment present in a river at a particular time can lead to better flood capacity management in reservoirs and consequently help control over-bank flooding. Furthermore, as sediment accumulates in a reservoir, the reservoir gradually loses its ability to store water for the purposes for which it was built. The development of hydrological models to forecast the quantity of sediment present in a reservoir helps planners and managers of water resources systems to understand the system better in terms of its problems and alternative ways to address them. The application of artificial intelligence models and techniques to such real-life situations has proven to be an effective approach to solving complex problems. This paper presents an extensive review of the literature relevant to the theories and applications of evolutionary algorithms, and most especially genetic programming. Successful applications of genetic programming as a soft computing technique are reviewed in sediment modelling and other branches of knowledge. Some fundamental issues such as benchmarks, generalization ability, bloat and over-fitting, and other open issues relating to the working principles of GP, which need to be addressed by the GP community, are also highlighted. This review aims to give GP theoreticians, researchers, and the general GP community enough research direction and valuable guidance, and to keep all stakeholders abreast of the issues that need attention during the next decade for the advancement of GP.

Keywords: benchmark, bloat, generalization, genetic programming, over-fitting, sediment yield

Procedia PDF Downloads 428
1880 A Mathematical Agent-Based Model to Examine Two Patterns of Language Change

Authors: Gareth Baxter

Abstract:

We use a mathematical model of language change to examine two recently observed patterns of language change: one in which most speakers change gradually, following the mean of the community change, and one in which most individuals use predominantly one variant or another, and change rapidly if they change at all. The model is based on Croft’s Utterance Selection account of language change, which views language change as an evolutionary process, in which different variants (different ‘ways of saying the same thing’) compete for usage in a population of speakers. Language change occurs when a new variant replaces an older one as the convention within a given population. The present model extends a previous simpler model to include effects related to speaker aging and interspeaker variation in behaviour. The two patterns of individual change (one more centralized and the other more polarized) were recently observed in historical language changes, and it was further observed that slower changes were more associated with the centralized pattern, while quicker changes were more polarized. Our model suggests that the two patterns of change can be explained by different balances between the preference of speakers to use one variant over another and the degree of accommodation to (propensity to adapt towards) other speakers. The correlation with the rate of change appears naturally in our model, and results from the fact that both differential weighting of variants and the degree of accommodation affect the time for change to occur, while also determining the patterns of change. This work represents part of an ongoing effort to examine phenomena in language change through the use of mathematical models. This offers another way to evaluate qualitative explanations that cannot be practically tested (or cannot be tested at all) in a real-world, large-scale speech community.

Keywords: agent based modeling, cultural evolution, language change, social behavior modeling, social influence

Procedia PDF Downloads 219
1879 Spatial Object-Oriented Template Matching Algorithm Using Normalized Cross-Correlation Criterion for Tracking Aerial Image Scene

Authors: Jigg Pelayo, Ricardo Villar

Abstract:

Leaning on the development of aerial laser scanning in the Philippine geospatial industry, research on remote sensing and machine vision technology has become a trend. Object detection via template matching is one such application, characterized as fast and real-time. The paper attempts to provide an application of a robust pattern matching algorithm based on the normalized cross-correlation (NCC) criterion function within object-based image analysis (OBIA), utilizing high-resolution aerial imagery and low-density LiDAR data. The height information from laser scanning provides an effective partitioning order, thus improving the hierarchical class feature pattern, which allows unnecessary calculations to be skipped. Since detection is executed on an object-oriented platform, mathematical morphology and multi-level filter algorithms were established to effectively avoid the influence of noise, small distortions, and fluctuating image saturation that affect the rate of recognition of features. Furthermore, the scheme is evaluated to assess its performance in different situations and to inspect the computational complexity of the algorithms. Its effectiveness is demonstrated in areas of Misamis Oriental province, achieving an overall accuracy above 91%. The results also portray the potential and efficiency of the implemented algorithm under different lighting conditions.
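
For reference, a minimal NumPy sketch of the normalized cross-correlation criterion used by such template matching is shown below; the full OBIA pipeline, LiDAR-based partitioning, and morphological filtering are not reproduced.

```python
import numpy as np

def ncc(patch: np.ndarray, template: np.ndarray) -> float:
    """Normalized cross-correlation between an image patch and a template."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p ** 2).sum() * (t ** 2).sum())
    return float((p * t).sum() / denom) if denom > 0 else 0.0

def match_template(image: np.ndarray, template: np.ndarray):
    """Slide the template over the image and return the best-scoring location."""
    th, tw = template.shape
    best_score, best_pos = -1.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            score = ncc(image[r:r + th, c:c + tw], template)
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score
```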

Keywords: algorithm, LiDAR, object recognition, OBIA

Procedia PDF Downloads 228
1878 Classification of Hyperspectral Image Using Mathematical Morphological Operator-Based Distance Metric

Authors: Geetika Barman, B. S. Daya Sagar

Abstract:

In this article, we propose a pixel-wise classification of hyperspectral images using mathematical morphology operator-based distance metrics called "dilation distance" and "erosion distance". This method involves measuring the spatial distance between the spectral features of a hyperspectral image across the bands. The key concept of the proposed approach is that the "dilation distance" is the maximum distance a pixel can be moved without changing its classification, whereas the "erosion distance" is the maximum distance that a pixel can be moved before changing its classification. The spectral signature of the hyperspectral image carries unique class information and shape for each class. This article demonstrates how easily the dilation and erosion distances can measure spatial distance compared to other approaches. This property is used to calculate the spatial distance between hyperspectral image feature vectors across the bands. The dissimilarity matrix is then constructed using both measures extracted from the feature spaces. The measured distance metric is used to distinguish between the spectral features of the various classes and to precisely distinguish between each class. This is illustrated using both toy data and real datasets. Furthermore, we investigated the role of flat vs. non-flat structuring elements in capturing the spatial features of each class in the hyperspectral image. To validate the approach, we compared it to other existing methods and demonstrated empirically that mathematical morphology operator-based distance metric classification provides competitive results and outperforms some of them.
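
A minimal sketch of one common morphological "dilation distance" (the number of dilations by a structuring element needed for one binary set to cover another) is given below; this is an assumed illustration of the general concept, not necessarily the authors' exact formulation, and a flat cross-shaped structuring element is assumed.

```python
import numpy as np
from scipy.ndimage import binary_dilation, generate_binary_structure

def dilation_distance(a: np.ndarray, b: np.ndarray, max_iter: int = 100) -> int:
    """Minimum number of dilations of binary set `a` needed to cover binary set `b`."""
    struct = generate_binary_structure(a.ndim, 1)  # flat, cross-shaped structuring element
    current = a.copy()
    for n in range(max_iter + 1):
        if np.all(current[b]):        # every pixel of b is covered by the dilated a
            return n
        current = binary_dilation(current, structure=struct)
    raise ValueError("sets not covered within max_iter dilations")

a = np.zeros((9, 9), bool); a[4, 4] = True
b = np.zeros((9, 9), bool); b[4, 7] = True
print(dilation_distance(a, b))  # 3
```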

Keywords: dilation distance, erosion distance, hyperspectral image classification, mathematical morphology

Procedia PDF Downloads 69
1877 Development of Automatic Laser Scanning Measurement Instrument

Authors: Chien-Hung Liu, Yu-Fen Chen

Abstract:

This study used a laser triangulation probe and a three-axis mobile platform for surface measurement, programmed the system, and applied it to real-time analysis and statistics of the measured data. This structure was used to design a system integration program: the laser triangulation probe is used for scattering- or reflection-based non-contact measurement, the captured signals are transferred to the computer through RS-232, and RS-485 is used to control the three-axis platform for a wide measurement range. The data captured by the laser probe are formed into a 3D surface. This study constructed an optical measurement application program in the spirit of a visual programming language. First, the signals are transmitted to the computer through RS-232/RS-485, and then the signals are stored and recorded in a graphical interface in real time. The program analyzes the various messages and performs appropriate graphing and data processing to provide users with friendly graphical interfaces and monitoring of the data processing state, and identifies whether the present data are normal in graphical form. The major functions of the measurement system developed in this study are thickness measurement, SPC, surface smoothness analysis, and analytical calculation of trend lines. A result report can be produced and printed promptly. This study measured different heights and surfaces successfully, performed on-line data analysis and processing effectively, and developed a man-machine interface for users to operate.
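
For reference, a minimal sketch of the laser triangulation relation behind such a probe (range from baseline, lens focal length, and spot displacement on the detector) is given below; the geometry and numbers are generic assumptions, not the instrument's actual parameters.

```python
# Idealized laser triangulation: range = f * b / d (similar triangles).
baseline_mm = 30.0      # assumed distance between laser emitter and receiving lens
focal_mm = 16.0         # assumed focal length of the receiving lens
spot_offset_mm = 1.2    # assumed spot displacement measured on the detector

range_mm = focal_mm * baseline_mm / spot_offset_mm
print(f"estimated target distance: {range_mm:.1f} mm")   # 400.0 mm
```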

Keywords: laser probe, non-contact measurement, triangulation measurement principle, statistical process control, LabVIEW

Procedia PDF Downloads 351
1876 Analysis of Accurate Direct-Estimation of the Maximum Power Point and Thermal Characteristics of High Concentration Photovoltaic Modules

Authors: Yan-Wen Wang, Chu-Yang Chou, Jen-Cheng Wang, Min-Sheng Liao, Hsuan-Hsiang Hsu, Cheng-Ying Chou, Chen-Kang Huang, Kun-Chang Kuo, Joe-Air Jiang

Abstract:

Performance-related parameters of high concentration photovoltaic (HCPV) modules (e.g., current and voltage) are required when estimating the maximum power point using numerical and approximation methods. The maximum power point on the characteristic curve of a photovoltaic module varies with temperature and solar radiation. It is also difficult to estimate the output performance and maximum power point (MPP) due to the special characteristics of HCPV modules. Based on p-n junction semiconductor theory, a new and simple method is presented in this study to directly evaluate the MPP of HCPV modules. The MPP of HCPV modules can be determined from an irradiated I-V characteristic curve, because there is a non-linear relationship between the temperature of a solar cell and solar radiation. Numerical simulations and field tests were conducted to examine the characteristics of HCPV modules during maximum output power tracking. The performance of the presented method is evaluated by examining the dependence of the MPP characteristics of HCPV modules on temperature and irradiation intensity. The results show that the presented method allows HCPV modules to achieve their maximum power and perform power tracking under various operating conditions. A 0.1% error is found between the estimated and the real maximum power point.
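
To illustrate what determining the MPP from an I-V characteristic curve involves, a minimal sketch using the generic single-diode model is shown below; the diode parameters are textbook-style assumptions, not the authors' HCPV model or measured data, and the paper's direct-estimation method itself is not reproduced.

```python
import numpy as np

# Assumed single-diode parameters: photocurrent, saturation current, ideality
# factor, thermal voltage, and number of series-connected cells.
I_ph, I_0, n_id, V_t, N_s = 5.0, 1e-9, 1.3, 0.02585, 3

def current(v: np.ndarray) -> np.ndarray:
    """Single-diode model: I = I_ph - I_0 * (exp(V / (n_id * N_s * V_t)) - 1)."""
    return I_ph - I_0 * (np.exp(v / (n_id * N_s * V_t)) - 1.0)

v = np.linspace(0.0, 2.2, 2000)
i = np.clip(current(v), 0.0, None)
p = v * i                                # power along the I-V curve
k = int(np.argmax(p))                    # index of the maximum power point
print(f"MPP: V = {v[k]:.3f} V, I = {i[k]:.3f} A, P = {p[k]:.3f} W")
```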

Keywords: energy performance, high concentrated photovoltaic, maximum power point, p-n junction semiconductor

Procedia PDF Downloads 561
1875 Rapid and Efficient Removal of Lead from Water Using Chitosan/Magnetite Nanoparticles

Authors: Othman M. Hakami, Abdul Jabbar Al-Rajab

Abstract:

The occurrence of heavy metals in water resources has increased in recent years, albeit at low concentrations. Lead (Pb(II)) is among the most important inorganic pollutants in ground and surface water, and its efficient removal from water is of public and scientific concern. In this study, we developed a rapid and efficient method for removing lead from water using chitosan/magnetite nanoparticles. A simple and effective process was used to prepare chitosan/magnetite nanoparticles (CS/Mag NPs) without compromising the saturation magnetization value; the particles were strongly responsive to an external magnetic field, making separation from solution possible in less than 2 minutes using a permanent magnet, while the total Fe in solution remained below the detection limit of ICP-OES (<0.19 mg L⁻¹). The hydrodynamic particle size distribution increased from an average diameter of ~60 nm for Fe₃O₄ NPs to ~75 nm after chitosan coating. The feasibility of the prepared NPs for the adsorption and desorption of Pb(II) from water was evaluated; the CS/Mag NPs showed high removal efficiency for Pb(II) uptake, with 90% of Pb(II) removed during the first 5 minutes and equilibrium reached in less than 10 minutes. The maximum adsorption capacity for Pb(II), obtained at pH 6.0 and room temperature, was as high as 85.5 mg g⁻¹ according to the Langmuir isotherm model. Desorption of Pb adsorbed on CS/Mag NPs was evaluated using deionized water at pH values ranging from 1 to 7, which was an effective eluent and did not destroy the NPs; they could subsequently be reused without any loss of activity in further adsorption tests. Overall, our results showed the high efficiency of chitosan/magnetite nanoparticles in lead removal from water under controlled conditions, and further studies should be conducted under real field conditions.
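
For reference, a minimal sketch of the Langmuir isotherm mentioned above is shown below; q_max = 85.5 mg/g is taken from the abstract, while the Langmuir constant K_L is an illustrative assumption.

```python
# Langmuir isotherm: q = q_max * K_L * C / (1 + K_L * C)
q_max = 85.5      # mg/g, maximum adsorption capacity reported in the abstract
K_L = 0.05        # L/mg, assumed Langmuir constant for illustration only

def langmuir_q(c_eq: float) -> float:
    """Equilibrium adsorbed amount (mg/g) at equilibrium concentration c_eq (mg/L)."""
    return q_max * K_L * c_eq / (1.0 + K_L * c_eq)

for c in (1.0, 10.0, 100.0):
    print(f"C_eq = {c:6.1f} mg/L -> q = {langmuir_q(c):6.2f} mg/g")
```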

Keywords: chitosan, magnetite, water, treatment

Procedia PDF Downloads 386
1874 Investigating Dynamic Transition Process of Issues Using Unstructured Text Analysis

Authors: Myungsu Lim, William Xiu Shun Wong, Yoonjin Hyun, Chen Liu, Seongi Choi, Dasom Kim, Namgyu Kim

Abstract:

The amount of real-time data generated through various mass media has been increasing rapidly. In this study, we performed topic analysis using unstructured text data distributed through news articles. As one of the most prevalent applications of topic analysis, the issue tracking technique investigates the changes in the social issues that are identified through topic analysis. Traditionally, issue tracking is conducted by identifying the main topics of documents covering an entire period at the same time and analyzing the occurrence of each topic by period of occurrence. However, this traditional issue tracking approach has the limitation that it cannot discover the dynamic mutation process of complex social issues. The purpose of this study is to overcome the limitations of the existing issue tracking method. We first derive the core issues of each period and then discover the dynamic mutation process of the various issues. We further analyze the mutation process from the perspective of issue categories, in order to figure out the pattern of issue flow, including the frequency and reliability of the pattern. In other words, this study allows us to understand the components of complex issues by tracking their dynamic history. This methodology can facilitate a clearer understanding of complex social phenomena by providing mutation history and related category information of the phenomena.
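
A minimal sketch of the per-period topic extraction that such issue tracking builds on is shown below; the toy documents, period labels, and the note on matching topics across periods are illustrative assumptions, not the authors' data or exact method.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

periods = {
    "2015-Q1": ["economy slows as exports fall", "central bank debates rate cut"],
    "2015-Q2": ["rate cut boosts housing market", "housing prices rise sharply"],
}

for period, docs in periods.items():
    vec = CountVectorizer(stop_words="english")
    dtm = vec.fit_transform(docs)                       # per-period document-term matrix
    lda = LatentDirichletAllocation(n_components=2, random_state=0).fit(dtm)
    terms = vec.get_feature_names_out()
    for k, topic in enumerate(lda.components_):
        top = [terms[i] for i in topic.argsort()[-3:][::-1]]
        print(period, f"topic {k}:", ", ".join(top))

# Tracking the transition of issues would then match topics of consecutive
# periods, e.g., by cosine similarity of their term distributions.
```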

Keywords: data mining, issue tracking, text mining, topic analysis, topic detection, trend detection

Procedia PDF Downloads 386
1873 Highly Glazed Office Spaces: Simulated Visual Comfort vs Real User Experiences

Authors: Zahra Hamedani, Ebrahim Solgi, Henry Skates, Gillian Isoardi

Abstract:

Daylighting plays a pivotal role in promoting productivity and user satisfaction in office spaces. There is an ongoing trend of designing office buildings with a high proportion of glazing, which increases the risk of visual discomfort. Providing a more realistic lighting analysis can be of high value at the early stages of building design, when necessary changes can be made at very low cost. This holistic approach can be achieved by incorporating subjective evaluation and user behaviour into computer simulation to provide a comprehensive lighting analysis. In this research, a detailed computer simulation model was built using Radiance and Daysim and then validated by measurements and user feedback. The case study building is the school of science at Griffith University, Gold Coast, Queensland, which features highly glazed office spaces. In this paper, the visual comfort predicted by the model is compared with a preliminary survey of the building users to evaluate how user behaviour, such as desk position, orientation selection, and movement caused by daylight changes and other visual variations, can inform perceptions of visual comfort. This work supports preliminary design analysis of visual comfort incorporating the effects of gaze shift patterns and views, with the goal of designing effective layouts for office spaces.

Keywords: lighting simulation, office buildings, user behaviour, validation, visual comfort

Procedia PDF Downloads 191
1872 Health Exposure Assessment of Sulfur Loading Operation

Authors: Ayman M. Arfaj, Jose Lauro M. Llamas, Saleh Y Qahtani

Abstract:

Sulfur loading operation (SLO) poses a risk of exposure to toxic gases such as hydrogen sulfide and sulfur dioxide during the loading of molten sulfur. In this operation, molten sulfur is loaded into a truck tanker in a liquid state, and the tanker temperature must keep the liquid sulfur within a 43-degree range (between 266 and 309 degrees Fahrenheit) for safe loading and unloading to occur. Accordingly, in this study, the potential risk of occupational exposure to airborne toxic gases was assessed at three sulfur loading facilities. The concentrations of toxic airborne substances such as hydrogen sulfide (H2S) and sulfur dioxide (SO2) were monitored during operations at different locations within the sulfur loading facilities. In addition to extensive real-time monitoring, over one hundred and fifty samples were collected and analysed at internationally accredited laboratories. The concentrations of H2S and SO2 were all found to be well below their respective occupational exposure limits. Very low levels of H2S account for the odours observed intermittently during mixing and application operations but do not pose a considerable health risk; these levels are therefore considered a nuisance. The results were comparable to those reported internationally. Aside from observing the usual general safe work practices, such as wearing safety glasses, there are no specific occupational health concerns at the examined sulfur loading facilities.

Keywords: exposure assessment, sulfur loading operation, health risk study, molten sulfur, toxic airborne substances, air contaminants monitoring

Procedia PDF Downloads 62
1871 Time and Cost Prediction Models for Language Classification Over a Large Corpus on Spark

Authors: Jairson Barbosa Rodrigues, Paulo Romero Martins Maciel, Germano Crispim Vasconcelos

Abstract:

This paper presents an investigation of the performance impacts of varying five factors (input data size, node number, cores, memory, and disks) when applying a distributed implementation of Naïve Bayes for text classification of a large corpus on the Spark big data processing framework. Problem: The algorithm's performance depends on multiple factors, and knowing beforehand the effects of each factor becomes especially critical as hardware is priced by time slice in cloud environments. Objectives: To explain the functional relationship between the factors and performance, and to develop linear predictor models for time and cost. Methods: The solid statistical principles of Design of Experiments (DoE), particularly the randomized two-level fractional factorial design with replications. This research involved 48 real clusters with different hardware arrangements. The metrics were analyzed using linear models for screening, ranking, and measurement of each factor's impact. Results: Our findings include prediction models and show some non-intuitive results about the small influence of cores and the neutrality of memory and disks on total execution time, and the non-significant impact of input data scale on costs, although it notably impacts execution time.
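
A minimal sketch of fitting a first-order linear predictor from a two-level fractional factorial design, in the spirit of the DoE analysis described, is shown below; the factor coding follows a standard 2^(5-2) layout, and the response values are synthetic placeholders, not the paper's measurements.

```python
import numpy as np

# Coded (-1 / +1) levels for: data size (A), nodes (B), cores (C), memory (D), disks (E);
# a standard 2^(5-2) fractional factorial with generators D = AB, E = AC.
X = np.array([
    [-1, -1, -1,  1,  1],
    [ 1, -1, -1, -1, -1],
    [-1,  1, -1, -1,  1],
    [ 1,  1, -1,  1, -1],
    [-1, -1,  1,  1, -1],
    [ 1, -1,  1, -1,  1],
    [-1,  1,  1, -1, -1],
    [ 1,  1,  1,  1,  1],
], dtype=float)

rng = np.random.default_rng(0)
# Synthetic execution times (seconds), dominated by data size and node count.
y = 1000 + 300 * X[:, 0] - 200 * X[:, 1] + rng.normal(0, 20, size=len(X))

A = np.column_stack([np.ones(len(X)), X])        # intercept + main effects
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
for name, b in zip(["intercept", "data size", "nodes", "cores", "memory", "disks"], coef):
    print(f"{name:>10s}: {b:8.1f}")
```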

Keywords: big data, design of experiments, distributed machine learning, natural language processing, spark

Procedia PDF Downloads 96
1870 Hand Gesture Recognition for Sign Language: A New Higher Order Fuzzy HMM Approach

Authors: Saad M. Darwish, Magda M. Madbouly, Murad B. Khorsheed

Abstract:

Sign languages (SL) are the most accomplished forms of gestural communication. Therefore, their automatic analysis is a real challenge, closely tied to their lexical and syntactic organization levels. Hidden Markov models (HMMs) have been used prominently and successfully in speech recognition and, more recently, in handwriting recognition. Consequently, they seem ideal for visual recognition of complex, structured hand gestures such as those found in sign language. In this paper, several results concerning static hand gesture recognition using an algorithm based on Type-2 Fuzzy HMMs (T2FHMM) are presented. The features used as observables in the training as well as in the recognition phases are based on Singular Value Decomposition (SVD). SVD extends eigendecomposition to non-square matrices and is used to reduce multi-attribute hand gesture data to feature vectors. SVD optimally exposes the geometric structure of a matrix. In our approach, we replace the basic HMM arithmetic operators with adequate Type-2 fuzzy operators, which permits us to relax the additive constraint of probability measures. Therefore, T2FHMMs are able to handle both random and fuzzy uncertainties that exist universally in sequential data. Experimental results show that T2FHMMs can effectively handle noise and dialect uncertainties in hand signals, besides offering better classification performance than classical HMMs. The recognition rate of the proposed system is 100% for uniform hand images and 86.21% for cluttered hand images.
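
As a minimal illustration of the SVD-based feature extraction described above, the sketch below reduces a placeholder hand-gesture image to a vector of its largest singular values; the image size and the number of retained values are assumptions.

```python
import numpy as np

def svd_features(image: np.ndarray, k: int = 10) -> np.ndarray:
    """Return the k largest singular values of the image as a compact feature vector."""
    s = np.linalg.svd(image.astype(float), compute_uv=False)  # singular values, descending
    return s[:k]

gesture = np.random.rand(64, 64)          # placeholder for a segmented hand image
print(svd_features(gesture).shape)        # (10,)
```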

Keywords: hand gesture recognition, hand detection, type-2 fuzzy logic, hidden Markov Model

Procedia PDF Downloads 446
1869 Metaphorical Devices in Political Cartoons with Reference to Political Confrontation in Pakistan after Panama Leaks

Authors: Ayesha Ashfaq, Muhammad Ajmal Ashfaq

Abstract:

It has been assumed that metaphorical and symbolic contests are waged with metaphors, captions, and signs in political cartoons, which play a significant role in the image construction of political actors, situations, or events in the political arena. This paper is an effort to explore the metaphorical devices in political cartoons related to the political confrontation in Pakistan between the ruling party Pakistan Muslim League Nawaz (PMLN) and the opposition parties, especially after the Panama leaks. For this purpose, political cartoons sketched by five renowned political cartoonists were selected on the basis of their affiliation with the most highly circulated mainstream English newspapers of Pakistan and their professional experience in the genre. The cartoons were analyzed through Barthes's model of semiotics under the umbrella of the first level of agenda-setting theory, 'framing'. It was observed that metaphorical devices are one of the key weapons in a political cartoonist's armory. These devices are used to attack candidates and contribute to image and character building. It was found that all the selected political cartoonists used different forms of metaphor, including situational metaphors and embodying metaphors. Not only the physical stature but also the debates and activities of the political actors were depicted metaphorically in the cartoons, creating a comparison between the cartoons and the real political confrontation. It was found that both forms of metaphor shed light on the cartoonist's perception and the newspaper's policy about political candidates, political parties, and particular events. In addition, zoomorphic metaphors and metaphors of diminishment were also predominantly used to depict the conflict between the two political actors.

Keywords: metaphor, Panama leaks, political cartoons, political communication

Procedia PDF Downloads 277
1868 Development of GIS-Based Geotechnical Guidance Maps for Prediction of Soil Bearing Capacity

Authors: Q. Toufeeq, R. Kauser, U. R. Jamil, N. Sohaib

Abstract:

Foundation design of a structure requires soil investigation to avoid failures due to settlement. Such soil investigation is expensive and time-consuming. The development of new residential societies involves extensive leveling of large sites, accompanied by heavy land filling. Poor land-filling practices at great depths cause differential settlements and consolidation of the underlying soil that sometimes result in the collapse of structures. The extent of filling remains unknown to the individual developer unless a soil investigation is carried out, but soil investigation cannot be performed on every available site due to the costs involved. However, a fair estimate of bearing capacity can be made if such tests have already been done in the surrounding areas. Geotechnical guidance maps can provide a fair assessment of soil properties. Previously, GIS-based approaches have been used to develop maps using extrapolation and interpolation techniques for bearing capacity, underground recharge, soil classification, geological hazards, landslide hazards, socio-economic factors, and soil liquefaction mapping. Standard penetration test (SPT) data from surrounding sites were already available. Google Earth was used for digitization of the collected data, and a few points were used for data calibration and validation. The resultant geographic information system (GIS)-based guidance maps help anticipate bearing capacity for the real estate industry.
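
One of the interpolation techniques listed in the keywords, inverse distance weighted (IDW) interpolation, can be sketched as follows; the borehole coordinates and bearing-capacity values are purely illustrative, not project data.

```python
import numpy as np

def idw(xy_known: np.ndarray, values: np.ndarray, xy_query: np.ndarray, power: float = 2.0) -> float:
    """Interpolate a value at xy_query from known points (N x 2) and values (N,)."""
    d = np.linalg.norm(xy_known - xy_query, axis=1)
    if np.any(d == 0):                        # query coincides with a borehole
        return float(values[np.argmin(d)])
    w = 1.0 / d ** power                      # closer boreholes carry more weight
    return float(np.sum(w * values) / np.sum(w))

boreholes = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
bearing = np.array([150.0, 180.0, 160.0, 200.0])        # kPa, assumed SPT-derived values
print(idw(boreholes, bearing, np.array([40.0, 60.0])))  # interpolated bearing capacity
```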

Keywords: bearing capacity, soil classification, geographical information system, inverse distance weighted, radial basis function

Procedia PDF Downloads 118
1867 Impact of the Electricity Market Prices during the COVID-19 Pandemic on Energy Storage Operation

Authors: Marin Mandić, Elis Sutlović, Tonći Modrić, Luka Stanić

Abstract:

With the restructuring and deregulation of the power system, storage owners, generation companies, or private producers can offer multiple services on various power markets and earn income in different types of markets, such as the day-ahead, real-time, and ancillary services markets. During the COVID-19 pandemic, electricity prices, as well as ancillary services prices, increased significantly. The optimization of energy storage operation was performed using a suitable model for simulating the operation of a pumped storage hydropower plant under market conditions. The objective function maximizes the income earned through energy arbitrage, regulation-up, regulation-down, and spinning reserve services. The optimization technique used for solving the objective function is mixed integer linear programming (MILP). In the numerical examples, the pumped storage hydropower plant operation was optimized considering the realized hourly electricity market prices from Nord Pool for the pre-pandemic (2019) and pandemic (2020 and 2021) years. The impact of the electricity market prices during the COVID-19 pandemic on energy storage operation is shown through an analysis of income, operating hours, reserved capacity, and consumed energy for each service. The results indicate the role of energy storage during significant fluctuations in electricity and services prices.
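
A minimal sketch of the kind of MILP scheduling problem described (energy arbitrage only; the regulation and spinning reserve services are omitted) is given below using PuLP; the hourly prices, efficiency, and capacities are assumed for illustration, not Nord Pool data or the authors' model.

```python
import pulp

prices = [30, 25, 20, 45, 90, 110, 70, 40]      # EUR/MWh, assumed hourly prices
T = range(len(prices))
p_max, e_max, eta = 100.0, 400.0, 0.75          # MW, MWh, assumed round-trip efficiency

prob = pulp.LpProblem("pumped_storage_arbitrage", pulp.LpMaximize)
gen = pulp.LpVariable.dicts("gen", T, 0, p_max)        # generating (selling) power, MW
pump = pulp.LpVariable.dicts("pump", T, 0, p_max)      # pumping (buying) power, MW
mode = pulp.LpVariable.dicts("mode", T, cat="Binary")  # 1 = generate, 0 = pump
soc = pulp.LpVariable.dicts("soc", T, 0, e_max)        # stored energy, MWh

prob += pulp.lpSum(prices[t] * (gen[t] - pump[t]) for t in T)   # arbitrage income
for t in T:
    prob += gen[t] <= p_max * mode[t]            # cannot pump and generate at once
    prob += pump[t] <= p_max * (1 - mode[t])
    prev = e_max / 2 if t == 0 else soc[t - 1]   # start half full
    prob += soc[t] == prev + eta * pump[t] - gen[t]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("income:", pulp.value(prob.objective))
```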

Keywords: electrical market prices, electricity market, energy storage optimization, mixed integer linear programming (MILP) optimization

Procedia PDF Downloads 156
1866 Mining Riding Patterns in Bike-Sharing System Connecting with Public Transportation

Authors: Chong Zhang, Guoming Tang, Bin Ge, Jiuyang Tang

Abstract:

With fast-growing road traffic and increasingly severe traffic congestion, more and more citizens choose public transportation for daily travel. Meanwhile, shared bikes provide a convenient option for the first and last mile to public transit. As of 2016, over one thousand cities around the world had deployed bike-sharing systems. The combination of these two transportation modes has stimulated the development of each and made a significant contribution to the reduction of the carbon footprint. A lot of work has been done on mining riding behaviors in various bike-sharing systems. Most of it, however, treated the bike-sharing system as an isolated system, and thus the results provide little reference for public transit construction and optimization. In this work, we treat bike-sharing and public transit as a whole and investigate customers' bike-and-ride behaviors. Specifically, we develop a spatio-temporal traffic delivery model to study the riding patterns between the two transportation systems and explore the traffic characteristics (e.g., distributions of customer arrivals/departures and traffic peak hours) along the time and space dimensions. During model construction and evaluation, we make use of large open datasets from real-world bike-sharing systems (CitiBike in New York, GoBike in San Francisco, and BIXI in Montreal) along with the corresponding public transit information. The developed two-dimensional traffic model, as well as the mined bike-and-ride behaviors, can provide great help for the deployment of next-generation intelligent transportation systems.
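
As a minimal sketch of the temporal side of such an analysis, the snippet below derives hourly departure and arrival distributions and peak-hour origin stations from a trip table; the file name and column names follow the public CitiBike schema and are assumptions here, not the authors' processing pipeline.

```python
import pandas as pd

trips = pd.read_csv("citibike_trips.csv", parse_dates=["starttime", "stoptime"])

departures = trips.groupby(trips["starttime"].dt.hour).size()   # trips started per hour
arrivals = trips.groupby(trips["stoptime"].dt.hour).size()      # trips ended per hour

peak_hours = departures.sort_values(ascending=False).head(3)
print("departure peak hours:\n", peak_hours)

# Spatial dimension: busiest origin stations during the overall peak hour.
peak = int(departures.idxmax())
busiest = (trips[trips["starttime"].dt.hour == peak]
           .groupby("start station id").size().sort_values(ascending=False).head(5))
print("busiest origin stations at hour", peak, ":\n", busiest)
```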

Keywords: riding pattern mining, bike-sharing system, public transportation, bike-and-ride behavior

Procedia PDF Downloads 755
1865 Predicting of Hydrate Deposition in Loading and Offloading Flowlines of Marine CNG Systems

Authors: Esam I. Jassim

Abstract:

The main aim of this paper is to demonstrate the model's capability of predicting the nucleation process, the growth rate, and the deposition potential of second-phase particles in gas flowlines. The primary objective of the research is to predict the risk hazards involved in the marine transportation of compressed natural gas. However, the proposed model can equally be used for other applications, including the production and transportation of natural gas in any high-pressure flowline. The proposed model employs the following three main components to approach the problem: a computational fluid dynamics (CFD) technique is used to configure the flow field; a nucleation model is developed and incorporated in the simulation to predict the incipient hydrate particle size and growth rate; and the deposition of the gas/particle flow is modelled using the concept of the particle deposition velocity. These components are integrated into a comprehensive model to locate hydrate deposition in natural gas flowlines. The present research is intended to foresee the deposition location of solid particles that could occur in a real application of compressed natural gas loading and offloading. A 120 m long pipeline of different sizes carrying natural gas is considered in the study. The location of particle deposition formed as a result of a restriction is determined based on the procedure mentioned earlier, and the effects of water content and downstream pressure are studied. The critical flow speed that prevents such particles from accumulating over a given pipe length is also addressed.

Keywords: hydrate deposition, compressed natural gas, marine transportation, oceanography

Procedia PDF Downloads 471
1864 Role of Interleukin 6 on Cell Differentiations in Stem Cells Isolated from Human Exfoliated Deciduous Teeth

Authors: Nunthawan Nowwarote, Waleerat Sukarawan, Prasit Pavasant, Thanaphum Osathanon

Abstract:

Interleukin 6 (IL-6) is a multifunctional cytokine regulating various biological responses in several tissues. A recent study shows that IL-6 plays a role in stemness maintenance in stem cells isolated from human exfoliated deciduous teeth (SHEDs). However, the role of IL-6 in cell differentiation in SHEDs remains unknown. The present study investigated the effect of IL-6 on SHED differentiation. Cells were isolated from the dental pulp tissues of human deciduous teeth. Flow cytometry was used to determine mesenchymal stem cell marker expression, and the multipotential differentiation (osteogenic, adipogenic, and neurogenic lineages) was also determined. mRNA expression was determined using real-time quantitative polymerase chain reaction, and the phenotypes were confirmed by chemical and immunofluorescence staining. Results demonstrated that SHEDs expressed CD44, CD73, CD90, and CD105 but not CD45. Further, up-regulation of osteogenic, adipogenic, and neurogenic marker genes was observed upon maintaining cells in osteogenic, adipogenic, and neurogenic induction medium, respectively. The addition of IL-6 induced osteogenic differentiation by up-regulating osteogenic marker genes and also increased in vitro mineralization. Under neurogenic medium supplemented with IL-6, neurogenic markers were up-regulated, whereas the addition of IL-6 attenuated adipogenic differentiation by SHEDs. In conclusion, this evidence implies that IL-6 may participate in the differentiation ability of SHEDs.

Keywords: SHEDs, IL-6, cell differentiations, dental pulp

Procedia PDF Downloads 157
1863 Conceptual Synthesis as a Platform for Psychotherapy Integration: The Case of Transference and Overgeneralization

Authors: Merav Rabinovich

Abstract:

Background: Psychoanalytic and cognitive therapy approach problems from different points of view. In the recent decade, the integration movement has been gaining momentum. However, little has been studied regarding the theoretical interrelationship between these therapy approaches. Method: 33 transference case studies that were published in peer-reviewed academic journals were coded by Luborsky's Core Conflictual Relationship Theme (CCRT) method (components of wish, response from other (real or imaginal), and response of self). CCRT analysis was conducted through the tailor-made method, a valid tool for identifying transference patterns. Rabinovich and Kacen's (2010, 2013) Relationship Between Categories (RBC) method was used to analyze the relationship of these transference patterns with the cognitive and behavioral components appearing in those psychoanalytic case studies. Result: 30 of 33 cases (90%) were found to connect the transference themes with cognitive overgeneralization. In these cases, overgeneralizations were organized around Luborsky's transference themes of response from other and response of self. Additionally, overgeneralization was found to be an antithesis of the wish component, and the tension between them was found to be linked with powerful behavioral and emotional reactions. Conclusion: The findings indicate that the thinking distortions of overgeneralization (cognitive therapy) are actual expressions of transference patterns. These findings point to a theoretical junction, a platform for clinical integration. Awareness of this junction can help therapists promote good psychotherapy outcomes by relying on the accumulated wisdom of the different therapies.

Keywords: transference, overgeneralization, theoretical integration, case-study metasynthesis, CCRT method, RBC method

Procedia PDF Downloads 125
1862 Using Cyclic Structure to Improve Inference on Network Community Structure

Authors: Behnaz Moradijamei, Michael Higgins

Abstract:

Identifying community structure is a critical task in analyzing social media data sets, which are often modeled by networks. Statistical models such as the stochastic block model have proven to explain the structure of communities in real-world network data. In this work, we develop a goodness-of-fit test to examine the existence of community structure by using a distinguishing property of networks: cyclic structures are more prevalent within communities than across them. To better understand how communities are shaped by the cyclic structure of the network rather than just the number of edges, we introduce a novel method for deciding on the existence of communities. We utilize these structures by incorporating the renewal non-backtracking random walk (RNBRW) into the existing goodness-of-fit test. RNBRW is an important variant of the random walk in which the walk is prohibited from returning to a node in exactly two steps, and terminates and restarts once it completes a cycle. We investigate the use of RNBRW to improve the performance of existing goodness-of-fit tests for community detection algorithms based on the spectral properties of the adjacency matrix. Our proposed test of community structure is based on the probability distribution of the eigenvalues of the normalized retracing probability matrix derived from RNBRW. We attempt to make the best use of asymptotic results on such a distribution when there is no community structure, i.e., the asymptotic distribution under the null hypothesis. Moreover, we provide a theoretical foundation for our statistic by obtaining the true mean and a tight lower bound for the variance of the RNBRW edge weights.
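
A rough sketch of one reading of RNBRW edge retracing (an interpretation of the description above, not a verified reproduction of the authors' algorithm) is given below: the walk never immediately backtracks, and the edge that completes a cycle has its weight incremented before the walk restarts.

```python
import random
import networkx as nx

def rnbrw_weights(G: nx.Graph, n_walks: int = 10000, seed: int = 0) -> dict:
    """Accumulate cycle-closing counts per edge from repeated non-backtracking walks."""
    rng = random.Random(seed)
    weights = {frozenset(e): 0 for e in G.edges()}
    nodes = list(G.nodes())
    for _ in range(n_walks):
        prev, curr = None, rng.choice(nodes)
        visited = {curr}
        while True:
            nbrs = [v for v in G.neighbors(curr) if v != prev]  # no immediate backtracking
            if not nbrs:
                break                                           # dead end: restart the walk
            nxt = rng.choice(nbrs)
            if nxt in visited:                                  # cycle completed
                weights[frozenset((curr, nxt))] += 1            # credit the retraced edge
                break
            visited.add(nxt)
            prev, curr = curr, nxt
    return weights

G = nx.karate_club_graph()
w = rnbrw_weights(G)
print(sorted(w.items(), key=lambda kv: -kv[1])[:5])   # most-retraced edges
```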

Keywords: hypothesis testing, RNBRW, network inference, community structure

Procedia PDF Downloads 134
1861 Robust Shrinkage Principal Component Parameter Estimator for Combating Multicollinearity and Outliers’ Problems in a Poisson Regression Model

Authors: Arum Kingsley Chinedu, Ugwuowo Fidelis Ifeanyi, Oranye Henrietta Ebele

Abstract:

The Poisson regression model (PRM) is a nonlinear model that belongs to the exponential family of distributions. The PRM is suitable for studying count variables using appropriate covariates and sometimes experiences the problem of multicollinearity in the explanatory variables and outliers in the response variable. This study aims to address the problems of multicollinearity and outliers jointly in a Poisson regression model. We developed an estimator called the robust modified jackknife PCKL parameter estimator by combining the principal component estimator, the modified jackknife KL estimator, and the transformed M-estimator to address both problems in a PRM. The superiority conditions for this estimator were established, and the properties of the estimator were also derived. The estimator inherits the characteristics of the combined estimators, making it efficient in addressing both problems. It will also be of immediate interest to the research community and advances this line of work in terms of novelty compared to other studies undertaken in this area. The performance of the robust modified jackknife PCKL estimator was compared with other existing estimators using the mean squared error (MSE) as the performance evaluation criterion, through a Monte Carlo simulation study and the use of real-life data. The results of the analytical study show that the estimator outperformed the other estimators compared, having the smallest MSE across all sample sizes, levels of correlation, percentages of outliers, and numbers of explanatory variables.
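
As a point of reference for the principal-component ingredient of the estimator described above, a minimal baseline sketch of principal-component Poisson regression on collinear covariates is shown below; the synthetic data are illustrative only, and the robust, jackknife, and KL components of the proposed estimator are not reproduced.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import PoissonRegressor

rng = np.random.default_rng(1)
n = 200
z = rng.normal(size=(n, 2))
# Two nearly collinear pairs of covariates built from the same latent factors.
X = np.column_stack([z[:, 0], z[:, 0] + 0.05 * rng.normal(size=n),
                     z[:, 1], z[:, 1] + 0.05 * rng.normal(size=n)])
y = rng.poisson(np.exp(0.3 * z[:, 0] - 0.2 * z[:, 1] + 0.5))   # synthetic counts

pca = PCA(n_components=2)                 # keep the informative directions, drop the redundant ones
scores = pca.fit_transform(X)
model = PoissonRegressor(alpha=0.0).fit(scores, y)
print("PC coefficients:", model.coef_, "intercept:", model.intercept_)
```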

Keywords: jackknife modified KL, outliers, multicollinearity, principal component, transformed M-estimator

Procedia PDF Downloads 44
1860 Non-Destructive Testing of Selective Laser Melting Products

Authors: Luca Collini, Michele Antolotti, Diego Schiavi

Abstract:

At present, complex geometries, shrinking production times, rapidly increasing demand, and high quality-standard requirements make non-destructive (ND) testing of additively manufactured components an indispensable means of control. On the other hand, a technology gap and the lack of standards regulating the methods and acceptance criteria make the NDT of these components a stimulating field still to be fully explored. To date, penetrant testing, acoustic wave, tomography, radiography, and semi-automated ultrasound methods have been tried on metal powder-based products. External defects, distortion, surface porosity, roughness, texture, internal porosity, and inclusions are the typical defects in the focus of testing. Detection of density and layer compactness has also been attempted on stainless steels by the ultrasonic scattering method. In this work, the authors present and discuss radiographic and ultrasonic ND testing of additively manufactured Ti₆Al₄V and Inconel parts obtained by selective laser melting (SLM) technology. In order to test the possibilities offered by the radiographic method, both X-rays and γ-rays are applied to a set of specifically designed specimens produced by SLM. The specimens contain a family of defects representing the most commonly found types, such as cracks and lack of fusion. The tests are also applied to real parts of various complexity and thickness. A set of practical indications and acceptance criteria is finally drawn.

Keywords: non-destructive testing, selective laser melting, radiography, UT method

Procedia PDF Downloads 129
1859 Effects of Alternative Opportunities and Compensation on Turnover Intention of Singapore PMET

Authors: Han Guan Chew, Keith Yong Ngee Ng, Shan-Wei Fan

Abstract:

In Singapore, talent retention is one of the most persistent and real issues companies have to grapple with due to the tight labour market. Being resource-scarce, Singapore depends solely on its talented pool of high-quality human resources to sustain its competitive advantage in the global economy. But the complex and multifaceted nature of the turnover phenomenon makes the prescription of effective talent retention strategies in such a competitive labour market very challenging; especially when it comes to monetary incentives, companies struggle to answer the question of "How much is enough?" By examining the interactive effects of perceived alternative employment opportunities, annual salary, and satisfaction with compensation on the turnover intention of 102 Singapore Professionals, Managers, Executives and Technicians (PMET) through correlation analyses and multiple regressions, important insights into the psyche of the Singapore talent pool can be drawn. It is found that annual salary influences turnover intention indirectly through mediation and moderation effects on PMETs' satisfaction with compensation. PMETs are also found to be heavily swayed by better external opportunities. This implies that talent retention strategies should not adopt a purely monetary-based blanket approach but rather a comprehensive and holistic one that considers the dynamics of prevailing market conditions.

Keywords: employee turnover, high performers, knowledge workers, perceived alternative employment opportunities salary, satisfaction on compensation, Singapore PMET, talent retention

Procedia PDF Downloads 266
1858 An Orphan Software Engineering Course: Supportive Ways toward a True Software Engineer

Authors: Haya Sammana

Abstract:

A well-defined curriculum must be adopted to meet the increasing complexity and diversity of software applications. In reality, some IT majors, such as computer science and computer engineering, receive their software engineering education in a single course, which is considered a big challenge for instructors and universities. The course also requires students to gain as much as possible of the practical experience that simulates real work in software companies. Furthermore, we have noticed that there is no consensus on how, when, and what to teach in that introductory course to provide the practical experience required by software companies. Because all software engineering disciplines will not fit into just one course, the course needs reasonable choices in selecting its topics. This raises an important and essential question: can this course form a true software engineer who meets the needs of industry? The question poses a big challenge in selecting the appropriate topics, so answering it is very important for future undergraduate students. While teaching this course, feedback from undergraduate students and the keynotes of the annual meeting of an industrial advisory committee provide a probable answer to the proposed question: it is impossible to build a true software engineer who possesses all the essential elements of software engineering education, such as teamwork, communication skills, project management skills, and contemporary industrial practice, from one course, and it is impossible to have one course covering all software engineering topics. Besides the teaching approach used, the author proposes three implemented supportive ways aimed at mitigating the expected risks and increasing the opportunity to build a true software engineer.

Keywords: software engineering course, software engineering education, software experience, supportive approach

Procedia PDF Downloads 340
1857 Power and Representation in Female Autobiographies

Authors: Shafag Dadashova

Abstract:

The study discusses the relativity of the perception and interpretation of power and its interdependence with an individual's level of conformity. It describes autobiography as a form of epiphany. It is suggested that life-writing helps the author analyze the past and define the borders of his or her personal power and sources of empowerment. As all life-writings deal with behaviors, values, attitudes, relationships, and emotions, their investigation requires qualitative methods to understand social norms, gender roles, religion, and their role in the empowerment and disempowerment of the author. The study consists of two parts. The first part is theoretical and interrogates the notion of personal power and how writing one's own life can lead to conscious empowerment. The second part presents two autobiographies by female authors from two different Muslim cultures who negotiate between the larger nationalist agenda and their own personal concerns. These autobiographies (Tehmina Durrani, Pakistani author of 'My Feudal Lord'; Banine, Azerbaijani writer of 'Caucasian Days' and 'Parisian Days') mark the end of their authors' long silence, their revolt against conventional norms, and their decision to claim the agency to confess and protest. These autobiographies are the authors' attempts to break the established matrix of perceptions and imposed norms and to gain the power to build a real picture of their identity. The study concludes that despite the very similar motives of female authors to become empowered through self-analysis, different cultures and times create specific subjectivities associated with particular historical events and geographical locations.

Keywords: conformity level, empowerment, female autobiography, self-identity

Procedia PDF Downloads 243