Search results for: normalized subband adaptive filter (NSAF)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 2209

1279 The Evaluation of the Performance of Different Filtering Approaches in Tracking Problem and the Effect of Noise Variance

Authors: Mohammad Javad Mollakazemi, Farhad Asadi, Aref Ghafouri

Abstract:

The performance of different filtering approaches depends on the modeling of the dynamical system and on the algorithm structure. For modeling and smoothing the data, the evaluation of the posterior distribution in each filtering approach should be chosen carefully. In this paper, different filtering approaches, namely the Kalman filter, EKF, UKF, and EKS, together with the RTS smoother, are simulated on several trajectory-tracking paths, and the accuracy and limitations of these approaches are explained. The probability of the model under the different filters is then compared, and finally the effect of the noise variance on estimation is described with simulation results.
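As a concrete reference point for the comparison described above, here is a minimal linear Kalman filter on a toy 1D constant-velocity tracking model. All model matrices and noise levels are illustrative choices, not the paper's simulation setup:

```python
import numpy as np

def kalman_filter(zs, F, H, Q, R, x0, P0):
    """Linear Kalman filter: predict/update recursion over a measurement sequence."""
    x, P = x0, P0
    estimates = []
    for z in zs:
        # Predict step: propagate state and covariance through the model
        x = F @ x
        P = F @ P @ F.T + Q
        # Update step: correct with the new measurement
        S = H @ P @ H.T + R                 # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ (z - H @ x)
        P = (np.eye(len(x)) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)

# Toy 1D constant-velocity tracking example (illustrative values)
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # state = (position, velocity)
H = np.array([[1.0, 0.0]])              # we observe position only
Q = 1e-4 * np.eye(2)                    # process-noise covariance
R = np.array([[0.25]])                  # measurement-noise variance
rng = np.random.default_rng(0)
true_pos = np.arange(50) * dt * 1.5     # constant velocity of 1.5
zs = [np.array([p + rng.normal(0, 0.5)]) for p in true_pos]
est = kalman_filter(zs, F, H, Q, R, x0=np.zeros(2), P0=np.eye(2))
```

Raising R relative to Q makes the filter trust the model more than the measurements, which is one way to see the noise-variance effect the abstract studies.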

Keywords: Gaussian approximation, Kalman smoother, parameter estimation, noise variance

Procedia PDF Downloads 440
1278 The Formation of the Diminutive in Colloquial Jordanian Arabic

Authors: Yousef Barahmeh

Abstract:

This paper is a linguistic and pragmatic analysis of the use of the diminutive in Colloquial Jordanian Arabic (CJA). It demonstrates a peculiar form of the diminutive in CJA inflected by means of the feminine plural suffix -aat. The analysis shows that the pragmatic function(s) of the diminutive in CJA refers primarily to ‘littleness’, while the morphological inflection conveys the message of plethora. Examples of this linguistic phenomenon are intelligible and often include a large number of words that are culture-specific to the rural dialect of northern Jordan. In both cases, the diminutive in CJA is an adaptive strategy relative to its pragmatic and social contexts.

Keywords: Colloquial Jordanian Arabic, diminutive, morphology, pragmatics

Procedia PDF Downloads 267
1277 Greywater Water Reuse in South Africa

Authors: Onyeka Nkwonta, Christopher Iheukwumere

Abstract:

It is wasteful to irrigate with great quantities of drinking water when plants thrive on used water containing small bits of compost. Unlike many ecological stopgap measures, greywater reuse is part of a fundamental solution to many ecological problems and will probably remain essentially unchanged in the distant future. Water is abused and wasted by both the wealthy and the poor, and education about water conservation is also needed. This study outlines the sources of greywater in the home and describes a process for creating awareness of the importance of reusing greywater, in order to achieve the seventh Millennium Development Goal by 2015: ensuring environmental sustainability.

Keywords: trickling filter, education, grey water, environmental sustainability

Procedia PDF Downloads 373
1276 Rare Earth Element (REE) Geochemistry of Tepeköy Sandstones (Central Anatolia, Turkey)

Authors: Mehmet Yavuz Hüseyinca, Şuayip Küpeli

Abstract:

Sandstones from the Upper Eocene - Oligocene Tepeköy formation (a member of the Mezgit Group) exposed on the eastern edge of Tuz Gölü (Salt Lake) were analyzed for their rare earth element (REE) contents. Average concentrations of ΣREE, ΣLREE (total light rare earth elements) and ΣHREE (total heavy rare earth elements) were determined as 31.37, 26.47 and 4.55 ppm, respectively. These values are lower than UCC (upper continental crust), which indicates a grain-size and/or CaO dilution effect. The chondrite-normalized REE pattern is characterized by the average ratios (La/Yb)cn = 6.20, (La/Sm)cn = 4.06, (Gd/Lu)cn = 1.10, Eu/Eu* = 0.99 and Ce/Ce* = 0.94. The lower values of ΣLREE/ΣHREE (average 5.97) and (La/Yb)cn suggest lower fractionation of the overall REE. Moreover, the (La/Sm)cn and (Gd/Lu)cn ratios define a less inclined LREE pattern and an almost flat HREE pattern when compared with UCC. The near-absence of a Ce anomaly (Ce/Ce*) indicates that the REE originated from terrigenous material. The depleted LREE and the lack of an Eu anomaly (Eu/Eu*) also suggest an undifferentiated mafic provenance for the sandstones.
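The Eu/Eu* and Ce/Ce* anomalies quoted above are conventionally computed from chondrite-normalized concentrations, with the expected value (Eu*, Ce*) interpolated as the geometric mean of the neighbouring elements. A sketch under that convention follows; the chondrite normalizing values are approximate and the sample composition is purely illustrative, not the paper's data:

```python
import math

# Approximate CI-chondrite normalizing values (ppm); treat these as
# illustrative - actual studies cite a specific chondrite compilation.
CHONDRITE = {"La": 0.237, "Ce": 0.613, "Pr": 0.093,
             "Sm": 0.148, "Eu": 0.056, "Gd": 0.199}

def cn(sample, el):
    """Chondrite-normalized concentration."""
    return sample[el] / CHONDRITE[el]

def eu_anomaly(sample):
    # Eu/Eu*, with Eu* as the geometric mean of neighbouring Sm and Gd
    return cn(sample, "Eu") / math.sqrt(cn(sample, "Sm") * cn(sample, "Gd"))

def ce_anomaly(sample):
    # Ce/Ce*, with Ce* interpolated from La and Pr
    return cn(sample, "Ce") / math.sqrt(cn(sample, "La") * cn(sample, "Pr"))

# Hypothetical sandstone composition (ppm), for illustration only
sample = {"La": 8.0, "Ce": 16.0, "Pr": 2.0, "Sm": 1.3, "Eu": 0.33, "Gd": 1.2}
eu = eu_anomaly(sample)
ce = ce_anomaly(sample)
```

Values near 1 for both ratios indicate no anomaly, which is how the abstract reads its Eu/Eu* = 0.99 and Ce/Ce* = 0.94.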

Keywords: central Anatolia, provenance, rare earth elements, REE, Tepeköy sandstone

Procedia PDF Downloads 476
1275 Efficient Filtering of Graph Based Data Using Graph Partitioning

Authors: Nileshkumar Vaishnav, Aditya Tatu

Abstract:

An algebraic framework for processing graph signals axiomatically designates the graph adjacency matrix as the shift operator. In this setup, we often encounter a problem wherein we know the filtered output and the filter coefficients and need to find the input graph signal. Solving this problem directly requires O(N³) operations, where N is the number of vertices in the graph. In this paper, we adapt the spectral graph partitioning method and use it to reduce the computational cost of the filtering problem. We use the example of denoising temperature data to illustrate the efficacy of the approach.
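The inverse filtering problem stated above amounts to solving h(A) x = y for the input signal x. A minimal dense-algebra sketch on a toy 4-node graph follows (the direct O(N³) solve only; the partitioning speed-up itself is not shown):

```python
import numpy as np

# Small undirected 4-cycle: the adjacency matrix is the shift operator
A = np.array([[0, 1, 0, 1],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)

h = [1.0, 0.5, 0.25]   # filter taps: H = h0*I + h1*A + h2*A^2
H = sum(c * np.linalg.matrix_power(A, k) for k, c in enumerate(h))

x_true = np.array([1.0, -2.0, 3.0, 0.5])   # unknown input graph signal
y = H @ x_true                              # observed filtered output

# Direct inverse filtering: solving H x = y costs O(N^3) in general;
# the paper's partitioning approach works block-wise to reduce this cost.
x_rec = np.linalg.solve(H, y)
```

For this filter the eigenvalues of H are all nonzero, so the system is invertible and the input is recovered exactly.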

Keywords: graph signal processing, graph partitioning, inverse filtering on graphs, algebraic signal processing

Procedia PDF Downloads 313
1274 Adaptive Process Monitoring for Time-Varying Situations Using Statistical Learning Algorithms

Authors: Seulki Lee, Seoung Bum Kim

Abstract:

Statistical process control (SPC) is a practical and effective method for quality control. The most important and widely used technique in SPC is the control chart, whose main goal is to detect any assignable changes that affect the quality output. Most conventional control charts, such as Hotelling’s T2 chart, are based on the assumption that the quality characteristics follow a multivariate normal distribution. However, in modern, complicated manufacturing systems, control chart techniques that can efficiently handle nonnormal processes are required. To overcome the shortcomings of conventional control charts for nonnormal processes, several methods have been proposed that combine statistical learning algorithms with multivariate control charts. Statistical learning-based control charts, such as support vector data description (SVDD)-based charts and k-nearest-neighbor-based charts, have proven their improved performance in nonnormal situations compared to the T2 chart. Besides nonnormality, time-varying operations are also quite common in real manufacturing settings because of factors such as product and set-point changes, seasonal variations, catalyst degradation, and sensor drift. However, traditional control charts cannot accommodate future changes in process conditions because they are formulated from data recorded in the early stage of the process. In the present paper, we propose an SVDD-based control chart that is capable of adaptively monitoring time-varying and nonnormal processes. We reformulate the SVDD algorithm into a time-adaptive SVDD algorithm by adding a weighting factor that reflects time-varying situations, and we define an updating region for an efficient model-updating structure. The proposed control chart simultaneously allows efficient model updates and timely detection of out-of-control signals. Its effectiveness and applicability were demonstrated through experiments with simulated data and real data from a metal frame process in mobile device manufacturing.
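A faithful SVDD chart requires a kernel quadratic-programming solver; the sketch below only illustrates the two time-adaptive ingredients described above, a forgetting-factor weighting and an updating region, on a much simpler distance statistic. All class names, parameters, and thresholds are hypothetical, not the authors' formulation:

```python
import numpy as np

class AdaptiveRadiusChart:
    """Toy time-adaptive chart: an exponentially weighted centre and spread.
    A real SVDD chart fits a kernel hypersphere; this sketch only shows
    the forgetting-factor idea for time-varying processes."""

    def __init__(self, dim, lam=0.95, k=3.0):
        self.lam = lam            # forgetting factor: recent samples dominate
        self.k = k                # control-limit multiplier (hypothetical)
        self.mu = np.zeros(dim)   # adaptive centre
        self.var = np.ones(dim)   # adaptive per-dimension spread

    def update(self, x):
        """Return True if x signals out-of-control, then adapt the model."""
        dist = np.sqrt(np.sum((x - self.mu) ** 2 / self.var))
        out_of_control = dist > self.k * np.sqrt(len(self.mu))
        if not out_of_control:    # update only inside the 'updating region'
            self.mu = self.lam * self.mu + (1 - self.lam) * x
            self.var = self.lam * self.var + (1 - self.lam) * (x - self.mu) ** 2
        return out_of_control

rng = np.random.default_rng(1)
chart = AdaptiveRadiusChart(dim=2)
drift = np.linspace(0, 1, 200)                    # slow mean drift: in control
alarms = [chart.update(rng.normal(d, 0.3, 2)) for d in drift]
shift_alarm = chart.update(np.array([8.0, 8.0]))  # abrupt large shift
```

Because the centre is exponentially weighted, the slow drift is absorbed without alarms, while the abrupt shift falls far outside the adapted limits and is flagged.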

Keywords: multivariate control chart, nonparametric method, support vector data description, time-varying process

Procedia PDF Downloads 300
1273 Valuation on MEMS Pressure Sensors and Device Applications

Authors: Nurul Amziah Md Yunus, Izhal Abdul Halin, Nasri Sulaiman, Noor Faezah Ismail, Ong Kai Sheng

Abstract:

This paper introduces and reviews the MEMS pressure sensor, including the types of pressure sensor and their theory of operation. The latest MEMS technology and the fabrication processes for pressure sensors are explored and discussed. Various device applications of the pressure sensor, such as tire pressure monitoring systems and diesel particulate filters, are also explained. Because of the continuing miniaturization of devices, pressure sensors based on nanotechnology (NEMS) are reviewed as well. The NEMS pressure sensor is expected to offer better performance at lower cost and has gained excellent popularity in many applications.

Keywords: pressure sensor, diaphragm, MEMS, automotive application, biomedical application, NEMS

Procedia PDF Downloads 672
1272 Challenging Conventions: Rethinking Literature Review Beyond Citations

Authors: Hassan Younis

Abstract:

Purpose: The objective of this study is to review influential papers in the sustainability and supply chain studies domain, leveraging insights from this review to develop a structured framework for academics and researchers. This framework aims to assist scholars in identifying the most impactful publications for their scholarly pursuits. The study then applies and trials the developed framework on selected scholarly articles within the sustainability and supply chain studies domain to evaluate its efficacy, practicality, and reliability. Design/Methodology/Approach: Utilizing the "Publish or Perish" tool, a search was conducted to locate papers incorporating "sustainability" and "supply chain" in their titles. After rigorous filtering steps, a panel of university professors identified five crucial criteria for evaluating research robustness: average yearly citation count (25%), scholarly contribution (25%), alignment of findings with objectives (15%), methodological rigor (20%), and journal impact factor (15%). These five evaluation criteria are abbreviated as the "ACMAJ" framework. Each paper then received a tiered score (1-3) for each criterion; the scores were normalized within their categories and summed using weighted averages to calculate a Final Normalized Score (FNS). This systematic approach allows objective comparison and ranking of research based on its impact, novelty, rigor, and publication venue. Findings: The study's findings highlight the lack of structured frameworks for assessing influential sustainability research in supply chain management, which often results in a dependence on citation counts. In response, a complete model incorporating five essential criteria is proposed. In a methodical trial on selected academic articles in the field of sustainability and supply chain studies, the model demonstrated its effectiveness as a tool for identifying and selecting influential research papers that warrant additional attention. This work fills a significant deficiency in existing techniques by providing a more comprehensive approach to identifying and ranking influential papers in the field. Practical Implications: The developed framework helps scholars identify the most influential sustainability and supply chain publications. Its validation serves the academic community by offering a credible tool that helps researchers, students, and practitioners find and choose influential papers. The approach supports literature reviews and study selection in the field, and the analysis of major trends and topics deepens our grasp of this critical study area's changing terrain. Originality/Value: The framework stands as a unique contribution to academia, offering scholars an important new tool to identify and validate influential publications. Its distinctive capacity to efficiently guide scholars, learners, and professionals in selecting noteworthy publications, coupled with the examination of key patterns and themes, adds depth to our understanding of the evolving landscape in this critical field of study.
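The weighted aggregation described above can be sketched as follows. The weights come from the abstract; the tier scores and the final normalization to [0, 1] are illustrative assumptions, since the abstract does not spell out the normalization step:

```python
# Sketch of the ACMAJ weighted scoring; tier scores (1-3) per criterion are
# illustrative, and the [0, 1] normalization is an assumption.
WEIGHTS = {
    "avg_yearly_citations": 0.25,
    "scholarly_contribution": 0.25,
    "findings_alignment": 0.15,
    "methodological_rigor": 0.20,
    "journal_impact_factor": 0.15,
}

def final_normalized_score(tiers):
    """Weighted average of tier scores (1-3), mapped onto [0, 1]."""
    raw = sum(WEIGHTS[c] * t for c, t in tiers.items())
    return (raw - 1.0) / 2.0   # map the [1, 3] range onto [0, 1]

# Hypothetical paper scored 1-3 on each of the five criteria
paper = {"avg_yearly_citations": 3, "scholarly_contribution": 2,
         "findings_alignment": 3, "methodological_rigor": 2,
         "journal_impact_factor": 1}
score = final_normalized_score(paper)
```

Because the weights sum to 1, the raw weighted score stays in the tier range [1, 3] before normalization, which makes scores comparable across papers.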

Keywords: supply chain management, sustainability, framework, model

Procedia PDF Downloads 52
1271 Myoelectric Analysis for the Assessment of Muscle Functions and Fatigue Monitoring of Upper Extremity for Stroke Patients Performing Robot-Assisted Bilateral Training

Authors: Hsiao-Lung Chan, Ching-Yi Wu, Yan-Zou Lin, Yo Chiao, Ya-Ju Chang

Abstract:

Robot-assisted bilateral arm training has proven useful for improving motor control in stroke patients while saving human resources. In clinics, the efficacy of this treatment is mostly assessed by comparing functional scales before and after rehabilitation. However, most of these assessments are based on behavioral evaluation; the underlying improvement in muscle activation and coordination is unknown. Moreover, stroke patients are more prone to muscle fatigue under robot-assisted rehabilitation because of muscle weakness, a safety issue that remains little studied. In this study, EMG analysis was applied during training. Our preliminary results showed that the co-contraction index and the co-contraction area index can delineate the improved coordination of the biceps brachii versus the flexor carpi radialis. Moreover, the smoothed, normalized cycle-by-cycle median frequency of the left and right extensor carpi radialis decreased as the training progressed, implying the occurrence of muscle fatigue.
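The median frequency used above as a fatigue indicator is the frequency that splits the EMG power spectrum into two equal-power halves, a standard definition; a minimal sketch on a synthetic signal (not the study's data):

```python
import numpy as np

def median_frequency(signal, fs):
    """Median frequency of the power spectrum: the frequency that divides
    total spectral power into two equal halves (a standard EMG fatigue index)."""
    psd = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    cum = np.cumsum(psd)
    idx = np.searchsorted(cum, cum[-1] / 2.0)  # first bin reaching half power
    return freqs[idx]

# Synthetic check: a pure 80 Hz tone should have a median frequency of ~80 Hz
fs = 1000.0
t = np.arange(0, 1.0, 1.0 / fs)
mdf = median_frequency(np.sin(2 * np.pi * 80 * t), fs)
```

A downward shift of this value across movement cycles is what the abstract interprets as developing fatigue.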

Keywords: robot-assisted rehabilitation, strokes, muscle coordination, muscle fatigue

Procedia PDF Downloads 475
1270 Solution of the Nonrelativistic Radial Wave Equation of Hydrogen Atom Using the Green's Function Approach

Authors: F. U. Rahman, R. Q. Zhang

Abstract:

This work aims to develop a systematic numerical technique that can be easily extended to many-body problems. The Lippmann-Schwinger equation (the integral form of the Schrodinger wave equation) is solved for the nonrelativistic radial wave of the hydrogen atom using an iterative integration scheme. Because the unknown wave function appears on both sides of the Lippmann-Schwinger equation, an approximate wave function is used to start the iteration. The Green’s function is obtained by the method of Laplace transform for the radial wave equation with the potential term excluded. Using the Lippmann-Schwinger equation, the product of the approximate wave function, the Green’s function, and the potential term is integrated iteratively. Finally, the wave function is normalized and plotted against the standard radial wave for comparison. The resulting wave function converges to the standard wave function as the number of iterations increases. Results were verified for the first fifteen states of the hydrogen atom. The method is efficient and consistent and can be applied to complex systems in the future.
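The iterative integration scheme described above is a fixed-point (Picard) iteration on an integral equation. The toy 1D sketch below uses an illustrative smooth kernel rather than the hydrogen-atom Green's function, but it shows the same structure: start from the potential-free solution and iterate until convergence, then compare against a direct solve of the discretized system:

```python
import numpy as np

# Toy 1D analogue of the Lippmann-Schwinger iteration:
# solve f(x) = g(x) + lam * Int K(x,t) f(t) dt by repeated substitution,
# starting from the 'free' solution f0 = g. Kernel, g, and lam are
# illustrative choices, not the hydrogen-atom Green's function.
n = 200
x = np.linspace(0.0, 1.0, n)
w = (x[1] - x[0]) * np.ones(n)              # trapezoid quadrature weights
w[0] = w[-1] = 0.5 * (x[1] - x[0])
K = np.exp(-np.abs(x[:, None] - x[None, :]))  # smooth, contracting kernel
g = np.sin(np.pi * x)
lam = 0.3                                   # small enough for convergence

f = g.copy()
for _ in range(60):                         # iterate toward the fixed point
    f = g + lam * (K * w) @ f

# Reference: solve the discretized linear system directly
f_direct = np.linalg.solve(np.eye(n) - lam * (K * w), g)
```

Since lam times the operator norm of the discretized kernel is well below 1, the iteration contracts geometrically and converges to the direct solution, mirroring the convergence with iteration count reported in the abstract.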

Keywords: Green’s function, hydrogen atom, Lippmann Schwinger equation, radial wave

Procedia PDF Downloads 395
1269 Regenerating Historic Buildings: Policy Gaps

Authors: Joseph Falzon, Margaret Nelson

Abstract:

Background: Policy makers at European Union (EU) and national levels address the re-use of historic buildings, calling for sustainable practices and approaches. The implementation stages of policy are crucial for achieving EU and national strategic objectives for historic building sustainability. Governance remains one of the key objectives to ensure resource sustainability. Objective: The aim of the research was to critically examine policies for the regeneration and adaptive re-use of historic buildings at the EU and national levels, and to analyse gaps between EU and national legislation and policies, taking Malta as a case study. The impact of policies on the regeneration and re-use of historic buildings was also studied. Research Design: Six semi-structured interviews with stakeholders, including architects, investors and community representatives, informed the research. All interviews were audio recorded and transcribed in English. Thematic analysis of the interviews was conducted using Atlas.ti. All phases of the study were governed by research ethics. Findings: Findings were grouped into three main themes: resources, experiences and governance. Other key issues included the identification of gaps in policies, key lessons and the quality of regeneration. The abandonment of heritage buildings was discussed; its main causes were attributed to governance-related issues, both from the policy-making perspective and from the attitudes of certain officials representing the authorities. The role of the authorities, co-ordination between government entities, fairness in decision making, enforcement and management drew strong criticism from stakeholders, along with the time lost to lengthy procedures within the authorities. Stakeholders within the same groups presented an array of different perspectives on the policies. Rather than the policies themselves, it is the interpretation of policy that presents certain gaps, and interpretations depend highly on the stakeholders putting forward certain arguments. All stakeholders acknowledged the value of heritage in regeneration. Conclusion: Active stakeholder involvement is essential in policy framework development. Research-informed policies and the streamlining of policies are necessary. National authorities need to shift from a segmented approach to a holistic approach.

Keywords: adaptive re-use, historic buildings, policy, sustainable

Procedia PDF Downloads 395
1268 A Resilience-Based Approach for Assessing Social Vulnerability in New Zealand's Coastal Areas

Authors: Javad Jozaei, Rob G. Bell, Paula Blackett, Scott A. Stephens

Abstract:

In the last few decades, Social Vulnerability Assessment (SVA) has been a favoured means of evaluating the susceptibility of social systems to drivers of change, including climate change and natural disasters. However, applying SVA to inform responsive and practical strategies for dealing with uncertain climate change impacts has always been challenging, and agencies typically resort to conventional risk/vulnerability assessment. The challenges include the complex nature of social vulnerability concepts, which influences their applicability; complications in identifying and measuring the determinants of social vulnerability; transitory social dynamics in a changing environment; and the unpredictability of the scenarios of change that affect the vulnerability regime (including contention over when these impacts might emerge). Research suggests that conventional quantitative approaches to SVA cannot appropriately address these problems; hence, the outcomes can be misleading and unfit for addressing the ongoing, uncertain rise in risk. The second phase of New Zealand’s Resilience to Nature’s Challenges (RNC2) is developing a forward-looking vulnerability assessment framework and methodology that informs decision-making and policy development for changing coastal systems and accounts for the complex dynamics of New Zealand’s coastal systems (socio-economic, environmental and cultural). RNC2 also requires the new methodology to consider plausible drivers of incremental and unknowable changes, to create mechanisms that enhance social and community resilience, and to fit New Zealand’s multi-layer governance system. This paper analyses the conventional approaches and methodologies in SVA and offers recommendations for more responsive approaches that inform adaptive decision-making and policy development in practice. The research adopts a qualitative design to examine different aspects of conventional SVA processes; the methods include a systematic review of the literature and case studies. We found that the conventional quantitative, reductionist and deterministic mindset in SVA processes, with its focus on the impacts of rapid stressors (e.g. tsunamis and floods), shows deficiencies in accounting for the complex dynamics of social-ecological systems (SES) and the uncertain, long-term impacts of incremental drivers. The paper focuses on the links between resilience and vulnerability and suggests how resilience theory and its underpinning notions, such as the adaptive cycle, panarchy, and system transformability, could address these issues and thereby influence the perception of the vulnerability regime and its assessment processes. In this regard, it is argued that a paradigm shift from ‘specific resilience’, which focuses on the adaptive capacity associated with the notion of ‘bouncing back’, to ‘general resilience’, which accounts for system transformability, regime shift, and ‘bouncing forward’, can deliver more effective strategies in an era characterised by ongoing change and deep uncertainty.

Keywords: complexity, social vulnerability, resilience, transformation, uncertain risks

Procedia PDF Downloads 104
1267 Inhibitory Action of Fatty Acid Salts against Cladosporium cladosporioides and Dermatophagoides farinae

Authors: Yui Okuno, Mariko Era, Takayoshi Kawahara, Takahide Kanyama, Hiroshi Morita

Abstract:

Introduction: Fungi and mites are known allergens that cause allergic diseases such as bronchial asthma and allergic rhinitis. Cladosporium cladosporioides is one of the fungi most often detected in the indoor environment and causes pollution and deterioration. Dermatophagoides farinae is a major indoor mite allergen. Therefore, antifungal agents that combine high safety with a strong antifungal effect are required. Fatty acid salts are known to have antibacterial activities. This report describes the effects of fatty acid salts against Cladosporium cladosporioides NBRC 30314 and Dermatophagoides farinae. Methods: Potassium salts of 9 fatty acids (C4:0, C6:0, C8:0, C10:0, C12:0, C14:0, C18:1, C18:2, C18:3) were prepared by mixing each fatty acid with the appropriate amount of KOH solution to a concentration of 175 mM and pH 10.5. For the antifungal assay, the spore suspension (3.0×10^4 spores/mL) was mixed with a sample of the fatty acid potassium salt (final concentration 175 mM). Samples were counted at 0, 10, 60 and 180 min by plating (100 µL) on PDA, and fungal colonies were counted after incubation for 3 days at 30 °C. The MIC (minimum inhibitory concentration) against the fungus was determined by the two-fold dilution method: each fatty acid salt was inoculated separately with 400 µL of C. cladosporioides at 3.0×10^4 spores/mL, the mixtures were incubated at the appropriate temperature for the organism for 10 min, and the tubes were then plated, incubated at 30 °C for 7 days, and examined for growth of spores on PDA. For the acaricidal assay, twenty D. farinae adult females were used; each adult was covered completely with 2 µL of the fatty acid potassium salt for 1 min. The adults were then dried with filter paper; the filter paper was folded, fixed with two clips, and kept at 25 °C and 64% RH. Mortality was determined 48 h after treatment under the microscope; D. farinae was considered dead if its appendages did not move when prodded with a pin. Results and Conclusions: The results show that C8K, C10K, C12K and C14K were effective in decreasing the survival rate (by 4 log units) within 10 min of incubation against C. cladosporioides, and C18:3K was effective in decreasing it by 4 log units within 60 min. C12K showed the highest antifungal activity, with a MIC of 0.7 mM. On the other hand, the fatty acid potassium salts showed no acaricidal effects against D. farinae; the activity of D. farinae was not adversely affected after 48 hours. These results indicate that C12K has high antifungal activity against C. cladosporioides and suggest that the fatty acid potassium salt can be used as an antifungal agent.

Keywords: fatty acid salts, antifungal effects, acaricidal effects, Cladosporium cladosporioides, Dermatophagoides farinae

Procedia PDF Downloads 273
1266 Post-Exercise Recovery Tracking Based on Electrocardiography-Derived Features

Authors: Pavel Bulai, Taras Pitlik, Tatsiana Kulahava, Timofei Lipski

Abstract:

A method of electrocardiography (ECG) interpretation for post-exercise recovery tracking was developed, and metabolic indices (aerobic and anaerobic) were designed using ECG-derived features. This study reports the associations between the aerobic and anaerobic indices and classical parameters of a person’s physiological state, including blood biochemistry, glycogen concentration and VO2max changes. During the study, 9 healthy, physically active, medium-trained men and women, who trained 2-4 times per week for at least 9 weeks, underwent (i) ECG monitoring using an Apple Watch Series 4 (AWS4); (ii) blood biochemical analysis; (iii) a maximal oxygen consumption (VO2max) test; and (iv) bioimpedance analysis (BIA). ECG signals from the single-lead wrist-wearable device were processed with detection of the QRS complex. The aerobic index (AI) was derived as the normalized slope of the QR segment, and the anaerobic index (ANI) as the normalized slope of the SJ segment. Biochemical parameters, glycogen content and VO2max were evaluated eight times within 3-60 hours after training. ECGs were recorded 5 times per day, plus before and after training, cycloergometry and BIA. Negative correlations were observed between AI and blood markers of the functional status of the muscles, including creatine phosphokinase (r = -0.238, p < 0.008), aspartate aminotransferase (r = -0.249, p < 0.004) and uric acid (r = -0.293, p < 0.004). ANI was also correlated with creatine phosphokinase (r = -0.265, p < 0.003), aspartate aminotransferase (r = -0.292, p < 0.001) and lactate dehydrogenase (LDH) (r = -0.190, p < 0.050). Thus, when the level of muscle enzymes increases during post-exercise fatigue, AI and ANI decrease; during recovery, the level of metabolites is restored, and a rise in the metabolic indices is registered. It can be concluded that AI and ANI adequately reflect the physiology of the muscles during recovery. One marker of an athlete’s physiological state is the ratio between testosterone and cortisol (TCR). TCR provides a relative indication of anabolic-catabolic balance and is considered more sensitive to training stress than measuring testosterone and cortisol separately. AI shows a strong negative correlation with TCR (r = -0.437, p < 0.001) and correctly represents post-exercise physiology. To reveal the relation between the ECG-derived metabolic indices and the state of the cardiorespiratory system, direct measurements of VO2max were carried out at various time points after training sessions; a negative correlation between AI and VO2max (r = -0.342, p < 0.001) was obtained. These data, indicating a rise in VO2max during fatigue, are controversial; however, some studies have revealed increased stroke volume after training, which agrees with our findings. It is important to note that a post-exercise increase in VO2max does not mean an athlete is ready for the next training session, because the recovery of the cardiovascular system occurs over a substantially longer period. The negative correlations registered for ANI with glycogen (r = -0.303, p < 0.001), albumin (r = -0.205, p < 0.021) and creatinine (r = -0.268, p < 0.002) reflect the dehydration status of the participants after training. The correlations between the designed metabolic indices and physiological parameters revealed in this study provide sufficient evidence for using these indices to assess the state of a person’s aerobic and anaerobic metabolic systems after training, during fatigue, recovery and supercompensation.
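The abstract defines AI and ANI as normalized slopes of ECG segments without specifying the normalization. One plausible, purely hypothetical reading is amplitude change per second scaled by a reference amplitude, sketched below; the function, fiducial values, and scaling are all assumptions, not the authors' formula:

```python
def normalized_segment_slope(v1, v2, t1, t2, norm_amp):
    """Hypothetical reading of a 'normalized slope' index: amplitude change
    per second between two fiducial points, scaled by a reference amplitude
    so the index is comparable across recordings. Units/scaling are assumed."""
    return ((v2 - v1) / (t2 - t1)) / norm_amp

# Illustrative Q and R fiducial points (seconds, millivolts) from one beat
q_t, q_v = 0.230, -0.10
r_t, r_v = 0.245, 1.10
ai = normalized_segment_slope(q_v, r_v, q_t, r_t, norm_amp=r_v)
```

Under this reading, a flatter QR upstroke (slower depolarization) lowers the index, which is directionally consistent with the fatigue-related decreases reported above.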

Keywords: aerobic index, anaerobic index, electrocardiography, supercompensation

Procedia PDF Downloads 115
1265 A DOE Study of Ultrasound Intensified Removal of Phenol

Authors: P. R. Rahul, A. Kannan

Abstract:

Ultrasound-aided adsorption of phenol by granular activated carbon (GAC) was investigated at three frequencies: 35 kHz, 58 kHz, and 192 kHz. Other factors influencing adsorption, such as adsorbent dosage (g/L), the initial concentration of the phenol solution (ppm) and stirring speed (RPM), were also considered along with the frequency variable. The study involved calorimetric measurements, which helped in determining the effect of frequency once the % removal of phenol was normalized by the power dissipated into the system. It was found that low-frequency (35 kHz) cavitation effects had a profound influence on the % removal of phenol per unit power. The study also included cavitation mapping of the ultrasonic baths, which showed that the effect of cavitation on the adsorption system is independent of the position of the vessel; hence, the vessel was placed at the center of the bath. A novel temperature control and monitoring system was used to keep the system under proper conditions during operation. From the BET studies, it was found that there was only a 5% increase in surface area, and hence it was concluded that ultrasound does not profoundly alter the equilibrium value of the adsorption system. The DOE studies indicated that adsorbent dosage has a greater influence on the % removal than the other factors.
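The calorimetric normalization mentioned above is commonly done by estimating the acoustic power actually delivered to the bath from its heating rate, P = m·cp·ΔT/Δt, and dividing the % removal by that power. A sketch with purely illustrative numbers (not the paper's measurements):

```python
def calorimetric_power(mass_kg, delta_T, delta_t_s, cp=4186.0):
    """Acoustic power dissipated into the bath, estimated calorimetrically:
    P = m * cp * dT/dt, with cp the specific heat of water in J/(kg*K).
    This is the standard way to compare baths at different frequencies."""
    return mass_kg * cp * delta_T / delta_t_s

def removal_per_watt(pct_removed, power_w):
    """Normalization used to compare frequencies fairly: % removal per unit
    of power actually delivered to the liquid. Values are illustrative."""
    return pct_removed / power_w

# e.g. 2 kg of water warming by 3 K over 5 minutes in the 35 kHz bath
p35 = calorimetric_power(2.0, 3.0, 300.0)
eff = removal_per_watt(40.0, p35)   # hypothetical 40% removal
```

Comparing `eff` across frequencies removes the confound that different transducers deliver different real powers at the same nominal setting.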

Keywords: ultrasound, adsorption, granulated activated carbon, phenol

Procedia PDF Downloads 284
1264 Exact Energy Spectrum and Expectation Values of the Inverse Square Root Potential Model

Authors: Benedict Ita, Peter Okoi

Abstract:

In this work, the concept of the extended Nikiforov-Uvarov technique is discussed and employed to obtain the exact bound-state energy eigenvalues and the corresponding normalized eigenfunctions of the inverse square root potential. With expressions for the exact energy eigenvalues and corresponding eigenfunctions, expressions for the expectation values of the inverse separation-squared, the kinetic energy, and the momentum-squared of the potential are presented using the Hellmann-Feynman theorem. For visualization, algorithms written and implemented in the Python language are used to generate tables and plots of the energy eigenvalues and some expectation values for various l-states. The results obtained here may find suitable applications in areas such as atomic and molecular physics, chemical physics, nuclear physics, and solid-state physics.
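The expectation values mentioned above follow from the Hellmann-Feynman theorem, which in its standard form states, for a Hamiltonian depending on a parameter λ with normalized eigenstates,

```latex
\left\langle \psi_\lambda \left| \frac{\partial \hat{H}}{\partial \lambda} \right| \psi_\lambda \right\rangle
= \frac{\partial E_\lambda}{\partial \lambda}
```

For example, taking λ to be the coupling strength α in a potential of the form V(r) = -α/√r gives ⟨-1/√r⟩ = ∂E/∂α, so such expectation values follow directly once the exact E(α) is known. (This parametrization of the inverse square root potential is the standard one; the abstract does not state its exact form.)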

Keywords: Schrodinger equation, Nikiforov-Uvarov method, inverse square root potential, diatomic molecules, Python programming, Hellmann-Feynman theorem, second order differential equation, matrix algebra

Procedia PDF Downloads 24
1263 Learning from Flood: A Case Study of a Frequently Flooded Village in Hubei, China

Authors: Da Kuang

Abstract:

Resilience is a hotly debated topic in many research fields (e.g., engineering, ecology, society, psychology). In flood management studies, we are experiencing a paradigm shift from flood resistance to flood resilience, where flood resilience refers to tolerating flooding through adaptation or transformation. It is increasingly argued that our cities, as social-ecological systems, hold the ability to learn from experience and adapt to floods rather than simply resist them. This research investigates what kinds of adaptation knowledge a frequently flooded village has learned from past experience, and the advantages and limitations of that knowledge in coping with floods. The study area, Xinnongcun village, located to the west of Wuhan city, is a linear village that has suffered continuously from both flash floods and drainage floods during the past 30 years. We made a field trip to the site in June 2017 and conducted semi-structured interviews with local residents. Our research summarizes two types of adaptation knowledge that people learned from past floods. First, at the village scale, a collective urban form has developed that helps people live through both the flood and dry seasons: all houses and front yards were elevated about 2 m above the road, and all the front yards in the village are linked, with no barriers between them. During flood periods, people walk to their neighbours through the linked yards and take boats along the lower road to travel outside the village. Second, at the individual scale, local people have learned tacit knowledge of flood preparedness and emergency response. Regarding advantages and limitations, this adaptation knowledge effectively helps people live with floods and reduces the chance of injury; however, it cannot reduce local farmers’ losses on their agricultural land. After a flood, it is impossible for local people to recover to the pre-disaster state, since a flood emerging in June or July results in no harvest. We therefore argue that learning from past flood experience can increase people’s adaptive capacity; however, once adaptive capacity can no longer reduce people’s losses, a transformation to a better regime is required.

Keywords: adaptation, flood resilience, tacit knowledge, transformation

Procedia PDF Downloads 334
1262 Technical Option Brought Solution for Safe Waste Water Management in Urban Public Toilet and Improved Ground Water Table

Authors: Chandan Kumar

Abstract:

Background and Context: Population growth and rapid urbanization result in nearly 2 lakh (200,000) migrants, along with their families, moving to Delhi each year in search of jobs. Most of these poor migrant families end up living in slums, adding an estimated 1.87 lakh people every year. Further, more than half (52 per cent) of Delhi's population resides in places such as unauthorized and resettled colonies. The slum population depends entirely on public toilets. Public toilet blocks are connected either to a sewer line or to a septic tank, and septic-tank toilets face major waste water disposal challenges: waste water is discharged into open drains, where it stagnates outside the toilet complex and near the slum area. As a result, stagnant waste water drives outbreaks of diseases such as malaria, dengue, and chikungunya in the slums. Save the Children intervened in 21 public toilet complexes of South Delhi and North Delhi that faced this same problem, each disposing of at least 1,800 litres of waste water into open drains every day and causing water-borne disease among the nearest communities. Construction of Soak Wells: Building soak wells in an urban context was an innovative approach that minimized the waste water management problem and raised the water table of the existing borewell in each toilet complex. The technique supports ground water recharge, and the additional water is used for vegetable gardening within the complex premises. Each soak well was constructed with multiple filter media, an inlet, and a safeguarding bed on the surrounding surface. After construction, a soak well absorbs about 2,000 litres of waste water per day, recharging the ground water through the different filter media, with zero maintenance. These public toilet complexes now have a safe waste water disposal mechanism, and stagnant-water-borne diseases have been reduced.

Keywords: diseases, ground water recharging system, soak well, toilet complex, waste water

Procedia PDF Downloads 552
1261 Development of Adaptive Proportional-Integral-Derivative Feeding Mechanism for Robotic Additive Manufacturing System

Authors: Andy Alubaidy

Abstract:

In this work, a robotic additive manufacturing system (RAMS) capable of three-dimensional (3D) printing in six degrees of freedom (DOF) with very high accuracy, and virtually on any surface, has been designed and built. One major shortcoming of existing 3D printer technology is its limitation to three DOF, which results in prolonged fabrication time: depending on the technique, it usually takes at least two hours to print small objects and several hours for larger ones. Another drawback is the size of the printed objects, which is constrained by the physical dimensions of most low-cost 3D printers, which are typically small. In such cases, large objects are produced by dividing them into smaller components that fit the printer's workable area; these are then glued, bonded, or otherwise attached to create the required object. A further shortcoming is material constraints and the need to fabricate a single part using different materials. With the flexibility of a six-DOF robot, the RAMS has been designed to overcome these problems. A feeding mechanism using an adaptive Proportional-Integral-Derivative (PID) controller is employed along with a National Instruments CompactRIO (NI cRIO), an ABB robot, and off-the-shelf sensors. The RAMS is equipped with an ABB IRB 120 robot to achieve this level of accuracy. To convert computer-aided design (CAD) files into a digital format acceptable to the robot, Hypertherm Robotic Software Inc.'s state-of-the-art slicing software, "ADDMAN", is used. ADDMAN can convert any CAD file into RAPID code (the programming language for ABB robots), which the robot uses to perform the 3D printing. To control the entire process, a National Instruments CompactRIO (cRIO-9074) communicates with the robot and with a feeding mechanism that was designed and fabricated. The feeding mechanism consists of two major parts, a cold-end and a hot-end. The cold-end is what is conventionally known as an extruder; typically, a stepper motor controls the push on the material, but for finer control a DC motor is used instead. The hot-end consists of a melt zone, a nozzle, and a heat-break. The melt zone ensures thorough melting and consistent output from the nozzle. Nozzles are made of brass for thermal conductivity, while the melt zone comprises a heating block and a ceramic heating cartridge that transfers heat to the block. The heat-break ensures there is no heat creep-up effect, which would swell the material and prevent consistent extrusion. A control system embedded in the cRIO is developed in NI LabVIEW, using an adaptive PID to govern the heating cartridge in conjunction with a thermistor. The thermistor sends temperature feedback to the cRIO, which increases or decreases heating based on the system output. Since different materials have different melting points, the system allows the temperature to be adjusted to suit the material.
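The adaptive PID temperature loop described above can be sketched in a few lines. This is a minimal, hypothetical simulation, not the LabVIEW/cRIO implementation: the first-order thermal plant, gains, gain-scheduling rule, and anti-windup clamp are all illustrative assumptions.

```python
# Hedged sketch: adaptive PID loop for a hot-end heating cartridge.
# Plant model, gains, and the adaptation rule are illustrative.

class AdaptivePID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.dt = dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        error = setpoint - measured
        # Naive gain scheduling: soften the proportional term near the
        # setpoint to limit overshoot (one possible "adaptive" rule).
        kp = self.kp * (0.5 if abs(error) < 2.0 else 1.0)
        # Clamped integral as a simple anti-windup measure.
        self.integral = max(-60.0, min(60.0, self.integral + error * self.dt))
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return kp * error + self.ki * self.integral + self.kd * derivative

def simulate(setpoint=200.0, steps=2000, dt=0.05):
    """First-order plant standing in for cartridge + thermistor:
    dT/dt = (power * gain - (T - ambient)) / tau."""
    temp, ambient, tau, gain = 25.0, 25.0, 20.0, 4.0
    pid = AdaptivePID(kp=8.0, ki=0.8, kd=1.0, dt=dt)
    for _ in range(steps):
        power = max(0.0, min(100.0, pid.update(setpoint, temp)))  # heater saturates
        temp += dt * (power * gain - (temp - ambient)) / tau
    return temp
```

In the real system the loop would read the thermistor and drive the cartridge through the cRIO instead of updating a simulated plant.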

Keywords: robotic, additive manufacturing, PID controller, cRIO, 3D printing

Procedia PDF Downloads 218
1260 Optimized Preprocessing for Accurate and Efficient Bioassay Prediction with Machine Learning Algorithms

Authors: Jeff Clarine, Chang-Shyh Peng, Daisy Sang

Abstract:

Bioassay is the measurement of the potency of a chemical substance by its effect on living animal or plant tissue. Bioassay data and chemical structures from pharmacokinetic and drug metabolism screening are mined from, and housed in, multiple databases, and bioassay outcomes are predicted from them to decide whether a compound advances further. This paper proposes a four-step preprocessing of datasets to improve bioassay predictions. The first step is instance selection, in which the dataset is partitioned into training, testing, and validation sets. The second step is discretization, which bins the data while balancing accuracy against precision. The third step is normalization, in which data are scaled to the range 0 to 1 for subsequent machine learning processing. The fourth step is feature selection, in which key chemical properties and attributes are generated. The streamlined results are then analyzed with various machine learning tools, including Pipeline Pilot, R, Weka, and Excel, to predict effectiveness. Experiments and evaluations reveal which combinations of preprocessing steps and machine learning algorithms yield more consistent and accurate predictions.
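The four preprocessing steps can be sketched on a toy descriptor matrix. The split ratios, quartile binning, and variance threshold below are illustrative choices, not the paper's exact settings.

```python
import numpy as np

# Hedged sketch of the four preprocessing steps on synthetic data.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 6))
X[:, 5] = 0.0  # a constant, uninformative descriptor

# Step 1: instance selection -- split into training/testing/validation.
n = len(X)
idx = rng.permutation(n)
train, test, val = idx[:60], idx[60:80], idx[80:]

# Step 2: discretization -- bin each descriptor at its quartiles.
Xd = np.stack(
    [np.digitize(col, np.unique(np.quantile(col, [0.25, 0.5, 0.75]))) for col in X.T],
    axis=1,
)

# Step 3: normalization -- min-max scale to [0, 1] using training statistics.
lo, hi = X[train].min(axis=0), X[train].max(axis=0)
Xn = (X - lo) / np.where(hi > lo, hi - lo, 1.0)

# Step 4: feature selection -- drop near-constant descriptors.
keep = X[train].std(axis=0) > 1e-6
Xs = Xn[:, keep]
```

Scaling with training-set statistics (rather than the whole dataset) avoids leaking information from the test and validation sets into the model.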

Keywords: bioassay, machine learning, preprocessing, virtual screen

Procedia PDF Downloads 276
1259 Prioritizing Biodiversity Conservation Areas based on the Vulnerability and the Irreplaceability Framework in Mexico

Authors: Alma Mendoza-Ponce, Rogelio Corona-Núñez, Florian Kraxner

Abstract:

Mexico is a megadiverse country that has nearly halved its natural vegetation in the last century due to agricultural and livestock expansion. The impacts of land use cover change and climate change are unevenly distributed, so spatial prioritization to minimize effects on biodiversity is crucial. Global and national prioritization efforts suggest that ~33% to 45% of Mexico should be protected; the breadth of these targets makes it difficult to direct resources. We use a framework based on vulnerability and irreplaceability to prioritize conservation efforts in Mexico. Vulnerability combines exposure, sensitivity, and adaptive capacity under two scenarios: business as usual (BAU), based on SSP2 and RCP 4.5, and a Green scenario, based on SSP1 and RCP 2.6. Exposure to land use is the magnitude of change from natural vegetation to anthropogenic covers, while exposure to climate change is the difference between current and future climate values for each scenario. Sensitivity is the number of endemic terrestrial vertebrate species that are critically endangered or endangered. Adaptive capacity is the ratio between the percentage of converted area (natural to anthropogenic) and the percentage of protected area at the municipality level. The results suggest that by 2050, between 11.6% and 13.9% of Mexico shows vulnerability ≥ 50%, and by 2070, between 12.0% and 14.8%, in the Green and BAU scenarios, respectively. From an ecosystem perspective, cloud forests, followed by tropical dry forests, natural grasslands, and temperate forests, will be the most vulnerable (≥ 50%). Amphibians are the most threatened vertebrates: 62% of endemic amphibians are critically endangered or endangered, versus 39%, 12%, and 9% of endemic mammals, birds, and reptiles, respectively. However, the distribution of these amphibians accounts for only 3.3% of the country, while the mammals, birds, and reptiles in these categories cover 10%, 16%, and 29% of Mexico. Five of Mexico's 2,457 municipalities contain 31% of the most vulnerable areas (vulnerability ≥ 70%), yet they account for only 0.05% of the country. This multiscale approach can be used to direct resources to conservation targets, whether ecosystems, municipalities, or species, considering land use cover change, climate change, and biodiversity uniqueness.
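One simple way the three components could be combined into a composite index is sketched below. The equal-weight averaging rule is purely illustrative and is not the authors' formula; it only shows the direction of each component's contribution.

```python
import numpy as np

def vulnerability(exposure, sensitivity, adaptive_capacity):
    """Toy composite index on [0, 1]: exposure and sensitivity raise
    vulnerability, adaptive capacity offsets it. The equal-weight
    averaging rule is illustrative only, not the study's method."""
    e = np.clip(exposure, 0.0, 1.0)
    s = np.clip(sensitivity, 0.0, 1.0)
    ac = np.clip(adaptive_capacity, 0.0, 1.0)
    return (e + s + (1.0 - ac)) / 3.0
```

A municipality that is fully exposed and fully sensitive with no adaptive capacity scores 1; a fully buffered, unexposed one scores 0.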

Keywords: biodiversity, climate change, land use change, Mexico, vulnerability

Procedia PDF Downloads 168
1258 Grid Pattern Recognition and Suppression in Computed Radiographic Images

Authors: Igor Belykh

Abstract:

Anti-scatter grids used in radiographic imaging for contrast enhancement leave characteristic artifacts. These artifacts may be directly visible or may cause a Moiré effect when a digital image is resized on a diagnostic monitor. In this paper, we propose an automated algorithm for grid artifact detection and suppression, which remains an open problem. Detection is based on a statistical approach in the spatial domain. Suppression is based on the design and application of a Kaiser band-stop filter whose transfer function avoids ringing artifacts. Experimental results are discussed, and the advantages over existing approaches are described.
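A Kaiser-window band-stop design of the kind described can be sketched with SciPy. The sampling rate, grid frequency, and band edges below are illustrative placeholders for values the detection stage would supply, and the 1-D signal stands in for an image row.

```python
import numpy as np
from scipy.signal import firwin, kaiserord, filtfilt

# Illustrative parameters: a row sampled at fs with a grid-line
# component at grid_freq (hypothetical values).
fs, grid_freq = 10.0, 3.0
width, ripple_db = 0.4, 60.0  # transition width and stop-band attenuation

# Kaiser-window FIR design: order and beta from the ripple spec,
# then a band-stop around the detected grid frequency.
numtaps, beta = kaiserord(ripple_db, width / (0.5 * fs))
numtaps |= 1  # odd length -> type-I linear-phase band-stop
taps = firwin(numtaps, [grid_freq - 0.3, grid_freq + 0.3],
              window=("kaiser", beta), fs=fs)

# Zero-phase application (filtfilt) avoids spatial shifts; its default
# edge padding keeps ringing near the borders small.
t = np.arange(0, 50, 1 / fs)
row = np.sin(2 * np.pi * 0.5 * t) + 0.5 * np.sin(2 * np.pi * grid_freq * t)
clean = filtfilt(taps, 1.0, row)
```

On a real image the same taps would be applied along each row (or column) perpendicular to the detected grid lines.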

Keywords: grid, computed radiography, pattern recognition, image processing, filtering

Procedia PDF Downloads 283
1257 Single Cell Analysis of Circulating Monocytes in Prostate Cancer Patients

Authors: Leander Van Neste, Kirk Wojno

Abstract:

The innate immune system reacts to foreign insult in several unique ways, one of which is phagocytosis of perceived threats such as cancer, bacteria, and viruses. The goal of this study was to look for evidence of phagocytosed RNA from tumor cells in circulating monocytes. While all monocytes possess phagocytic capabilities, the non-classical CD14+/FCGR3A+ monocytes and the intermediate CD14++/FCGR3A+ monocytes most actively remove threatening 'external' cellular material. Purified CD14-positive monocyte samples from fourteen patients recently diagnosed with clinically localized prostate cancer (PCa) were investigated by single-cell RNA sequencing using the 10X Genomics protocol, followed by paired-end sequencing on Illumina's NovaSeq. Control samples were processed in the same way: one patient who underwent biopsy but was found not to harbor prostate cancer (benign), three young healthy men, and three men previously diagnosed with prostate cancer who had recently undergone curative radical prostatectomy (post-RP). Sequencing data were mapped with 10X Genomics' CellRanger software, and viable cells were identified with CellBender, which removes technical artifacts such as doublets and non-cellular RNA. Data analysis was then performed in R using the Seurat package. Because the main goal was to identify differences between PCa patients and control subjects, rather than to explore differences between individuals, the individual Seurat objects of all 21 subjects were merged into one Seurat object, per Seurat's recommendation, and the single-cell dataset was normalized as a whole prior to further analysis. Cell identity was assessed with the SingleR and celldex packages. The Monaco immune dataset, consisting of bulk RNA-seq data of sorted human immune cells, was selected as the reference. It was supplemented with normalized PCa data from The Cancer Genome Atlas (TCGA), which consists of bulk RNA sequencing data from 499 prostate tumor tissues (including 1 metastatic) and 52 (adjacent) normal prostate tissues. SingleR was subsequently run on the combined immune cell and PCa datasets. As expected, the vast majority of cells were labeled as having a monocytic origin (~90%), the most noticeable difference being the larger proportion of intermediate monocytes in the PCa patients (13.6% versus 7.1%; p<.001). In men harboring PCa, 0.60% of all purified monocytes were classified as carrying PCa signals when the TCGA data were included; this was 3-fold, 7.5-fold, and 4-fold higher than in post-RP, benign, and young men, respectively (all p<.001). In addition, at 7.91%, the proportion of unclassified cells, i.e., cells with pruned labels due to high uncertainty of the assigned label, was also highest in men with PCa, compared to 3.51%, 2.67%, and 5.51% in post-RP, benign, and young men, respectively (all p<.001). It can be postulated that actively phagocytosing cells are the hardest to classify because of their dual immune cell and foreign cell nature; hence, the higher numbers of unclassified cells and intermediate monocytes in PCa patients might reflect higher phagocytic activity due to tumor burden. This also illustrates that the small fraction (~1%) of circulating peripheral blood monocytes that have interacted with tumor cells may still carry detectable phagocytosed tumor RNA.
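The reported proportion comparisons (e.g., 13.6% vs. 7.1% intermediate monocytes) can be reproduced in spirit with a chi-square test of independence. The per-group cell counts below are invented for illustration, since the abstract reports only percentages; the study's own testing procedure is not specified here.

```python
from scipy.stats import chi2_contingency

# Hypothetical per-group counts consistent with the reported
# proportions of intermediate monocytes (13.6% in PCa vs. 7.1%
# in controls); the real cell counts are not given in the abstract.
pca_counts = [1360, 8640]    # [intermediate, other monocytes]
ctrl_counts = [710, 9290]
chi2, p, dof, expected = chi2_contingency([pca_counts, ctrl_counts])
```

With group sizes of this order, the difference in proportions is highly significant, matching the reported p<.001.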

Keywords: circulating monocytes, phagocytic cells, prostate cancer, tumor immune response

Procedia PDF Downloads 162
1256 Backward-Facing Step Measurements at Different Reynolds Numbers Using Acoustic Doppler Velocimetry

Authors: Maria Amelia V. C. Araujo, Billy J. Araujo, Brian Greenwood

Abstract:

The flow over a backward-facing step is characterized by flow separation, recirculation, and reattachment for a simple geometry. This type of fluid behaviour occurs in many practical engineering applications, hence the interest in investigating it. Historically, flows over a backward-facing step have been examined in many experiments using a variety of measuring techniques, such as laser Doppler velocimetry (LDV), hot-wire anemometry, particle image velocimetry, or hot-film sensors. However, some of these techniques cannot conveniently be used in separated flows or are too complicated and expensive. In this work, the applicability of the acoustic Doppler velocimetry (ADV) technique to such flows is investigated at various Reynolds numbers corresponding to different flow regimes. Reports of this measuring technique applied to separated flows are scarce in the literature, and the effect of Reynolds number on separated flows has mostly been evaluated through numerical modelling. The ADV technique has the advantage of providing nearly non-invasive measurements, which is important for resolving turbulence. An ADV Nortek Vectrino+ was used to characterize the flow in a recirculating laboratory flume at four Reynolds numbers based on the step height h (Reh = 3738, 5452, 7908 and 17388), in order to capture different flow regimes, and the results were compared with those obtained using other measuring techniques. To allow comparison with other researchers, the step height, expansion ratio, and the measurement positions upstream and downstream of the step were reproduced. Post-processing of the ADV records was performed with a customized numerical code implementing several filtering techniques, after which the Vectrino noise level was evaluated by computing the power spectral density of the stream-wise horizontal velocity component. The normalized mean stream-wise velocity profiles, skin-friction coefficients, and reattachment lengths were obtained for each Reh; turbulent kinetic energy, Reynolds shear stresses, and normal Reynolds stresses were determined for Reh = 7908. An uncertainty analysis of the measured variables was carried out using the moving block bootstrap technique. Low noise levels were obtained after applying the post-processing techniques, showing their effectiveness, and the errors obtained in the uncertainty analysis were generally low. For Reh = 7908, the normalized mean stream-wise velocity and turbulence profiles were compared directly with those acquired by other researchers using the LDV technique, and good agreement was found. The ADV technique proved able to characterize the flow over a backward-facing step properly, although additional caution should be taken for measurements very close to the bottom. The ADV measurements gave reliable results for: a) the stream-wise velocity profiles; b) the turbulent shear stress; c) the reattachment length; and d) the identification of the transition from transitional to turbulent flow. Despite being relatively inexpensive, acoustic Doppler velocimetry can therefore be used with confidence in separated flows and is very useful for numerical model validation. However, adequate post-processing of the acquired data is essential to achieve low noise levels and thus reduce uncertainty.
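The moving block bootstrap used for the uncertainty analysis can be sketched as follows. The block-length rule and the synthetic velocity record are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def moving_block_bootstrap(x, block_len, n_boot=1000, seed=None):
    """Resample an autocorrelated record in overlapping blocks and
    return bootstrap replicates of the mean. The block length is a
    tuning choice; values near n**(1/3) are a common default."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    n = len(x)
    n_blocks = int(np.ceil(n / block_len))
    # Random block start indices; blocks may overlap.
    starts = rng.integers(0, n - block_len + 1, size=(n_boot, n_blocks))
    reps = np.empty(n_boot)
    for i in range(n_boot):
        sample = np.concatenate([x[s:s + block_len] for s in starts[i]])[:n]
        reps[i] = sample.mean()
    return reps

# Illustrative use on a synthetic stream-wise velocity record (m/s).
u = np.random.default_rng(1).normal(0.35, 0.05, size=600)
reps = moving_block_bootstrap(u, block_len=8, seed=2)
ci_lo, ci_hi = np.percentile(reps, [2.5, 97.5])
```

Resampling whole blocks rather than individual samples preserves the short-range autocorrelation of the velocity record, which an ordinary bootstrap would destroy.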

Keywords: ADV, experimental data, multiple Reynolds number, post-processing

Procedia PDF Downloads 149
1255 Determination of Verapamil Hydrochloride in the Tablet and Injection Solution by the Verapamil-Sensitive Electrode and Possibilities of Application in Pharmaceutical Analysis

Authors: Faisal A. Salih, V. V. Egorov

Abstract:

Verapamil is a calcium channel blocker used in medicine for arrhythmia, angina, and hypertension. In this study, a verapamil-selective electrode was prepared with the following membrane composition: PVC (32.8 wt %), O-NPhOE (66.6 wt %), and KTPClPB (0.6 wt %, approximately 0.01 M). An inner solution of 1 x 10⁻³ M verapamil hydrochloride was introduced, and the electrodes were conditioned overnight in a 1 x 10⁻³ M verapamil hydrochloride solution in 1 x 10⁻³ M orthophosphoric acid. These studies demonstrated that O-NPhOE and KTPClPB are the best plasticizer and ion exchanger, respectively, and that both direct potentiometry and potentiometric titration can be used to determine verapamil hydrochloride in tablets and injection solutions. Normalized weights of verapamil per tablet (80.4±0.2, 80.7±0.2, and 81.0±0.4 mg) were determined; for the same set of tablets, direct potentiometry and potentiometric titration gave 80.4±0.2 and 80.7±0.2 mg per average tablet weight, respectively. The masses of verapamil in injection solutions, determined by direct potentiometry for two ampoules from one set, were 5.00±0.015 and 5.004±0.006 mg. In all cases, good reproducibility and excellent agreement with the declared quantities were observed.
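Direct potentiometry of this kind rests on a Nernstian calibration, E = E0 + S·log10(C). The sketch below uses the theoretical slope for a singly charged cation at 25 °C; the standard-cell constant E0 and the potentials are hypothetical, not the study's data.

```python
import math

# Nernstian calibration: E = E0 + S * log10(C).
S = 59.16       # mV per concentration decade, theoretical at 25 C
E0 = 400.0      # mV, hypothetical cell constant from calibration

def concentration(e_mv):
    """Invert the calibration line to recover molar concentration."""
    return 10 ** ((e_mv - E0) / S)

# A 1e-3 M verapamil standard should read E0 + S * log10(1e-3).
e_standard = E0 + S * math.log10(1e-3)
```

In practice S and E0 are fitted to a series of standards, and the tablet or ampoule content follows from the measured potential and the sample dilution.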

Keywords: verapamil, potentiometry, ion-selective electrode, lipophilic physiologically active amines

Procedia PDF Downloads 86
1254 Gestalt in Music and Brain: A Non-Linear Chaos Based Study with Detrended/Adaptive Fractal Analysis

Authors: Shankha Sanyal, Archi Banerjee, Sayan Biswas, Sourya Sengupta, Sayan Nag, Ranjan Sengupta, Dipak Ghosh

Abstract:

The term ‘gestalt’ is widely used in psychology to describe the human mind's tendency to perceive an object not as separate parts but as a 'unified' whole. Music, in general, is polyphonic, i.e., a combination of a number of pure tones (frequencies) mixed together in a manner that sounds harmonious. Studying the brain's response to different frequency groups of an acoustic signal can give excellent insight into the neural and functional architecture of brain function; hence, the study of music cognition using neuro-biosensors is a rapidly emerging field of research. In this work, we analyze the effect of different frequency bands of music on the various frequency rhythms of the human brain obtained from EEG data. Four widely popular Rabindrasangeet clips were subjected to the wavelet transform to extract five resonant frequency bands from the original music signal. These frequency bands were first analyzed with the Detrended and Adaptive Fractal Analysis (DFA/AFA) methods. A listening test was conducted on a pool of 100 respondents to assess the frequency band at which the music becomes non-recognizable. Next, the resonant frequency bands were presented to 20 subjects as auditory stimuli, and EEG signals were recorded simultaneously at 19 different locations on the scalp. The recorded EEG signals were noise-cleaned and again subjected to the DFA/AFA techniques in the alpha, theta, and gamma frequency ranges, yielding scaling exponents from the two methods in the alpha, theta, and gamma EEG rhythms corresponding to the different frequency bands of music. From the analysis of the music signals, loss of recognition is seen to be proportional to the loss of long-range correlation in the signal. From the EEG analysis, we obtain frequency-specific, arousal-based responses in different lobes of the brain, as well as in specific EEG bands, corresponding to the musical stimuli. In this way, we seek to identify a specific frequency band beyond which the music becomes non-recognizable and below which, even in the absence of the other bands, the music remains perceivable to the audience. This finding can be of immense importance to cognitive music therapy and to researchers of creativity.
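The DFA step can be sketched in textbook form. This minimal first-order version (linear detrending, log-spaced window sizes) is a generic implementation, not the authors' exact DFA/AFA code.

```python
import numpy as np

def dfa_exponent(x, scales=None):
    """Minimal first-order DFA: slope of log F(s) versus log s.
    White noise gives alpha ~ 0.5; long-range-correlated signals
    give larger exponents."""
    x = np.asarray(x, dtype=float)
    y = np.cumsum(x - x.mean())  # integrated profile
    n = len(y)
    if scales is None:
        scales = np.unique(np.logspace(np.log10(8), np.log10(n // 4), 12).astype(int))
    flucts = []
    for s in scales:
        n_seg = n // s
        segs = y[: n_seg * s].reshape(n_seg, s)
        t = np.arange(s)
        # RMS residual after a least-squares linear detrend per window.
        f2 = [np.mean((seg - np.polyval(np.polyfit(t, seg, 1), t)) ** 2)
              for seg in segs]
        flucts.append(np.sqrt(np.mean(f2)))
    alpha, _ = np.polyfit(np.log(scales), np.log(flucts), 1)
    return alpha
```

Here x would be a band-filtered music or EEG signal; the scaling exponent then quantifies its long-range correlation.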

Keywords: AFA, DFA, EEG, gestalt in music, Hurst exponent

Procedia PDF Downloads 332
1253 Computational Approach to Identify Novel Chemotherapeutic Agents against Multiple Sclerosis

Authors: Syed Asif Hassan, Tabrej Khan

Abstract:

Multiple sclerosis (MS) is a chronic demyelinating autoimmune disorder of the central nervous system (CNS). Current therapies either do not halt the progression of the disease or have side effects that limit the long-term use of the available Disease-Modifying Therapies (DMTs). Given these treatment failures, we focus on screening novel analogues of the available DMTs that specifically bind and inhibit the sphingosine-1-phosphate receptor 1 (S1PR1), thereby hindering lymphocyte propagation toward the CNS. Such drug-like analog molecules would decrease the frequency of relapses (recurrences of the symptoms associated with MS) with higher efficacy and lower toxicity to the human body. In this study, an integrated approach involving a ligand-based virtual screening protocol, Ultrafast Shape Recognition with CREDO Atom Types (USRCAT), was employed to identify non-toxic, drug-like analogs of the approved DMTs, and the ability of the analogs to cross the blood-brain barrier (BBB) was estimated. In addition, molecular docking and simulation with AutoDock Vina 1.1.2 and GOLD 3.01 were performed using the X-ray crystal structure of the Mtb LprG protein to calculate the affinity and specificity of the analogs for the LprG protein. The docking results were further confirmed by DSX (DrugScore eXtended), a robust program for evaluating the binding energy of ligands bound to the ligand-binding domain of the Mtb LprG lipoprotein; a ligand with higher predicted affinity has a more negative score. Non-specific ligands were screened out using the structural filter proposed by Baell and Holloway. Based on the USRCAT results, Lipinski's values, and the toxicity and BBB analyses, the drug-like analogs of fingolimod and BG-12, RTL and CHEMBL1771640, respectively, are non-toxic and BBB-permeable. The docking and DSX analyses showed that RTL and CHEMBL1771640 can bind to the binding pocket of the human S1PR1 receptor with greater affinity than their parent compound, fingolimod. We also found that all the drug-like analogs of the standard MS drugs passed the Baell and Holloway filter.
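The Lipinski screen mentioned above is straightforward to express. The descriptor values used for fingolimod below are approximate literature values included purely for illustration.

```python
def lipinski_pass(mol_weight, logp, h_donors, h_acceptors):
    """Rule-of-five screen used as one drug-likeness filter:
    at most one violation of MW <= 500 Da, logP <= 5,
    H-bond donors <= 5, H-bond acceptors <= 10."""
    violations = sum([mol_weight > 500, logp > 5, h_donors > 5, h_acceptors > 10])
    return violations <= 1

# Approximate descriptors for fingolimod (~307.5 Da), for illustration.
fingolimod_ok = lipinski_pass(307.5, 4.2, 3, 3)
```

In a screening pipeline this check would sit alongside the toxicity, BBB, and PAINS (Baell and Holloway) filters to prune candidates before docking.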

Keywords: antagonist, binding affinity, chemotherapeutics, drug-like, multiple sclerosis, S1PR1 receptor protein

Procedia PDF Downloads 256
1252 The Predictive Value of Serum Bilirubin in the Post-Transplant De Novo Malignancy: A Data Mining Approach

Authors: Nasim Nosoudi, Amir Zadeh, Hunter White, Joshua Conrad, Joon W. Shim

Abstract:

De novo malignancy has become one of the major causes of death after transplantation, so early cancer diagnosis and detection can drastically improve post-transplant survival rates. Most previous work focuses on using artificial intelligence (AI) to predict transplant success or failure; in this work, we focused on predicting de novo malignancy after liver transplantation using AI. We selected patients who developed malignancy after liver transplantation and had no pre-transplant history of malignancy; their donors were cancer-free as well. We analyzed 254,200 patient profiles with post-transplant malignancy from the US Organ Procurement and Transplantation Network (OPTN). Several popular data mining methods were applied to the resulting dataset to build predictive models characterizing de novo malignancy after liver transplantation. The recipient's bilirubin, creatinine, weight, gender, number of days on the transplant waiting list, Epstein-Barr virus (EBV) status, international normalized ratio (INR), and ascites are among the most important factors affecting de novo malignancy after liver transplantation.
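A minimal sketch of how such a predictive model might be trained is given below, using synthetic stand-ins for four of the reported predictors. The data, effect sizes, and the plain logistic-regression learner are all illustrative assumptions, not the study's models or the OPTN data.

```python
import numpy as np

# Synthetic stand-ins for four predictors (e.g., bilirubin, creatinine,
# INR, days on the waiting list), already standardized.
rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 4))
true_w = np.array([1.2, 0.4, 0.8, -0.3])            # illustrative effects
p_true = 1.0 / (1.0 + np.exp(-(X @ true_w - 1.0)))
y = rng.binomial(1, p_true)                          # 1 = de novo malignancy

# Batch gradient descent on the logistic-regression log-loss.
w, b = np.zeros(4), 0.0
for _ in range(3000):
    z = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (z - y)) / n
    b -= 0.5 * (z - y).mean()

accuracy = ((1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5) == y).mean()
```

The fitted weights recover the direction of each synthetic effect; on the real registry data one would instead compare several data mining methods, as the abstract describes.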

Keywords: de novo malignancy, bilirubin, data mining, transplantation

Procedia PDF Downloads 105
1251 Duolingo: Learning Languages through Play

Authors: Yara Bajnaid, Malak Zaidan, Eman Dakkak

Abstract:

This research explores the use of Artificial Intelligence in Duolingo, a popular mobile application for language learning. Duolingo's success hinges on its gamified approach and adaptive learning system, both heavily reliant on AI functionalities. The research also analyzes user feedback regarding Duolingo's AI functionalities. While a significant majority (70%) consider Duolingo a reliable tool for language learning, there's room for improvement. Overall, AI plays a vital role in personalizing the learning journey and delivering interactive exercises. However, continuous improvement based on user feedback can further enhance the effectiveness of Duolingo's AI functionalities.

Keywords: AI, Duolingo, language learning, application

Procedia PDF Downloads 55
1250 The Conflict between Empowerment and Exploitation: The Hypersexualization of Women in the Media

Authors: Seung Won Park

Abstract:

Pornographic images are becoming increasingly normalized as innovations in media technology arise, the porn industry grows explosively, and transnational capitalism spreads due to government deregulation and the privatization of media. As the media evolves, pornography has become more and more violent and non-consensual; this growth of 'raunch culture' reifies the traditional power balance between men and women in which men are dominant and women are submissive. This male domination objectifies and commodifies women, reducing them to mere sexual objects for the gratification of men. Women are exposed to pornographic images at younger and younger ages, providing unhealthy sexual role models and teaching them lessons about sexual behavior before the onset of puberty. The increasingly sexualized depiction of women positions them as appropriately desirable and available to men. As a result, women are not only viewed as sexual prey but also end up treating themselves primarily as sexual objects, basing their worth on their sexuality alone. Although many scholars are aware of and have written on the great lack of agency exercised by women in these representations, the general public tends to view some of these women as being empowered rather than exploited. Scholarly discourse is constrained by the popular misconception that the construction of women's sexuality in the media is controlled by women themselves.

Keywords: construction of gender, hypersexualization, media, objectification

Procedia PDF Downloads 299