Search results for: background noise statistical modeling
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 12785

10415 Modeling of CREB Pathway Induced Gene Induction: From Stimulation to Repression

Authors: K. Julia Rose Mary, Victor Arokia Doss

Abstract:

Electrical and chemical stimulation up-regulates phosphorylation of CREB, a transcription factor that induces production of its target genes for memory consolidation and Late Long-Term Potentiation (L-LTP) in the CA1 region of the hippocampus. L-LTP requires complex interactions among second-messenger signaling cascade molecules such as cAMP, CaMKII, CaMKIV, MAPK, RSK, and PKA, all of which converge to phosphorylate CREB, which, along with CBP, induces the transcription of target genes involved in memory consolidation. We used a differential-equation-based model of L-LTP representing stimulus-mediated activation of downstream mediators, which confirms the steep, supralinear stimulus-response effects of activation and inhibition. The model was extended to accommodate the inhibitory effect of the Inducible cAMP Early Repressor (ICER). ICER, the natural inducible antagonist of CREB, represses CRE-mediated gene transcription involved in long-term plasticity for learning and memory. After verifying the sensitivity and robustness of the model, we simulated it with various empirical levels of repressor concentration to analyse their effect on gene induction. The model predicts the regulatory dynamics of repression on L-LTP and agrees with experimental values. The flux data obtained in the simulations demonstrate various aspects of the equilibrium between gene induction and repression.
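As an illustrative sketch only (the rate constants and functional forms below are hypothetical, not the paper's model), the activation-induction-repression structure of such an ODE model can be captured with simple Euler integration:

```python
def simulate_creb(icer=0.0, t_end=10.0, dt=0.01,
                  k_act=1.0, k_deg=0.5, k_ind=2.0, k_rep=1.5):
    """Euler integration of a toy two-variable model: pCREB activation
    by a constant stimulus, and target-gene induction repressed by ICER.
    All rate constants are hypothetical, chosen only for illustration."""
    pcreb, gene = 0.0, 0.0
    for _ in range(int(t_end / dt)):
        # pCREB: first-order activation balanced by degradation
        dpcreb = k_act - k_deg * pcreb
        # gene induction driven by pCREB, attenuated by ICER occupancy
        dgene = k_ind * pcreb / (1.0 + k_rep * icer) - k_deg * gene
        pcreb += dpcreb * dt
        gene += dgene * dt
    return pcreb, gene
```

Raising the ICER level in this sketch lowers the gene-induction flux while leaving pCREB itself unchanged, mirroring the equilibrium between induction and repression discussed above.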

Keywords: CREB, L-LTP, mathematical modeling, simulation

Procedia PDF Downloads 296
10414 Pyramidal Lucas-Kanade Optical Flow Based Moving Object Detection in Dynamic Scenes

Authors: Hyojin Lim, Cuong Nguyen Khac, Yeongyu Choi, Ho-Youl Jung

Abstract:

In this paper, we propose a simple moving object detection method based on motion vectors obtained from pyramidal Lucas-Kanade optical flow. The proposed method detects moving objects such as pedestrians, other vehicles, and obstacles in front of the host vehicle, and it can provide a warning to the driver. Motion vectors are obtained using pyramidal Lucas-Kanade optical flow, and outliers are eliminated by comparing the amplitude of each vector with a pre-defined threshold value. The background model is obtained by calculating the mean and variance of the amplitudes of recent motion vectors in rectangular local regions called cells, and it serves as the reference for classifying motion vectors as belonging to moving objects or to the background. Motion vectors are then clustered into rectangular regions using the unsupervised K-means algorithm, and a labeling step merges groups that lie close to each other, based on the distance between the center points of the rectangles. Through simulations of four kinds of scenarios, such as a motorbike, a vehicle, and pedestrians approaching the host vehicle, we show that the proposed method is simple but efficient for moving object detection in parking lots.
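The per-cell background model described above can be sketched in a few lines; the threshold values here are illustrative, not the paper's:

```python
from statistics import mean, pstdev

def classify_vectors(recent_bg_amps, amps, k=2.0, min_amp=0.5):
    """Per-cell background model: the amplitudes of recent background
    motion vectors give a mean/std reference; a new vector is labeled
    foreground ("object") if it passes the outlier threshold and deviates
    from the background model by more than k standard deviations."""
    mu = mean(recent_bg_amps)
    sigma = pstdev(recent_bg_amps) or 1e-6  # guard against zero spread
    labels = []
    for a in amps:
        if a < min_amp:                 # amplitude-threshold outlier rejection
            labels.append("noise")
        elif abs(a - mu) > k * sigma:   # deviates from background model
            labels.append("object")
        else:
            labels.append("background")
    return labels
```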

Keywords: moving object detection, dynamic scene, optical flow, pyramidal optical flow

Procedia PDF Downloads 354
10413 A Brief History of Kampo Extract Formulations for Prescription in Japan

Authors: Kazunari Ozaki, Mitsuru Kageyama, Kenki Miyazawa, Yoshio Nakamura

Abstract:

Background: Kampo (Japanese traditional medicine) is a medicine traditionally practiced in Japan, based on ancient Chinese medicine. Most Kampo doctors have historically used decoctions of crude drug pieces for treatment; nowadays, however, 93% of the Kampo drugs sold in Japan are Kampo products. Of all Kampo products, 81% are Kampo extract formulations for prescription, which are prepared in powdered or granulated form from medicinal crude drug extracts mixed with appropriate excipients. In Japan, these Kampo extract formulations for prescription are prescribed by physicians licensed in Western medicine. Objectives: Our study aims to present a brief history of Kampo extract formulations for prescription in Japan. Methods: Systematic searches for relevant studies were conducted using printed as well as electronic journals from bibliographic databases, such as PubMed/Medline, Ichushi-Web, and university/institutional websites, along with search engines such as Google and Google Scholar. Results: The first commercialization of Kampo extract formulations for general use (OTC (over-the-counter) Kampo extract formulations) was achieved after 1957. The number of drugs has subsequently increased, reaching the current 148 Kampo extract formulations for prescription. Conclusion: We provide a history of Kampo extract formulations for prescription in Japan. The originality of this research is that it analyzes the background history of Kampo in parallel with the relevant transitions in government and insurance systems.

Keywords: health insurance system, history, Kampo, Kampo extract formulation for prescription, OTC Kampo extract formulation, pattern corresponding prescription (Ho-sho-so-tai) system

Procedia PDF Downloads 289
10412 Analysis of Differences between Public and Experts’ Views Regarding Sustainable Development of Developing Cities: A Case Study in the Iraqi Capital Baghdad

Authors: Marwah Mohsin, Thomas Beach, Alan Kwan, Mahdi Ismail

Abstract:

This paper describes the differences in views on sustainable development between the general public and experts in a developing country, Iraq. It answers the question: How do the views of the public differ from the generally accepted views of experts in the context of sustainable urban development in Iraq? To answer this question, the views of both the public and the experts, taken from a public survey and a Delphi questionnaire, were analysed using statistical methods to identify significant differences. This enables investigation of the differing perceptions of the public and the experts towards urban sustainable development factors, which matters because divergent viewpoints between policy-makers and the public will affect public acceptance of any future sustainable development work. The findings of the statistical analysis show that the views of the public and the experts differ on most of the variables; only six variables show no significant differences: 'The importance of establishing sustainable cities in Iraq', 'Mitigate traffic congestion', 'Waste recycling and separating', 'Use wastewater recycling', 'Parks and green spaces', and 'Promote investment'.
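The paper does not name the specific test used to compare the two groups; as a hedged sketch, one standard choice for comparing public and expert ratings per variable is Welch's t statistic for independent samples with unequal variances (the rating values below are made up):

```python
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two independent samples with unequal
    variances -- one common way to test whether public and expert
    ratings of a sustainability variable differ significantly."""
    ma, mb = mean(sample_a), mean(sample_b)
    va, vb = variance(sample_a), variance(sample_b)
    na, nb = len(sample_a), len(sample_b)
    return (ma - mb) / (va / na + vb / nb) ** 0.5
```

A large |t| (roughly above 2 for reasonable sample sizes) flags a variable on which the two groups disagree; values near zero correspond to the six variables showing no difference.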

Keywords: urban sustainability, experts' views, public views, principal component analysis, PCA

Procedia PDF Downloads 134
10411 Automatic Tuning for a Systemic Model of Banking Originated Losses (SYMBOL) Tool on Multicore

Authors: Ronal Muresano, Andrea Pagano

Abstract:

Nowadays, mathematical and statistical applications are developed with ever greater complexity and accuracy. As a result, these applications need more computational power to execute quickly. In this sense, multicore environments play an important role in improving and optimizing the execution time of such applications, as they allow more parallelism inside the node. However, taking advantage of this parallelism is not an easy task, because we have to deal with problems such as core communication, data locality, memory sizes (cache and RAM), synchronization, data dependencies in the model, etc. These issues become more important when we wish to improve an application's performance and scalability. Hence, this paper describes an optimization method, developed for the Systemic Model of Banking Originated Losses (SYMBOL) tool of the European Commission, which is based on analyzing the application's weaknesses in order to exploit the advantages of the multicore. All these improvements are made in an automatic and transparent manner with the aim of improving the performance metrics of the tool. Finally, experimental evaluations show the effectiveness of the new optimized version, which achieves a considerable improvement in execution time: a reduction of around 96% in the best case tested, between the original serial version and the automatic parallel version.
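To put the reported 96% reduction in perspective, the standard performance metrics are speedup (serial time over parallel time) and efficiency (speedup per core); the times and core count below are illustrative, not the paper's measurements:

```python
def speedup_and_efficiency(t_serial, t_parallel, cores):
    """Standard metrics for evaluating a parallelised tool:
    speedup = Ts / Tp, efficiency = speedup / cores."""
    speedup = t_serial / t_parallel
    return speedup, speedup / cores
```

A 96% reduction in execution time means the parallel run takes 4% of the serial time, i.e. a speedup of 1 / 0.04 = 25x.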

Keywords: algorithm optimization, bank failures, OpenMP, parallel techniques, statistical tool

Procedia PDF Downloads 372
10410 Domain-Specific Languages Evaluation: A Literature Review and Experience Report

Authors: Sofia Meacham

Abstract:

In this abstract, Domain-Specific Language (DSL) evaluation is presented based on existing literature and years of experience developing DSLs for several domains, ranging from AI, business applications, and finance/accounting to health. DSLs have been utilised in many domains to provide tailored and efficient solutions to specific problems. Although they are a reputable method in highly technical circles and have also been used successfully by non-technical experts, to our knowledge there is no commonly accepted method for evaluating them. Some methods define criteria adapted from general software engineering quality criteria. Other literature focuses on the usability aspect of DSL evaluation and applies methods such as Human-Computer Interaction (HCI) and goal modeling. These approaches are either hard to introduce, such as goal modeling, or seem to ignore the domain-specific focus of DSLs. In our experience, DSLs have domain-specificity at their core, and consequently the methods to evaluate them should also include domain-specific criteria at their core. Such criteria would require synergy between domain experts and DSL developers, in the same way that DSLs cannot be developed without domain experts' involvement. Methods from agile and other software engineering practices, such as co-creation workshops, should be further emphasised and explored to facilitate this direction. In conclusion, our latest experience and plans for DSL evaluation will be presented and opened for discussion.

Keywords: domain-specific languages, DSL evaluation, DSL usability, DSL quality metrics

Procedia PDF Downloads 105
10409 Validity and Reliability of Assessment of Language-Related Functional Activities: Evidence from Arab Aphasics

Authors: Sadeq Al Yaari, Nassr Almaflehi, Ayman Al Yaari, Adham Al Yaari, Montaha Al Yaari, Aayah Al Yaari, Sajedah Al Yaari

Abstract:

Background: The Assessment of Language-related Functional Activities (ALFA) is of vital importance in assessing the performance of aphasics of both sexes. However, the validity and reliability of this language therapeutic test have never been established in the Arabic medical literature. Purpose: The aim of this study was to validate the test by assessing the language-related functional activities of 100 male and female aphasics based in a medical faculty. Design: The ALFA pre- and post-test was administered twice in three weeks to test the language-related functional activities of the 100 aphasics. Settings: Al Khars hospital in Al Ahsa'a, Kingdom of Saudi Arabia (KSA). Participants: Sixteen to eight-year-old participants (N = 100 men and women) were enrolled in this experiment. Again, the purpose was to assess their language-related functional activities using the ALFA. Procedures: The first step was to translate the English version of the ALFA test into the patients' mother tongue, Arabic. Secondly, the translated text was reviewed and edited by three specialists in the Arabic language. Once the test was standardized, the third step was to assess the participants' language-related functional activities in a natural environment. Assessment took place over three weeks: a pre-test was administered in the first week, and a post-test was administered two weeks later to identify whether significant differences between the two tests could be observed. Interventions: The outcomes of the analyses were discussed broadly, and linguistic and statistical comparisons were made to illustrate the findings of this study. Main outcomes and results: The analysis of the obtained results indicated that the performance of the aphasic participants in the post-test did not differ from that in the pre-test. Conclusions and implications: The ALFA proved to be a valid and reliable test. Moreover, the outlined results pointed out the importance of assessing not only aphasics' language but also their language-related functional activities. Further research is needed to explore how aphasics' verbal and non-verbal performances interact.

Keywords: ALFA, language test, Arab aphasics, validity, reliability, psychoneurolinguistics

Procedia PDF Downloads 57
10408 New Result for Optical OFDM in Code Division Multiple Access Systems Using Direct Detection

Authors: Cherifi Abdelhamid

Abstract:

In optical communication systems, OFDM has received increased attention as a means to overcome various limitations of optical transmission, such as modal dispersion, relative intensity noise, chromatic dispersion, polarization mode dispersion, and self-phase modulation. Multipath dispersion limits the maximum transmission data rate. In this paper, we investigate an OFDM system in which multipath-induced intersymbol interference (ISI) is reduced, and we increase the number of users by combining the OFDM system with an OCDMA system using direct detection, incorporating orthogonal optical codes (OOC) to minimize the bit error rate.
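The mechanism by which OFDM suppresses multipath-induced ISI can be sketched numerically: a cyclic prefix turns the channel's linear convolution into a circular one, so each subcarrier sees only a flat gain that a one-tap equaliser removes. The block and channel below are illustrative, not the paper's system:

```python
import numpy as np

def ofdm_roundtrip(symbols, channel, cp_len=4):
    """IFFT -> add cyclic prefix -> short multipath channel ->
    drop prefix -> FFT -> per-subcarrier one-tap equalisation.
    Recovers the transmitted symbols exactly as long as the
    channel impulse response is no longer than cp_len + 1 taps."""
    n = len(symbols)
    tx = np.fft.ifft(symbols)
    tx_cp = np.concatenate([tx[-cp_len:], tx])        # cyclic prefix
    rx = np.convolve(tx_cp, channel)[cp_len:cp_len + n]  # channel, drop prefix
    return np.fft.fft(rx) / np.fft.fft(channel, n)    # one-tap equaliser
```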

Keywords: OFDM, OCDMA, OOC (orthogonal optical code), (ISI), prim codes (Pc)

Procedia PDF Downloads 655
10407 Enhancement of Capacity in a MC-CDMA based Cognitive Radio Network Using Non-Cooperative Game Model

Authors: Kalyani Kulkarni, Bharat Chaudhari

Abstract:

This paper addresses the issue of resource allocation in the emerging cognitive radio technology. Focusing on the quality of service (QoS) of primary users (PUs), a novel method is proposed for the resource allocation of secondary users (SUs). We propose a unique utility function in the game-theoretic model of cognitive radio that can be maximized to increase the capacity of the cognitive radio network (CRN) and to minimize interference. The utility function is formulated to cater to the needs of PUs by observing the signal-to-noise ratio. The existence of a Nash equilibrium for the postulated game is established.
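As a hedged stand-in for the paper's utility function (which is not given in the abstract), the flavour of such a game can be shown with classic Foschini-Miljanic-style best-response power control: each secondary user repeatedly scales its power to just meet a target SINR given the others' powers, and the fixed point of this iteration is the Nash equilibrium of the induced game:

```python
def distributed_power_control(gains, cross, target_sinr, noise=1.0,
                              p_max=10.0, iters=50):
    """Best-response iteration for SINR-targeted power control.
    gains[i]: direct channel gain of user i; cross[i][j]: interference
    gain from user j to user i's receiver. Converges to the Nash
    equilibrium when the target is feasible (values illustrative)."""
    n = len(gains)
    p = [1.0] * n
    for _ in range(iters):
        new_p = []
        for i in range(n):
            interference = noise + sum(cross[i][j] * p[j]
                                       for j in range(n) if j != i)
            new_p.append(min(p_max, target_sinr * interference / gains[i]))
        p = new_p
    return p
```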

Keywords: cognitive networks, game theory, Nash equilibrium, resource allocation

Procedia PDF Downloads 485
10406 Mathematics Bridging Theory and Applications for a Data-Driven World

Authors: Zahid Ullah, Atlas Khan

Abstract:

In today's data-driven world, the role of mathematics in bridging the gap between theory and applications is becoming increasingly vital. This abstract highlights the significance of mathematics as a powerful tool for analyzing, interpreting, and extracting meaningful insights from vast amounts of data. By integrating mathematical principles with real-world applications, researchers can unlock the full potential of data-driven decision-making processes. This abstract delves into the various ways mathematics acts as a bridge connecting theoretical frameworks to practical applications. It explores the utilization of mathematical models, algorithms, and statistical techniques to uncover hidden patterns, trends, and correlations within complex datasets. Furthermore, it investigates the role of mathematics in enhancing predictive modeling, optimization, and risk assessment methodologies for improved decision-making in diverse fields such as finance, healthcare, engineering, and social sciences. The abstract also emphasizes the need for interdisciplinary collaboration between mathematicians, statisticians, computer scientists, and domain experts to tackle the challenges posed by the data-driven landscape. By fostering synergies between these disciplines, novel approaches can be developed to address complex problems and make data-driven insights accessible and actionable. Moreover, this abstract underscores the importance of robust mathematical foundations for ensuring the reliability and validity of data analysis. Rigorous mathematical frameworks not only provide a solid basis for understanding and interpreting results but also contribute to the development of innovative methodologies and techniques. In summary, this abstract advocates for the pivotal role of mathematics in bridging theory and applications in a data-driven world. 
By harnessing mathematical principles, researchers can unlock the transformative potential of data analysis, paving the way for evidence-based decision-making, optimized processes, and innovative solutions to the challenges of our rapidly evolving society.

Keywords: mathematics, bridging theory and applications, data-driven world, mathematical models

Procedia PDF Downloads 81
10405 Socioeconomic Impacts of Innovative Housing Construction Technologies in Slum Upgrading: Case of Mathare Valley Nairobi, Kenya

Authors: Edmund M. Muthigani

Abstract:

Background: Adequate, decent housing is an integral component of universal human rights. Rising resource costs and intensified rural-urban migration have increased the demand for affordable housing in urban areas, and the modern knowledge-based economy meets such demand through innovation. The construction industry uses product and process innovation to provide adequate and decent low-cost housing, and Kenya has adopted innovation practices in slum upgrading that make cost-effective use of locally available building materials. This study examined the outcomes and the social and economic impacts of innovative housing construction technologies in the Mathare Valley slum-upgrading project. Methods: This post-occupancy study used an exploratory-descriptive research design. Random sampling was used to select 384 users of low-cost housing projects in Mathare Valley, Nairobi County. Research instruments included semi-structured questionnaires and interview guides. A pilot study and validity and reliability tests ensured the quality of the study. Ethical considerations included university approval and consent. The Statistical Package for the Social Sciences (SPSS) software, version 21, was used to compute descriptive and inferential statistics. Findings: Slum upgrading had a significant positive outcome on improved houses and the community. Social impacts included communal facilities, assurance of security of tenure, and retained frameworks of establishments. Economic impacts included employment and affordable, durable units (p-values < 0.05). However, the upgrading process did not influence rent fees, was affected by corruption, and led to the displacement of residents. Conclusion: The slum-upgrading process had a positive impact overall; similar projects should involve residents in decision-making.

Keywords: innovation, technologies, slum upgrading, Mathare valley slum, social impact, economic impact

Procedia PDF Downloads 171
10404 Increasing the Resilience of Cyber Physical Systems in Smart Grid Environments using Dynamic Cells

Authors: Andrea Tundis, Carlos García Cordero, Rolf Egert, Alfredo Garro, Max Mühlhäuser

Abstract:

Resilience is an important system property that relies on the ability of a system to automatically recover from a degraded state so as to continue providing its services. Resilient systems have the means of detecting faults and failures, with the added capability of automatically restoring normal operation. Mastering resilience in the domain of Cyber-Physical Systems is challenging due to the interdependence of hybrid hardware and software components, along with physical limitations, laws, regulations, and standards, among others. To overcome these challenges, this paper presents a modeling approach, based on the concept of Dynamic Cells, tailored to the management of Smart Grids. Additionally, a heuristic algorithm that works on top of the proposed modeling approach to find resilient configurations has been defined and implemented. More specifically, the model supports a flexible representation of Smart Grids, and the algorithm is able to manage, at different abstraction levels, the resource consumption of individual grid elements in the presence of failures and faults. Finally, the proposal is evaluated in a test scenario that shows the effectiveness of the approach when dealing with complex scenarios where adequate solutions are difficult to find.
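The self-healing step can be illustrated with a deliberately simplified greedy sketch (the paper's heuristic works across abstraction levels; here a failed cell's load is just shifted to surviving cells with the largest spare capacity first, and all capacities are made up):

```python
def redistribute_load(cells, failed):
    """Greedy self-healing sketch: move the failed cell's load to
    surviving cells, largest headroom first. Returns the new cell
    state and any leftover load that could not be placed (which a
    real controller would have to shed)."""
    cells = {k: dict(v) for k, v in cells.items()}  # work on a copy
    load = cells[failed]["load"]
    cells[failed]["load"] = 0.0
    survivors = sorted((k for k in cells if k != failed),
                       key=lambda k: cells[k]["cap"] - cells[k]["load"],
                       reverse=True)
    for k in survivors:
        if load <= 0:
            break
        take = min(cells[k]["cap"] - cells[k]["load"], load)
        cells[k]["load"] += take
        load -= take
    return cells, load
```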

Keywords: cyber-physical systems, energy management, optimization, smart grids, self-healing, resilience, security

Procedia PDF Downloads 331
10403 Analysis of Structural Modeling on Digital English Learning Strategy Use

Authors: Gyoomi Kim, Jiyoung Bae

Abstract:

The purpose of this study was to propose a framework that verifies the structural relationships among students' use of digital English learning strategies (DELS), affective domains, and individual variables. The study developed a hypothetical model based on previous studies of language learning strategy use as well as digital language learning. The participants were 720 Korean high school students and 430 university students. The instrument was a self-response questionnaire containing 70 items, based on Oxford's SILL (Strategy Inventory for Language Learning) as well as previous studies of language learning strategies in digital learning environments, designed to measure DELS and affective domains. The collected data were analyzed through structural equation modeling (SEM), using two quantitative procedures: exploratory factor analysis (EFA) and confirmatory factor analysis (CFA). First, the EFA was conducted to verify the hypothetical model; the factor analysis was carried out first to identify the underlying relationships between the measured variables of DELS and the affective domain. The hypothetical model was established with six indicators of learning strategies (memory, cognitive, compensation, metacognitive, affective, and social strategies) under the latent variable of DELS use. In addition, the model included four indicators (self-confidence, interest, self-regulation, and attitude toward digital learning) under the latent variable of learners' affective domain. Second, the CFA was used to determine the suitability of the data and the research model, so all data from the present study were used to assess model fit.
Lastly, the model included individual learner factors as covariates; the five constructs selected were learners' gender, level of English proficiency, duration of English learning, period of using digital devices, and previous experience of digital English learning. The results of the SEM analysis propose a theoretical model showing the structural relationships between Korean students' use of DELS and their affective domains. The results of this study therefore help ESL/EFL teachers understand how learners use and develop appropriate learning strategies in digital learning contexts. Pedagogical implications and suggestions for further study will also be presented.
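Before fitting a SEM to a SILL-style questionnaire, the usual first reliability check on each scale is Cronbach's alpha; the abstract does not report it, so the computation below is shown only as the standard companion step (rows are respondents, columns are items, and the scores are made up):

```python
from statistics import variance

def cronbach_alpha(item_scores):
    """Cronbach's alpha for the internal consistency of a
    questionnaire scale: alpha = k/(k-1) * (1 - sum(item variances)
    / variance of total scores)."""
    k = len(item_scores[0])
    item_vars = [variance([row[i] for row in item_scores]) for i in range(k)]
    total_var = variance([sum(row) for row in item_scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)
```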

Keywords: digital English learning strategy (DELS), individual variables, learners' affective domains, structural equation modeling (SEM)

Procedia PDF Downloads 128
10402 Cultural Influence on Social Cognition in Social and Educational Psychology

Authors: Mbah Fidelix Njong, Sabi Emile Forkwa

Abstract:

Social cognition is an aspect of social psychology that focuses on how people process, store, and apply information about others and social situations, with emphasis on the role cognitive processes play in our social interactions. In this article, we try to show how culture can influence our ways of thinking about others and how we feel about and interact with the world around us. Social cognitive processes involve perceiving people and learning about those around us. They concern the mental processes of remembering, thinking about, and attending to other people with different cultural backgrounds, and how we attend to certain information about the world. Especially in an educational setting, students' learning processes are more often than not influenced by their cultural background. We can also speak of social schemas: people's mental representations of social patterns and norms, which include information about societal roles and the expectations of individuals within a group. These cognitive processes, too, can be influenced by culture, and there are important cultural differences in social cognition. In any social situation, two individuals may have different interpretations, as each person brings a unique background of experiences, knowledge, social influences, feelings, and cultural variations. Cultural differences can also affect how people interpret social situations: the same social behavior in one cultural setting might have a completely different meaning and interpretation if observed in another culture. As people interpret behaviors and derive meaning from those interpretations, they act based on their beliefs about the situations they are confronted with, which helps to reinforce and reproduce the cultural norms that influence their social cognition.

Keywords: social cognition, social schema, cultural influence, psychology

Procedia PDF Downloads 98
10401 Low Plastic Deformation Energy to Induce High Superficial Strain on AZ31 Magnesium Alloy Sheet

Authors: Emigdio Mendoza, Patricia Fernandez, Cristian Gomez

Abstract:

Magnesium alloys have generated great interest for several industrial applications because their high specific strength and low density make them a very attractive alternative for the manufacture of various components. However, these alloys have a limitation: the hexagonal crystal structure restricts the available deformation mechanisms at room temperature and, with them, the options for forming components. For this reason, severe plastic deformation processes have recently gained relevance, since they allow high deformation rates to be applied that induce microstructural changes in which the deficiency in slip systems is compensated by crystallographic grain reorientation or crystal twinning. The present study reports a statistical analysis of process temperature, number of passes, and shear angle with respect to the shear stress in the severe plastic deformation process called Equal Channel Angular Sheet Drawing (ECASD), applied to the magnesium alloy AZ31B. The analysis was carried out with the Python statsmodels library, and a post-hoc range test was performed using Tukey's test. The statistical results show that each variable has a p-value lower than 0.05, which allows the average shear stresses obtained to be compared; these lie in the range of 7.37 MPa to 12.23 MPa, lower than for other severe plastic deformation processes reported in the literature, taking 157.53 MPa as the average creep stress for the AZ31B alloy. However, a higher stress level is required when the sheets are processed using a shear angle of 150°, due to the higher level of adjustment applied by the 150° shear die. Temperature and number of shear passes are important variables as well, but they have no significant impact on the level of stress applied during the ECASD process.
In the processing of AZ31B magnesium alloy sheets, the ECASD technique thus emerges as a viable alternative for modifying the elasto-plastic properties of this alloy, promoting a weakening of the basal texture and hence a better response to deformation; consequently, during the manufacture of parts by drawing or stamping, the formation of surface cracks can be reduced while adequate mechanical performance is maintained.
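The omnibus test behind the reported p-values can be sketched directly; the paper uses Python statsmodels (OLS plus Tukey HSD), and the hand-rolled one-way ANOVA F statistic below is the equivalent first step, with made-up shear-stress readings grouped by die angle:

```python
from statistics import mean

def one_way_anova_f(groups):
    """One-way ANOVA F statistic: between-group mean square over
    within-group mean square. A large F (small p) means at least
    one factor level changes the mean shear stress."""
    grand = mean(x for g in groups for x in g)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)
    df_between = len(groups) - 1
    df_within = sum(len(g) for g in groups) - len(groups)
    return (ss_between / df_between) / (ss_within / df_within)
```

In statsmodels the same comparison would be followed by statsmodels.stats.multicomp.pairwise_tukeyhsd to see which pairs of levels differ.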

Keywords: plastic deformation, strain, sheet drawing, magnesium

Procedia PDF Downloads 115
10400 Effect of Aquatic Seed Extract of (Cichorium intybus L.) and Metformin on Nitric Oxide in Type 2 Diabetic Rats

Authors: Lotfollah Rezagholizadeh

Abstract:

Background and Aim: Diabetes mellitus is associated with high mortality and morbidity caused by the early development of atherosclerosis related to diabetic macroangiopathy. The endothelium-derived vasodilator nitric oxide (NO) has been implicated in the development of vascular complications via the regulation of blood flow and various antiatherosclerotic actions, and patients with type 2 diabetes (T2D) have a decreased level of endothelial nitric oxide release. In this study, we aimed to examine the effect of an aquatic seed extract of Cichorium intybus L. (chicory) and metformin (a known prescription drug for diabetes) on NO levels in T2D rats. Methods: Five groups of adult male Wistar rats were used (n=6): non-diabetic controls without extract treatment (Control), non-diabetic controls with extract treatment (Chicory-control), T2D rats without extract treatment (NIA/STZ), T2D rats treated with the extract (Chicory-NIA/STZ), and a T2D group that received metformin (100 mg/kg) but no extract (Metformin-NIA/STZ). T2D was induced by intraperitoneal (i.p.) injection of niacinamide (NIA, 200 mg/kg), 15 min after an i.p. administration of streptozotocin (STZ, 55 mg/kg). Lyophilized chicory extract (125 mg/kg) was dissolved in 0.2 ml normal saline and administered once a day. The experiments lasted for 3 weeks after diabetes induction. NO analysis was performed with an assay based on the Griess reaction. Data are reported as mean ± SD, and statistical analysis was performed by ANOVA. Results: Serum nitric oxide levels decreased significantly in the NIA/STZ group compared with the Control and Chicory-control groups. Treatment with chicory extract caused a significant increase in serum NO levels in the Chicory-NIA/STZ group compared to the NIA/STZ group (p < 0.05). The Metformin-NIA/STZ group did not show a considerable difference from the NIA/STZ group with respect to NO levels.
In a group of rats made diabetic by STZ alone (type 1 diabetic rats, T1D), chicory did not have a significant ameliorating effect. Conclusion: In this study, we clearly showed a relationship between low serum nitric oxide levels and diabetes mellitus in rats. The increase in serum nitric oxide produced by chicory extract is an indication of the antiatherogenic effect of this plant. Chicory seed extract was more efficient than metformin in improving NO levels in NO-deficient T2D rats.

Keywords: type 2 diabetes mellitus, nitric oxide, chicory, metformin

Procedia PDF Downloads 337
10399 Knowledge of Risk Factors and Health Implications of Fast Food Consumption among Undergraduate in Nigerian Polytechnic

Authors: Adebusoye Michael, Anthony Gloria, Fasan Temitope, Jacob Anayo

Abstract:

Background: The culture of fast food consumption has gradually become a common lifestyle in Nigeria, especially among young people in urban areas, in spite of the associated adverse health consequences. The adolescent pattern of fast food consumption, and adolescents' perception of this practice as a risk factor for Non-Communicable Diseases (NCDs), have not been fully explored. This study was designed to assess the fast food consumption pattern, and its perception as a risk factor for NCDs, among undergraduates of the Federal Polytechnic, Bauchi. Methodology: The study had a descriptive cross-sectional design. One hundred and eighty-five students were recruited using a systematic random sampling method from the two halls of residence. A structured questionnaire was used to assess the consumption pattern of fast foods, and the data collected were analysed using the Statistical Package for the Social Sciences (SPSS), version 16. Simple descriptive statistics, such as frequency counts and percentages, were used to interpret the data. Results: The age range of respondents was 18-34 years; 58.4% were male, 93.5% were single, and 51.4% of their parents were employed. All respondents (100%) were aware of fast foods, and 75% agreed on their implications for NCDs. The fast food consumption distribution included meat pie (4.9%), beef roll/sausage (2.7%), egg roll (13.5%), doughnut (16.2%), noodles (18%), and carbonated drinks (3.8%); 30.3% consumed fast food three times a week, and 71% attributed high fast food consumption to workload. Conclusion: The study revealed that social pressure from peers, time constraints, class pressure, and the school programme strongly influence the high proportion of higher-institution students who consume fast foods. Nutrition education campaigns for campus food outlets and vendors, and behavioural change communication on healthy nutrition and lifestyles among young people, are therefore advocated.

Keywords: fast food consumption, Nigerian polytechnic, risk factors, undergraduate

Procedia PDF Downloads 471
10398 The Relationship between Personal, Psycho-Social and Occupational Risk Factors with Low Back Pain Severity in Industrial Workers

Authors: Omid Giahi, Ebrahim Darvishi, Mahdi Akbarzadeh

Abstract:

Introduction: Occupational low back pain (LBP) is one of the most prevalent work-related musculoskeletal disorders, in which many risk factors are involved. The present study focuses on the relation between personal, psycho-social and occupational risk factors and LBP severity in industrial workers. Materials and Methods: This research was a case-control study conducted in Kurdistan province. 100 workers (mean age ± SD of 39.9 ± 10.45 years) with LBP were selected as the case group, and 100 workers (mean age ± SD of 37.2 ± 8.5 years) without LBP were assigned to the control group. All participants were selected from various industrial units, and they had similar occupational conditions. The required data, including demographic information (BMI, smoking, alcohol, and family history), occupational factors (posture, mental workload (MWL), force, vibration and repetition), and psychosocial factors (stress, occupational satisfaction and security), were collected via consultation with occupational medicine specialists, interviews, and the related questionnaires, as well as the NASA-TLX software and the REBA worksheet. The chi-square test, logistic regression and structural equation modeling (SEM) were used to analyze the data, with IBM SPSS Statistics 24 and Mplus 6 software. Results: 114 (77%) of the individuals were male and 86 (23%) were female. Mean career lengths of the case and control groups were 10.90 ± 5.92 and 9.22 ± 4.24 years, respectively. The statistical analysis revealed a significant correlation between posture, smoking, stress, satisfaction, and MWL and occupational LBP. The odds ratios (95% confidence intervals) derived from a logistic regression model were 2.7 (1.27-2.24), 2.5 (2.26-5.17) and 3.22 (2.47-3.24) for stress, MWL, and posture, respectively. The SEM analysis of the personal, psycho-social and occupational factors with LBP also revealed a significant correlation. Conclusion: All three broad categories of risk factors simultaneously increase the risk of occupational LBP in the workplace, but posture, stress, and MWL play a major role in LBP severity. Therefore, prevention strategies for persons in jobs with high risks for LBP are required to decrease the risk of occupational LBP.
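As an illustration of how odds ratios and 95% confidence intervals of this kind can be derived for a case-control design, the sketch below computes an odds ratio from a hypothetical 2x2 exposure table; the counts are invented, not the study's data.

```python
import numpy as np

# Hypothetical 2x2 case-control table for one exposure (e.g., awkward
# posture); the counts are illustrative, not the study's data.
#                     exposed  unexposed
cases = np.array([60, 40])      # workers with LBP
controls = np.array([30, 70])   # workers without LBP

# Cross-product odds ratio: (a*d) / (b*c).
odds_ratio = (cases[0] * controls[1]) / (cases[1] * controls[0])

# 95% CI on ln(OR): ln(OR) +/- 1.96 * sqrt(sum of reciprocal cell counts).
se = np.sqrt((1 / cases + 1 / controls).sum())
ci = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se)
print(round(odds_ratio, 2), np.round(ci, 2))  # OR = 3.5, CI approx (1.95, 6.29)
```

A logistic regression with a single binary exposure reproduces this same odds ratio as the exponentiated coefficient.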

Keywords: industrial workers, occupational low back pain, occupational risk factors, psychosocial factors

Procedia PDF Downloads 259
10397 Times2D: A Time-Frequency Method for Time Series Forecasting

Authors: Reza Nematirad, Anil Pahwa, Balasubramaniam Natarajan

Abstract:

Time series data consist of successive data points collected over a period of time. Accurate prediction of future values is essential for informed decision-making in several real-world applications, including electricity load demand forecasting, lifetime estimation of industrial machinery, traffic planning, weather prediction, and the stock market. Due to their critical relevance and wide application, there has been considerable interest in time series forecasting in recent years. However, the proliferation of sensors and IoT devices, real-time monitoring systems, and high-frequency trading data introduces significant intricate temporal variations, rapid changes, noise, and non-linearities, making time series forecasting more challenging. Classical methods such as Autoregressive Integrated Moving Average (ARIMA) and exponential smoothing aim to extract pre-defined temporal variations, such as trends and seasonality. While these methods are effective for capturing well-defined seasonal patterns and trends, they often struggle with the more complex, non-linear patterns present in real-world time series data. In recent years, deep learning has made significant contributions to time series forecasting. Recurrent Neural Networks (RNNs) and their variants, such as Long Short-Term Memory (LSTM) networks and Gated Recurrent Units (GRUs), have been widely adopted for modeling sequential data. However, they often suffer from locality, making it difficult to capture local trends and rapid fluctuations. Convolutional Neural Networks (CNNs), particularly Temporal Convolutional Networks (TCNs), leverage convolutional layers to capture temporal dependencies by applying convolutional filters along the temporal dimension. Despite their advantages, TCNs struggle to capture relationships between distant time points due to the locality of one-dimensional convolution kernels. Transformers have revolutionized time series forecasting with their powerful attention mechanisms, effectively capturing long-term dependencies and relationships between distant time points. However, the attention mechanism may struggle to discern dependencies directly from scattered time points due to intricate temporal patterns. Lastly, Multi-Layer Perceptrons (MLPs) have also been employed, with models like N-BEATS and LightTS demonstrating success; despite this, MLPs often face high volatility and computational complexity challenges in long-horizon forecasting. To address intricate temporal variations in time series data, this study introduces Times2D, a novel framework that integrates 2D spectrogram and derivative heatmap techniques in parallel. The spectrogram focuses on the frequency domain, capturing periodicity, while the derivative patterns emphasize the time domain, highlighting sharp fluctuations and turning points. This 2D transformation enables the utilization of powerful computer vision techniques to capture various intricate temporal variations. To evaluate the performance of Times2D, extensive experiments were conducted on standard time series datasets and compared with various state-of-the-art algorithms, including DLinear (2023), TimesNet (2023), Non-stationary Transformer (2022), PatchTST (2023), N-HiTS (2023), Crossformer (2023), MICN (2023), LightTS (2022), FEDformer (2022), FiLM (2022), SCINet (2022a), Autoformer (2021), and Informer (2021), under the same modeling conditions. The initial results demonstrate that Times2D achieves consistent state-of-the-art performance in both short-term and long-term forecasting tasks. Furthermore, the generality of the Times2D framework allows it to be applied to various tasks such as time series imputation, clustering, classification, and anomaly detection, offering potential benefits in any domain that involves sequential data analysis.
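The two parallel 2D views described above can be sketched as follows. This is a minimal illustration of the idea (a spectrogram for the frequency domain and a period-folded derivative heatmap for the time domain); the period length, window size, and synthetic signal are assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.signal import spectrogram

# Synthetic daily-periodic series with noise (illustrative, not a benchmark set).
t = np.arange(1024)
x = np.sin(2 * np.pi * t / 24) + 0.1 * np.random.default_rng(1).standard_normal(1024)

# Frequency-domain view: short-time spectrogram of the series.
freqs, times, Sxx = spectrogram(x, nperseg=64, noverlap=32)

# Time-domain view: first differences folded into a (cycles x period) heatmap,
# so sharp fluctuations and turning points line up column-wise.
period = 24                      # assumed period length
d = np.diff(x, prepend=x[0])
n_cycles = len(d) // period
deriv_heatmap = d[: n_cycles * period].reshape(n_cycles, period)

print(Sxx.shape, deriv_heatmap.shape)
```

Both 2D arrays can then be fed to standard computer-vision backbones, which is the motivation the abstract gives for the transformation.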

Keywords: derivative patterns, spectrogram, time series forecasting, times2D, 2D representation

Procedia PDF Downloads 48
10396 The Impact of Total Parenteral Nutrition on Pediatric Stem Cell Transplantation and Its Complications

Authors: R. Alramyan, S. Alsalamah, R. Alrashed, R. Alakel, F. Altheyeb, M. Alessa

Abstract:

Background: Nutritional support with total parenteral nutrition (TPN) is usually commenced in hematopoietic stem cell transplantation (HSCT) patients. However, it has its benefits and risks. Complications related to the central venous catheter, such as infections, and metabolic disturbances, including abnormal liver function, are usually of concern in such patients. Methods: A retrospective chart review of all pediatric patients who underwent HSCT between 2015 and 2018 in a tertiary hospital in Riyadh, Saudi Arabia. Patients' demographics, type of conditioning, type of nutrition, and patient outcomes were collected. Statistical analysis was conducted using SPSS version 22. Frequencies and percentages were used to describe categorical variables; mean and standard deviation were used for continuous variables. A P-value of less than 0.05 was considered statistically significant. Results: A total of 162 HSCTs were identified during the period mentioned. Indications for allogeneic transplant included hemoglobinopathy in 50 patients (31%) and acute lymphoblastic leukemia in 21 patients (13%). TPN was used in 96 patients (59.30%) for a median of 14 days, nasogastric tube (NGT) feeding in 16 patients (9.90%) for a median of 11 days, and 71 patients (43.80%) were able to tolerate oral feeding. Of the 96 patients (59.30%) who were dependent on TPN, 64 (66.7%) had severe mucositis, in comparison to 17 patients (25.8%) who were either on NGT or tolerated oral intake (P-value = 0.00). Sinusoidal obstruction syndrome (SOS) was seen in 14 patients (14.6%) who were receiving TPN, compared to none of the non-TPN patients (P-value = 0.001). Moreover, the majority of patients who had SOS had received myeloablative conditioning therapy for non-malignant disease (hemoglobinopathy). However, there were no statistically significant differences in graft-versus-host disease (both acute and chronic), bacteremia, or patient outcome between the two groups. Conclusions: Nutritional support using TPN is used in the majority of patients, especially post-myeloablative conditioning associated with severe mucositis. TPN was associated with SOS (also termed veno-occlusive disease, VOD), especially in hemoglobinopathy patients who received myeloablative therapy. This may emphasize the use of preventive measures such as fluid restriction, diuretics, or defibrotide in high-risk patients.

Keywords: hematopoietic stem cell transplant, HSCT, stem cell transplant, sinusoidal obstruction syndrome, total parenteral nutrition

Procedia PDF Downloads 162
10395 Fold and Thrust Belts Seismic Imaging and Interpretation

Authors: Sunjay

Abstract:

Plate tectonics is of great significance, as it represents the spatial relationships of volcanic rock suites at plate margins, the distribution in space and time of the conditions of different metamorphic facies, the scheme of deformation in mountain belts, or orogens, and the association of different types of economic deposit. Orogenic belts are characterized by extensive thrust faulting, movements along large strike-slip fault zones, and extensional deformation that occurs deep within continental interiors. Within oceanic areas there are also regions of crustal extension and accretion in the back-arc basins located on the landward sides of many destructive plate margins. Collisional orogens develop where a continent or island arc collides with a continental margin as a result of subduction. Collisional and non-collisional orogens can be explained by differences in the strength and rheology of the continental lithosphere and by processes that influence these properties during orogenesis. Seismic imaging difficulties: in triangle zones, several factors reduce the effectiveness of seismic methods. The topography in the central part of the triangle zone is usually rugged and is associated with near-surface velocity inversions which degrade the quality of the seismic image. These characteristics lead to a low signal-to-noise ratio, inadequate penetration of energy through the overburden, poor geophone coupling with the surface, and wave scattering. Depth seismic imaging techniques: seismic processing alters the seismic data to suppress noise, enhance the desired signal (higher signal-to-noise ratio), and migrate seismic events to their appropriate locations in space and depth. Processing steps generally include velocity analysis, static corrections, moveout corrections, stacking, and migration. Two classic pitfalls in exploration seismology are the bow-tie effect and shadow zones, which are areas with no reflections (dead areas). Shadow zones are common in the vicinity of faults and other discontinuous areas in the subsurface; they result when energy from a reflector is focused on receivers that produce other traces, so that reflectors are not shown in their true positions. Diffractions occur at discontinuities in the subsurface, such as faults and velocity discontinuities (as at 'bright spot' terminations), while the bow-tie effect is caused by deep-seated synclines. Related topics include seismic imaging of thrust faults and structural damage in deepwater thrust belts, imaging deformation in submarine thrust belts using seismic attributes, imaging thrust and fault zones using 3D seismic image processing techniques, and checking balanced structural cross sections against seismic interpretation pitfalls; such pitfalls can originate from any or all of the limitations of data acquisition, processing, and interpretation of the subsurface geology, and also affect seismic attribute interpretation of tectonic features. Seismic attributes are routinely used to accelerate and quantify the interpretation of tectonic features in 3D seismic data: coherence (or variance) cubes delineate the edges of megablocks and faulted strata, curvature delineates folds and flexures, while spectral components delineate lateral changes in thickness and lithology. These techniques also support leakage surveillance for carbon capture and geological storage, because a fault can behave either as a seal or as a conduit for hydrocarbon transport to a trap.
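As a concrete example of one of the processing steps listed above, the sketch below applies the hyperbolic normal-moveout (NMO) travel-time relation t(x) = sqrt(t0^2 + x^2/v^2); the velocity, offsets, and zero-offset time are illustrative values, not data from any survey.

```python
import numpy as np

# Illustrative NMO correction for a single flat reflector.
v = 2500.0                                         # assumed RMS velocity, m/s
offsets = np.array([0.0, 500.0, 1000.0, 1500.0])   # source-receiver offsets, m
t0 = 0.8                                           # zero-offset two-way time, s

# Hyperbolic travel time recorded at each offset.
t_x = np.sqrt(t0**2 + (offsets / v) ** 2)
# The NMO correction removes this shift, flattening the event to t0 for stacking.
nmo_shift = t_x - t0
print(np.round(t_x, 3))  # rises from 0.8 s at zero offset to 1.0 s at 1500 m
```

Stacking the NMO-corrected traces then reinforces the reflection and suppresses random noise, which is the signal-to-noise gain the abstract refers to.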

Keywords: tectonics, seismic imaging, fold and thrust belts, seismic interpretation

Procedia PDF Downloads 73
10394 Impact of Welding Distortion on the Design of Fabricated T-Girders Using Finite Element Modeling

Authors: Ahmed Hammad, Yehia Abdel-Nasser, Mohamed Shamma

Abstract:

The main configuration of ship construction consists of standard and fabricated stiffening members, such as the fabricated T-sections commonly used in shipbuilding. During the welding process, non-uniform heating and rapid cooling lead to the inevitable presence of out-of-plane distortion and welding-induced residual stresses. Because of these imperfections, the fabricated structural members may not be able to carry their design load, and the removal of these imperfections requires extra man-hours. In the present work, controlling these imperfections has been investigated at both the design and fabrication stages. A typical fabricated T-girder is selected to investigate the problem of these imperfections using double-side welding. A numerical simulation based on finite element (FE) modeling has been used to investigate the effect of different parameters of the selected fabricated T-girder, such as geometrical properties and welding sequences, on the magnitude of welding imperfections. FE results were compared with the results of an experimental model of a double-side fillet weld. The present work concludes that, firstly, in the design stage, the optimum geometry of the fabricated T-girder is determined based on minimum steel weight and out-of-plane distortion; secondly, in the fabrication stage, the best welding sequence is determined on the basis of minimum out-of-plane welding distortion.

Keywords: fabricated T-girder, FEM, out-of-plane distortion, section modulus, welding residual stresses

Procedia PDF Downloads 128
10393 Competitive Adsorption of Heavy Metals onto Natural and Activated Clay: Equilibrium, Kinetics and Modeling

Authors: L. Khalfa, M. Bagane, M. L. Cervera, S. Najjar

Abstract:

The aim of this work is to present a low-cost adsorbent for removing toxic heavy metals from aqueous solutions. We therefore investigate the efficiency of natural clay minerals collected from southern Tunisia, and their modified form prepared using sulfuric acid, in the removal of the toxic metal ions Zn(II) and Pb(II) from synthetic wastewater solutions. The obtained results indicate that metal uptake is pH-dependent, with maximum removal occurring at pH 6. Adsorption equilibrium was reached rapidly, within 90 min for both metal ions studied. The kinetics results show that the pseudo-second-order model describes the adsorption and that intraparticle diffusion is the rate-limiting step. The treatment of natural clay with sulfuric acid creates more active sites and increases the surface area, and thus increased the adsorbed quantities of lead and zinc in single and binary systems. The competitive adsorption study showed that the uptake of lead was inhibited in the presence of 10 mg/L of zinc; an antagonistic binary adsorption mechanism was observed. These results reveal that clay is an effective natural material for removing lead and zinc, in single and binary systems, from aqueous solution.
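A minimal sketch of fitting the pseudo-second-order model via its linearized form t/qt = 1/(k2*qe^2) + t/qe is shown below; the qe and k2 values are illustrative, not the measured results for this clay.

```python
import numpy as np

# Hypothetical pseudo-second-order kinetics; qe (equilibrium uptake, mg/g)
# and k2 (rate constant, g/(mg*min)) are illustrative, not measured values.
qe_true, k2 = 25.0, 0.004
t = np.array([5, 10, 20, 40, 60, 90], dtype=float)     # contact time, min
qt = (k2 * qe_true**2 * t) / (1 + k2 * qe_true * t)    # model uptake at time t

# Linearized form: t/qt = 1/(k2*qe^2) + t/qe -> slope = 1/qe, as used in
# standard kinetic plots.
slope, intercept = np.polyfit(t, t / qt, 1)
qe_fit = 1.0 / slope
k2_fit = 1.0 / (intercept * qe_fit**2)
print(round(qe_fit, 2), round(k2_fit, 4))  # recovers qe = 25.0 and k2 = 0.004
```

With experimental data, the goodness of this linear fit (versus the pseudo-first-order plot) is what supports the model choice reported above.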

Keywords: heavy metal, activated clay, kinetic study, competitive adsorption, modeling

Procedia PDF Downloads 226
10392 An Eigen-Approach for Estimating the Direction-of Arrival of Unknown Number of Signals

Authors: Dia I. Abu-Al-Nadi, M. J. Mismar, T. H. Ismail

Abstract:

A technique for estimating the direction-of-arrival (DOA) of unknown number of source signals is presented using the eigen-approach. The eigenvector corresponding to the minimum eigenvalue of the autocorrelation matrix yields the minimum output power of the array. Also, the array polynomial with this eigenvector possesses roots on the unit circle. Therefore, the pseudo-spectrum is found by perturbing the phases of the roots one by one and calculating the corresponding array output power. The results indicate that the DOAs and the number of source signals are estimated accurately in the presence of a wide range of input noise levels.
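A simplified numerical sketch of the eigen-approach is given below: it forms the sample autocorrelation matrix, takes the eigenvector of the minimum eigenvalue, and scans a pseudo-spectrum over angle. The root-phase perturbation step is not reproduced, and the array size, DOAs, and noise level are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
M, snapshots = 8, 500
doas = np.deg2rad([-20.0, 30.0])   # true arrival angles (illustrative)

def steering(theta, m=M):
    # Half-wavelength-spaced uniform linear array steering vector.
    return np.exp(1j * np.pi * np.arange(m) * np.sin(theta))

# Simulated snapshots: two sources plus white noise.
A = np.column_stack([steering(th) for th in doas])
S = rng.standard_normal((2, snapshots)) + 1j * rng.standard_normal((2, snapshots))
N = rng.standard_normal((M, snapshots)) + 1j * rng.standard_normal((M, snapshots))
X = A @ S + 0.1 * N

R = X @ X.conj().T / snapshots          # sample autocorrelation matrix
eigvals, V = np.linalg.eigh(R)
v_min = V[:, 0]                         # eigenvector of the minimum eigenvalue

# The minimum-eigenvalue eigenvector lies in the noise subspace, so the
# pseudo-spectrum peaks where the steering vector is orthogonal to it.
grid = np.deg2rad(np.linspace(-90, 90, 721))
spectrum = 1.0 / np.array([abs(steering(th).conj() @ v_min) ** 2 for th in grid])
print("peaks expected near -20 and 30 degrees")
```

In the paper's formulation the same orthogonality is exploited through the roots of the array polynomial on the unit circle; this grid scan is only a cruder stand-in for that root-perturbation procedure.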

Keywords: array signal processing, direction-of-arrival, antenna arrays, eigenvalues, eigenvectors, Lagrange multiplier

Procedia PDF Downloads 336
10391 Finite Element Modeling of Friction Stir Welding of Dissimilar Alloys

Authors: Fadi Al-Badour, Nesar Merah, Abdelrahman Shuaib, Abdelaziz Bazoune

Abstract:

In the current work, a Coupled Eulerian-Lagrangian (CEL) model is developed to simulate the friction stir welding (FSW) of dissimilar aluminum alloys (Al 6061-T6 with Al 5083-O). The model predicts volumetric defects, material flow, developed temperatures, and stresses, in addition to tool reaction loads. Simulation of the welding phase is performed by employing a control volume approach, whereby the welding speed is defined as inflow and outflow over the Eulerian domain boundaries. Only material softening due to inelastic heat generation is considered, and the material behavior is assumed to obey the Johnson-Cook model. The model was validated using published experimentally measured temperatures at similar welding conditions, and by qualitative comparison of the dissimilar weld microstructure. The FE results showed that most of the developed temperatures were below melting, and that the bulk of the material deformed in the solid state. The temperature gradient on the Al 6061-T6 side was found to be less than that on the Al 5083-O side. Changing the position of Al 6061-T6 from the retreating (Ret.) side to the advancing (Adv.) side led to a decrease in the maximum process temperature and strain rate. This could be due to the higher resistance of Al 6061-T6 to flow compared to Al 5083-O.
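The Johnson-Cook flow stress assumed for the material behavior can be sketched as follows; the constants are illustrative values for an aluminum alloy, not the calibrated parameters used in this work.

```python
import numpy as np

# Johnson-Cook constitutive model: strain hardening, strain-rate sensitivity,
# and thermal softening. Constants below are illustrative only.
A, B, n = 324.0e6, 114.0e6, 0.42        # Pa, Pa, hardening exponent
C, eps_dot0 = 0.002, 1.0                # rate sensitivity, reference strain rate (1/s)
m, T_room, T_melt = 1.34, 293.0, 855.0  # thermal softening exponent, K, K

def jc_flow_stress(eps, eps_dot, T):
    """sigma = (A + B*eps^n) * (1 + C*ln(eps_dot/eps_dot0)) * (1 - T*^m)."""
    T_star = (T - T_room) / (T_melt - T_room)   # homologous temperature
    return (A + B * eps**n) * (1 + C * np.log(eps_dot / eps_dot0)) * (1 - T_star**m)

# Thermal softening: flow stress drops as T approaches the melting point,
# which is what drives material flow around the FSW tool without melting.
print(jc_flow_stress(0.1, 1.0, 300.0) / 1e6, jc_flow_stress(0.1, 1.0, 800.0) / 1e6)
```

In the CEL simulation this softening, driven by inelastic heat generation, is the only material-weakening mechanism considered, consistent with the abstract.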

Keywords: friction stir welding, dissimilar metals, finite element modeling, coupled Eulerian-Lagrangian analysis

Procedia PDF Downloads 333
10390 The Impact of Temperature on the Threshold Capillary Pressure of Fine-Grained Shales

Authors: Talal Al-Bazali, S. Mohammad

Abstract:

The threshold capillary pressure of shale caprocks is an important parameter in CO₂ storage modeling. A correct estimation of the threshold capillary pressure is not only essential for CO₂ storage modeling but also important for assessing the overall economic and environmental impact of the design process. A standard step-by-step approach was used to measure the threshold capillary pressure between shale and non-wetting fluids at different temperatures. The objective of this work is to assess the impact of high temperature on the threshold capillary pressure of four different shales as they interacted with four different oil-based muds, as well as air, CO₂, N₂, and methane. This study shows that the threshold capillary pressure between shale and a non-wetting fluid is highly impacted by temperature. An empirical correlation for the dependence of threshold capillary pressure on temperature has been developed for shales interacting with oil-based muds and gases. This correlation shows that the threshold capillary pressure decreases exponentially as the temperature increases. In this correlation, an experimental constant (α) appears that may depend on the properties of the shale and the non-wetting fluid. The value of α was found to be higher for gases than for oil-based muds, which is consistent with intuition since the interfacial tension for gases is higher than that for oil-based muds. The author believes that threshold capillary pressure measured at ambient temperature is misleading and could yield higher values than those encountered at in-situ conditions. Therefore, one must correct for the impact of temperature when measuring the threshold capillary pressure of shale at ambient temperature.
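The exponential temperature dependence described above can be written as P_th(T) = P0 * exp(-alpha * (T - T0)). The sketch below recovers alpha from a log-linear fit of hypothetical measurements; every number is invented for illustration, not taken from the shale data.

```python
import numpy as np

# Hypothetical exponential correlation P_th(T) = P0 * exp(-alpha * (T - T0));
# all values below are illustrative, not measured shale data.
T0, P0, alpha_true = 25.0, 6.0, 0.012   # degC, MPa, 1/degC
T = np.array([25.0, 50.0, 75.0, 100.0, 125.0])
P = P0 * np.exp(-alpha_true * (T - T0))

# Taking logs linearizes the correlation: ln(P) = ln(P0) - alpha * (T - T0),
# so alpha is the negative slope of a straight-line fit.
slope, intercept = np.polyfit(T - T0, np.log(P), 1)
alpha_fit = -slope
print(round(alpha_fit, 4))  # recovers 0.012
```

With laboratory data, fitting alpha separately for gases and for oil-based muds would exhibit the higher gas-phase alpha reported above.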

Keywords: capillary pressure, shale, temperature, threshold

Procedia PDF Downloads 375
10389 Logistic Regression Based Model for Predicting Students’ Academic Performance in Higher Institutions

Authors: Emmanuel Osaze Oshoiribhor, Adetokunbo MacGregor John-Otumu

Abstract:

In recent years, there has been a desire to forecast student academic achievement prior to graduation, to help students improve their grades, particularly individuals with poor performance. The goal of this study is to employ supervised learning techniques to construct a predictive model of student academic achievement. Many academics have already constructed models that predict student academic achievement based on factors such as smoking, demography, culture, social media, parents' educational background, parents' finances, and family background, to name a few. Those features and the models employed may not have correctly classified students in terms of their academic performance. The model in this study is built using a logistic regression classifier with basic features such as the previous semester's course score, class attendance, class participation, and the total number of course materials or resources the student is able to cover per semester, used to predict whether the student will perform well in related courses in the future. The model outperformed other classifiers such as Naive Bayes, Support Vector Machine (SVM), Decision Tree, Random Forest, and AdaBoost, returning 96.7% accuracy. The model is available as a desktop application, allowing both instructors and students to benefit from user-friendly interfaces for predicting student academic achievement. As a result, it is recommended that both students and professors use this tool to better forecast outcomes.
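A minimal sketch of such a logistic regression classifier, trained on synthetic stand-ins for the four features named above, could look like the following; the data generator, weights, and threshold are hypothetical, not the study's dataset or results.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
n = 500
# Synthetic features (illustrative ranges): previous score, attendance rate,
# participation, and materials covered per semester.
X = np.column_stack([
    rng.uniform(0, 100, n),     # previous semester's course score
    rng.uniform(0, 1, n),       # class attendance rate
    rng.uniform(0, 1, n),       # class participation
    rng.integers(1, 20, n),     # course materials covered
])
# Label "performs well" when a weighted score clears the median (plus noise);
# these weights are invented for the sketch.
score = 0.05 * X[:, 0] + 2 * X[:, 1] + X[:, 2] + 0.1 * X[:, 3]
y = (score + rng.normal(0, 0.5, n) > np.median(score)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.2f}")
```

On real student records the same pipeline, with a proper train/test split, would yield the kind of accuracy comparison against Naive Bayes, SVM, and tree ensembles reported above.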

Keywords: artificial intelligence, ML, logistic regression, performance, prediction

Procedia PDF Downloads 100
10388 Rhythm-Reading Success Using Conversational Solfege

Authors: Kelly Jo Hollingsworth

Abstract:

Conversational Solfege, a research-based, 12-step music literacy instructional method using the sound-before-sight approach, was used to teach rhythm-reading to 128 second-grade students at a public school in the southeastern United States. For each step, multiple scripted techniques are supplied to teach each skill. Unit one, which covers quarter-note and barred eighth-note rhythms, was the focus of this study. During regular weekly music instruction, students completed method steps one through five, which include aural discrimination, decoding familiar and unfamiliar rhythm patterns, and improvising rhythmic phrases using quarter notes and barred eighth notes. Intact classes were randomly assigned to two treatment groups for teaching steps six through eight: the visual presentation and identification of quarter notes and barred eighth notes, visually presenting and decoding familiar patterns, and visually presenting and decoding unfamiliar patterns using said notation. For three weeks, students practiced steps six through eight during regular weekly music class. One group spent five minutes of class time on the technique work for steps six through eight, while the other group spent ten minutes of class time practicing the same techniques. A pretest and posttest were administered, and ANOVA results reveal that both the five-minute group (p < .001) and the ten-minute group (p < .001) reached statistical significance, suggesting Conversational Solfege is an efficient, effective approach to teaching rhythm-reading to second-grade students. After two weeks of no instruction, students were retested to measure retention. Using a repeated-measures ANOVA, both groups reached statistical significance (p < .001) on the second posttest, suggesting both the five-minute and the ten-minute group retained rhythm-reading skill after two weeks of no instruction. Statistical significance was not reached between groups (p = .252), suggesting five minutes of rhythm-reading practice using Conversational Solfege techniques is as effective as ten minutes. Future research includes replicating the study with other grades and units in the text.

Keywords: conversational solfege, length of instructional time, rhythm-reading, rhythm instruction

Procedia PDF Downloads 163
10387 Validating the Micro-Dynamic Rule in Opinion Dynamics Models

Authors: Dino Carpentras, Paul Maher, Caoimhe O'Reilly, Michael Quayle

Abstract:

Opinion dynamics is dedicated to modeling the dynamic evolution of people's opinions. Models in this field are based on a micro-dynamic rule, which determines how people update their opinion when interacting. Despite the high number of new models (many of them based on new rules), little research has been dedicated to experimentally validating the rule. A few studies have started bridging this literature gap by experimentally testing the rule. However, in these studies, participants are forced to express their opinion as a number instead of using natural language. Furthermore, some of these studies average data from experimental questions without testing whether differences existed between them. Indeed, it is possible that different topics show different dynamics; for example, people may be more prone to accepting someone else's opinion regarding less polarized topics. In this work, we collected data from 200 participants on 5 unpolarized topics. Participants expressed their opinions using natural language ('agree' or 'disagree') together with the certainty of their answer, expressed as a number between 1 and 10. To keep the interaction based on natural language, certainty was not shown to other participants. We then showed each participant someone else's opinion on the same topic and, after a distraction task, repeated the measurement. To produce data compatible with standard opinion dynamics models, we multiplied the opinion (encoded as agree = 1 and disagree = -1) by the certainty to obtain a single 'continuous opinion' ranging from -10 to 10. By analyzing the topics independently, we observed that each one shows a different initial distribution. However, the dynamics (i.e., the properties of the opinion change) appear to be similar across all topics, suggesting that the same micro-dynamic rule could be applied to unpolarized topics. Another important result is that participants who change opinion tend to maintain similar levels of certainty. This is in contrast with typical micro-dynamic rules, where agents move to an average point instead of directly jumping to the opposite continuous opinion. As expected, we also observed the effect of social influence in the data: exposing someone to 'agree' or 'disagree' influenced participants toward respectively higher or lower values of the continuous opinion. However, we also observed random variations whose effect was stronger than that of social influence. We even observed cases of people who changed from 'agree' to 'disagree' even though they were exposed to 'agree.' This phenomenon is surprising, as in the standard literature the strength of the noise is usually smaller than the strength of social influence. Finally, we also built an opinion dynamics model from the data. The model was able to explain more than 80% of the data variance. Furthermore, by iterating the model, we were able to produce polarized states even starting from an unpolarized population. This experimental approach offers a way to test the micro-dynamic rule and allows us to build models that are directly grounded in experimental results.
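The encoding and update dynamics described above can be sketched as follows; the influence and noise parameters are illustrative assumptions, chosen only to reproduce the qualitative finding that noise can outweigh social influence, and this is not the model fitted to the experimental data.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
# Encoding used above: stance (agree = +1 / disagree = -1) times certainty
# in [1, 10] gives a continuous opinion in [-10, 10].
stance = rng.choice([-1, 1], size=n)
certainty = rng.integers(1, 11, size=n)
opinion = stance * certainty

# Exposure to a peer's stance nudges the continuous opinion toward it,
# with noise deliberately stronger than the influence term (illustrative).
peer = rng.choice([-1, 1], size=n)
influence, noise_sd = 0.5, 1.5
opinion_after = np.clip(opinion + influence * peer + rng.normal(0, noise_sd, n),
                        -10, 10)

# Social influence shows up as a mean shift toward the peer's stance, even
# though individual changes can go against it (the surprising cases above).
mean_shift = np.mean((opinion_after - opinion) * peer)
print(mean_shift)
```

Because the noise standard deviation exceeds the influence strength, some simulated participants move against the peer's stance, mirroring the 'agree' to 'disagree' flips observed in the data.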

Keywords: experimental validation, micro-dynamic rule, opinion dynamics, update rule

Procedia PDF Downloads 167
10386 Robust Processing of Antenna Array Signals under Local Scattering Environments

Authors: Ju-Hong Lee, Ching-Wei Liao

Abstract:

An adaptive array beamformer is designed to automatically preserve the desired signals while cancelling interference and noise. Providing robustness against model mismatches and tracking possible environment changes calls for robust adaptive beamforming techniques. The design criterion yields the well-known generalized sidelobe canceller (GSC) beamformer. In practice, knowledge of the desired steering vector can be imprecise, which often occurs due to estimation errors in the DOA of the desired signal or imperfect array calibration. In these situations, the signal of interest (SOI) is treated as interference, and the performance of the GSC beamformer is known to degrade. This undesired behavior results in a reduction of the array output signal-to-interference-plus-noise ratio (SINR). Therefore, it is worth developing robust techniques to deal with the problems caused by local scattering environments. As to the implementation of adaptive beamforming, the required computational complexity is enormous when the array beamformer is equipped with a massive number of antenna sensors. To alleviate this difficulty, a GSC with partial adaptivity, offering fewer adaptive degrees of freedom and a faster adaptive response, has been proposed in the literature. Unfortunately, it has been shown that conventional GSC-based adaptive beamformers are usually very sensitive to the mismatch problems caused by local scattering. In this paper, we present an effective GSC-based beamformer against the mismatch problems mentioned above. The proposed GSC-based array beamformer adaptively estimates the actual direction of the desired signal by using the presumed steering vector and the received array data snapshots. We utilize the predefined steering vector and a presumed angle tolerance range to carry out the estimation required for obtaining an appropriate steering vector. A matrix associated with the direction vectors of the signal sources is first created. Then projection matrices related to this matrix are generated and utilized to iteratively estimate the actual direction vector of the desired signal. As a result, the quiescent weight vector and the signal blocking matrix required for performing adaptive beamforming can be easily found. By utilizing the proposed GSC-based beamformer, we find that the performance degradation due to the considered local scattering environments can be effectively mitigated. To further enhance the beamforming performance, a signal subspace projection matrix is also introduced into the proposed GSC-based beamformer. Several computer simulation examples show that the proposed GSC-based beamformer outperforms existing robust techniques.
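The two GSC ingredients mentioned above, a quiescent weight vector satisfying w_q^H a = 1 and a blocking matrix orthogonal to the steering vector, can be constructed as in the sketch below; the array size and presumed DOA are illustrative, and the iterative direction re-estimation described in the abstract is not reproduced.

```python
import numpy as np

# Minimal GSC construction for a half-wavelength uniform linear array.
M = 8
theta = np.deg2rad(10.0)                   # presumed DOA of the SOI (illustrative)
a = np.exp(1j * np.pi * np.arange(M) * np.sin(theta))

# Quiescent weight: distortionless toward the presumed steering vector,
# i.e. w_q^H a = 1.
w_q = a / (a.conj() @ a)

# Blocking matrix: orthonormal basis of the subspace orthogonal to a, so
# the adaptive branch cannot cancel the (presumed) desired signal.
Q, _ = np.linalg.qr(np.column_stack([a, np.eye(M, dtype=complex)[:, : M - 1]]))
B = Q[:, 1:]                               # columns satisfy B^H a = 0

print(abs(w_q.conj() @ a), np.max(np.abs(B.conj().T @ a)))
```

Steering-vector mismatch breaks the B^H a = 0 property for the true direction, which is exactly why the abstract's re-estimation of the actual steering vector is needed before forming w_q and B.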

Keywords: adaptive antenna beamforming, local scattering, signal blocking, steering mismatch

Procedia PDF Downloads 119