Search results for: adjusted structural number (SNP)
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 14170

12400 Feasibility of Building Structure Due to Decreased Concrete Quality of School Building in Banda Aceh City 19 Years after Tsunami

Authors: Rifqi Irvansyah, Abdullah Abdullah, Yunita Idris, Bunga Raihanda

Abstract:

Banda Aceh is particularly vulnerable during natural disasters due to its concentrated exposure to multi-hazard risks. Despite the urgent priorities in the aftermath of events such as the 2004 Indian Ocean earthquake and tsunami, several public facilities, including school buildings, sustained damage yet continued operating without adequate repairs even though they had been submerged by the tsunami. This research aims to evaluate the consequences of column damage induced by tsunami inundation on the structural integrity of buildings. The investigation employs column interaction diagrams to assess capacity, taking into account factors such as rebar deterioration and corrosion. The analysis shows that one-fourth of the K1 columns on the first floor fall outside the column interaction diagram, indicating that these columns cannot carry the load above them, since the factored axial load (Pu) and moment (Mu) exceed the column's design strength. The five K1 columns are therefore a cause for concern, as their capacity has decreased. These results indicate that the building structure cannot sustain the applied load because the column cross-section has deteriorated. In contrast, all K2 columns meet the design strength, indicating that they can withstand the structural loads.
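
As a rough illustration of the check described above, the sketch below (not the authors' code; the interaction-diagram points and demand values are hypothetical) tests whether a factored demand pair (Pu, Mu) falls inside a column P-M interaction envelope.

```python
# Illustrative sketch (hypothetical capacity values): checking whether a factored
# demand pair (Pu, Mu) falls inside a column P-M interaction diagram.
import numpy as np

# Design interaction envelope: axial capacity phi*Pn (kN) vs. moment phi*Mn (kN*m),
# ordered from pure compression down to pure bending (placeholder numbers).
phi_Pn = np.array([2400.0, 2000.0, 1500.0, 1000.0, 500.0, 0.0])
phi_Mn = np.array([0.0, 150.0, 240.0, 280.0, 230.0, 160.0])

def is_adequate(Pu, Mu):
    """Return True if the demand point lies inside the interaction envelope."""
    if Pu > phi_Pn.max() or Pu < phi_Pn.min():
        return False
    # Interpolate the moment capacity at the demand axial load
    # (np.interp needs increasing x, so flip the compression-ordered arrays).
    Mn_at_Pu = np.interp(Pu, phi_Pn[::-1], phi_Mn[::-1])
    return Mu <= Mn_at_Pu

print(is_adequate(Pu=1200.0, Mu=220.0))   # inside the diagram -> True
print(is_adequate(Pu=1200.0, Mu=320.0))   # outside the diagram -> False
```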

Keywords: tsunami inundation, column damage, column interaction diagram, mitigation effort

Procedia PDF Downloads 51
12399 Influence of Intra-Yarn Permeability on Mesoscale Permeability of Plain Weave and 3D Fabrics

Authors: Debabrata Adhikari, Mikhail Matveev, Louise Brown, Andy Long, Jan Kočí

Abstract:

A good understanding of the mesoscale permeability of complex architectures in fibrous porous preforms is of particular interest in order to achieve efficient and cost-effective resin impregnation in liquid composite molding (LCM). Fabrics used in structural reinforcements are typically woven or stitched. However, 3D fabric reinforcement is of particular interest because of the versatility of the weaving pattern, with binder yarn and in-plane yarn arrangements that make it possible to manufacture thick composite parts, overcome delamination limitations, improve toughness, etc. To predict the permeability based on the available pore spaces between the inter-yarn spaces, unit-cell-based computational fluid dynamics models have used the Stokes-Darcy model. Typically, the preform consists of an arrangement of yarns with spacing on the order of mm, wherein each yarn consists of thousands of filaments with spacing on the order of μm. The fluid flow during infusion exchanges mass between the intra- and inter-yarn channels, meaning there is no dead end of flow between the mesopores in the inter-yarn space and the micropores inside the yarn. Several studies have employed the Brinkman equation to take into account the flow through dual-scale porosity reinforcement and estimate its permeability. Furthermore, to reduce the computational effort of dual-scale flow, a scale separation criterion based on the ratio of yarn permeability to yarn spacing has also been proposed to distinguish the dual-scale regime from the regime of negligible micro-scale flow in the prediction of mesoscale permeability. In the present work, the key parameter identifying the influence of intra-yarn permeability on mesoscale permeability has been investigated through a systematic study of weft and warp yarn spacing in plain weave as well as the position of the binder yarn and the number of in-plane yarn layers in 3D weave fabric. The permeability tensor has been estimated using an OpenFOAM-based model for the various weave patterns, with idealized yarn geometry implemented using the open-source software TexGen. Additionally, a scale separation criterion has been established based on various configurations of yarn permeability for the 3D fabric, with both isotropic and anisotropic yarns described by Gebart's model. It was observed that the mesoscale permeability Kxx varies within 30% when isotropic porous yarn is considered for a 3D fabric with binder yarn. Furthermore, the permeability model developed in this study will be used for multi-objective optimization of the preform mesoscale geometry in terms of yarn spacing, binder pattern, and number of layers, with the aim of obtaining improved permeability and reduced void content during the LCM process.
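
For context, a minimal sketch of Gebart's model for the intra-yarn (micro-scale) permeability that feeds such meso-scale Stokes-Darcy/Brinkman analyses is given below; the fibre radius and intra-yarn volume fraction are assumed example values, not the study's data.

```python
# Gebart's model for the permeability of a unidirectional fibre bundle
# (assumed example inputs; not the authors' implementation).
import math

def gebart_permeability(R, Vf, packing="hexagonal"):
    """Return (K_parallel, K_perpendicular) in m^2 for fibre radius R (m)
    and intra-yarn fibre volume fraction Vf."""
    if packing == "quadratic":
        c, C1, Vf_max = 57.0, 16.0 / (9.0 * math.pi * math.sqrt(2.0)), math.pi / 4.0
    else:  # hexagonal packing
        c, C1, Vf_max = 53.0, 16.0 / (9.0 * math.pi * math.sqrt(6.0)), math.pi / (2.0 * math.sqrt(3.0))
    K_par = (8.0 * R**2 / c) * (1.0 - Vf)**3 / Vf**2          # flow along the fibres
    K_perp = C1 * (math.sqrt(Vf_max / Vf) - 1.0)**2.5 * R**2  # flow across the fibres
    return K_par, K_perp

# Example: 3.5 um fibre radius, 60% intra-yarn fibre content (assumed values)
print(gebart_permeability(R=3.5e-6, Vf=0.60))
```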

Keywords: permeability, 3D fabric, dual-scale flow, liquid composite molding

Procedia PDF Downloads 84
12398 Task Validity in Neuroimaging Studies: Perspectives from Applied Linguistics

Authors: L. Freeborn

Abstract:

Recent years have seen an increasing number of neuroimaging studies related to language learning as imaging techniques such as fMRI and EEG have become more widely accessible to researchers. By using a variety of structural and functional neuroimaging techniques, these studies have already made considerable progress in terms of our understanding of neural networks and processing related to first and second language acquisition. However, the methodological designs employed in neuroimaging studies to test language learning have been questioned by applied linguists working within the field of second language acquisition (SLA). One of the major criticisms is that tasks designed to measure language learning gains rarely have a communicative function, and seldom assess learners’ ability to use the language in authentic situations. This brings the validity of many neuroimaging tasks into question. The fundamental reason why people learn a language is to communicate, and it is well-known that both first and second language proficiency are developed through meaningful social interaction. With this in mind, the SLA field is in agreement that second language acquisition and proficiency should be measured through learners’ ability to communicate in authentic real-life situations. Whilst authenticity is not always possible to achieve in a classroom environment, the importance of task authenticity should be reflected in the design of language assessments, teaching materials, and curricula. Tasks that bear little relation to how language is used in real-life situations can be considered to lack construct validity. This paper first describes the typical tasks used in neuroimaging studies to measure language gains and proficiency, then analyses to what extent these tasks can validly assess these constructs.

Keywords: neuroimaging studies, research design, second language acquisition, task validity

Procedia PDF Downloads 118
12397 The Digital Transformation of Life Insurance Sales in Iran with the Emergence of Personal Financial Planning Robots: Opportunities and Challenges

Authors: Pedram Saadati, Zahra Nazari

Abstract:

Anticipating and identifying the future opportunities and challenges that industry players face with the emergence of personal financial planning knowledge and technologies, and providing practical solutions, is one of the goals of this research. For this purpose, a futures-research tool based on the opinions of the main players in the insurance industry has been used. The research method consisted of four stages: (1) a survey of the specialist life insurance salesforce to identify the variables; (2) ranking of the variables by selected experts through a researcher-made questionnaire; (3) an expert panel aimed at understanding the mutual effects of the variables; and (4) statistical analysis of the cross-impact matrix in MICMAC software. The integrated analysis of the influencing variables was carried out with the Structural Analysis method, one of the efficient and innovative methods of futures research. A list of opportunities and challenges was identified through a survey of best-selling life insurance representatives selected by snowball sampling. In order to prioritize and identify the most important issues, all the issues raised were sent, through a researcher-made questionnaire, to experts selected by theoretical sampling. The respondents scored the importance of 36 variables so that the opportunity and challenge variables could be prioritized. Eight of the variables identified in the first stage were removed by the selected experts, leaving 28 variables for the third stage; to facilitate the examination, these were divided into six categories: organization and management (11 variables), marketing and sales (7), social and cultural (6), technological (2), rebranding (1), and insurance (1). The reliability of the researcher-made questionnaire was confirmed by a Cronbach's alpha of 0.96. In the third stage, a panel of five insurance industry experts reached consensus on the influence of the factors on each other, and their rankings were entered into the matrix. The matrix contained the interrelationships of the 28 variables and was investigated using the structural analysis method. Analysis of the matrix in MICMAC software indicates that "correct training in the use of the software", "the weakness of insurance companies' technology in personalizing products", "using the customer-equipping approach", and "honesty in declaring that the customer does not need insurance" are the most important influencing challenges, while "the salesforce-equipping approach", "product personalization based on customer needs assessment", "the customer's pleasant experience of being advised by consulting robots", "business improvement of the insurance company due to the use of these tools", "increased efficiency of the issuance process", and "optimal customer purchase" were identified as the most important influencing opportunities.
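
As an illustration of the MICMAC-style structural-analysis step described above, the sketch below (with a hypothetical cross-impact matrix, not the study's 28-variable matrix) computes direct influence and dependence scores as row and column sums, and indirect influence from matrix powers.

```python
# Illustrative MICMAC-style structural analysis on an assumed cross-impact matrix.
import numpy as np

# Hypothetical 4-variable cross-impact matrix (0 = no influence, 3 = strong influence)
M = np.array([[0, 2, 1, 3],
              [1, 0, 2, 0],
              [0, 3, 0, 1],
              [2, 1, 1, 0]])

influence  = M.sum(axis=1)   # how strongly each variable drives the others (row sums)
dependence = M.sum(axis=0)   # how strongly each variable is driven (column sums)

# Indirect effects: powers of the matrix accumulate influence along longer paths
M_indirect = np.linalg.matrix_power(M, 3)

print("direct influence:  ", influence)
print("direct dependence: ", dependence)
print("indirect influence:", M_indirect.sum(axis=1))
```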

Keywords: personal financial planning, wealth management, advisor robots, life insurance, digital transformation

Procedia PDF Downloads 33
12396 Evaluation of the Need for Seismic Retrofitting of the Foundation of a Five-Story Steel Building Due to the Addition of a New Story

Authors: Mohammadreza Baradaran, F. Hamzezarghani

Abstract:

Earthquakes occur every year with different strengths in different parts of the world, and thousands of people lose their lives because of this natural phenomenon. One of the reasons buildings are destroyed by earthquakes, in addition to the passing of time, environmental conditions, and the wearing-out of the building, is a change in the use of the building and a change in its structure and skeleton. A large number of structures located in earthquake-prone areas have been designed according to old, outdated seismic design regulations. In addition, many of the major earthquakes of recent years emphasize the need for retrofitting to decrease earthquake hazards. Retrofitting existing structures is one of the most effective methods for reducing these hazards and compensating for the lack of resistance caused by existing weaknesses. In this article, the foundation of a five-story steel building with a moment frame system has been evaluated seismically, and the effect of adding a story to this building has been evaluated and analyzed. The building has a steel skeleton with a joist-and-clay-block floor system; after the addition of a story, it becomes a six-story building with a foundation area of 1416 square meters, and the height of the sixth floor reaches 18.95 meters above ground level. After analysis of the foundation model, the behavior of the soil under the foundation and the behavior of the foundation elements were evaluated; the foundation deformations and the soil stresses under the foundation were determined for several load combinations in the SAFE software, and finally the need for retrofitting of the building's foundation was determined.

Keywords: seismic, rehabilitation, steel building, foundation

Procedia PDF Downloads 264
12395 A Sparse Representation Speech Denoising Method Based on Adapted Stopping Residue Error

Authors: Qianhua He, Weili Zhou, Aiwu Chen

Abstract:

A sparse representation speech denoising method based on an adapted stopping residue error is presented in this paper. Firstly, the cross-correlation between the clean speech spectrum and the noise spectrum was analyzed, and an estimation method was proposed. In the denoising method, an over-complete dictionary of the clean speech power spectrum was learned with the K-singular value decomposition (K-SVD) algorithm. In the sparse representation stage, the stopping residue error was adaptively set according to the estimated cross-correlation and the adjusted noise spectrum, and the orthogonal matching pursuit (OMP) approach was applied to reconstruct the clean speech spectrum from the noisy speech. Finally, the clean speech was re-synthesised via the inverse Fourier transform using the reconstructed speech spectrum and the noisy speech phase. The experimental results show that the proposed method outperforms conventional methods in terms of subjective and objective measures.
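
A minimal sketch of the OMP stage with a residue-based stopping rule is given below; the dictionary, signal, and threshold are synthetic placeholders, and the adaptive estimation of the stopping residue from the cross-correlation is not reproduced here.

```python
# OMP with an explicit stopping-residue threshold (assumes unit-norm dictionary columns).
import numpy as np

def omp(D, y, stop_residue, max_atoms=None):
    """Orthogonal matching pursuit: stop when ||residual|| <= stop_residue."""
    n_atoms = D.shape[1]
    max_atoms = max_atoms or n_atoms
    support, residual = [], y.copy()
    x = np.zeros(n_atoms)
    while np.linalg.norm(residual) > stop_residue and len(support) < max_atoms:
        # pick the atom most correlated with the current residual
        k = int(np.argmax(np.abs(D.T @ residual)))
        if k in support:
            break
        support.append(k)
        # re-fit all selected atoms jointly (the "orthogonal" step)
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    x[support] = coef if support else 0.0
    return x, residual

# toy example: random dictionary, a frame built from 3 atoms plus noise
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 128)); D /= np.linalg.norm(D, axis=0)
y = D[:, [3, 40, 90]] @ np.array([1.0, -0.5, 0.8]) + 0.01 * rng.standard_normal(64)
x_hat, r = omp(D, y, stop_residue=0.05)
print(np.nonzero(x_hat)[0], np.linalg.norm(r))
```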

Keywords: speech denoising, sparse representation, k-singular value decomposition, orthogonal matching pursuit

Procedia PDF Downloads 486
12394 Using Structural Equation Modeling to Analyze the Impact of Remote Work on Job Satisfaction

Authors: Florian Pfeffel, Valentin Nickolai, Christian Louis Kühner

Abstract:

Digitalization has disrupted the traditional workplace environment by allowing many employees to work from anywhere at any time. This trend of working from home was further accelerated by the COVID-19 crisis, which forced companies to rethink their workplace models. While in many companies this shift happened out of pure necessity, many employees were left more satisfied with their job due to the opportunity to work from home. This study focuses on employees' job satisfaction in the service sector depending on the work model, defined as a "work from home" model, the traditional "work in office" model, or a hybrid model. Using structural equation modeling (SEM), these three work models have been analyzed based on 13 factors influencing job satisfaction, which were further grouped into "classic influencing factors", "influencing factors changed by remote working", and "new remote working influencing factors". Based on these influencing factors, a survey was conducted with n = 684 employees in the service sector. Cronbach's alpha of the individual constructs was shown to be suitable. Furthermore, the construct validity of the constructs was confirmed by face validity, content validity, convergent validity (AVE > 0.5; CR > 0.7), and discriminant validity. Additionally, confirmatory factor analysis (CFA) confirmed the model fit for the investigated sample (CMIN/DF: 2.567; CFI: 0.927; RMSEA: 0.048). The SEM analysis has shown that the most significant influencing factor on job satisfaction is "identification with the work" with β = 0.540, followed by "appreciation" (β = 0.151), "compensation" (β = 0.124), "work-life balance" (β = 0.116), and "communication and exchange of information" (β = 0.105). While the significance of each factor can vary depending on the work model, the SEM analysis shows that identification with the work is the most significant factor in all three work models and, in the case of the traditional office work model, the only significant influencing factor. The study shows that employees who work entirely remotely or in a hybrid work model are significantly more satisfied with their job, with a job satisfaction score of 5.0 on a scale from 1 (very dissatisfied) to 7 (very satisfied), than employees who do not have the option to work from home, who score 4.6. This results from the lower identification with the work in the model without any remote working. Furthermore, the responses indicate that it is important to consider the individual preferences of each employee when choosing the work model in order to achieve higher overall job satisfaction. Thus, it can be argued that companies can benefit from higher motivation and productivity by considering individual work model preferences and thereby increasing identification with the respective work.
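
As a small illustration of the reliability check mentioned above, the sketch below computes Cronbach's alpha for one construct from hypothetical Likert-scale item scores (not the study's survey data).

```python
# Cronbach's alpha for one construct measured by k items (hypothetical data).
import numpy as np

def cronbach_alpha(items):
    """items: (n_respondents, k_items) array of scores for one construct."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()      # sum of item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of the scale total
    return (k / (k - 1.0)) * (1.0 - item_vars / total_var)

# hypothetical 7-point Likert responses for a 4-item construct
rng = np.random.default_rng(1)
latent = rng.normal(5.0, 1.0, size=200)
scores = np.clip(np.round(latent[:, None] + rng.normal(0, 0.7, size=(200, 4))), 1, 7)
print(round(cronbach_alpha(scores), 3))
```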

Keywords: home-office, identification with work, job satisfaction, new work, remote work, structural equation modeling

Procedia PDF Downloads 70
12393 Electrical Fault Detection of Photovoltaic System: A Short-Circuit Fault Case

Authors: Moustapha H. Ibrahim, Dahir Abdourahman

Abstract:

This document presents a short-circuit fault detection process for a photovoltaic (PV) system. The proposed method is developed in MATLAB/Simulink. Whatever the size of the installation, it determines the number of short-circuited modules. The proposed algorithm indicates the presence or absence of an abnormality in the power of the PV system through measurements of hourly global irradiation, power output, and ambient temperature. If a fault is detected, it displays the number of modules in short circuit. This fault detection method has been successfully tested on two different PV installations.
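
The comparison logic can be pictured with the rough Python sketch below (the authors' model is in MATLAB/Simulink; the module parameters, temperature model, and readings here are assumed placeholders): expected power is estimated from irradiance and ambient temperature, and the deficit relative to the measured power is converted into an estimated number of short-circuited modules.

```python
# Assumed-parameter sketch: expected vs. measured PV power and shorted-module estimate.
def expected_power(G, T_amb, n_modules, P_stc=300.0, noct=45.0, gamma=-0.004):
    """Very simple PV model: P_stc per module at 1000 W/m2 and 25 C cell temperature."""
    T_cell = T_amb + (noct - 20.0) / 800.0 * G          # NOCT cell-temperature estimate
    derate = 1.0 + gamma * (T_cell - 25.0)              # temperature coefficient of power
    return n_modules * P_stc * (G / 1000.0) * derate

def shorted_modules(P_measured, G, T_amb, n_modules):
    P_exp = expected_power(G, T_amb, n_modules)
    per_module = P_exp / n_modules
    deficit = max(P_exp - P_measured, 0.0)
    return round(deficit / per_module)                  # estimated short-circuited modules

# assumed reading: 20-module string, 850 W/m2, 30 C ambient, 4.2 kW measured
print(shorted_modules(P_measured=4200.0, G=850.0, T_amb=30.0, n_modules=20))
```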

Keywords: PV system, short-circuit, fault detection, modelling, MATLAB-Simulink

Procedia PDF Downloads 221
12392 Analytical and Experimental Evaluation of Effects of Nonstructural Brick Walls on Earthquake Response of Reinforced Concrete Structures

Authors: Hasan Husnu Korkmaz, Serra Zerrin Korkmaz

Abstract:

Reinforced concrete (RC) framed structures are composed of beams, columns, shear walls, and slabs. The other members are assumed to be nonstructural; in particular, the brick infill walls used to separate rooms or spaces are treated merely as dead loads. On the other hand, when these infills are constructed within the frame bays, they also have considerable shear and compression capacities. It is a well-known fact that brick infills increase the lateral rigidity of the structure and are regarded as a reserve capacity in design. However, brick infills can create unfavorable failure or damage modes under earthquake action, such as soft stories or short columns. The increase in lateral rigidity also leads to an overestimation of the natural period of the structure, so the corresponding design earthquake loads are lower than the actual ones. In order to obtain accurate and realistic design results, the infills must be modelled in the structural design and their capacities must be included. Unfortunately, the Turkish Earthquake Code provides no design methodology for engineers. In this paper, finite element modelling of infilled reinforced concrete structures is studied. The proposed method is compared with the experimental results of a previous study, and the effect of infills on the structural response is discussed.

Keywords: seismic loading, brick infills, finite element analysis, reinforced concrete, earthquake code

Procedia PDF Downloads 297
12391 Particle Swarm Optimization and Quantum Particle Swarm Optimization to Multidimensional Function Approximation

Authors: Diogo Silva, Fadul Rodor, Carlos Moraes

Abstract:

This work compares the results of multidimensional function approximation using two algorithms: the classical Particle Swarm Optimization (PSO) and the Quantum Particle Swarm Optimization (QPSO). These algorithms were both tested on three functions - The Rosenbrock, the Rastrigin, and the sphere functions - with different characteristics by increasing their number of dimensions. As a result, this study shows that the higher the function space, i.e. the larger the function dimension, the more evident the advantages of using the QPSO method compared to the PSO method in terms of performance and number of necessary iterations to reach the stop criterion.
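
For reference, a compact sketch of the classical PSO loop minimising the sphere function, one of the three benchmarks compared above, is shown below (standard inertia and acceleration coefficients are assumed; QPSO would replace the velocity update with a quantum-behaved position update around the mean best position).

```python
# Classical PSO on the sphere benchmark (assumed standard parameter values).
import numpy as np

def sphere(x):
    return np.sum(x**2, axis=1)

def pso(f, dim=30, n_particles=30, iters=500, w=0.72, c1=1.49, c2=1.49, seed=0):
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), f(x)
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update
        x = x + v
        val = f(x)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], val[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

best, best_val = pso(sphere)
print(best_val)   # should approach 0 as the swarm converges
```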

Keywords: PSO, QPSO, function approximation, AI, optimization, multidimensional functions

Procedia PDF Downloads 563
12390 Neighbour Cell List Reduction in Multi-Tier Heterogeneous Networks

Authors: Mohanad Alhabo, Naveed Nawaz

Abstract:

The ongoing call or data session must be maintained to ensure a good quality of service. This can be accomplished by performing the handover procedure while the user is on the move. However, the dense deployment of small cells in 5G networks is a challenging issue due to the extensive number of handovers. In this paper, a neighbour cell list method is proposed to reduce the number of target small cells and hence minimize the number of handovers. The neighbour cell list is built by omitting cells that could cause an unnecessary handover or handover failure because of the user's short time of stay in these cells. A multi-attribute decision-making technique, simple additive weighting, is then applied to the optimized neighbour cell list. A multi-tier small cell network is considered in this work. The performance of the proposed method is analysed and compared with that of existing methods. The results show that our method reduces the candidate small cell list, unnecessary handovers, handover failures, and short-time-of-stay cells compared to the competing method.
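
The simple additive weighting step can be sketched as below; the attributes (RSRP, cell load, estimated time of stay), their weights, and the candidate values are hypothetical, not the paper's parameters.

```python
# Simple additive weighting (SAW) over a reduced neighbour cell list (assumed data).
import numpy as np

# candidate small cells x attributes: [RSRP (dBm), cell load (0-1), estimated time of stay (s)]
cells = np.array([[-85.0, 0.30, 12.0],
                  [-92.0, 0.10, 25.0],
                  [-80.0, 0.75,  4.0]])
weights = np.array([0.5, 0.2, 0.3])        # importance of each attribute
benefit = np.array([True, False, True])    # True = larger is better, False = smaller is better

# normalise each column to [0, 1], inverting "cost" attributes such as load
norm = np.empty_like(cells)
for j in range(cells.shape[1]):
    col = cells[:, j]
    if benefit[j]:
        norm[:, j] = (col - col.min()) / (col.max() - col.min())
    else:
        norm[:, j] = (col.max() - col) / (col.max() - col.min())

scores = norm @ weights
print(scores, "-> best target cell:", int(np.argmax(scores)))
```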

Keywords: handover, HetNets, multi-attribute decision making, small cells

Procedia PDF Downloads 105
12389 Method for Targeting Small Volume in Rat Brain by Gamma Knife and Dosimetric Control: Towards a Standardization

Authors: J. Constanzo, B. Paquette, G. Charest, L. Masson-Côté, M. Guillot

Abstract:

Targeted and whole-brain irradiation in humans can result in significant side effects, causing decreased patient quality of life. To adequately investigate structural and functional alterations after stereotactic radiosurgery, preclinical studies are needed. The first step is to establish a robust, standardized method of targeted irradiation of small regions of the rat brain. Eleven euthanized male Fischer rats were imaged in a stereotactic bed by computed tomography (CT) to estimate positioning variations relative to the bregma skull reference point. Using a rat brain atlas and the stereotactic bregma coordinates assessed from the CT images, various regions of the brain were delimited and a treatment plan was generated. A dose of 37 Gy at the 30% isodose, which corresponds to 100 Gy in 100% of the target volume (X = 98.1; Y = 109.1; Z = 100.0), was set by Leksell Gamma Plan using sectors 4, 5, 7, and 8 of the Gamma Knife unit with the 4-mm diameter collimators. The effects of the positioning accuracy of the rat brain on the dose deposition were simulated in Gamma Plan and validated with dosimetric measurements. Our results showed that 90% of the target volume received 110 ± 4.7 Gy and the maximum deposited dose was 124 ± 0.6 Gy, which corresponds to an excellent relative standard deviation of 0.5%. The dose deposition calculated with Gamma Plan was validated with dosimetric films, resulting in a dose-profile agreement within 2% in both the X- and Z-axes. Our results demonstrate the feasibility of standardizing the irradiation procedure for a small volume in the rat brain using a Gamma Knife.

Keywords: brain irradiation, dosimetry, gamma knife, small-animal irradiation, stereotactic radiosurgery (SRS)

Procedia PDF Downloads 397
12388 The Contribution of the Lomé Charter to Combating Drugs Trafficking at Sea: Nigerian and South African Legal Perspectives

Authors: Obinna Emmanuel Nkomadu

Abstract:

The sea attracts many criminal activities, including drug trafficking. The illicit traffic in narcotic drugs and psychotropic substances by sea poses a serious threat to maritime security globally. Seizures of drugs, particularly on the African continent, are on the rise. In Southern Africa, South Africa is a major transit point for Latin American drugs and is the largest market for illicit drugs entering the Southern African region. Nigeria and South Africa have taken a number of steps to address this scourge, but despite those steps, drug trafficking at sea continues. For that reason, and to combat a number of other threats to maritime security around the continent, a substantial number of AU members adopted the African Charter on Maritime Security and Safety and Development in Africa ("the Charter") in 2016. However, the Charter has yet to come into force because of the number of States still required to accede to or ratify it. This paper sets out the pre-existing international instruments on drugs and compares the domestic laws of Nigeria and South Africa relating to drugs with the relevant provisions of the Lomé Charter, in order to establish whether any legal steps are required to ensure that Nigeria and South Africa comply with their obligations under the Charter. Indeed, should Nigeria and South Africa decide to ratify it and should it come into force, both States must cooperate with other relevant States in establishing policies, as well as regional and continental institutions, and ensure the implementation of such policies. The paper urges the States to ratify the Charter urgently, as it is a step in the right direction in the prevention and repression of drug trafficking in the African maritime domain.

Keywords: cooperation against drugs trafficking at sea, Lomé Charter, maritime security, Nigerian and South Africa legislation on drugs

Procedia PDF Downloads 81
12387 Environmental Forensic Analysis of the Shoreline Microplastics Debris on the Limbe Coastline, Cameroon

Authors: Ndumbe Eric Esongami, Manga Veronica Ebot, Foba Josepha Tendo, Yengong Fabrice Lamfu, Tiku David Tambe

Abstract:

The prevalence and unpleasant nature of plastic pollution, constantly observed on beach shores after stormy events, has prompted researchers worldwide to work on sustainable economic and environmental designs for plastics, especially in Cameroon, a major tourist destination in the Central Africa region. The inconsistent protocols developed by researchers add to this burden; thus, the morphological characterization of microplastics for remediation is a cause for concern. The prime aim of the study is to morphologically identify, quantify, and forensically understand the distribution of each plastic polymer composition. Duplicate 2×2 m (4 m²) quadrats were sampled on each beach per month over an 8-month period across five purposively selected beaches along the Limbe-Idenau coastline, Cameroon. Collected plastic samples were thoroughly washed and separated using a 2 mm sieve; only particles smaller than 2 mm were considered and taken forward through the microplastics laboratory analytical process. Established step-by-step procedures of particle filtration, organic matter digestion, density separation, particle extraction, and polymer identification, including microscopy, were applied to the beach microplastic samples. Microplastics were observed in each sample/beach/month, with an overall abundance of 241 particles weighing 89.15 g in total and a mean abundance of 2 particles/m² (0.69 g/m²) and 6 particles/month (2.0 g/m²). The accumulation of beach shoreline MPs rose dramatically towards decreasing size, with microbeads and fibres found only in the < 1 mm size fraction. Approximately 75% of beach MP contamination, by average particle number, was found on the LDB 2, LDB 1, and IDN beaches, while the most dominant polymer types observed were PP, PE, and PS across all morphological parameters analysed. Beach MP accumulation varied significantly in time and space at p = 0.05. The ANOVA and Spearman's rank correlation used showed linear relationships between the size categories considered in this study. In terms of polymer analysis, the colour class showed that white-coloured MPs were dominant, with 50 particles (22.25 g) and recorded abundances in PP (25), PE (15), and PS (5). The shape class revealed that irregularly shaped MPs were dominant, with 98 particles (30.5 g) and higher abundances in PP (39), PE (33), and PS (11). Similarly, the type class showed that fragmented MPs were dominant, with 80 particles (25.25 g) and higher abundances in PP (30), PE (28), and PS (15). Equally, the size class revealed that MPs in the 1.5-1.99 mm range had the highest abundance, with 102 particles (51.77 g) and higher concentrations in PP (47), PE (41), and PS (7), and finally, the weight class showed that MPs weighing 0.01 g were dominant, with 98 particles (56.57 g) and varied numeric abundances in PP (49), PE (29), and PS (13). The forensic investigation of the pollution indicated that the majority of the beach microplastics are sourced from the site or nearby areas, and the investigation could draw useful conclusions regarding the pathways of pollution. Fragmented microplastic, a significant component of the sample, was found to be sourced from recreational activities and partly from fishing boat installation and repair activities carried out close to the shore.

Keywords: forensic analysis, beach MPs, particle/number, polymer composition, Cameroon

Procedia PDF Downloads 66
12386 Economic Forecasting Analysis for Solar Photovoltaic Application

Authors: Enas R. Shouman

Abstract:

Economic development together with population growth is leading to a continuous increase in energy demand. At the same time, growing global concern for the environment is driving a decrease in the use of conventional energy sources and an increase in the use of renewable energy sources. The objective of this study is to present the worldwide market trends of solar photovoltaic technology and to present economic methods for PV financial analysis, on the basis of expectations for the expansion of PV in many applications. In the course of this study, detailed information about the current PV market was gathered and analyzed to find the factors influencing the penetration of PV energy. The methodology relied on five relevant economic and financial analysis methods that are often used by investment decision makers: payback analysis, net benefit analysis, savings-to-investment ratio, adjusted internal rate of return, and life-cycle cost. The results of this study may be considered a marketing guide that helps the diffusion of PV energy. The study showed that PV cost is economically reliable: consumers will pay a higher purchase price for PV system installation but will get lower electricity bills.
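
Two of the five appraisal methods listed above can be sketched as follows; the cash flows and discount rate are assumed example values, not the study's figures.

```python
# Simple payback period and discounted savings-to-investment ratio (assumed cash flows).
def payback_period(investment, annual_saving):
    return investment / annual_saving

def savings_to_investment_ratio(investment, annual_saving, discount_rate, years):
    pv_savings = sum(annual_saving / (1.0 + discount_rate)**t for t in range(1, years + 1))
    return pv_savings / investment

# assumed 5 kW rooftop PV system: 6000 USD installed, 900 USD/year bill savings
print(payback_period(6000.0, 900.0))                              # ~6.7 years
print(savings_to_investment_ratio(6000.0, 900.0, 0.06, 25))       # > 1 means cost-effective
```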

Keywords: photovoltaic, financial methods, solar energy, economics, PV panel

Procedia PDF Downloads 98
12385 Prevalence, Median Time, and Associated Factors with the Likelihood of Initial Antidepressant Change: A Cross-Sectional Study

Authors: Nervana Elbakary, Sami Ouanes, Sadaf Riaz, Oraib Abdallah, Islam Mahran, Noriya Al-Khuzaei, Yassin Eltorki

Abstract:

Major Depressive Disorder (MDD) requires therapeutic interventions during the initial month after diagnosis for better disease outcomes. International guidelines recommend a duration of 4-12 weeks for an initial antidepressant (IAD) trial at an optimized dose to obtain a response. If depressive symptoms persist after this duration, guidelines recommend switching, augmenting, or combining strategies as the next step. Most patients with MDD in the mental health setting have been labeled incorrectly as treatment-resistant when in fact they have not received an adequate trial of guideline-recommended therapy. Premature discontinuation of the IAD due to ineffectiveness can have unfavorable consequences. Avoiding irrational practices, such as subtherapeutic doses of the IAD, premature switching between antidepressants, and unjustified polypharmacy, can help the disease go into remission. We aimed to determine, as the primary outcome, the prevalence and the patterns of the strategies applied after an IAD was changed because of a suboptimal response. Secondary outcomes included the median survival time on the IAD before any change and the predictors associated with IAD change. This was a retrospective cross-sectional study conducted in the Mental Health Services in Qatar. A dataset covering January 1, 2018, to December 31, 2019, was extracted from the electronic health records. Inclusion and exclusion criteria were defined and applied, and the sample size was calculated to be at least 379 patients. Descriptive statistics were reported as frequencies and percentages, in addition to means and standard deviations. The median time from IAD initiation to any change strategy was calculated using survival analysis. Associated predictors were examined using unadjusted and adjusted Cox regression models. A total of 487 patients met the inclusion criteria of the study. The average age of the participants was 39.1 ± 12.3 years. Patients experiencing a first MDD episode (255; 52%) constituted the major part of our sample compared to the relapse group (206; 42%). About 431 (88%) of the patients had their IAD changed to some strategy before the end of the study. Almost half of the sample (212; 49%; 95% CI [44–53%]) had their IAD changed within 30 days or less. Switching was consistently more common than combination or augmentation at any time point. The median time to IAD change was 43 days (95% CI [33.2–52.7]). Five independent variables (age, bothersome side effects, non-optimization of the dose before any change, comorbid anxiety, and first-onset episode) were significantly associated with the likelihood of IAD change in the unadjusted analysis. The factors statistically associated with a higher hazard of IAD change in the adjusted analysis were younger age, non-optimization of the IAD dose before any change, and comorbid anxiety. Because almost half of the patients in this study changed their IAD within the first month, efforts to avoid treatment failure are needed to ensure that patient-treatment targets are met. The findings of this study can provide direct clinical guidance for health care professionals, since optimized, evidence-based use of antidepressant medication can improve the clinical outcomes of patients with MDD, and can also help identify high-risk factors that shorten the survival time on the IAD, such as young age and comorbid anxiety.
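
The survival-analysis workflow reported above can be sketched as below using the lifelines package on synthetic data; the column names, covariates, and generated values are assumptions for illustration only.

```python
# Kaplan-Meier median time to IAD change and Cox regression on synthetic data (assumed columns).
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

rng = np.random.default_rng(7)
n = 487
df = pd.DataFrame({
    "days_to_change": rng.exponential(60, n).round() + 1,  # time on the initial antidepressant
    "changed": rng.integers(0, 2, n),                       # 1 = IAD changed, 0 = censored
    "age": rng.normal(39, 12, n),
    "comorbid_anxiety": rng.integers(0, 2, n),
    "dose_optimized": rng.integers(0, 2, n),
})

kmf = KaplanMeierFitter().fit(df["days_to_change"], event_observed=df["changed"])
print("median time to IAD change (days):", kmf.median_survival_time_)

cph = CoxPHFitter().fit(df, duration_col="days_to_change", event_col="changed")
print(cph.summary[["coef", "exp(coef)", "p"]])
```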

Keywords: initial antidepressant, dose optimization, major depressive disorder, comorbid anxiety, combination, augmentation, switching, premature discontinuation

Procedia PDF Downloads 135
12384 A Non-Parametric Analysis of District Disaster Management Authorities in Punjab, Pakistan

Authors: Zahid Hussain

Abstract:

The Provincial Disaster Management Authority (PDMA) Punjab was established under the NDM Act 2010 and now works under the Senior Member Board of Revenue, dealing with the whole spectrum of disasters, including preparedness, mitigation, early warning, response, relief, rescue, recovery, and rehabilitation. The District Disaster Management Authorities (DDMAs) act as the implementing arms of the PDMA in the districts to respond to any disaster. The DDMAs' role is very important in disaster mitigation, response, and recovery, as they are the first responders and the tier closest to the community. Keeping in view the significant role of the DDMAs, their technical and human resource capacity needs to be checked. To calculate the technical efficiencies of the District Disaster Management Authorities (DDMAs) in Punjab, three inputs (number of labourers, number of transport vehicles, and number of pieces of equipment), two outputs (relief assistance and number of rescues), and 25 districts as decision-making units (DMUs) have been selected. For this purpose, eight years of secondary data, from 2005 to 2012, have been used, and the Data Envelopment Analysis (DEA) technique has been applied. DEA estimates the relative efficiency of peer entities, i.e., entities performing similar tasks. The findings show that all DMUs (districts) are inefficient in terms of technological and scale efficiency, while they are technically efficient in terms of pure and total factor productivity efficiency. All DMUs were found technically inefficient only in the year 2006. Labour and equipment were not used efficiently in 2005, 2007, 2008, 2009, and 2012. Furthermore, only three years, 2006, 2010, and 2011, show that districts could not efficiently use transportation in a disaster situation. This study suggests that all districts should curtail labour, transportation, and equipment to become efficient. Similarly, overall, the districts are not required to increase the number of rescues and relief assistance; these should be reduced.
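
As an illustration of the DEA step, the sketch below solves the input-oriented CCR model per decision-making unit with scipy's linear-programming routine; the district data shown are placeholders, not the study's figures.

```python
# Input-oriented CCR DEA model solved per DMU (placeholder data, not the study's).
import numpy as np
from scipy.optimize import linprog

# rows = DMUs (districts); X: inputs (labour, transport, equipment), Y: outputs (relief, rescues)
X = np.array([[120, 30, 45], [ 80, 25, 20], [200, 60, 90], [ 95, 20, 35]], dtype=float)
Y = np.array([[300, 40],     [260, 35],     [310, 42],     [280, 50]], dtype=float)

def ccr_efficiency(o, X, Y):
    n_dmu, n_in = X.shape
    n_out = Y.shape[1]
    c = np.r_[1.0, np.zeros(n_dmu)]                      # minimise theta
    A_in  = np.c_[-X[o], X.T]                            # sum_j lam_j x_ij <= theta * x_io
    A_out = np.c_[np.zeros(n_out), -Y.T]                 # sum_j lam_j y_rj >= y_ro
    res = linprog(c, A_ub=np.r_[A_in, A_out], b_ub=np.r_[np.zeros(n_in), -Y[o]],
                  bounds=[(0, None)] * (1 + n_dmu))
    return res.x[0]

for o in range(X.shape[0]):
    print(f"DMU {o}: technical efficiency = {ccr_efficiency(o, X, Y):.3f}")
```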

Keywords: DEA, DMU, PDMA, DDMA

Procedia PDF Downloads 229
12383 Correlation of Residential Community Layout and Neighborhood Relationship: A Morphological Analysis of Tainan Using Space Syntax

Authors: Ping-Hung Chen, Han-Liang Lin

Abstract:

Taiwan has formed diverse settlement patterns in different temporal and spatial contexts. Various social-network links are created between individuals, families, communities, and societies, and different living cultures are derived from them. However, rapid urbanization and social structural change have led to densely packed assembly housing complexes and pushed neighborhood communities to develop vertically. This, among other factors, seems to have affected neighborhood relationships and created social problems. To understand the complex relations and socio-spatial structure of the community, it is important to use mixed methods. This research employs the theory of space syntax to analyze the layout and structural indicators of selected communities in Tainan City. In parallel, it surveys residents' interactions and sense of community in the selected communities by questionnaire. The mean values of the syntax measures for each community are then correlated with the questionnaire results using a Pearson correlation to examine how elements of physical design affect the sense of community and neighborhood relationships. In Taiwan, most urban morphology research is qualitative. This paper uses space syntax to investigate the correlation between community layout and neighborhood relationships. The results of this study could be used in future studies or to improve the quality of residential communities in Taiwan.

Keywords: community layout, neighborhood relationship, space syntax, mixed-method

Procedia PDF Downloads 175
12382 Structural, Magnetic, Dielectric and Electrical Properties of Gd3+ Doped Cobalt Ferrite Nanoparticles

Authors: Raghvendra Singh Yadav, Ivo Kuřitka, Jarmila Vilcakova, Jaromir Havlica, Lukas Kalina, Pavel Urbánek, Michal Machovsky, Milan Masař, Martin Holek

Abstract:

In this work, CoFe₂₋ₓGdₓO₄ (x = 0.00, 0.05, 0.10, 0.15, 0.20) spinel ferrite nanoparticles are synthesized by the sonochemical method. The structural properties and cation distribution are investigated using X-ray diffraction (XRD), Raman spectroscopy, Fourier transform infrared spectroscopy, and X-ray photoelectron spectroscopy. The morphology and elemental composition are examined using field emission scanning electron microscopy (FE-SEM) and energy dispersive X-ray spectroscopy, respectively. The particle size measured by FE-SEM and XRD analysis confirms the formation of nanoparticles in the range of 7-10 nm. The electrical measurements show that the Gd³⁺ doped cobalt ferrite (CoFe₂₋ₓGdₓO₄; x = 0.20) exhibits an enhanced dielectric constant (277 at 100 Hz) and ac conductivity (20.17 × 10⁻⁹ S/cm at 100 Hz). The complex impedance measurements reveal that as the Gd³⁺ doping concentration increases, the impedances Z’ and Z’’ decrease. The influence of Gd³⁺ doping on the magnetic properties of the cobalt ferrite nanoparticles is examined using a vibrating sample magnetometer. The magnetic measurements reveal that the coercivity decreases with Gd³⁺ substitution from 234.32 Oe (x = 0.00) to 12.60 Oe (x = 0.05) and then increases from 12.60 Oe (x = 0.05) to 68.62 Oe (x = 0.20). The saturation magnetization decreases with Gd³⁺ substitution from 40.19 emu/g (x = 0.00) to 21.58 emu/g (x = 0.20). This decrease follows the three-sublattice model suggested by Yafet and Kittel (Y-K). The Y-K angle increases with increasing Gd³⁺ doping in the cobalt ferrite nanoparticles.

Keywords: sonochemical method, nanoparticles, magnetic property, dielectric property, electrical property

Procedia PDF Downloads 339
12381 Use of the Gas Chromatography Method for Hydrocarbons' Quality Evaluation in the Offshore Fields of the Baltic Sea

Authors: Pavel Shcherban, Vlad Golovanov

Abstract:

Currently, there is active geological exploration and development of the shelf subsoil of the Kaliningrad region. To carry out a comprehensive and accurate assessment of the volumes and the degree of extraction of hydrocarbons from open deposits, it is necessary not only to establish a number of geological and lithological characteristics of the structures under study, but also to determine the oil quality, its viscosity, density, and fractional composition as accurately as possible. For the work considered here, gas chromatography is one of the most productive methods, allowing the rapid generation of a significant amount of initial data. The article discusses the application of the gas chromatography method for determining the chemical characteristics of the hydrocarbons of the Kaliningrad shelf fields, as well as the correlation-regression analysis of these parameters in comparison with the previously obtained chemical characteristics of hydrocarbon deposits located onshore in the region. In the course of the research, a number of methods of mathematical statistics and computer processing of large data sets have been applied, which makes it possible to evaluate the identity of the deposits, to refine the amount of reserves, and to make a number of assumptions about the genesis of the hydrocarbons under analysis.

Keywords: computer processing of large databases, correlation-regression analysis, hydrocarbon deposits, method of gas chromatography

Procedia PDF Downloads 144
12380 Heuristic Search Algorithm (HSA) for Enhancing the Lifetime of Wireless Sensor Networks

Authors: Tripatjot S. Panag, J. S. Dhillon

Abstract:

The lifetime of a wireless sensor network can be effectively increased by using scheduling operations. Once the sensors are randomly deployed, the task at hand is to find the largest number of disjoint sets of sensors such that every sensor set provides complete coverage of the target area. At any instant, only one of these disjoint sets is switched on, while all others are switched off. This paper proposes a heuristic search method to find the maximum number of disjoint sets that completely cover the region. A population of randomly initialized members is made to explore the solution space. A set of heuristics is applied to guide the members to a possible solution in their neighborhood, and these heuristics accelerate the convergence of the algorithm. The best solution explored by the population is recorded and continuously updated. The proposed algorithm has been tested on applications that require sensing of multiple target points, referred to as point coverage applications. Results show that the proposed algorithm outperforms the existing algorithms: it always finds the optimum solution, and does so with fewer fitness function evaluations than the existing approaches.
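
The underlying scheduling idea can be sketched with a simple greedy pass that partitions the sensors into disjoint covers for point coverage (the paper's heuristic search refines such solutions); the deployment below is randomly generated for illustration.

```python
# Greedy partition of randomly deployed sensors into disjoint full covers (assumed geometry).
import random

def covers(sensor, target, r=10.0):
    return (sensor[0] - target[0])**2 + (sensor[1] - target[1])**2 <= r**2

def greedy_disjoint_covers(sensors, targets):
    remaining, found = set(range(len(sensors))), []
    while True:
        uncovered, chosen = set(range(len(targets))), []
        # consider sensors covering the most targets first
        for s in sorted(remaining,
                        key=lambda i: -sum(covers(sensors[i], targets[t]) for t in range(len(targets)))):
            gain = {t for t in uncovered if covers(sensors[s], targets[t])}
            if gain:
                chosen.append(s)
                uncovered -= gain
            if not uncovered:
                break
        if uncovered:                       # remaining sensors cannot form another full cover
            return found
        found.append(chosen)                # this set alone covers every target point
        remaining -= set(chosen)

random.seed(3)
sensors = [(random.uniform(0, 50), random.uniform(0, 50)) for _ in range(60)]
targets = [(random.uniform(0, 50), random.uniform(0, 50)) for _ in range(5)]
print("disjoint covers found:", len(greedy_disjoint_covers(sensors, targets)))
```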

Keywords: coverage, disjoint sets, heuristic, lifetime, scheduling, wireless sensor networks, WSN

Procedia PDF Downloads 436
12379 Optimisation of the Input Layer Structure for Feedforward NARX Neural Networks

Authors: Zongyan Li, Matt Best

Abstract:

This paper presents an optimization method for reducing the number of input channels and the complexity of a feed-forward NARX neural network (NN) without compromising the accuracy of the NN model. Using the correlation analysis method, the most significant regressors are selected to form the input layer of the NN structure. An application to vehicle dynamics model identification is also presented to demonstrate the optimization technique, and the optimal input layer structure and the optimal number of neurons for the neural network are investigated.

Keywords: correlation analysis, F-ratio, levenberg-marquardt, MSE, NARX, neural network, optimisation

Procedia PDF Downloads 355
12378 Magnetic Survey for the Delineation of Concrete Pillars in Geotechnical Investigation for Site Characterization

Authors: Nuraddeen Usman, Khiruddin Abdullah, Mohd Nawawi, Amin Khalil Ismail

Abstract:

A magnetic survey is carried out in order to locate the remains of construction items, specifically concrete pillars. The conventional Euler deconvolution technique can perform this task, but it requires a fixed structural index (SI), whereas the construction items are made of materials with different shapes that require different, unknown SI values. An Euler deconvolution technique that estimates the background, horizontal coordinates (x0 and y0), depth, and structural index (SI) simultaneously is therefore prepared and used for this task. A synthetic model study indicated that the new methodology can give a good estimate of location and does not depend on magnetic latitude. For the field data, the total magnetic field and gradiometer readings were collected simultaneously. The computed vertical derivatives and the gradiometer readings were compared and showed good correlation, signifying the effectiveness of the method. The filtering was carried out using an automated procedure, the analytic signal, and other traditional techniques. The clustered depth solutions coincided with the high analytic-signal amplitudes, and these are the possible target positions of the concrete pillars being sought. The targets under investigation are interpreted to be located at depths between 2.8 and 9.4 meters. Further follow-up surveys are recommended, as this work marks the preliminary stage of the investigation.
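
For comparison with the extended scheme above, a sketch of conventional window-based Euler deconvolution with a fixed structural index N is given below; the window data in the toy call are placeholders purely to show the call signature.

```python
# Conventional window-based Euler deconvolution with a fixed structural index N
# (the abstract's extended scheme also solves for N and the background simultaneously).
import numpy as np

def euler_window(x, y, z, T, dTdx, dTdy, dTdz, N):
    """Solve Euler's homogeneity equation over one data window by least squares.
    Unknowns: source position (x0, y0, z0) and regional background B."""
    A = np.column_stack([dTdx, dTdy, dTdz, N * np.ones_like(T)])
    b = x * dTdx + y * dTdy + z * dTdz + N * T
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    x0, y0, z0, B = sol
    return x0, y0, z0, B

# toy window (8 points) with placeholder values, purely to show the call signature
rng = np.random.default_rng(0)
pts = rng.random((8, 7))
print(euler_window(*[pts[:, i] for i in range(7)], N=1.0))
```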

Keywords: concrete pillar, magnetic survey, geotechnical investigation, Euler Deconvolution

Procedia PDF Downloads 247
12377 Sparse Modelling of Cancer Patients’ Survival Based on Genomic Copy Number Alterations

Authors: Khaled M. Alqahtani

Abstract:

Copy number alterations (CNA) are variations in the structure of the genome, where certain regions deviate from the typical two chromosomal copies. These alterations are pivotal in understanding tumor progression and are indicative of patients' survival outcomes. However, effectively modeling patients' survival based on their genomic CNA profiles while identifying relevant genomic regions remains a statistical challenge. Various methods, such as the Cox proportional hazard (PH) model with ridge, lasso, or elastic net penalties, have been proposed but often overlook the inherent dependencies between genomic regions, leading to results that are hard to interpret. In this study, we enhance the elastic net penalty by incorporating an additional penalty that accounts for these dependencies. This approach yields smooth parameter estimates and facilitates variable selection, resulting in a sparse solution. Our findings demonstrate that this method outperforms other models in predicting survival outcomes, as evidenced by our simulation study. Moreover, it allows for a more meaningful interpretation of genomic regions associated with patients' survival. We demonstrate the efficacy of our approach using both real data from a lung cancer cohort and simulated datasets.

Keywords: copy number alterations, cox proportional hazard, lung cancer, regression, sparse solution

Procedia PDF Downloads 28
12376 Comparison of Elastic and Viscoelastic Modeling for Asphalt Concrete Surface Layer

Authors: Fouzieh Rouzmehr, Mehdi Mousavi

Abstract:

Hot mix asphalt concrete (HMAC) is a mixture of aggregates and bitumen. The primary ingredient that determines the mechanical properties of HMAC is the bitumen, which displays viscoelastic behavior under normal service conditions. For simplicity, asphalt concrete is often considered an elastic material, but this is far from reality at high service temperatures and longer loading times. Viscoelasticity means that the material's stress-strain relationship depends on the strain rate and loading duration. The goal of this paper is to simulate the mechanical response of flexible pavements using linear elastic and viscoelastic modeling of the asphalt concrete and to predict pavement performance. A Falling Weight Deflectometer (FWD) load is simulated, and the results for the elastic and viscoelastic models are evaluated. The viscoelastic behavior is represented by a Prony series and modeled in ANSYS software. In flexible pavement design, the tensile strain at the bottom of the surface layer and the compressive strain at the top of the last layer play an important role in the structural response of the pavement, and they govern the allowable number of loads for fatigue (Nf) and rutting (Nd), respectively. The differences between the two models are investigated for the fatigue cracking and rutting problems, which are the two main design criteria in flexible pavement design. While the differences between the two models in the rutting problem were negligible, in fatigue cracking the viscoelastic model results were more accurate. Overall, the results indicate that modeling the flexible pavement as an elastic material is efficient enough and gives acceptable results.
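
The Prony-series representation mentioned above has the form E(t) = E_inf + Σ E_i exp(-t/τ_i); the sketch below evaluates it with assumed coefficients (not the study's fitted values).

```python
# Prony-series relaxation modulus with assumed coefficients (MPa, s).
import numpy as np

E_inf = 150.0                                   # long-term modulus (assumed)
E_i   = np.array([12000.0, 6000.0, 1500.0])     # Prony coefficients (assumed)
tau_i = np.array([0.01, 0.1, 1.0])              # relaxation times (assumed)

def relaxation_modulus(t):
    t = np.atleast_1d(t)[:, None]
    return E_inf + (E_i * np.exp(-t / tau_i)).sum(axis=1)

# longer loading times relax the modulus, which is why the elastic idealisation
# diverges from the viscoelastic response for slow loads and high temperatures
print(relaxation_modulus([0.001, 0.03, 1.0, 10.0]))
```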

Keywords: flexible pavement, asphalt, FEM, viscoelastic, elastic, ANSYS, modeling

Procedia PDF Downloads 119
12375 Heat Transfer Analysis of Corrugated Plate Heat Exchanger

Authors: Ketankumar Gandabhai Patel, Jalpit Balvantkumar Prajapati

Abstract:

Plate-type heat exchangers have many thin plates that are slightly separated and provide very large surface areas and fluid flow passages that are good for heat transfer. They can be more effective than shell-and-tube heat exchangers, as advances in brazing and gasket technology have made plate exchangers increasingly practical. Plate-type heat exchangers are most widely used in the food processing and dairy industries. Fouling occurs in plate-type heat exchangers mostly because deposits create an insulating layer over the surface of the heat exchanger, which decreases the heat transfer between the fluids and increases the pressure drop. The pressure drop also increases as a result of the narrowing of the flow area, which increases the gap velocity. Therefore, the thermal performance of the heat exchanger decreases with time, resulting in an undersized heat exchanger and reduced process efficiency. Heat exchangers are often oversized by 70 to 80%, of which 30% to 50% is attributed to fouling. Fouling can be reduced by varying some geometric and flow parameters. Based on this study, a correlation will be estimated for the Nusselt number as a function of the Reynolds number, the Prandtl number, and the chevron angle.
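
The correlation form proposed above can be sketched as follows; a common power-law shape for chevron-plate channels is assumed purely for illustration, and the constants are placeholders rather than fitted values.

```python
# Hypothetical power-law Nusselt correlation Nu = f(Re, Pr, chevron angle beta).
def nusselt(Re, Pr, beta_deg, C=0.3, m=0.65, n=1.0 / 3.0, p=0.1):
    """Nu = C * Re^m * Pr^n * (beta/30)^p  (placeholder constants, not fitted values)."""
    return C * Re**m * Pr**n * (beta_deg / 30.0)**p

def heat_transfer_coefficient(Re, Pr, beta_deg, k_fluid, d_hydraulic):
    return nusselt(Re, Pr, beta_deg) * k_fluid / d_hydraulic   # h = Nu * k / Dh

# water-like example: Re = 1500, Pr = 4.3, 60 deg chevron, k = 0.62 W/mK, Dh = 4 mm
print(heat_transfer_coefficient(1500.0, 4.3, 60.0, 0.62, 0.004))
```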

Keywords: heat transfer coefficient, single phase flow, mass flow rate, pressure drop

Procedia PDF Downloads 300
12374 Sustainability in Hospitality: An Inevitable Necessity in New Age with Big Environmental Challenges

Authors: Majid Alizadeh, Sina Nematizadeh, Hassan Esmailpour

Abstract:

The mutual effects of hospitality and the environment are undeniable; in particular, the tourism industry has major harmful effects on the environment. Hotels, as one of the most important pillars of the hospitality industry, have significant environmental impacts. Green marketing is a promising strategy in response to growing concerns about the environment. A green hotel marketing model was proposed using a grounded theory approach in the hotel industry. The study was carried out as a mixed-methods study. Data gathering in the qualitative phase was done through a literature review and in-depth, semi-structured interviews with 10 experts in green marketing selected using the snowball technique. Following the primary analysis, open, axial, and selective coding was performed on the data, which yielded 69 concepts, 18 categories, and six dimensions. Green hotel (green product) was adopted as the core phenomenon. In the quantitative phase, data were gathered using 384 questionnaires filled out by hotel guests, and descriptive statistics and structural equation modeling (SEM) were used for data analysis. The results indicated that the mediating role of behavioral response between ecological literacy, trust, and the marketing mix on the one hand and performance on the other was significant. The green marketing mix, as a strategy, had a significant and positive effect on guests' behavioral responses, corporate green image, and the financial and environmental performance of hotels.

Keywords: green marketing, sustainable development, hospitality, grounded theory, structural equations model

Procedia PDF Downloads 58
12373 Assessing the Socio-Economic Problems and Environmental Implications of the Green Revolution in Uttar Pradesh, India

Authors: Naima Umar

Abstract:

The mid-1960s was a landmark period in the history of Indian agriculture. In 1966-67, a New Agricultural Strategy was put into practice to tide over chronic shortages of food grains in the country. The strategy adopted was the use of High-Yielding Varieties (HYV) of seeds (wheat and rice), which became popularly known as the Green Revolution. This phase of agricultural development saved the country from hunger and starvation and made the peasants more confident than ever before, but it has also created a number of socio-economic and environmental problems, such as the reduction in the area under forest, salinization, waterlogging, soil erosion, lowering of the groundwater table, soil, water, and air pollution, decline in soil fertility, silting of rivers, and the emergence of several diseases and health hazards. The state of Uttar Pradesh is bounded in the north by the country of Nepal, with the states of Uttarakhand on the northwest, Haryana on the west, Rajasthan on the southwest, Madhya Pradesh on the south and southwest, and Bihar on the east. It is situated between 23°52′N and 31°28′N latitude and 77°3′E and 84°39′E longitude. It is the fifth largest state of the country in terms of area and the first in terms of population. Forming part of the Ganga plain, the state is crossed by a number of rivers that originate from the snowy peaks of the Himalayas. The fertile plain of the Ganga has led to a high concentration of population with high density and the dominance of agriculture as an economic activity. The present paper highlights the negative impact of the new agricultural technology on people's health and on the environment and attempts to identify the factors responsible for these implications. Karl Pearson's correlation coefficient technique has been applied with one dependent variable (the Productivity Index) and a set of independent variables that may affect crop productivity in the districts of the state. These variables are categorized as: X1 (cropping intensity), X2 (net irrigated area), X3 (canal-irrigated area), X4 (tube-well-irrigated area), X5 (area irrigated by other sources), X6 (consumption of chemical fertilizers (NPK) in kg/ha), X7 (number of wooden ploughs), X8 (number of iron ploughs), X9 (number of harrows and cultivators), X10 (number of thresher machines), X11 (number of sprayers), X12 (number of sowing instruments), X13 (number of tractors), and X14 (consumption of insecticides and pesticides in kg/000 ha). Data for 2001-2005 and 2006-2010 have been used, with five-year average values taken into consideration, based on secondary sources obtained from various government organizations, master plan reports, economic abstracts, district census handbooks, and village and town directories. The data were processed with the standard statistical package SPSS, and the results obtained have been properly tabulated.

Keywords: agricultural technology, environmental implications, health hazards, socio-economic problems

Procedia PDF Downloads 294
12372 Towards the Use of Software Product Metrics as an Indicator for Measuring Mobile Applications Power Consumption

Authors: Ching Kin Keong, Koh Tieng Wei, Abdul Azim Abd. Ghani, Khaironi Yatim Sharif

Abstract:

Maintaining the factory-default battery endurance over time while supporting a huge number of running applications on energy-restricted mobile devices has created a new challenge for mobile application developers. While delivering on customers' unlimited expectations, developers are barely aware of the efficient use of energy by the application itself. Developers therefore need a set of valid energy consumption indicators to assist them in developing energy-saving applications. In this paper, we present a few software product metrics that can be used as indicators to measure the energy consumption of Android-based mobile applications in the early design stage. In particular, Trepn Profiler (a power profiling tool for Qualcomm processors) was used to collect the power consumption data of the mobile applications, which were then analyzed against the 23 software metrics in this preliminary study. The results show that McCabe cyclomatic complexity, number of parameters, nested block depth, number of methods, weighted methods per class, number of classes, total lines of code, and method lines have a direct relationship with the power consumption of a mobile application.

Keywords: battery endurance, software metrics, mobile application, power consumption

Procedia PDF Downloads 383
12371 Investigation of Mode II Fracture Toughness in Orthotropic Materials

Authors: Mahdi Fakoor, Nabi Mehri Khansari, Ahmadreza Farokhi

Abstract:

Evaluating the mode II fracture toughness (KIIC) of composite materials is a very hard problem, since it can be affected by many dissipation mechanisms. Furthermore, non-linearity in the material behavior makes it even more difficult to obtain accurate results. The widely differing values of KIIC reported in various references confirm this assertion. In this research, solutions are proposed in the form of corrections that should be made to the common test fixtures. Because the common test fixtures are not able to correctly activate toughening mechanisms in pure mode II, we have made structural modifications to them, using the Iosipescu test as the starting point. The tests are applied to graphite/epoxy, PMMA, and Western White Pine wood. Mixed-mode I/II fracture limit curves are also used to show that the scatter in the test results is related to the creation of the Fracture Process Zone (FPZ). In the present paper, the shear load is applied at the predicted shear zone after introducing significant structural amendments that can activate mode II toughening mechanisms. The employed empirical method also leads to a significant improvement in repeatability and reproducibility. Moreover, a 3D finite element (FE) analysis is performed to verify the obtained results. Finally, it is shown that remarkable precision can be obtained with the modified test fixture in comparison with the previous one.

Keywords: FPZ, shear test fixture, mode II fracture toughness, composite material, FEM

Procedia PDF Downloads 347