Search results for: quantitative techniques
7913 White Wine Discrimination Based on Deconvoluted Surface Enhanced Raman Spectroscopy Signals
Authors: Dana Alina Magdas, Nicoleta Simona Vedeanu, Ioana Feher, Rares Stiufiuc
Abstract:
Food and beverage authentication using rapid and inexpensive analytical tools represents an important challenge nowadays. In this regard, the potential of vibrational techniques in food authentication has gained increased attention during the last years. For wine discrimination, Raman spectroscopy appears more feasible than IR (infrared) spectroscopy because of the relatively weak water bending mode in the vibrational fingerprint range. Despite this, the use of the Raman technique in wine discrimination is at an early stage. Taking this into consideration, the wine discrimination potential of the surface-enhanced Raman scattering (SERS) technique is reported in the present work. The novelty of this study, compared with previously reported applications of vibrational techniques to wine discrimination, is that wines are differentiated based on the individual signals obtained from deconvoluted spectra. In order to classify wines with respect to variety, geographical origin and vintage, the peak intensities obtained after spectral deconvolution were compared using supervised chemometric methods such as Linear Discriminant Analysis (LDA). For this purpose, a set of 20 white Romanian wines of four varieties, from different Romanian viticultural regions, was considered. Chemometric methods applied directly to raw SERS experimental spectra proved their efficiency, but the identification of discrimination markers was found to be very difficult due to overlapped signals as well as band shifts. By using this approach, a better general view of the differences that appear among wines in terms of compositional differentiation could be reached.
Keywords: chemometry, SERS, variety, wines discrimination
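As a minimal sketch of the chemometric step described above, the two-class Fisher form of LDA can be run on a matrix of deconvoluted peak intensities. The peak values below are synthetic stand-ins, not data from the study:

```python
import numpy as np

def fisher_lda_direction(X0, X1):
    """Two-class Fisher discriminant: w = Sw^-1 (m1 - m0).

    X0, X1 are (samples x features) matrices of deconvoluted SERS
    peak intensities for two wine classes (synthetic here).
    """
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Within-class scatter: sum of the two per-class scatter matrices.
    Sw = (X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1)
    w = np.linalg.solve(Sw, m1 - m0)
    # Decision threshold: midpoint of the projected class means.
    c = 0.5 * (w @ m0 + w @ m1)
    return w, c

# Synthetic peak-intensity data for two well-separated wine varieties.
rng = np.random.default_rng(0)
X0 = rng.normal([1.0, 0.2, 0.5], 0.05, size=(10, 3))
X1 = rng.normal([0.4, 0.9, 0.5], 0.05, size=(10, 3))
w, c = fisher_lda_direction(X0, X1)
pred = np.concatenate([X0, X1]) @ w > c   # True -> class 1
accuracy = np.mean(pred == np.repeat([False, True], 10))
```

Working on individual deconvoluted peaks, rather than whole spectra, is what lets the discriminant weights in w be read back as candidate compositional markers.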
Procedia PDF Downloads 158
7912 GNSS-Aided Photogrammetry for Digital Mapping
Authors: Muhammad Usman Akram
Abstract:
This research work is based on GNSS-aided photogrammetry for digital mapping. It focuses on the topographic survey of an area or site to be used in future planning and development (P&D), or for further examination, exploration, research and inspection. Survey and mapping in hard-to-access and hazardous areas are very difficult using traditional techniques and methodologies; these are also time-consuming, labor-intensive and less precise, with limited data. In comparison, advanced techniques require less manpower and provide more precise output with a wide variety of data sets. In this experiment, the aerial photogrammetry technique is used: a UAV flies over an area, captures geocoded images and produces a three-dimensional (3-D) model. The UAV operates on a user-specified path or area with various parameters: flight altitude, ground sampling distance (GSD), image overlap, camera angle, etc. For ground control, a network of points on the ground is observed as ground control points (GCPs) using a Differential Global Positioning System (DGPS) in PPK or RTK mode. The raw data collected by the UAV and DGPS are then processed in digital image processing programs and computer-aided design software, yielding as output a dense point cloud, a digital elevation model (DEM) and an orthophoto. The imagery is converted into geospatial data by digitizing over the orthophoto; the DEM is further converted into a digital terrain model (DTM) for contour generation or a digital surface. As a result, a digital map of the surveyed area is obtained. In conclusion, the processed data were compared with exact measurements taken on site. The error is accepted if it does not breach the survey accuracy limits set by the concerned institutions.
Keywords: photogrammetry, post-processing kinematics, real-time kinematics, manual data inquiry
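The GSD flight parameter mentioned above follows from the standard nadir-camera relation; the sketch below uses hypothetical camera values, not the equipment from this survey:

```python
def ground_sampling_distance(sensor_width_mm, focal_length_mm,
                             altitude_m, image_width_px):
    """Ground sampling distance in cm/pixel for a nadir-looking camera.

    GSD = (sensor width * flight altitude) / (focal length * image width),
    with the factor 100 converting metres to centimetres.
    """
    return (sensor_width_mm * altitude_m * 100.0) / (focal_length_mm * image_width_px)

# Hypothetical survey camera: 13.2 mm sensor, 8.8 mm lens, 5472 px wide
# images, flown at 100 m above ground level.
gsd = ground_sampling_distance(13.2, 8.8, 100.0, 5472)
```

Planning works in the other direction as well: fixing a target GSD and solving the same relation for altitude gives the maximum flying height for the required map scale.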
Procedia PDF Downloads 29
7911 Flood Vulnerability Zoning for Blue Nile Basin Using Geospatial Techniques
Authors: Melese Wondatir
Abstract:
Flooding ranks among the most destructive natural disasters, impacting millions of individuals globally and resulting in substantial economic, social, and environmental repercussions. This study's objective was to create a comprehensive model that assesses the Nile River basin's susceptibility to flood damage and improves existing flood risk management strategies. Authorities responsible for enacting policies and implementing measures may benefit from this research, acquiring essential information about floods, including their scope and susceptible areas. The identification of severe flood damage locations and efficient mitigation techniques was made possible by the use of geospatial data. Slope, elevation, distance from the river, drainage density, topographic wetness index, rainfall intensity, distance from roads, NDVI, soil type, and land use type were used throughout the study to determine flood damage vulnerability. The Analytic Hierarchy Process (AHP) and geospatial approaches were used to rank these elements according to their significance in predicting flood damage risk. The analysis finds that the most important parameters determining the region's vulnerability are distance from the river, topographic wetness index, rainfall, and elevation, respectively. The consistency ratio (CR) value obtained in this case is 0.000866 (<0.1), which signifies the acceptance of the derived weights. Furthermore, 10.84 m², 83,331.14 m², 476,987.15 m², 24,247.29 m², and 15.83 m² of the region show varying degrees of vulnerability to flooding: very low, low, medium, high, and very high, respectively. Due to their close proximity to the river, the north-western regions of the Nile River basin, especially those close to Sudanese cities such as Khartoum, are more vulnerable to flood damage, according to the research findings. Furthermore, the AUC-ROC curve demonstrates that the categorized vulnerability map achieves an accuracy rate of 91.0% based on 117 sample points.
By putting into practice strategies that address the topographic wetness index, rainfall patterns, elevation fluctuations, and distance from the river, vulnerable settlements in the area can be protected, and the impact of future flood occurrences can be greatly reduced. The research findings also highlight the urgent requirement for infrastructure development and effective flood management strategies in the northern and western regions of the Nile River basin, particularly in proximity to major towns such as Khartoum. Overall, the study recommends prioritizing high-risk locations and developing a complete flood risk management plan based on the vulnerability map.
Keywords: analytic hierarchy process, Blue Nile Basin, geospatial techniques, flood vulnerability, multi-criteria decision making
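The AHP consistency check quoted above (CR < 0.1) can be reproduced in a few lines. The 4x4 pairwise matrix below is a hypothetical illustration, not the matrix from the study:

```python
import numpy as np

# Saaty's random index (RI) for matrix orders 1..10.
RANDOM_INDEX = [0.0, 0.0, 0.58, 0.90, 1.12, 1.24, 1.32, 1.41, 1.45, 1.49]

def consistency_ratio(A):
    """AHP consistency ratio CR = CI / RI, where
    CI = (lambda_max - n) / (n - 1) for an n x n pairwise matrix A."""
    n = A.shape[0]
    lambda_max = max(np.linalg.eigvals(A).real)
    ci = (lambda_max - n) / (n - 1)
    return ci / RANDOM_INDEX[n - 1]

# Hypothetical comparisons for four criteria (distance to river, TWI,
# rainfall, elevation); a nearly consistent matrix keeps CR below 0.1.
A = np.array([[1.0, 2.0, 3.0, 4.0],
              [1/2, 1.0, 2.0, 2.0],
              [1/3, 1/2, 1.0, 2.0],
              [1/4, 1/2, 1/2, 1.0]])
cr = consistency_ratio(A)
```

A CR at or above 0.1 would signal that the expert judgments are too contradictory and the pairwise comparisons should be revised before the derived weights are used.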
Procedia PDF Downloads 67
7910 Comparison between High Resolution Ultrasonography and Magnetic Resonance Imaging in Assessment of Musculoskeletal Disorders Causing Ankle Pain
Authors: Engy S. El-Kayal, Mohamed M. S. Arafa
Abstract:
There are various causes of ankle pain (AP), including traumatic and non-traumatic causes, and various imaging techniques are available for its assessment. MRI is considered the imaging modality of choice for ankle joint evaluation, with the advantages of high spatial resolution and multiplanar capability, and hence the ability to visualize the small, complex anatomical structures around the ankle. However, the high cost and relatively limited availability of MRI systems, as well as the relatively long duration of the examination, are all considered disadvantages. Therefore, there is a need for a more rapid and less expensive examination modality with good diagnostic accuracy to fill this gap. High-resolution ultrasonography (HRU) has become increasingly important in the assessment of ankle disorders, with the advantages of being fast, reliable, low-cost and readily available. US can visualize detailed anatomical structures and assess tendinous and ligamentous integrity. The aim of this study was to compare the diagnostic accuracy of HRU with MRI in the assessment of patients with AP. We included forty patients complaining of AP. All patients underwent real-time HRU and MRI of the affected ankle, and the results of both techniques were compared to surgical and arthroscopic findings. All patients were examined according to a defined protocol covering tendon tears or tendinitis, muscle tears, masses or fluid collections, ligament sprains or tears, inflammation or fluid effusion within the joint or bursa, bone and cartilage lesions, erosions and osteophytes. Analysis of the results showed that the mean age of patients was 38 years. The study comprised 24 women (60%) and 16 men (40%). The accuracy of HRU in detecting causes of AP was 85%, while the accuracy of MRI was 87.5%.
In conclusion, HRU and MRI are two complementary tools of investigation, with the former used as a primary tool and the latter used to confirm the diagnosis and the extent of the lesion, especially when surgical intervention is planned.
Keywords: ankle pain (AP), high-resolution ultrasound (HRU), magnetic resonance imaging (MRI), ultrasonography (US)
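Accuracy figures of the kind reported above come from scoring each modality against the surgical/arthroscopic reference. The confusion counts below are hypothetical, chosen only so the overall accuracy lands at 85%:

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity and overall accuracy of an imaging test
    scored against a reference standard (surgery/arthroscopy here)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical counts for a 40-patient series in which the test agreed
# with the reference in 34 of 40 cases (85% accuracy).
sens, spec, acc = diagnostic_metrics(tp=28, fp=2, tn=6, fn=4)
```

Reporting sensitivity and specificity alongside accuracy matters, since a screening tool like HRU and a confirmatory tool like MRI are judged on different error types.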
Procedia PDF Downloads 189
7909 Systematic Exploration and Modulation of Nano-Bio Interactions
Authors: Bing Yan
Abstract:
Nanomaterials are widely used in various industrial sectors, biomedicine, and more than 1300 consumer products. Although there is still no standard safety regulation, their potential toxicity is a major concern worldwide. We discovered that nanoparticles target and enter human cells [1], perturb cellular signaling pathways [2], affect various cell functions [3], and cause malfunctions in animals [4,5]. Because the majority of atoms in nanoparticles are on the surface, chemical modification of their surface may change their biological properties significantly. We modified nanoparticle surfaces using a nano-combinatorial chemistry library approach [6]. Novel nanoparticles were discovered that exhibit significantly reduced toxicity [6,7], enhanced cancer targeting ability [8], or re-programmed cellular signaling machineries [7]. Using computational chemistry, a quantitative nanostructure-activity relationship (QNAR) was established, and predictive models have been built to identify biocompatible nanoparticles.
Keywords: nanoparticle, nanotoxicity, nano-bio, nano-combinatorial chemistry, nanoparticle library
Procedia PDF Downloads 407
7908 The Study of Suan Sunandha Rajabhat University’s Image among People in Bangkok
Authors: Sawitree Suvanno
Abstract:
The objective of this study is to investigate the image of Suan Sunandha Rajabhat University (SSRU) among people in Bangkok. This quantitative study used questionnaires to collect data from a sample group of 360 people, and descriptive and inferential statistics were used in the data analysis. The results showed that the SSRU image among people in Bangkok is at the “rather true” level of the questionnaire scale in all aspects measured. The aspects, in descending order of average score, are: 1) the university is considered royal-oriented and conservative; 2) the instructional supplies, buildings and venue promote Thai art and tradition; 3) the university administration is moral and honest; 4) the curriculum and the skillful students as well as graduates. Additionally, people in Bangkok with different professions view the SSRU image differently at the 0.05 significance level; there is no significant difference by gender, age or income.
Keywords: Bangkok, demographics, image, Suan Sunandha Rajabhat University
Procedia PDF Downloads 245
7907 Activating Psychological Resources of DUI (Drivers under the Influence of Alcohol) Using the Traffic Psychology Intervention (IFT Course), Germany
Authors: Parichehr Sharifi, Konrad Reschke, Hans-Liudger Dienel
Abstract:
Psychological intervention generally targets changes in attitudes and behavior, and working with DUI offenders is part of traffic psychologists’ work. The primary goal of this field is to reduce the probability of reoffending among delinquent drivers. One such measure in Germany is the IFT course for DUI offenders, designed by the Institute for Therapy Research (IFT). Participants are drivers who have been caught once or several times with a blood alcohol concentration of 1.6 per mill or more and who have completed a medical-psychological assessment (MPU) resulting in a course recommendation. The course covers four group-discussion sessions of 3.5 hours each (1 hour = 60 min) over a period of 3 to 4 weeks. This work analyzes group interventions for the rehabilitation of DUI offenders under the aspect of activating psychological resources. From the aspect of sustainability, such interventions should also have long-term consequences for the maintenance of unproblematic driving behavior in terms of resource activation. The work also addresses a selected consistency-theory-based intervention effect, the activation of psychological resources, which so far has only been considered in the psychotherapeutic field and never in traffic psychology. The methodology comprises one qualitative and three quantitative sub-studies, which examine which measurements can determine the resources and how traffic psychological interventions can strengthen them. The results of the studies have the following implications for traffic psychology research and practice: (1) In the field of traffic psychological intervention for the restoration of driving fitness, aspects of resource activation have been investigated for the first time in this work, by both qualitative and quantitative methods.
(2) Based on the results obtained, resource activation could be confirmed as an effective factor of traffic psychological intervention. (3) Two sub-studies show a range of resources and resource activation options that must be given greater emphasis in traffic psychology interventions: social resource activation; improvement of the life skills of participants; reactivation of existing social support options; and re-experiencing self-esteem, self-assurance, and acceptance of traffic-related behaviors. (4) In revising the IFT §70 course, as well as other courses on restoring driving aptitude for DUI offenders, new traffic-specific resource-enabling interventions against alcohol abuse should be developed to further enhance the courses through the motivational, cognitive, and behavioral effects of resource activation. Resource-activating interventions cannot only be integrated into behavioral group interventions but can also be applied in psychodynamic (individual psychological) and other contexts of individual traffic psychology. The results are indicative but clearly show that personal resources can be strengthened through traffic psychology interventions. In the research, practice, training, and further education of traffic psychology, the aspect of primary resource activation (Grawe, 1999) therefore deserves the greatest attention for the rehabilitation of DUI offenders and for traffic safety.
Keywords: traffic safety, psychological resources, activating of resources, intervention programs for alcohol offenders, empowerment
Procedia PDF Downloads 77
7906 Comparison of Different Artificial Intelligence-Based Protein Secondary Structure Prediction Methods
Authors: Jamerson Felipe Pereira Lima, Jeane Cecília Bezerra de Melo
Abstract:
The difficulty and cost of obtaining protein tertiary structure information through experimental methods, such as X-ray crystallography or NMR spectroscopy, helped drive the development of computational methods to do so. One approach is the prediction of the three-dimensional structure from the residue chain; however, this has been proved to be an NP-hard problem, owing to the complexity of the process, as explained by the Levinthal paradox. An alternative is the prediction of intermediate structures, such as the secondary structure of the protein. Artificial intelligence methods, such as Bayesian statistics, artificial neural networks (ANNs) and support vector machines (SVMs), among others, have been used to predict protein secondary structure. Owing to their good results, artificial neural networks have become a standard method for this task. Recently published methods using this technique generally achieve a Q3 accuracy between 75% and 83%, whereas the theoretical accuracy limit for protein secondary structure prediction is 88%. Alternatively, to achieve better results, support vector machine prediction methods have been developed. The statistical evaluation of methods that use different AI techniques, such as ANNs and SVMs, is not a trivial problem, since different training sets, validation techniques, and other variables can influence the behavior of a prediction method. In this study, we propose a prediction method based on artificial neural networks, which is then compared with a selected SVM method: the protein secondary structure prediction method proposed by Huang in his work Extracting Physicochemical Features to Predict Protein Secondary Structure (2013).
The developed ANN method follows the same training and testing process that Huang used to validate his method, comprising the CB513 protein data set and three-fold cross-validation, so that the comparative analysis can directly compare the statistical results of each method.
Keywords: artificial neural networks, protein secondary structure, protein structure prediction, support vector machines
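The Q3 metric referred to throughout the abstract is simply per-residue three-state accuracy. A minimal sketch, with toy sequences rather than CB513 data:

```python
def q3_accuracy(predicted, observed):
    """Q3: fraction of residues whose three-state secondary-structure
    label (H = helix, E = strand, C = coil) is predicted correctly."""
    if len(predicted) != len(observed):
        raise ValueError("sequences must have equal length")
    matches = sum(p == o for p, o in zip(predicted, observed))
    return matches / len(observed)

# Toy example: 8 of 10 residue states correct, so Q3 = 0.8.
q3 = q3_accuracy("HHHECCCHHE", "HHHECCCCCE")
```

In a cross-validation setting such as the three-fold CB513 protocol above, Q3 is computed per fold and the folds are then averaged, which is what makes the ANN and SVM results directly comparable.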
Procedia PDF Downloads 619
7905 An AI-Generated Semantic Communication Platform in an HCI Course
Authors: Yi Yang, Jiasong Sun
Abstract:
Almost every aspect of our daily lives is now intertwined with some degree of human-computer interaction (HCI). HCI courses draw on knowledge from disciplines as diverse as computer science, psychology, design principles, anthropology, and more. Our HCI course, named the Media and Cognition course, is constantly updated to reflect state-of-the-art technological advancements such as virtual reality, augmented reality, and artificial-intelligence-based interactions. For more than a decade, the course has used an interest-based approach to teaching, in which students proactively propose research-based questions and collaborate with teachers, using course knowledge to explore potential solutions. Semantic communication plays a key role in facilitating understanding and interaction between users and computer systems, ultimately enhancing system usability and user experience. The advancements in AI-generation technology, which have gained significant attention from both academia and industry in recent years, are exemplified by language models like GPT-3 that generate human-like dialogues from given prompts. The latest version of our Human-Computer Interaction course exercises a semantic communication platform based on AI-generation techniques. The purpose of this semantic communication is twofold: to extract and transmit task-specific information while ensuring efficient end-to-end communication with minimal latency. The platform evaluates the retainability of signal sources and converts low-retainability visual signals into textual prompts; these data are transmitted through AI-generation techniques and reconstructed at the receiving end. Visual signals with a high retainability rate, on the other hand, are compressed and transmitted according to their respective regions.
The platform and the associated research are a testament to our students' growing ability to independently investigate state-of-the-art technologies.
Keywords: human-computer interaction, media and cognition course, semantic communication, retainability, prompts
Procedia PDF Downloads 113
7904 Physiological and Biochemical Based Analysis to Assess the Efficacy of Mulch under Partial Root Zone Drying in Wheat
Authors: Salman Ahmad, Muhammad Aown Sammar Raza, Muhammad Farrukh Saleem, Rashid Iqbal, Muhammad Saqlain Zaheer, Muhammad Usman Aslam, Imran Haider, Muhammad Adnan Nazar, Muhammad Ali
Abstract:
Among the various abiotic stresses, drought stress is one of the most challenging for field crops. Wheat is one of the major staple foods of the world, and it is highly affected by water-deficit stress in the current scenario of climate change. In order to ensure food security despite depleting water resources, there is an urgent need to adopt technologies that give sufficient crop yield with less water consumption. Mulching and partial root zone drying (PRD) are two important management techniques used for water conservation and to mitigate the negative impacts of drought. The experiment was conducted to screen out the mulch best suited for wheat under a PRD system. Two water application techniques (I1 = full irrigation, I2 = PRD irrigation) and four mulch treatments (M0 = un-mulched, M1 = black plastic mulch, M2 = wheat straw mulch and M3 = cotton sticks mulch) were arranged in a completely randomized design with four replications. The black plastic mulch treatment performed better than the other mulch treatments. For irrigation levels, higher values of growth, physiological and water-related parameters were recorded in the control treatment, while quality traits and enzymatic activities were higher under partial root zone drying. The current study concluded that the adverse effects of drought on wheat can be significantly mitigated by using mulches, and that black plastic mulch is best suited for the partial root zone drying irrigation system in wheat.
Keywords: antioxidants, leaf water relations, mulches, osmolytes, partial root zone drying, photosynthesis
Procedia PDF Downloads 263
7903 Rejuvenating a Space into World Class Environment through Conservation of Heritage Architecture
Authors: Abhimanyu Sharma
Abstract:
India is known for its cultural heritage. While the country is rich in diversity along its length and breadth, the state of Jammu & Kashmir is world-famous for the beautiful tourist destinations in its Kashmir region. However, equally remarkable destinations are also located in the Jammu region of the state. For most of the last 50-60 years, the prime focus of development was centered on the Kashmir region, but now, due to ever-increasing globalization, the focus is decentralizing throughout the country. Pertinently, the potential of the Jammu region needs to be incorporated into the world tourist map. One such spot in the Jammu region is ‘Mubarak Mandi’, the palace complex and royal residence of the Maharajas of Jammu & Kashmir of the Dogra dynasty, located in the heart of Jammu city (the winter capital of the state). The place has heritage importance but still lacks the supporting infrastructure to attract national, and especially worldwide, tourists. For such places, conservation and restoration of the existing structures are the potential tools to overcome the present limitations. The rejuvenation of this place through potential and dynamic conservation techniques is targeted through this paper, which deals with developing and restoring the areas within the whole campus with appropriate building materials, conservation techniques, etc., in order to attract a great number of visitors by developing it into a prioritized tourist attraction point. The major thrust shall be on studying the criteria for developing the place, considering the psychological effect needed to create a socially interactive environment, and on the spatial elements that will aid in creating a common platform for all kinds of tourists.
Accordingly, different conservation guidelines (or a model) shall be targeted through this paper so that the Jammu region becomes an equal contributor to the tourist graph of the country, as the Kashmir part already is.
Keywords: conservation, heritage architecture, rejuvenating, restoration
Procedia PDF Downloads 297
7902 The Intention to Use E-Money Transactions: The Moderating Effect of Security in a Conceptual Framework
Authors: Husnil Khatimah, Fairol Halim
Abstract:
This research examines the moderating impact of security on the intention to use e-money, adapting variables from the TAM (Technology Acceptance Model) and TPB (Theory of Planned Behavior). The study uses security as a moderating variable and examines whether the relationships with customers' intention to use e-money as a payment tool depend on it. The conceptual framework of e-money transactions was reviewed to understand consumers' behavioral intention in terms of perceived usefulness, perceived ease of use, perceived behavioral control and security. A quantitative method will be utilized for data collection: a total of one thousand respondents will be selected using the quota sampling method in Medan, Indonesia. Descriptive analysis and multiple regression analysis will be conducted to analyze the data. The article ends with suggestions for future studies.
Keywords: e-money transaction, TAM & TPB, moderating variable, behavioral intention, conceptual paper
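A moderation effect of the kind proposed above is usually tested with an interaction term in the regression. The sketch below uses synthetic, noise-free data with invented coefficients, purely to show the mechanics:

```python
import numpy as np

# Synthetic survey scores on a 1-5 scale:
# intention = b0 + b1*usefulness + b2*security + b3*(usefulness*security).
# A non-zero b3 means security moderates the usefulness -> intention link.
rng = np.random.default_rng(1)
usefulness = rng.uniform(1, 5, size=200)
security = rng.uniform(1, 5, size=200)
true_b = np.array([1.0, 2.0, 0.5, 0.8])   # invented for the example
X = np.column_stack([np.ones(200), usefulness, security,
                     usefulness * security])
y = X @ true_b   # noise-free responses

# Ordinary least squares recovers the coefficients, including the
# interaction (moderation) coefficient b3.
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
```

With real questionnaire data the predictors would typically be mean-centered before forming the product term, and the significance of b3, not just its size, decides whether moderation is supported.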
Procedia PDF Downloads 452
7901 A 0-1 Goal Programming Approach to Optimize the Layout of Hospital Units: A Case Study in an Emergency Department in Seoul
Authors: Farhood Rismanchian, Seong Hyeon Park, Young Hoon Lee
Abstract:
This paper proposes a method to optimize the layout of an emergency department (ED) based on real executions of care processes, considering several planning objectives simultaneously. Recently, demand for healthcare services has increased dramatically. As the demand for healthcare services increases, so does the need for new healthcare buildings as well as for redesigning and renovating existing ones. The value of implementing a standard set of engineering facilities planning and design techniques has already been proven in both the manufacturing and service industries, with many significant functional efficiencies. However, the high complexity of care processes remains a major challenge to applying these methods in healthcare environments. Process mining techniques were applied in this study to tackle the problem of complexity and to enhance care process analysis: process-related information, such as clinical pathways, was extracted from the information system of an ED. A 0-1 goal programming approach is then proposed to find a single layout that simultaneously satisfies several goals. The proposed model was solved with the optimization software CPLEX 12. The solution reached using the proposed method shows a 42.2% improvement in the walking distance of normal patients and a 47.6% improvement in the walking distance of critical patients, at minimum relocation cost. It was observed that many patients must unnecessarily walk long distances during their visit to the emergency department because of an inefficient design; a carefully designed layout can significantly decrease patient walking distance and related complications.
Keywords: healthcare operation management, goal programming, facility layout problem, process mining, clinical processes
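To make the 0-1 goal programming idea concrete, here is a toy sketch, not the paper's CPLEX model: three hypothetical ED units are assigned to three locations, and the objective is the weighted sum of overshoots above separate walking-distance goals for normal and critical patients. All flows, distances, goals and weights are invented:

```python
from itertools import permutations

# Toy instance: flow[u][v] is patient traffic between units u and v;
# dist[a][b] is walking distance between candidate locations a and b.
flow_normal = [[0, 5, 2], [5, 0, 3], [2, 3, 0]]
flow_critical = [[0, 8, 1], [8, 0, 1], [1, 1, 0]]
dist = [[0, 10, 20], [10, 0, 12], [20, 12, 0]]
goals = {"normal": 150, "critical": 200}     # distance targets per group
weights = {"normal": 1.0, "critical": 3.0}   # critical patients weigh more

def total_distance(assign, flow):
    # assign[u] is the location of unit u; sum flow-weighted distances.
    return sum(flow[u][v] * dist[assign[u]][assign[v]]
               for u in range(3) for v in range(3))

def goal_deviation(assign):
    """Weighted sum of positive deviations above each distance goal,
    i.e. the objective a 0-1 goal program would minimize."""
    dev = 0.0
    for key, flow in (("normal", flow_normal), ("critical", flow_critical)):
        dev += weights[key] * max(0.0, total_distance(assign, flow) - goals[key])
    return dev

# With only 3 units the 0-1 assignments can be enumerated outright;
# a real instance needs an integer programming solver such as CPLEX.
best = min(permutations(range(3)), key=goal_deviation)
```

The deviation variables are what distinguish goal programming from plain distance minimization: each planning objective gets its own target, and only shortfalls against the targets enter the weighted objective.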
Procedia PDF Downloads 292
7900 Monitoring and Evaluation of Web-Services Quality and Medium-Term Impact on E-Government Agencies' Efficiency
Authors: A. F. Huseynov, N. T. Mardanov, J. Y. Nakhchivanski
Abstract:
This practical research aims to improve the management quality and efficiency of public administration agencies providing e-services. The monitoring system developed will provide a continuous review of websites' compliance with the selected indicators, their evaluation based on those indicators, and a ranking of services according to the quality criteria. The responsible departments in the government agencies were surveyed; the questionnaire covers issues of management and feedback, the e-services provided, and the application of information systems. By analyzing the main affecting factors and barriers, recommendations will be given that lead to decisions that strengthen the state agencies' competencies for the management and provision of their services. Component 1: E-services monitoring system. Three separate monitoring activities are proposed to be executed in parallel. First, continuous tracing of e-government sites using a built-in web-monitoring program; this program generates several quantitative values, basically related to the technical characteristics and performance of the websites. Second, expert assessment of e-government sites according to two general criteria: Criterion 1, the technical quality of the site; and Criterion 2, usability/accessibility (load, see, use). Each high-level criterion is in turn subdivided into several sub-criteria, such as the fonts and the color of the background (is it readable?), W3C coding standards, availability of robots.txt and the site map, the search engine, the feedback/contact mechanisms and the security mechanisms. Third, an online survey of the users/citizens: a small group of questions embedded in the e-service websites, comprising information concerning navigation, the users' experience with the website (whether it was positive or negative), etc.
Automated monitoring of websites by itself cannot capture the whole evaluation process and should therefore be seen as a complement to expert manual web evaluations; all of the separate results were integrated to provide the complete evaluation picture. Component 2: Assessment of the agencies'/departments' efficiency in providing e-government services. The relevant indicators to evaluate the efficiency and effectiveness of e-services were identified; the survey was conducted in all the governmental organizations (ministries, committees and agencies) that provide electronic services for citizens or businesses; and the quantitative and qualitative measures cover the following sections of activity: e-governance, e-services, feedback from users, and the information systems at the agencies' disposal. Main results: 1. The software program and the set of indicators for website evaluation have been developed, and the results of pilot monitoring have been presented. 2. The (internal) efficiency of the e-government agencies was evaluated based on the survey results, with practical recommendations related to human potential, the information systems used and the e-services provided.
Keywords: e-government, web-sites monitoring, survey, internal efficiency
Procedia PDF Downloads 304
7899 Mass Flux and Forensic Assessment: Informed Remediation Decision Making at One of Canada’s Most Polluted Sites
Authors: Tony R. Walker, N. Devin MacAskill, Andrew Thalhiemer
Abstract:
Sydney Harbour, Nova Scotia, Canada has long been subject to effluent and atmospheric inputs of contaminants, including thousands of tons of PAHs from a large coking and steel plant which operated in Sydney for nearly a century. Contaminants comprised of coal tar residues which were discharged from coking ovens into a small tidal tributary, which became known as the Sydney Tar Ponds (STPs), and subsequently discharged into Sydney Harbour. An Environmental Impact Statement concluded that mobilization of contaminated sediments posed unacceptable ecological risks, therefore immobilizing contaminants in the STPs using solidification and stabilization was identified as a primary source control remediation option to mitigate against continued transport of contaminated sediments from the STPs into Sydney Harbour. Recent developments in contaminant mass flux techniques focus on understanding “mobile” vs. “immobile” contaminants at remediation sites. Forensic source evaluations are also increasingly used for understanding origins of PAH contaminants in soils or sediments. Flux and forensic source evaluation-informed remediation decision-making uses this information to develop remediation end point goals aimed at reducing off-site exposure and managing potential ecological risk. This study included reviews of previous flux studies, calculating current mass flux estimates and a forensic assessment using PAH fingerprint techniques, during remediation of one of Canada’s most polluted sites at the STPs. Historically, the STPs was thought to be the major source of PAH contamination in Sydney Harbour with estimated discharges of nearly 800 kg/year of PAHs. However, during three years of remediation monitoring only 17-97 kg/year of PAHs were discharged from the STPs, which was also corroborated by an independent PAH flux study during the first year of remediation which estimated 119 kg/year. 
The estimated mass efflux of PAHs from the STPs during remediation was in stark contrast to the ~2000 kg loading thought necessary to cause a short-term increase in harbour sediment PAH concentrations. These mass flux estimates during remediation were also three to eight times lower than the PAHs discharged from the STPs a decade prior to remediation, when, at the same time, government studies demonstrated an on-going reduction in PAH concentrations in harbour sediments. Flux results were also corroborated by forensic source evaluations based on PAH fingerprint techniques, which found a common source of PAHs for urban soils and marine and aquatic sediments in and around Sydney. Coal combustion (from historical coking) and coal dust transshipment (from current coal transshipment facilities) are likely the principal sources of PAHs in these media, and not migration of PAH-laden sediments from the STPs during a large-scale remediation project.
Keywords: contaminated sediment, mass flux, forensic source evaluations, remediation
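The mass flux figures quoted above (kg/year of PAHs leaving the STPs) come down to concentration times volumetric flow, integrated over a year. A minimal sketch of that conversion follows; the concentration and discharge values are invented for illustration and are not the study's data.

```python
# Hedged sketch: annual contaminant mass flux from concentration and flow.
# Input values are illustrative only, not measurements from the STPs study.

SECONDS_PER_YEAR = 365.25 * 24 * 3600

def annual_mass_flux_kg(conc_ug_per_L, flow_m3_per_s):
    """Mass flux (kg/year) = concentration x volumetric flow.

    conc_ug_per_L: contaminant concentration in micrograms per litre
    flow_m3_per_s: discharge in cubic metres per second
    """
    # 1 m^3 = 1000 L; 1 ug = 1e-9 kg
    kg_per_s = conc_ug_per_L * 1000 * 1e-9 * flow_m3_per_s
    return kg_per_s * SECONDS_PER_YEAR

# Illustrative: 0.5 ug/L total PAH at a mean discharge of 0.2 m^3/s
flux = annual_mass_flux_kg(0.5, 0.2)
print(round(flux, 1))  # ~3.2 kg/year
```

Scaling such point estimates over monitored flow records is what yields the 17-97 kg/year range reported during remediation.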
Procedia PDF Downloads 238
7898 Review on the Role of Sustainability Techniques in Development of Green Building
Authors: Ubaid Ur Rahman, Waqar Younas, Sooraj Kumar Chhabira
Abstract:
Environmentally sustainable building construction has experienced significant growth internationally during the past 10 years. This paper shows how a conceptual framework adopts sustainability techniques in construction to develop environmentally friendly buildings, known as green buildings. Waste generated during the different construction phases causes environmental problems: waste deposited on the ground surface creates nuisances such as bad smells, contributes to various health problems, and produces toxic agents that render soil infertile. Recycled material from old buildings can be used in the construction of new ones. Sustainable construction is economical and conserves energy resources. Sustainable construction is the joint responsibility of the designer and the project manager: the designer has to fulfil the client's demands while keeping the design environmentally friendly, and the project manager has to deliver and execute the construction according to the sustainable design. Steel is a particularly appropriate sustainable construction material: it is durable and easily recyclable, occupies less area, and has higher tensile and compressive strength than concrete, making it a better option for sustainable construction than other building materials. New technology such as the green roof has made the environment more pleasant and has reduced construction cost, minimizing economic, social and environmental issues. This paper presents an overview of research related to the materials used in green building and, drawing on this research, makes recommendations that can be followed in the construction industry, including a detailed analysis of construction materials.
By making suitable adjustments to project management practices, it is shown that a green building improves the cost efficiency of the project, makes it environmentally friendly, and meets future generations' demands.
Keywords: sustainable construction, green building, recycled waste material, environment
Procedia PDF Downloads 244
7897 Evaluating Structural Crack Propagation Induced by Soundless Chemical Demolition Agent Using an Energy Release Rate Approach
Authors: Shyaka Eugene
Abstract:
The efficient and safe demolition of structures is a critical challenge in civil engineering and construction. This study focuses on the development of optimal demolition strategies by investigating the crack propagation behavior in beams induced by soundless cracking agents. These agents are commonly used in controlled demolition and have gained prominence due to their non-explosive and environmentally friendly nature. This research employs a comprehensive experimental and computational approach to analyze crack initiation, propagation, and eventual failure in beams subjected to soundless cracking agents. Experimental testing involves the application of various cracking agents under controlled conditions to understand their effects on the structural integrity of beams. High-resolution imaging and strain measurements are used to capture the crack propagation process. In parallel, numerical simulations are conducted using advanced finite element analysis (FEA) techniques to model crack propagation in beams, considering parameters such as cracking agent composition, loading conditions, and beam properties. The FEA models are validated against experimental results, ensuring their accuracy in predicting crack propagation patterns. The findings of this study provide valuable insights into optimizing demolition strategies, allowing engineers and demolition experts to make informed decisions regarding the selection of cracking agents, their application techniques, and structural reinforcement methods. Ultimately, this research contributes to enhancing the safety, efficiency, and sustainability of demolition practices in the construction industry, reducing environmental impact and ensuring the protection of adjacent structures and the surrounding environment.
Keywords: expansion pressure, energy release rate, soundless chemical demolition agent, crack propagation
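The energy release rate approach named in the title has a standard linear elastic fracture mechanics form: for Mode-I cracking, G = K_I² / E′, with E′ = E in plane stress and E′ = E/(1 − ν²) in plane strain. A minimal sketch follows; the formula is standard LEFM, but the concrete-like input values are illustrative assumptions, not the study's data.

```python
# Hedged sketch: Mode-I energy release rate from the stress intensity factor,
# G = K_I^2 / E' (standard LEFM relation). Input values are illustrative.

def energy_release_rate(K_I, E, nu, plane_strain=True):
    """Return G in J/m^2 given K_I in Pa*sqrt(m), Young's modulus E in Pa,
    and Poisson's ratio nu."""
    E_eff = E / (1 - nu**2) if plane_strain else E  # effective modulus E'
    return K_I**2 / E_eff

# Illustrative concrete-like values: K_I = 1.0 MPa*sqrt(m), E = 30 GPa, nu = 0.2
G = energy_release_rate(1.0e6, 30e9, 0.2)
print(round(G, 1))  # 32.0 J/m^2 under plane strain
```

Comparing G against a material's critical value G_c is the criterion an FEA crack propagation model of this kind typically evaluates at each crack increment.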
Procedia PDF Downloads 61
7896 Strategic Asset Allocation Optimization: Enhancing Portfolio Performance Through PCA-Driven Multi-Objective Modeling
Authors: Ghita Benayad
Abstract:
Asset allocation, which affects the long-term profitability of portfolios by distributing assets to fulfill a range of investment objectives, is the cornerstone of investment management in the dynamic and complicated world of financial markets. This paper offers a technique for optimizing strategic asset allocation with the goal of improving portfolio performance by addressing the inherent complexity and uncertainty of the market through the use of Principal Component Analysis (PCA) in a multi-objective modeling framework. The study's first section starts with a critical evaluation of conventional asset allocation techniques, highlighting how poorly they capture the intricate relationships between assets and the volatile nature of the market. To overcome these challenges, the project proposes a PCA-driven methodology that isolates the important characteristics influencing asset returns by decreasing the dimensionality of the investment universe. This reduction provides a stronger basis for asset allocation decisions by facilitating a clearer understanding of market structures and behaviors. Using a multi-objective optimization model, the project builds on this foundation by taking into account a number of performance metrics at once, including risk minimization, return maximization, and the accomplishment of predetermined investment goals like regulatory compliance or sustainability standards. This model provides a more comprehensive understanding of investor preferences and portfolio performance than conventional single-objective optimization techniques. The PCA-driven multi-objective optimization model is then applied to historical market data, aiming to construct portfolios that perform better under different market conditions.
Compared to portfolios produced by conventional asset allocation methodologies, the results show that portfolios optimized using the proposed method display improved risk-adjusted returns, greater resilience to market downturns, and better alignment with specified investment objectives. The study also looks at the implications of this PCA technique for portfolio management, including the prospect that it might give investors a more advanced framework for navigating financial markets. The findings suggest that by combining PCA with multi-objective optimization, investors may obtain a more strategic and informed asset allocation that is responsive to both market conditions and individual investment preferences. In conclusion, this capstone project advances the field of financial engineering by creating a sophisticated asset allocation optimization model that integrates PCA with multi-objective optimization. In addition to raising questions about the current state of asset allocation, the proposed method of portfolio management opens up new avenues for research and application in the area of investment techniques.
Keywords: asset allocation, portfolio optimization, principal component analysis, multi-objective modelling, financial market
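The two-step pipeline the abstract describes, PCA to de-noise the asset covariance, then a multi-objective score over candidate portfolios, can be sketched as follows. This is a minimal illustration under stated assumptions (synthetic returns, a single scalarised return-minus-risk objective, random long-only weights), not the paper's actual model.

```python
import numpy as np

# Hedged sketch: PCA de-noises the return covariance before portfolios are
# scored on a scalarised risk/return objective. All data are synthetic.
rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.01, size=(500, 8))   # 500 days, 8 assets

cov = np.cov(returns, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)              # eigenvalues ascending
k = 3                                                # keep 3 principal components
top_vecs, top_vals = eigvecs[:, -k:], eigvals[-k:]
cov_denoised = top_vecs @ np.diag(top_vals) @ top_vecs.T

def score(w, lam=5.0):
    """Scalarised multi-objective score: expected return minus lam * variance."""
    return returns.mean(axis=0) @ w - lam * (w @ cov_denoised @ w)

# Search long-only portfolios (weights sum to 1) drawn from a Dirichlet
candidates = rng.dirichlet(np.ones(8), size=2000)
best = candidates[np.argmax([score(w) for w in candidates])]
print(best.round(3))
```

A genuine multi-objective treatment would trace the Pareto front by varying the trade-off weight (here `lam`) or by using an evolutionary solver; the scalarisation above is the simplest stand-in.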
Procedia PDF Downloads 45
7895 Impact of Urbanization on Natural Drainage Pattern in District of Larkana, Sindh Pakistan
Authors: Sumaira Zafar, Arjumand Zaidi
Abstract:
During the past few years, several floods have adversely affected the areas along the lower Indus River. Besides other climate-related anomalies, rapidly increasing urbanization and blockage of natural drains due to siltation or encroachments are two other critical causes that may be responsible for these disasters. Due to the flat topography of the river Indus plains and blockage of natural waterways, drainage of storm water takes time, adversely affecting crop health and soil properties of the area. The Government of Sindh is taking a keen interest in the revival of the natural drainage network in the province and has initiated this work under the Sindh Irrigation and Drainage Authority. In this paper, geospatial techniques are used to analyze land-use/land-cover changes of Larkana district over the past three decades (1980-present) and their impact on the natural drainage system. A satellite-derived Digital Elevation Model (DEM) and topographic sheets (recent and 1950) are used to delineate the natural drainage pattern of the district. The urban land-use map developed in this study is further overlaid on the drainage line layer to identify the critical areas where natural floodwater flows are being inhibited by urbanization. Rainfall and flow data are utilized to identify areas of heavy flow, whereas satellite data including Landsat 7 and Google Earth are used to map previous flood extents and land use/cover of the study area. Alternatives to natural drainage systems are also suggested wherever possible. The output maps of the natural drainage pattern can be used to develop a decision support system for urban planners, Sindh development authorities, and flood mitigation and management agencies.
Keywords: geospatial techniques, satellite data, natural drainage, flood, urbanization
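Delineating a drainage pattern from a DEM typically starts with a flow-direction pass: each cell drains to its steepest-descent neighbour (the D8 scheme). A minimal sketch of that step is below; the 4x4 elevation grid is invented for illustration and the study's actual GIS workflow is not specified in the abstract.

```python
# Hedged sketch of D8 flow direction, the usual first step in delineating
# natural drainage from a DEM. The elevation grid is purely illustrative.

dem = [
    [9, 8, 7, 6],
    [8, 7, 5, 4],
    [7, 6, 4, 2],
    [6, 5, 3, 1],
]

def d8_downslope(dem, r, c):
    """Return (row, col) of the steepest-descent neighbour, or None for a sink."""
    rows, cols = len(dem), len(dem[0])
    best, best_drop = None, 0.0
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if dr == dc == 0:
                continue
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols:
                dist = (dr * dr + dc * dc) ** 0.5   # diagonal neighbours are farther
                drop = (dem[r][c] - dem[rr][cc]) / dist
                if drop > best_drop:
                    best, best_drop = (rr, cc), drop
    return best

print(d8_downslope(dem, 0, 0))  # (1, 1): flows toward the diagonal low corner
```

Accumulating these directions over the whole grid yields flow accumulation, from which channels (the natural drains) are extracted by thresholding.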
Procedia PDF Downloads 505
7894 Adoption of Climate-Smart Agriculture Practices Among Farmers and Its Effect on Crop Revenue in Ethiopia
Authors: Fikiru Temesgen Gelata
Abstract:
Food security, adaptation, and climate change mitigation are all problems that can be addressed simultaneously with Climate-Smart Agriculture (CSA). This study examines the determinants of climate-smart agriculture (CSA) practices among smallholder farmers, aiming to understand the factors guiding adoption decisions and to evaluate the impact of CSA on smallholder farmer income in the study areas. For this study, three-stage sampling techniques were applied to select 230 smallholders randomly. The Mann-Kendall test and a multinomial endogenous switching regression model were used to analyze trends of decrease or increase within long-term temporal data and the impact of CSA on smallholder farmer income, respectively. Findings revealed that education level, household size, land ownership, off-farm income, access to climate information, and contact with extension agents were strongly associated with the adoption of CSA practices. On the contrary, erosion exerted a detrimental impact on all the agricultural practices examined within the study region. Various factors such as farming methods, the size of farms, proximity to irrigated farmlands, availability of extension services, distance to market hubs, and access to weather forecasts were recognized as key determinants influencing the adoption of CSA practices. The multinomial endogenous switching regression model (MESR) revealed that joint adoption of crop rotation and soil and water conservation practices significantly increased farm income by 1,107,245 ETB. The study recommends that counties and governments should prioritize addressing climate change in their development agendas to increase the adoption of climate-smart farming techniques.
Keywords: climate-smart practices, food security, income, MESR, Ethiopia
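The Mann-Kendall test used here for long-term trend detection is simple enough to sketch in full: it counts concordant minus discordant pairs to form the S statistic, then normalises S into a z-score. This minimal version (no tie correction) uses an invented short series, not the study's climate data.

```python
import math

# Hedged sketch of the Mann-Kendall trend test (no tie correction) applied
# to an illustrative series; not the study's actual long-term data.

def mann_kendall(series):
    """Return (S, z) for the Mann-Kendall trend test."""
    n = len(series)
    # S = number of increasing pairs minus number of decreasing pairs
    s = sum(
        (series[j] > series[i]) - (series[j] < series[i])
        for i in range(n - 1)
        for j in range(i + 1, n)
    )
    var_s = n * (n - 1) * (2 * n + 5) / 18
    if s > 0:
        z = (s - 1) / math.sqrt(var_s)
    elif s < 0:
        z = (s + 1) / math.sqrt(var_s)
    else:
        z = 0.0
    return s, z

s, z = mann_kendall([21.1, 21.3, 21.2, 21.6, 21.8, 22.0, 22.1])
print(s, round(z, 2))  # 19 2.7 -> a significant increasing trend at z > 1.96
```

A positive S with |z| above the normal critical value indicates a monotonic trend; real climate series would also need the tie-corrected variance.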
Procedia PDF Downloads 33
7893 The Impact of Environmental Social and Governance (ESG) on Corporate Financial Performance (CFP): Evidence from New Zealand Companies
Authors: Muhammad Akhtaruzzaman
Abstract:
The impact of corporate environmental, social and governance (ESG) performance on financial performance is often difficult to quantify, even though ESG-related theories predict that ESG performance improves the financial performance of a company. This research examines the link between corporate ESG performance and the financial performance of NZX (New Zealand Stock Exchange) listed companies. For this purpose, it utilizes a mixed-methods approach to examine and understand this link. While the quantitative results found no robust evidence of such a link, the qualitative analysis of content data suggests that a strong co-occurrence exists between ESG performance and financial performance. The findings of this research have important implications for policymakers to support higher ESG-performing companies and for management practitioners to develop ESG-related strategies.
Keywords: ESG, financial performance, New Zealand firms, thematic analysis, mixed methods
Procedia PDF Downloads 63
7892 Multi-Objective Optimization (Pareto Sets) and Multi-Response Optimization (Desirability Function) of Microencapsulation of Emamectin
Authors: Victoria Molina, Wendy Franco, Sergio Benavides, José M. Troncoso, Ricardo Luna, José R. Pérez-Correa
Abstract:
Emamectin Benzoate (EB) is a crystalline antiparasitic that belongs to the avermectin family. It is one of the most common treatments used in Chile to control Caligus rogercresseyi in Atlantic salmon. However, sea lice have acquired resistance to EB through exposure to sublethal doses. The low solubility rate of EB and its degradation at the acidic pH of the fish digestive tract are the causes of the slow absorption of EB in the intestine. To protect EB from degradation and enhance its absorption, specific microencapsulation technologies must be developed. Amorphous solid dispersion techniques such as Spray Drying (SD) and Ionic Gelation (IG) seem adequate for this purpose. Recently, Soluplus® (SOL) has been used to increase the solubility rate of several drugs with characteristics similar to EB. In addition, alginate (ALG) is a widely used polymer in IG for biomedical applications. Regardless of the encapsulation technique, the quality of the obtained microparticles is evaluated with the following responses: yield (Y%), encapsulation efficiency (EE%) and loading capacity (LC%). In addition, it is important to know the percentage of EB released from the microparticles in gastric (GD%) and intestinal (ID%) digestions. In this work, we microencapsulated EB with SOL (EB-SD) and with ALG (EB-IG) using SD and IG, respectively. Quality microencapsulation responses and in vitro gastric and intestinal digestions at pH 3.35 and 7.8, respectively, were obtained. A central composite design was used to find the optimum microencapsulation variables (amount of EB, amount of polymer and feed flow). In each formulation, the behavior of these variables was predicted with statistical models. Then, the response surface methodology was used to find the best combination of the factors that allowed a lower EB release under gastric conditions while permitting a greater release at intestinal digestion. Two approaches were used to determine this.
These were the desirability approach (DA) and multi-objective optimization (MOO) with multi-criteria decision making (MCDM). Both microencapsulation techniques maintained the integrity of EB at acidic pH, given the small amount of EB released in the gastric medium, while EB-IG microparticles showed greater EB release at intestinal digestion. For EB-SD, the optimal conditions obtained with MOO plus MCDM yielded a good compromise among the microencapsulation responses. In addition, using these conditions, it is possible to reduce microparticle costs thanks to a 60% reduction in EB relative to the optimal EB amount proposed by DA. For EB-IG, the optimization techniques used (DA and MOO) yielded solutions with different advantages and limitations. Applying DA, costs can be reduced by 21%, while Y, GD and ID showed values 9.5%, 84.8% and 2.6% lower than the best condition. In turn, MOO yielded better microencapsulation responses, but at a higher cost. Overall, EB-SD with the operating conditions selected by MOO seems the best option, since a good compromise between costs and encapsulation responses was obtained.
Keywords: microencapsulation, multiple decision-making criteria, multi-objective optimization, Soluplus®
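The desirability approach named above (Derringer-Suich) maps each response onto [0, 1] and combines them by a geometric mean, so one poor response drags the overall score down. A minimal sketch follows; the target limits and response values are illustrative assumptions, not the study's fitted optima.

```python
# Hedged sketch of the Derringer-Suich desirability approach (DA).
# Limits and response values below are invented for illustration.

def d_larger_is_better(y, low, high, r=1.0):
    """Desirability for a response to maximise (e.g. EE% or ID%)."""
    if y <= low:
        return 0.0
    if y >= high:
        return 1.0
    return ((y - low) / (high - low)) ** r

def d_smaller_is_better(y, low, high, r=1.0):
    """Desirability for a response to minimise (e.g. GD%)."""
    return d_larger_is_better(high - y + low, low, high, r)

def overall(ds):
    """Overall desirability: geometric mean of the individual scores."""
    prod = 1.0
    for d in ds:
        prod *= d
    return prod ** (1 / len(ds))

# Illustrative formulation: EE% = 80 (target 50-95), GD% = 10 (want <= 40),
# ID% = 70 (target 30-90)
D = overall([
    d_larger_is_better(80, 50, 95),
    d_smaller_is_better(10, 0, 40),
    d_larger_is_better(70, 30, 90),
])
print(round(D, 3))  # 0.693
```

Optimising D over the design factors (amount of EB, polymer, feed flow) is what the central composite design plus response surface step then does; MOO instead keeps the responses separate and reports Pareto sets.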
Procedia PDF Downloads 130
7891 Brain-Computer Interface System for Lower Extremity Rehabilitation of Chronic Stroke Patients
Authors: Marc Sebastián-Romagosa, Woosang Cho, Rupert Ortner, Christy Li, Christoph Guger
Abstract:
Neurorehabilitation based on Brain-Computer Interfaces (BCIs) shows important rehabilitation effects for patients after stroke. Previous studies have shown improvements for patients who are in a chronic stage and/or have severe hemiparesis, cases that are particularly challenging for conventional rehabilitation techniques. For this publication, seven stroke patients in the chronic phase with hemiparesis in the lower extremity were recruited. All of them participated in 25 BCI sessions, about 3 times a week. The BCI system was based on Motor Imagery (MI) of the paretic ankle dorsiflexion and healthy wrist dorsiflexion with Functional Electrical Stimulation (FES) and avatar feedback. Assessments were conducted to evaluate the changes in motor function before, after and during the rehabilitation training. The primary measures used for the assessment were the 10-meter walking test (10MWT), Range of Motion (ROM) of the ankle dorsiflexion, and Timed Up and Go (TUG). Results show a significant increase in gait speed in the primary measure, 10MWT fast velocity, of 0.18 m/s (IQR = [0.12 to 0.2], P = 0.016). The speed in the TUG was also significantly increased, by 0.1 m/s (IQR = [0.09 to 0.11], P = 0.031). The active ROM increased by 4.65° (IQR = [1.67 to 7.4], P = 0.029) after rehabilitation training. These functional improvements persisted at least one month after the end of the therapy. These outcomes show the feasibility of this BCI approach for chronic stroke patients and further support the growing consensus that these types of tools might develop into a new paradigm of rehabilitation tools for stroke patients. However, the results are from only seven chronic stroke patients, so the authors believe that this approach should be further validated in broader randomized controlled studies involving more patients. MI and FES-based non-invasive BCIs are showing improvement in the gait rehabilitation of patients in the chronic stage after stroke.
This could have an impact on the rehabilitation techniques used for these patients, especially when they are severely impaired and their mobility is limited.
Keywords: neuroscience, brain computer interfaces, rehabilitation, stroke
Procedia PDF Downloads 91
7890 Convolutional Neural Network Based on Random Kernels for Analyzing Visual Imagery
Authors: Ja-Keoung Koo, Kensuke Nakamura, Hyohun Kim, Dongwha Shin, Yeonseok Kim, Ji-Su Ahn, Byung-Woo Hong
Abstract:
Machine learning techniques based on convolutional neural networks (CNNs) have been actively developed and successfully applied to a variety of image analysis tasks including reconstruction, noise reduction, resolution enhancement, segmentation, motion estimation, and object recognition. Classical visual information processing, ranging from low-level to high-level tasks, has been widely developed in the deep learning framework. It is generally considered a challenging problem to derive visual interpretation from high-dimensional imagery data. A CNN is a class of feed-forward artificial neural network that usually consists of deep layers whose connections are established by a series of non-linear operations. The CNN architecture is known to be shift invariant due to its shared weights and translation invariance characteristics. However, it is often computationally intractable to optimize the network, in particular with a large number of convolution layers, due to the large number of unknowns to be optimized with respect to the training set, which is generally required to be large enough to effectively generalize the model under consideration. It is also necessary to limit the size of convolution kernels due to the computational expense, despite the recent development of effective parallel processing machinery, which leads to the use of consistently small convolution kernels throughout the deep CNN architecture. However, it is often desirable to consider different scales in the analysis of visual features at different layers in the network. Thus, we propose a CNN model where different sizes of convolution kernels are applied at each layer based on random projection. We apply random filters with varying sizes and associate the filter responses with scalar weights that correspond to the standard deviation of the random filters. This allows us to use a large number of random filters at the cost of one scalar unknown per filter.
The computational cost in the back-propagation procedure does not increase with the larger size of the filters, even though additional computational cost is required for the computation of convolution in the feed-forward procedure. The use of random kernels with varying sizes allows image features to be analyzed effectively at multiple scales, leading to better generalization. The robustness and effectiveness of the proposed CNN based on random kernels are demonstrated by numerical experiments in which well-known CNN architectures are quantitatively compared against our models, which simply replace the convolution kernels with random filters. The experimental results indicate that our model achieves better performance with fewer unknown weights. The proposed algorithm has high potential for application to a variety of visual tasks based on the CNN framework. Acknowledgement—This work was supported by the MISP (Ministry of Science and ICT), Korea, under the National Program for Excellence in SW (20170001000011001) supervised by IITP, and NRF-2014R1A2A1A11051941, NRF2017R1A2B4006023.
Keywords: deep learning, convolutional neural network, random kernel, random projection, dimensionality reduction, object recognition
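The core idea, fixed random filters of several sizes, each paired with a single trainable scalar, can be sketched in a few lines. This is a minimal illustration of a forward pass only (synthetic input, scalars initialised to each filter's standard deviation as the abstract describes), not the authors' full training setup.

```python
import numpy as np

# Hedged sketch: one "random kernel" layer with varying filter sizes. The
# filters are fixed; only the per-filter scalar would be trained. Synthetic data.
rng = np.random.default_rng(1)

def conv2d_valid(img, kernel):
    """Plain 'valid' 2D cross-correlation, loop-based for clarity."""
    kh, kw = kernel.shape
    H, W = img.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

img = rng.normal(size=(16, 16))
layers = []
for size in (3, 5, 7):                 # varying kernel sizes within one layer
    k = rng.normal(size=(size, size))   # fixed random filter (never trained)
    alpha = k.std()                     # the single trainable scalar per filter
    layers.append(alpha * conv2d_valid(img, k))

print([f.shape for f in layers])  # [(14, 14), (12, 12), (10, 10)]
```

Because only the scalars enter back-propagation, the trainable parameter count is one per filter regardless of filter size, which is exactly the cost argument the abstract makes.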
Procedia PDF Downloads 285
7889 Exploitation of Technology by the Tshwane Residence for Tourism Development Purposes
Authors: P. P. S. Sifolo, P. Tladi, J. Maimela
Abstract:
This article investigates the technology used by Tshwane residents for tourism purposes. The aim is to provide information to interested parties in Tshwane for planning and management concerning technology within the tourism sector. The study identified the types of tourism-related technologies used by Tshwane residents, whether for business or personal use, and connected the exploitation of technology for tourism purposes to how the tourism sector itself utilizes technology. A quantitative research methodology was used, whereby self-completed questionnaires were chosen as research instruments. The study revealed that technology has shaped tourism massively because of its effectiveness and efficiency. Technology has helped tourism businesses stay abreast of competition through ICT, and because of that, South Africa is on the map as one of the economically strong performers in Africa. Moreover, technology and tourism make a meaningful impact on job creation and Gross Domestic Product (GDP).
Keywords: tourism, information and communication technology, Tshwane residents, technology for tourism
Procedia PDF Downloads 388
7888 Employee Branding: An Exploratory Study Applied to Nurses in an Organization
Authors: Pawan Hinge, Priya Gupta
Abstract:
Due to cutting-edge competition between organizations and the war for talent, the workforce as an asset is gaining significance. Employees are considered the brand ambassadors of an organization, and their interactions with clients and customers might impact, directly or indirectly, the overall value of the organization. Especially for organizations in the healthcare industry, the value of the organization as perceived by its employees can serve as a revenue-generating and talent-retention strategy. In such a context, it is essential to understand that brand awareness among employees can affect employer brand image and brand value, since the brand ambassadors are the interface between the organization and its customers and clients. In this exploratory study, we have adopted both quantitative and qualitative approaches for data analysis. Our study shows existing variation among nurses working in different business units of the same organization in terms of their customer interface or interactions and brand awareness.
Keywords: brand awareness, brand image, brand value, customer interface
Procedia PDF Downloads 284
7887 Spectroscopy and Electron Microscopy for the Characterization of CdSxSe1-x Quantum Dots in a Glass Matrix
Authors: C. Fornacelli, P. Colomban, E. Mugnaioli, I. Memmi Turbanti
Abstract:
When semiconductor particles are reduced in scale to nanometer dimensions, their optical and electro-optical properties differ strongly from those of bulk crystals of the same composition. Since sampling of cultural heritage artefacts is often not allowed, the potential of two non-invasive techniques, Raman and Fiber Optic Reflectance Spectroscopy (FORS), has been investigated, and the results of the analysis of some original glasses of different colours (from yellow to orange and deep red) and periods (from the second decade of the 20th century to the present day) are reported in the present study. In order to evaluate the potential of non-invasive techniques for the investigation of the structure and distribution of nanoparticles dispersed in a glass matrix, Scanning Electron Microscopy (SEM) and energy-dispersive spectroscopy (EDS) mapping, together with Transmission Electron Microscopy (TEM) and Electron Diffraction Tomography (EDT), have also been used. Raman spectroscopy allows a fast and non-destructive measure of the quantum dot composition and size, thanks to the evaluation of the frequencies and the broadening/asymmetry of the LO phonon bands, respectively, though the important role of the compressive strain arising from the glass matrix and the possible diffusion of zinc from the matrix to the nanocrystals should be taken into account when considering the optical-phonon frequency values. The incorporation of Zn has been inferred from an upward shift of the LO band related to the most abundant anion (S or Se), while the role of surface phonons, as well as the confinement-induced scattering by phonons with non-zero wavevectors, in the Raman peak broadening has been verified. The optical band gap varies from 2.42 eV (pure CdS) to 1.70 eV (CdSe).
For the compositional range 0.2 ≤ x ≤ 0.5, the presence of two absorption edges has been related to the contribution of both pure CdS and the CdSxSe1-x solid solution; this particular feature is probably due to the presence of unaltered cubic zinc blende structures of CdS that do not take part in the formation of the solid solution, which occurs only between hexagonal CdS and CdSe. Moreover, the band edge tailing originating from the disorder due to the formation of weak bonds, characterized by the Urbach edge energy, has been studied and, together with the FWHM of the Raman signal, has been taken as a good parameter to evaluate the degree of topological disorder. SEM-EDS mapping showed a peculiar distribution of the major constituents of the glass matrix (fluxes and stabilizers), especially concerning those samples where a layered structure had been assumed on the basis of the spectroscopic study. Finally, TEM-EDS and EDT were used to obtain high-resolution information about nanocrystals (NCs) and heterogeneous glass layers. The presence of ZnO NCs (< 4 nm) dispersed in the matrix has been verified for most of the samples, while, for those samples where disorder due to a more complex distribution of the size and/or composition of the NCs had been assumed, TEM clearly verified most of the assumptions made by the spectroscopic techniques.
Keywords: CdSxSe1-x, EDT, glass, spectroscopy, TEM-EDS
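The composition dependence of the band gap between the two endpoint values quoted above can be illustrated with a simple Vegard-like linear interpolation. This is a deliberately simplified assumption: real CdSxSe1-x alloys show band-gap bowing, which is ignored in this sketch.

```python
# Hedged sketch: linear (Vegard-like) interpolation of the CdSxSe1-x band
# gap between the endpoint values quoted in the abstract. Real alloys show
# bowing, which this simple model ignores.

E_CDS, E_CDSE = 2.42, 1.70  # band gaps in eV for pure CdS and pure CdSe

def band_gap(x):
    """Approximate gap (eV) for CdSxSe1-x; x is the S fraction, 0 <= x <= 1."""
    return x * E_CDS + (1 - x) * E_CDSE

print(round(band_gap(0.5), 2))  # 2.06 eV at x = 0.5 under the linear assumption
```

Inverting this relation is one way a measured absorption edge can be turned into a composition estimate, which is how optical data complement the Raman-based composition measure.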
Procedia PDF Downloads 298
7886 Awareness about Work-Related Hazards Causing Musculoskeletal Disorders
Authors: Bintou Jobe
Abstract:
Musculoskeletal disorders (MSDs) are injuries or disorders of the muscles, tendons, ligaments and joints, such as spinal disc injuries, muscle strains, and low back injuries. They remain a major cause of occupational illness. Findings: Due to poor grips during handling, the neck, shoulders, arms, knees, ankles, fingers, waist, lower back, and other muscle joints can be affected. Pregnant women are more prone because of physical and hormonal changes, which lead to the relaxation of supporting ligaments. MSDs continue to pose a global concern due to their impact on workers worldwide. The prevalence of these disorders is high, according to research into the workforce in Europe and in developing countries. The causes are characterized by long working hours, insufficient rest breaks, poor posture, repetitive motion, poor manual handling techniques, psychological stress, and poor nutrition. Prevention of MSDs mainly involves avoiding and assessing risk by design; however, clinical solutions, policy governance, and minimizing manual labour are also alternatives. In addition, eating a balanced diet and a strong teamwork culture are key elements in minimising the risk. This review aims to raise awareness, promote cost-effective prevention and understanding of MSDs through research, and identify proposed solutions that address the underlying causes of MSDs in the construction sector. The methodology involves a literature review approach, engaging with the policy landscape of MSDs and synthesising publications on MSDs alongside a wider range of academic publications. In conclusion, training on effective manual handling techniques should be provided, and Personal Protective Equipment should be a last resort. The implementation of training guidelines has yielded significant benefits.
Keywords: work-related musculoskeletal disorder, MSD, manual handling, work hazards
Procedia PDF Downloads 58
7885 Emotion Regulation Mediates the Relationship between Affective Disposition and Depression
Authors: Valentina Colonnello, Paolo Maria Russo
Abstract:
Studies indicate a link between individual differences in affective disposition and depression, as well as between emotion dysregulation and depression. However, the specific role of emotion dysregulation domains in mediating the relationship between affective disposition and depression remains largely unexplored. In three cross-sectional quantitative studies (total n = 1350), we explored the extent to which specific emotion regulation difficulties mediate the relationship between depression and personal distress disposition (Study 1), separation distress as a primary emotional trait (Study 2), and an insecure, anxious attachment style (Study 3). Across all studies, we found that the relationship between affective disposition and depression was mediated by difficulties in accessing adaptive emotion regulation strategies. These findings underscore the potential for modifiable abilities that could be targeted through preventive interventions.
Keywords: emotions, mental health, individual traits, personality
Procedia PDF Downloads 66
7884 Gender Inequality in the Nigerian Labour Market as a Cause of Unemployment among Female Graduates
Authors: Temitope Faloye
Abstract:
The absence of equity and transparency in Nigeria's economic system has resulted in unemployment. Women's unemployment rate remains higher because the range of jobs open to women is often narrower due to discriminatory attitudes of employers and gender segregation in the labor market. Gender inequality is one of the strong drivers of unemployment, especially in developing countries like Nigeria, where women are marginalized in the labor market. However, gender equality in terms of labor market access and employment conditions has not yet been attained. Feminist theory is considered an appropriate framework for this study. The study will use a mixed-method design, collecting qualitative and quantitative data to answer the research questions. The research study therefore aims to investigate the present situation of gender inequality in the Nigerian labor market.
Keywords: unemployment, gender inequality, gender equality, labor market, female graduate
Procedia PDF Downloads 239