Search results for: scale invariant feature
5414 Mathematical Model and Algorithm for the Berth and Yard Resource Allocation at Seaports
Authors: Ming Liu, Zhihui Sun, Xiaoning Zhang
Abstract:
This paper studies a deterministic container transportation problem, jointly optimizing berth allocation, quay crane assignment and yard storage allocation at container ports. The problem is first formulated as an integer program that coordinates the three decisions. Because of its large scale, it is then transformed into a set partitioning formulation, and a branch-and-price framework is provided to solve it.
Keywords: branch-and-price, container terminal, joint scheduling, maritime logistics
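A generic set partitioning master problem of the kind used in branch-and-price (the paper's exact column definition may differ) can be written as:

```latex
\min \sum_{k \in \Omega} c_k x_k
\quad \text{s.t.} \quad
\sum_{k \in \Omega} a_{ik} x_k = 1 \;\; \forall i \in V,
\qquad x_k \in \{0,1\} \;\; \forall k \in \Omega,
```

where each column $k \in \Omega$ is a feasible joint berth-crane-yard plan with cost $c_k$, $a_{ik} = 1$ if plan $k$ serves vessel $i$, and $V$ is the set of vessels. The pricing subproblem generates columns of negative reduced cost, and branching restores integrality when the LP relaxation is fractional.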
Procedia PDF Downloads 293

5413 Electrospun NaMnPO₄/CNF as High-Performance Cathode Material for Sodium Ion Batteries
Authors: Concetta Busacca, Leone Frusteri, Orazio Di Blasi, Alessandra Di Blasi
Abstract:
The large-scale expansion of renewable energy has recently led to the development of efficient and low-cost electrochemical energy storage (EES) systems such as batteries. Although lithium-ion battery (LIB) technology is relatively mature, several issues regarding safety, cyclability, and high costs must be overcome. Thanks to the availability and low cost of sodium, sodium-ion batteries (SIBs) have the potential to meet the energy storage needs of the large-scale grid, becoming a valid alternative to LIBs in some energy sectors, such as the stationary one. However, important challenges such as low specific energy and short cycle life, due to the large radius of Na⁺, must be faced to bring this technology to market. As an important component of SIBs, the cathode material has a significant effect on their electrochemical performance. Recently, sodium layered transition metal oxides, phosphates, and organic compounds have been investigated as cathode materials for SIBs. In particular, phosphate-based compounds such as NaₓMPO₄ (M = Fe, Co, Mn) have been extensively studied as polyanionic cathode materials due to their long cycle stability and appropriate operating voltage. Among these, an interesting cathode material is the NaMnPO₄-based one, thanks to the stability and the high redox potential of the Mn²⁺/Mn³⁺ couple (3 to 4 V vs. Na⁺/Na), which allows a high energy density to be reached. This work concerns the synthesis of a composite material based on NaMnPO₄ and carbon nanofibers (NaMnPO₄-CNF), characterized by a mixed crystalline structure between the maricite and olivine phases and a self-standing form obtained by the electrospinning technique.
The material was tested in a Na-ion coin cell in half-cell configuration and showed outstanding electrochemical performance, with specific discharge capacities of 125 mAh g⁻¹ and 101 mAh g⁻¹ at 0.3C and 0.6C, respectively, and a capacity retention of about 80% at 0.6C after 100 cycles.
Keywords: electrospinning, self-standing materials, Na-ion battery, cathode materials
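As a sanity check on figures like those above, specific discharge capacity follows from the galvanostatic relation Q = I·t/m. The electrode mass and current below are assumed values, chosen only so the arithmetic reproduces the quoted 125 mAh g⁻¹; they are not from the abstract.

```python
def specific_capacity_mAh_per_g(current_mA, time_h, mass_g):
    """Specific discharge capacity Q = I * t / m, in mAh per gram."""
    return current_mA * time_h / mass_g

# Illustrative: a 2 mg electrode discharged at 0.25 mA for 1 h
q = specific_capacity_mAh_per_g(0.25, 1.0, 0.002)   # 125 mAh/g

# ~80% retention of the 0.6C capacity (101 mAh/g) after 100 cycles
retained = 0.80 * 101.0
```

The same relation is what a battery cycler's software applies per cycle; only the assumed mass and current change.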
5412 Changing the Landscape of Fungal Genomics: New Trends
Authors: Igor V. Grigoriev
Abstract:
Understanding the biological processes encoded in fungi is instrumental in addressing the future food, feed, and energy demands of the growing human population. Genomics is a powerful and quickly evolving tool for understanding these processes. The Fungal Genomics Program of the US Department of Energy Joint Genome Institute (JGI) partners with researchers around the world to explore fungi in several large-scale genomics projects, changing the fungal genomics landscape. The key trends of these changes include: (i) the rapidly increasing scale of sequencing and analysis, (ii) developing approaches to go beyond culturable fungi and explore fungal ‘dark matter,’ or unculturables, and (iii) functional genomics and multi-omics data integration. The power of comparative genomics has recently been demonstrated in several JGI projects targeting mycorrhizae, plant pathogens, wood decay fungi, and sugar-fermenting yeasts. The largest JGI project, ‘1000 Fungal Genomes,’ aims at exploring the diversity across the Fungal Tree of Life in order to better understand fungal evolution and to build a catalogue of genes, enzymes, and pathways for biotechnological applications. At this point, at least 65% of over 700 known families have one or more reference genomes sequenced, enabling metagenomics studies of microbial communities and their interactions with plants. For many of the remaining families, no representative species are available from culture collections. To sequence the genomes of unculturable fungi, two approaches have been developed: (a) sequencing DNA from the fruiting bodies of ‘macro’ fungi and (b) single-cell genomics using fungal spores. The latter has been tested using zoospores from the early diverging fungi and has resulted in several near-complete genomes from underexplored branches of the Fungal Tree, including the first genomes of Zoopagomycotina. The genome sequence serves as a reference for transcriptomics studies, the first step towards functional genomics.
In the JGI fungal mini-ENCODE project, transcriptomes of the model fungus Neurospora crassa grown on a spectrum of carbon sources have been collected to build regulatory gene networks. Epigenomics is another tool for understanding gene regulation, and recently introduced single-molecule sequencing platforms not only provide better genome assemblies but can also detect DNA modifications. For example, the 6mC methylome was surveyed across many diverse fungi, and the highest levels of 6mC methylation among Eukaryota were reported. Finally, data production at such scale requires data integration to enable efficient analysis. Over 700 fungal genomes and other -omes have been integrated into the JGI MycoCosm portal and equipped with comparative genomics tools to enable researchers to address a broad spectrum of biological questions and applications for bioenergy and biotechnology.
Keywords: fungal genomics, single cell genomics, DNA methylation, comparative genomics
5411 3-D Modeling of Particle Size Reduction from Micro to Nano Scale Using Finite Difference Method
Authors: Himanshu Singh, Rishi Kant, Shantanu Bhattacharya
Abstract:
This paper adopts a top-down approach to mathematical modeling to predict the size reduction from micro to nano-scale through persistent etching. The process is simulated using a finite difference approach. Previously, various researchers have simulated the etching process for 1-D and 2-D substrates. The process consists of two sub-processes: (1) convection-diffusion in the etchant domain, and (2) chemical reaction at the surface of the particle. Since the process requires analysis along a moving boundary, the partial differential equations involved cannot be solved using conventional methods. In 1-D, this problem is very similar to Stefan's moving ice-water boundary problem. A fixed-grid method using the finite volume method is very popular for modelling etching on one- and two-dimensional substrates; other popular approaches include the moving grid method and the level set method. In this work, the finite difference method was used to discretize the spherical diffusion equation. Due to the symmetrical distribution of the etchant, the angular terms in the equation can be neglected. The concentration is assumed to be constant at the outer boundary. At the particle boundary, the etchant concentration is assumed to be zero, since the rate of reaction is much faster than the rate of diffusion. The rate of reaction is proportional to the velocity of the particle's moving boundary. Modelling of the above reaction was carried out using Matlab. The initial particle size was taken to be 50 microns. The density, molecular weight and diffusion coefficient of the substrate were taken as 2.1 g/cm³, 60 and 10⁻⁵ cm²/s, respectively. The etch rate was found to decline initially before becoming constant at 0.02 µm/s (1.2 µm/min). The concentration profile was plotted in space at different time intervals. Initially, a sudden drop is observed at the particle boundary due to the high etch rate.
This change becomes more gradual with time as the etch rate declines.
Keywords: particle size reduction, micromixer, FDM modelling, wet etching
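The fixed-grid, finite-difference treatment of a moving etch boundary described above can be sketched in a minimal 1-D form (planar rather than spherical for brevity; grid size, diffusivity and the reaction constant are illustrative, not the paper's parameters):

```python
def etch_front(D=1e-5, beta=0.5, L=1.0, n=50, steps=2000):
    """Toy moving-boundary etch model: explicit FTCS diffusion in the
    etchant region x in (s, L), zero surface concentration (reaction much
    faster than diffusion), and a front receding at a rate proportional
    to the surface flux, as in a Stefan-type problem."""
    dx = L / n
    dt = 0.4 * dx * dx / D          # satisfies FTCS stability (dt <= dx^2 / 2D)
    c = [0.0] * (n + 1)
    c[n] = 1.0                      # normalised bulk etchant concentration
    s = 0.5 * L                     # initial particle surface position
    for _ in range(steps):
        i0 = int(s / dx) + 1        # first fluid node right of the front
        new = c[:]
        for i in range(i0 + 1, n):  # diffusion update on fluid nodes
            new[i] = c[i] + D * dt / dx ** 2 * (c[i + 1] - 2 * c[i] + c[i - 1])
        new[i0] = 0.0               # fast-reaction boundary condition
        new[n] = 1.0                # constant outer-boundary concentration
        c = new
        grad = (c[i0 + 1] - c[i0]) / dx
        s -= beta * D * grad * dt   # front velocity proportional to flux
    return s, c

s_final, conc = etch_front()
```

The front barely moves until the diffusing etchant reaches it, then recedes at a rate set by the surface gradient, mirroring the initially declining etch rate reported above.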
5410 A Clustering-Based Approach for Weblog Data Cleaning
Authors: Amine Ganibardi, Cherif Arab Ali
Abstract:
This paper addresses the data cleaning issue as a part of web usage data preprocessing within the scope of Web Usage Mining. Weblog data recorded by web servers within log files reflect usage activity, i.e., end-users’ clicks and the underlying user-agents’ hits. As Web Usage Mining is interested in end-users’ behavior, user-agents’ hits are regarded as noise to be cleaned off before mining. Filtering hits from clicks is not trivial for two reasons: (i) a server records requests interlaced in sequential order regardless of their source or type, and (ii) website resources may be set up to be requestable interchangeably by end-users and user-agents. The current methods are content-centric, based on filtering heuristics of relevant/irrelevant items in terms of some cleaning attributes, i.e., the website’s resource filetype extensions, the resources pointed to by hyperlinks/URIs, HTTP methods, user-agents, etc. These methods need exhaustive extra-weblog data and prior knowledge of the relevant and/or irrelevant items to be assumed as clicks or hits within the filtering heuristics. Such methods are not appropriate for the dynamic/responsive Web for three reasons: (i) resources may be set up as clickable by end-users regardless of their type, (ii) website resources are indexed by frame names without filetype extensions, and (iii) web contents are generated and cancelled differently from one end-user to another. In order to overcome these constraints, a clustering-based cleaning method centered on the logging structure is proposed. This method focuses on the statistical properties of the logging structure at the level of the requested and referring resources attributes. It is insensitive to logging content and does not need extra-weblog data. The statistical property used captures the structure of the log entries generated by webpage requests in terms of clicks and hits.
Since a webpage consists of a single URI and several components, this feature results in a single-click-to-multiple-hits ratio in terms of the requested and referring resources. Thus, the clustering-based method is meant to identify two clusters by applying an appropriate distance to the frequency matrix of the requested and referring resources. As the clicks-to-hits ratio is single to multiple, the clicks cluster is the smaller one in number of requests. Hierarchical agglomerative clustering based on a pairwise distance (Gower) and average linkage has been applied to four logfiles of dynamic/responsive websites whose click-to-hits ratios range from 1/2 to 1/15. The optimal clustering, set on the basis of average linkage and maximum inter-cluster inertia, always results in two clusters. The evaluation of the smallest cluster, referred to as the clicks cluster, in terms of confusion matrix indicators results in a 97% true positive rate. The content-centric cleaning methods, i.e., conventional and advanced cleaning, resulted in a lower rate of 91%. Thus, the proposed clustering-based cleaning outperforms the content-centric methods for dynamic and responsive web design without the need for any extra-weblog data. Such an improvement in cleaning quality is likely to refine dependent analyses.
Keywords: clustering approach, data cleaning, data preprocessing, weblog data, web usage data
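The core idea — that clicks and hits separate into two frequency clusters, the smaller of which (by request volume) is the clicks cluster — can be illustrated with a toy stand-in. This sketch uses a simple two-means split on log request counts rather than the paper's Gower-distance hierarchical clustering, and the URLs and counts are invented:

```python
import math

def split_clicks_hits(counts):
    """Toy 1-D two-means on log request counts; the cluster with the
    smaller total request volume is taken as the clicks cluster."""
    xs = {r: math.log(c) for r, c in counts.items()}
    c1, c2 = min(xs.values()), max(xs.values())
    for _ in range(20):                       # Lloyd-style iterations
        g1 = {r for r, x in xs.items() if abs(x - c1) <= abs(x - c2)}
        g2 = set(xs) - g1
        if g1:
            c1 = sum(xs[r] for r in g1) / len(g1)
        if g2:
            c2 = sum(xs[r] for r in g2) / len(g2)
    t1 = sum(counts[r] for r in g1)
    t2 = sum(counts[r] for r in g2)
    return (g1, g2) if t1 <= t2 else (g2, g1)  # (clicks, hits)

# Invented counts: pages requested a few times, embedded components many times
reqs = {"/home": 3, "/about": 2, "/style.css": 40, "/logo.png": 38, "/app.js": 41}
clicks, hits = split_clicks_hits(reqs)
```

In a real pipeline the frequency matrix would be built over the requested and referring resource attributes, and the clustering replaced by the average-linkage method the abstract describes.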
5409 Parkinson's Disease Gene Identification Using Physicochemical Properties of Amino Acids
Authors: Priya Arora, Ashutosh Mishra
Abstract:
Gene identification, in pursuit of the mutated genes leading to Parkinson’s disease, poses a challenge for proactive treatment of the disorder itself. Computational analysis is an effective technique for exploring genes in the form of protein sequences, as theoretical and manual analysis is infeasible. The limitations and effectiveness of a particular computational method depend entirely on the data previously available for disease identification. This article presents a sequence-based classification method for the identification of genes responsible for Parkinson’s disease. In the initial phase, the physicochemical properties of amino acids transform protein sequences into a feature vector. The second phase employs Jaccard distances to select negative genes from the candidate population. The third phase uses artificial neural networks to make the final predictions. The proposed approach is compared with state-of-the-art methods on the basis of F-measure. The results confirm the efficiency of the method.
Keywords: disease gene identification, Parkinson’s disease, physicochemical properties of amino acids, protein sequences
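The first two phases above can be sketched compactly. The property scale shown is the Kyte-Doolittle hydropathy index for a subset of residues (a full implementation would cover all 20 amino acids and several property scales), and the domain sets in the Jaccard example are invented:

```python
# Kyte-Doolittle hydropathy values for a subset of residues (illustrative)
HYDROPATHY = {"A": 1.8, "G": -0.4, "I": 4.5, "L": 3.8, "K": -3.9, "V": 4.2}

def feature_vector(seq, scales=(HYDROPATHY,)):
    """Phase 1: encode a protein sequence as the mean of each property scale."""
    return [sum(scale[aa] for aa in seq) / len(seq) for scale in scales]

def jaccard_distance(a, b):
    """Phase 2: Jaccard distance between two feature sets, usable for
    selecting negative genes far from the known positives."""
    return 1.0 - len(a & b) / len(a | b)

v = feature_vector("AGILKV")                      # mean hydropathy of a toy sequence
d = jaccard_distance({"dom1", "dom2"}, {"dom2", "dom3"})
```

The resulting vectors would then feed the artificial neural network of the third phase.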
5408 Numerical Study of a Ventilation Principle Based on Flow Pulsations
Authors: Amir Sattari, Mac Panah, Naeim Rashidfarokhi
Abstract:
To enhance the mixing of fluid in a rectangular enclosure with a circular inlet and outlet, an energy-efficient approach is further investigated through computational fluid dynamics (CFD). Particle image velocimetry (PIV) measurements confirm that pulsation of the inflow velocity considerably improves the mixing performance inside the enclosure without increasing energy consumption. In this study, multiple CFD simulations with different turbulence models were performed, and the results obtained were compared with the experimental PIV results. The study investigates small-scale representations of flow patterns in a ventilated rectangular room, with the objective of validating the concept of an energy-efficient ventilation strategy offering improved thermal comfort and a reduction of stagnant air inside the room. Experimental and simulated results confirm that, through pulsation of the inflow velocity, strong secondary vortices are generated downstream of the entrance wall-jet. The pulsatile inflow profile promotes a periodic generation of vortices with stronger eddies despite a relatively low inlet velocity, which leads to a larger boundary layer with increased kinetic energy in the occupied zone. A real-scale study was not conducted; however, it can be concluded that a constant-velocity inflow profile can be replaced with a lower pulsated flow-rate profile while preserving the mixing efficiency. Among the turbulence models demonstrated in this study, SST k-ω is the most advantageous, exhibiting a global airflow pattern similar to that of the experiments. The detailed near-wall velocity profile is utilized to identify the wall-jet instabilities that consist of mixing and boundary layers. The SAS method was later applied to predict the turbulent parameters in the center of the domain. In both cases, the predictions are in good agreement with the measured results.
Keywords: CFD, PIV, pulsatile inflow, ventilation, wall-jet
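The equal-mean property that lets a constant inflow be replaced by a pulsed one without extra energy can be sketched with a sinusoidally modulated inlet velocity (the amplitude and frequency below are assumed values, not those of the study):

```python
import math

def pulsatile_inflow(u0, amplitude, freq_hz, t):
    """Inlet velocity u(t) = u0 * (1 + A * sin(2*pi*f*t)); averaged over a
    full period this equals the constant-inflow value u0."""
    return u0 * (1.0 + amplitude * math.sin(2.0 * math.pi * freq_hz * t))

# Mean over one period matches the steady profile (A = 0.8, f = 0.5 Hz assumed)
n = 1000
period = 1.0 / 0.5
mean_u = sum(pulsatile_inflow(1.0, 0.8, 0.5, i * period / n)
             for i in range(n)) / n
```

In a CFD setup, such a function would be imposed as the time-dependent inlet boundary condition while keeping the cycle-averaged flow rate, and hence the fan energy budget, unchanged.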
5407 Sustainable Crop Mechanization among Small Scale Rural Farmers in Nigeria: The Hurdles
Authors: Charles Iledun Oyewole
Abstract:
The daunting challenges that ‘the man with the hoe’ will face in the coming decades are complex and interwoven. With the global population already above 7 billion people, it has been estimated that food (crop) production must more than double by 2050 to meet the world’s food requirements. Nigeria’s population is also expected to reach over 240 million people by 2050 at the current annual population growth rate of 2.61 per cent. The country’s farming population is estimated at over 65 per cent, but the country still depends on food importation to complement domestic production. The small-scale farmer, who depends on simple hand tools — hoes and cutlasses — remains the centre of agricultural production, accounting for 90 per cent of total agricultural output and 80 per cent of the market flow. While the hoe may have been a tool for sustainable development at one time in human history, this role has been smothered by population growth, which has brought too many mouths to feed (over 170 million) as well as many industries to supply with raw materials. It may then be argued that the hoe is unfortunately not a tool for the coming challenges and that agricultural mechanization should be the focus. However, agriculture as an enterprise is a ‘complete wheel’ that does not work when broken, particularly with respect to mechanization. Generally, mechanization will prompt increased production where land is readily available; increased production will in turn require post-harvest handling mechanisms, crop processing and subsequent storage. An important aspect of this is readily available and favourable markets for such produce, fuelled by good agricultural policies. A break in this wheel will lead mechanization to crash back to subsistence production, and probably a reversal to the hoe. The focus of any agricultural policy should be to chart a course for sustainable, environmentally friendly mechanization that may close Nigeria’s food and raw material gaps.
This is the focal point of this article.
Keywords: crop production, farmer, hoes, mechanization, policy framework, population growth, rural areas
5406 Optimized Preprocessing for Accurate and Efficient Bioassay Prediction with Machine Learning Algorithms
Authors: Jeff Clarine, Chang-Shyh Peng, Daisy Sang
Abstract:
Bioassay is the measurement of the potency of a chemical substance by its effect on living animal or plant tissue. Bioassay data and chemical structures from pharmacokinetic and drug-metabolism screening are mined from, and housed in, multiple databases. Bioassay predictions are then computed to decide whether a compound advances further. This paper proposes a four-step preprocessing of datasets to improve bioassay predictions. The first step is instance selection, in which the dataset is partitioned into training, testing, and validation sets. The second step is discretization, which partitions the data with the trade-off between accuracy and precision in mind. The third step is normalization, in which data are scaled to between 0 and 1 for subsequent machine learning processing. The fourth step is feature selection, in which key chemical properties and attributes are generated. The streamlined results are then analyzed for prediction effectiveness by various machine learning tools, including Pipeline Pilot, R, Weka, and Excel. Experiments and evaluations reveal the effectiveness of various combinations of preprocessing steps and machine learning algorithms in producing more consistent and accurate predictions.
Keywords: bioassay, machine learning, preprocessing, virtual screen
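The four preprocessing steps above can be sketched on plain numeric feature rows. The split ratios, equal-width binning, and variance-based selection are assumptions chosen for illustration; the paper's concrete choices may differ:

```python
import random

def preprocess(rows, train=0.6, test=0.2, bins=4, keep=2, seed=0):
    """Sketch of the four-step pipeline on a list of numeric feature rows."""
    rng = random.Random(seed)
    rows = rows[:]
    rng.shuffle(rows)

    # 1. instance selection: training / testing / validation split
    i, j = int(train * len(rows)), int((train + test) * len(rows))
    splits = rows[:i], rows[i:j], rows[j:]

    cols = list(zip(*rows))
    lo = [min(c) for c in cols]
    hi = [max(c) for c in cols]

    # 2. discretization: equal-width binning per feature
    def disc(r):
        return [min(bins - 1, int(bins * (x - l) / (h - l))) if h > l else 0
                for x, l, h in zip(r, lo, hi)]

    # 3. normalization of each feature to [0, 1]
    def norm(r):
        return [(x - l) / (h - l) if h > l else 0.0
                for x, l, h in zip(r, lo, hi)]

    # 4. feature selection: keep the `keep` highest-variance features
    def var(c):
        m = sum(c) / len(c)
        return sum((x - m) ** 2 for x in c) / len(c)
    selected = sorted(range(len(cols)), key=lambda k: -var(cols[k]))[:keep]
    return splits, disc, norm, selected

rows = [[0, 5, 1], [1, 5, 3], [2, 5, 5], [3, 5, 7], [4, 5, 9]]
splits, disc, norm, selected = preprocess(rows)
```

On these toy rows the constant middle feature is dropped by step 4, while the two informative features survive.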
5405 A Geospatial Consumer Marketing Campaign Optimization Strategy: Case of Fuzzy Approach in Nigeria Mobile Market
Authors: Adeolu O. Dairo
Abstract:
Getting the consumer marketing strategy right is a crucial and complex task for firms with a large customer base, such as mobile operators in a competitive mobile market. While empirical studies have made efforts to identify key constructs, no geospatial model has been developed to comprehensively assess the viability and interdependency of ground realities regarding the customer, competition, channel, and network quality of mobile operators. With this research, a geo-analytic framework is proposed for strategy formulation and allocation for mobile operators. Firstly, a fuzzy analytic network, using a self-organizing feature map clustering technique based on inputs from managers and the literature, is developed to depict the interrelationships among ground realities. The model is tested with a mobile operator in the Nigeria mobile market. As a result, a customer-centric geospatial and visualization solution is developed. This provides a consolidated and integrated insight that serves as a transparent, logical and practical guide for strategic, tactical and operational decision-making.
Keywords: geospatial, geo-analytics, self-organizing map, customer-centric
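The self-organizing feature map at the heart of the clustering stage can be illustrated with a minimal 1-D SOM. The unit count, learning-rate schedule, and the tiny two-cluster data set are all assumptions for demonstration, not the paper's configuration:

```python
import math
import random

def train_som(data, n_units=4, iters=500, seed=1):
    """Minimal 1-D self-organizing feature map: find the best-matching
    unit for each sample and pull it, and its grid neighbours, toward
    the sample with a decaying learning rate and neighbourhood radius."""
    rng = random.Random(seed)
    dim = len(data[0])
    w = [[rng.random() for _ in range(dim)] for _ in range(n_units)]
    for t in range(iters):
        x = rng.choice(data)
        lr = 0.5 * (1.0 - t / iters)                       # decaying learning rate
        sigma = 0.1 + (n_units / 2.0) * (1.0 - t / iters)  # shrinking radius
        bmu = min(range(n_units),
                  key=lambda u: sum((w[u][d] - x[d]) ** 2 for d in range(dim)))
        for u in range(n_units):                           # neighbourhood update
            h = math.exp(-((u - bmu) ** 2) / (2.0 * sigma ** 2))
            for d in range(dim):
                w[u][d] += lr * h * (x[d] - w[u][d])
    return w

data = [[0.1, 0.1], [0.15, 0.05], [0.9, 0.95], [0.85, 0.9]]
weights = train_som(data)
```

After training, distinct regions of the input space map to distinct units, which is the property the geo-analytic framework exploits to group similar market cells.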
5404 The Effects of Future Priming on Resource Concern
Authors: Calvin Rong, Regina Agassian, Mindy Engle-Friedman
Abstract:
Climate changes, including rising sea levels and increases in global temperature, can have major effects on resource availability, leading to increased competition for resources and rising food prices. The abstract nature and often delayed consequences of many ecological problems cause people to focus on immediate, specific, and personal events and circumstances that compel immediate emotional involvement. This finding may be explained by the difficulty humans have in imagining themselves in the future, a shortcoming that interferes with decision-making involving far-off rewards and leads people to express lower concern for the future than for present circumstances. The present study assessed whether priming people to think of themselves in the future might strengthen the connection to their future selves and stimulate environmentally protective behavior. We hypothesized that priming participants to think about themselves in the future would increase concern for the future environment. Forty-five control participants were primed to think about themselves in the present, and 42 participants were primed to think about themselves in the future. After priming, participants rated their concern over access to clean water, food, and energy on a scale of 1 to 10. They also rated their predicted care levels for the environment at age points 40, 50, 60, 70, 80, and 90 on a scale of 1 (not at all) to 10 (very much). Predicted care levels at age 90 were significantly higher for the experimental group than for the control group. Overall, the experimental group rated their concern for resources higher than the control group did. In comparison to the control group (M=7.60, SD=2.104), participants in the experimental group had greater concern for clean water (M=8.56, SD=1.534). In comparison to the control group (M=7.49, SD=2.041), participants in the experimental group were more concerned about food resources (M=8.41, SD=1.830).
In comparison to the control group (M=7.22, SD=1.999), participants in the experimental group were more concerned about energy resources (M=8.07, SD=1.967). This study assessed whether a priming strategy could be used to encourage pro-environmental practices that protect limited resources. Future-self priming helped participants see past short-term issues and focus on concern for the future environment.
Keywords: climate change, future, priming, global warming
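Reported group means and SDs like those above are enough to reconstruct a between-groups test statistic. The sketch below computes Welch's t for the clean-water comparison, assuming the per-item group sizes equal the overall group sizes (n = 45 control, n = 42 experimental), which the abstract does not state explicitly:

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic from reported group means, SDs, and sizes."""
    se = math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)
    return (m1 - m2) / se

# Clean-water concern: experimental (M=8.56, SD=1.534) vs control (M=7.60, SD=2.104)
t_water = welch_t(8.56, 1.534, 42, 7.60, 2.104, 45)
```

A t of roughly 2.4 on ~80 degrees of freedom is consistent with the significant group difference the abstract reports.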
5403 Reducing the Computational Cost of a Two-Way Coupling CFD-FEA Model via a Multi-Scale Approach for Fire Determination
Authors: Daniel Martin Fellows, Sean P. Walton, Jennifer Thompson, Oubay Hassan, Kevin Tinkham, Ella Quigley
Abstract:
Structural integrity for cladding products is a key performance parameter, especially concerning fire performance. Cladding products such as PIR-based sandwich panels are tested rigorously, in line with industrial standards. Physical fire tests are necessary to ensure customer safety but give little information about the critical behaviours that could help develop new materials. Numerical modelling is a tool that can investigate a fire's behaviour further by replicating the fire test. However, fire is an interdisciplinary problem: it is a chemical reaction that behaves fluidly and impacts structural integrity. An analysis using both Computational Fluid Dynamics (CFD) and Finite Element Analysis (FEA) is therefore needed to capture all aspects of a fire performance test. One method is a two-way coupling analysis that imports the updated changes in thermal data, due to the fire's behaviour, into the FEA solver in a series of iterations. In light of our recent work with Tata Steel U.K. using a two-way coupling methodology to determine fire performance, it has been shown that a program called FDS-2-Abaqus can predict a BS 476-22 furnace test with a degree of accuracy. The test demonstrated the fire performance of Tata Steel U.K.'s Trisomet product, a polyisocyanurate (PIR) based sandwich panel used for cladding. Previous work demonstrated the limitations of the current version of the program, the main one being the computational cost of modelling three Trisomet panels, totalling an area of 9 . The computational cost increases substantially when scaling up to an LPS 1181-1 test, which includes a total panel surface area of 200 . The FDS-2-Abaqus program is developed further within this paper to overcome this obstacle and better accommodate Tata Steel U.K. PIR sandwich panels. The new developments aim to reduce the computational cost and the error margin compared to experimental data.
One avenue explored is a multi-scale approach in the form of reduced order modelling (ROM). This approach allows the user to include refined details of the sandwich panels, such as the overlapping joints, without a computationally costly mesh size. Comparative studies will be made between the new implementations and the previous study completed using the original FDS-2-Abaqus program. Validation of the study will come from physical experiments in line with governing-body standards such as BS 476-22 and LPS 1181-1. The physical experimental data include the panels' gas and surface temperatures and mechanical deformation. Conclusions are drawn, noting the impact of the new implementations and discussing the feasibility of scaling up further to a whole warehouse.
Keywords: fire testing, numerical coupling, sandwich panels, thermo-fluids
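The two-way coupling iteration at the core of FDS-2-Abaqus style analysis can be sketched as a fixed-point loop exchanging fields between the two solvers. The "solvers" below are one-line toy stand-ins with entirely invented physics, used only to show the exchange-until-converged structure:

```python
def thermal_step(deform):
    """Toy CFD stand-in: gas temperature rises as the panel deforms
    and joints open (illustrative linear relation, not real physics)."""
    return 800.0 + 50.0 * deform

def structural_step(temp):
    """Toy FEA stand-in: deformation grows linearly with thermal load."""
    return 1e-3 * (temp - 20.0)

def couple(tol=1e-6, max_iter=100):
    """Two-way coupling as a fixed-point iteration: run the thermal pass
    with the current geometry, feed its output to the structural pass,
    and repeat until the exchanged field stops changing."""
    deform = 0.0
    for i in range(max_iter):
        temp = thermal_step(deform)          # CFD pass with current geometry
        new_deform = structural_step(temp)   # FEA pass with updated thermal load
        if abs(new_deform - deform) < tol:
            return temp, new_deform, i + 1
        deform = new_deform
    raise RuntimeError("coupling did not converge")

temp, deform, iters = couple()
```

A ROM would replace one of the passes with a cheap surrogate, shrinking the cost of each loop iteration while keeping the same convergence structure.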
5402 A Framework for Automated Nuclear Waste Classification
Authors: Seonaid Hume, Gordon Dobie, Graeme West
Abstract:
Detecting and localizing radioactive sources is a necessity for the safe and secure decommissioning of nuclear facilities. An important aspect of managing the sort-and-segregation process is establishing the spatial distributions and quantities of the waste radionuclides, their type, corresponding activity, and ultimately their classification for disposal. The data received from surveys directly inform decommissioning plans, on-site incident management strategies, and the approach needed for a new cell, as well as protecting the workforce and the public. Manual classification of nuclear waste from a nuclear cell is time-consuming, expensive, and requires significant expertise to make the classification judgment call. Also, in-cell decommissioning is still in its relative infancy, and few techniques are well developed. As with any repetitive and routine task, there is an opportunity to improve the classification of nuclear waste using autonomous systems. Hence, this paper proposes a new framework for the automatic classification of nuclear waste. The framework consists of five main stages: (i) 3D spatial mapping and object detection, (ii) object classification, (iii) radiological mapping, (iv) source localisation based on gathered evidence, and finally (v) waste classification. The first stage, 3D visual mapping, involves object detection from point cloud data. A review of related applications in other industries is provided, and recommendations for waste-classification approaches are made. Object detection focuses initially on cylindrical objects, since pipework is significant in nuclear cells and indeed on any industrial site; the approach can be extended to other commonly occurring primitives such as spheres and cubes. This is in preparation for stage two: characterizing the point cloud data and estimating the dimensions, material, degradation, and mass of the detected objects in order to feature-match them against an inventory of possible items found in that nuclear cell.
Many items in nuclear cells are one-offs, have limited or poor drawings available, or have been modified since installation, and have complex interiors, which often, and inadvertently, pose difficulties when accessing certain zones and identifying waste remotely. Hence, expert input may be required to feature-match objects. The third stage, radiological mapping, similarly facilitates the characterization of the nuclear cell in terms of radiation fields, including the type of radiation, activity, and location within the cell. The fourth stage of the framework takes the visual map from stage one, the object characterization from stage two, and the radiation map from stage three and fuses them together, providing a more detailed scene of the nuclear cell by identifying the location of radioactive materials in three dimensions. The last stage combines the evidence from the fused data sets to yield the classification of the waste in Bq/kg, thus enabling better decision-making and monitoring for in-cell decommissioning. The presentation of the framework is supported by representative case study data drawn from a decommissioning application at a UK nuclear facility. The framework utilises recent advancements in the detection and mapping of complex radiation fields in three dimensions to make the process of classifying nuclear waste faster, more reliable, more cost-effective and safer.
Keywords: nuclear decommissioning, radiation detection, object detection, waste classification
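The final two stages — fusing per-sensor activity evidence and mapping the result to a waste category in Bq/kg — can be sketched as follows. The thresholds are hypothetical placeholders (real classification follows the relevant regulatory limits), and inverse-variance weighting is one simple fusion choice among many:

```python
# Hypothetical activity thresholds in Bq/kg, highest first (illustrative only)
THRESHOLDS = [(4.0e6, "HLW"), (1.2e4, "ILW"), (4.0e2, "LLW")]

def fuse(estimates):
    """Naive evidence fusion: inverse-variance weighted mean of
    per-sensor activity estimates given as (value, variance) pairs."""
    wsum = sum(1.0 / v for _, v in estimates)
    return sum(x / v for x, v in estimates) / wsum

def classify(activity_bq_per_kg):
    """Map a fused activity estimate to a waste category; anything below
    the lowest threshold is treated here as very-low-level waste."""
    for limit, label in THRESHOLDS:
        if activity_bq_per_kg >= limit:
            return label
    return "VLLW"

# Two invented sensor readings of the same object (value, variance)
activity = fuse([(5.0e4, 1.0e6), (7.0e4, 4.0e6)])
label = classify(activity)
```

In the framework above, the estimates would come from the radiological map registered against the 3D visual map, so each reading is tied to a localized object rather than a whole cell.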
5401 Urban Security and Social Sustainability in Cities of Developing Countries
Authors: Taimaz Larimian, Negin Sadeghi
Abstract:
Very little is known about the impact of urban security on the level of social sustainability within the cities of developing countries. Despite the significant role of safety and security in different aspects of people’s lives, urban security is still struggling to find its position in the social sustainability agenda. This paper argues that urban safety and security should be better integrated within the social sustainability framework. With this aim, this study investigates the hypothesized relationship between social sustainability and the Crime Prevention through Environmental Design (CPTED) approach at the neighborhood scale. The study proposes a model of the key influential dimensions of CPTED, analyzed into localized factors and sub-factors. These factors are then prioritized using pairwise comparison logic and the fuzzy group Analytic Hierarchy Process (AHP) method in order to determine the relative importance of each factor in achieving social sustainability. The proposed model then investigates social sustainability in six case-study neighborhoods of Isfahan city based on residents’ perceptions of safety within their neighborhood. A mixed method of data collection is used: a self-administered questionnaire explores residents’ perceptions of social sustainability in their area of residence, followed by on-site observation to measure the CPTED construct. In all, 150 respondents from the selected neighborhoods were involved in this research. The model indicates that the CPTED approach has a significant direct influence on increasing social sustainability at the neighborhood scale. According to the findings, among the different dimensions of CPTED, ‘activity support’ and ‘image/management’ have the most influence on people’s feeling of safety within the studied areas.
This model represents a useful design tool for achieving urban safety and security during the development of more socially sustainable and user-friendly urban areas.
Keywords: crime prevention through environmental design (CPTED), developing countries, fuzzy analytic hierarchy process (FAHP), social sustainability
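The pairwise-comparison prioritization step can be sketched with the (crisp, non-fuzzy) geometric-mean AHP approximation. The 3×3 matrix below is invented for illustration — the third factor label and all comparison values are assumptions, not the study's data:

```python
import math

def ahp_weights(matrix):
    """Priority vector from a pairwise-comparison matrix via the
    geometric-mean (logarithmic least squares) approximation."""
    n = len(matrix)
    gm = [math.prod(row) ** (1.0 / n) for row in matrix]
    total = sum(gm)
    return [g / total for g in gm]

# Illustrative comparison on a Saaty-style 1-9 scale:
# activity support vs image/management vs (assumed) natural surveillance
pc = [[1.0,  2.0, 4.0],
      [0.5,  1.0, 2.0],
      [0.25, 0.5, 1.0]]
w = ahp_weights(pc)
```

The fuzzy group variant used in the study replaces each crisp entry with a fuzzy number aggregated across experts, but the normalization-to-weights step has the same shape.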
5400 Cognitive Mechanisms of Mindfulness-Based Cognitive Therapy on Depressed Older Adults: The Mediating Role of Rumination and Autobiographical Memory Specificity
Authors: Wai Yan Shih, Sau Man Wong, Wing Chung Chang, Wai Chi Chan
Abstract:
Background: Late-life depression is associated with significant consequences. Although symptomatic reduction is achievable through pharmacological interventions, older adults are more vulnerable to the side effects than their younger counterparts. In addition, drugs do not address underlying cognitive dysfunctions such as rumination and reduced autobiographical memory specificity (AMS), both shown to be maladaptive coping styles associated with a poorer prognosis in depression. Since aging is accompanied by cognitive, psychological, and physical changes, the interplay of these age-related factors may aggravate and interfere with these cognitive dysfunctions in late-life depression. Special care should, therefore, be taken to ensure these cognitive dysfunctions are adequately addressed. Aim: This randomized controlled trial aims to examine the effect of mindfulness-based cognitive therapy (MBCT) on depressed older adults, and whether the potential benefits of MBCT are mediated by improvements in rumination and AMS. Method: Fifty-seven participants with an average age of 70 years were recruited from multiple elderly centers and online mailing lists. Participants were assessed with: (1) the Hamilton depression scale, (2) the ruminative response scale, (3) the autobiographical memory test, (4) the mindful attention awareness scale, and (5) the Montreal cognitive assessment. Eligible participants with mild to moderate depressive symptoms and normal cognitive functioning were randomly allocated to an 8-week MBCT group or an active control group consisting of a low-intensity exercise program and health education. Post-intervention assessments were conducted after the 8-week program. Ethics approval was given by the Institutional Review Board of the University of Hong Kong/Hospital Authority. Results: Mixed-factorial ANOVAs demonstrated significant time × group interaction effects for depressive symptoms, AMS, and dispositional mindfulness.
A marginally significant interaction effect was found for rumination. Simple effect analyses revealed a significant reduction in depressive symptoms for both the MBCT group (mean difference = 7.1, p < .001) and the control group (mean difference = 2.7, p = .023). However, only participants in the MBCT group demonstrated improvements in rumination, AMS, and dispositional mindfulness. Bootstrapping-based mediation analyses showed that the effect of MBCT in alleviating depressive symptoms was mediated only by the reduction in rumination. Conclusions: The findings support the use of MBCT as an effective intervention for depressed older adults, given the improvements in depressive symptoms, rumination, AMS, and dispositional mindfulness. Reduction in ruminative tendencies plays a major role in the cognitive mechanism of MBCT.
Keywords: mindfulness-based cognitive therapy, depression, older adults, rumination, autobiographical memory specificity
Procedia PDF Downloads 211
5399 A Hybrid Fuzzy Clustering Approach for Fertile and Unfertile Analysis
Authors: Shima Soltanzadeh, Mohammad Hosain Fazel Zarandi, Mojtaba Barzegar Astanjin
Abstract:
Diagnosis of male infertility by laboratory tests is expensive and sometimes intolerable for patients. Filling out a questionnaire and then applying a classification method can be the first step in the decision-making process, so that laboratory tests are used only in cases with a high probability of infertility. In this paper, we evaluated the performance of four classification methods, including naive Bayesian, neural network, logistic regression, and fuzzy c-means clustering used as a classifier, in the diagnosis of male infertility due to environmental factors. Since the data are unbalanced, ROC curves are the most suitable method for the comparison. We also selected the more important features using a filtering method and examined the impact of this feature reduction on the performance of each method; generally, most of the methods performed better after applying the filter. We showed that fuzzy c-means clustering used as a classifier performs well according to the ROC curves, and that its performance is comparable to that of other classification methods such as logistic regression.
Keywords: classification, fuzzy c-means, logistic regression, Naive Bayesian, neural network, ROC curve
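A minimal sketch of fuzzy c-means used as a classifier, the approach evaluated above. The two-dimensional synthetic data merely stand in for the questionnaire feature vectors; hard labels are obtained by taking the cluster with the highest membership.

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic two-class data standing in for questionnaire feature vectors.
X = np.vstack([rng.normal(0, 0.5, (20, 2)), rng.normal(4, 0.5, (20, 2))])

def fuzzy_cmeans(X, c=2, m=2.0, iters=100):
    """Plain fuzzy c-means; each row of the membership matrix U sums to 1."""
    n = len(X)
    U = rng.dirichlet(np.ones(c), size=n)               # random initial memberships
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]     # weighted cluster centers
        d = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
        inv = d ** (-2.0 / (m - 1))                      # standard membership update
        U = inv / inv.sum(axis=1, keepdims=True)
    return U, centers

U, centers = fuzzy_cmeans(X)
labels = U.argmax(axis=1)   # hard label: the cluster with highest membership
```

In the diagnostic setting, the continuous memberships themselves (rather than the hard labels) can be thresholded to trace out the ROC curve.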
Procedia PDF Downloads 337
5398 Geometric Simplification Method of Building Energy Model Based on Building Performance Simulation
Authors: Yan Lyu, Yiqun Pan, Zhizhong Huang
Abstract:
In the design stage of a new building, an energy model of the building is often required to analyze its energy efficiency performance. In practice, a certain degree of geometric simplification is necessary when establishing building energy models, since the detailed geometric features of a real building are hard to describe perfectly in most energy simulation engines, such as ESP-r, eQUEST or EnergyPlus. In fact, a detailed description is unnecessary when results of extremely high accuracy are not demanded. Therefore, this paper analyzed the relationship between the error of the simulation results from building energy models and the geometric simplification of the models. Two parameters are selected as indices to characterize the geometric features of a building in energy simulation: the southward projected area and the total side surface area of the building. Based on this parameterization, an arbitrary column building can be simplified to a building of typical shape (a cuboid) for energy modeling. The results of this study indicate that the simplification leads to an error of less than 7% for buildings whose ratio of southward projection length to the total perimeter of the bottom is 0.25~0.35, which covers most situations.
Keywords: building energy model, simulation, geometric simplification, design, regression
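The two geometric indices can be computed directly from an extruded footprint. This sketch assumes an axis-aligned building, so that the southward projection length equals the east-west extent; the L-shaped footprint and the 20 m height are hypothetical.

```python
import numpy as np

def footprint_indices(xy, height):
    """Geometric indices for an extruded (column) building.
    xy: (n, 2) footprint vertices in order, x = east, y = north, metres.
    Assumes the southward projection is simply the east-west extent."""
    x = xy[:, 0]
    south_len = x.max() - x.min()                    # east-west extent seen from the south
    edges = np.diff(np.vstack([xy, xy[:1]]), axis=0) # close the polygon
    perimeter = np.linalg.norm(edges, axis=1).sum()
    return {
        "southward_projected_area": south_len * height,
        "total_side_area": perimeter * height,
        "south_to_perimeter_ratio": south_len / perimeter,
    }

# Hypothetical L-shaped footprint, 20 m tall
L = np.array([[0, 0], [30, 0], [30, 10], [12, 10], [12, 18], [0, 18]], float)
idx = footprint_indices(L, 20.0)
```

For this footprint the ratio of southward projection length to bottom perimeter is 30/96 ≈ 0.31, inside the 0.25~0.35 band for which the cuboid simplification reportedly stays under 7% error.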
Procedia PDF Downloads 180
5397 Study of the Biochemical Properties of the Protease Coagulant Milk Extracted from Sunflower Cake: Manufacturing Test of Cheeses Uncooked Dough Press and Analysis of Sensory Properties
Authors: Kahlouche Amal, Touzene F. Zohra, Betatache Fatiha, Nouani Abdelouahab
Abstract:
The growth of world cheese production over recent decades, together with the demand for cheaper coagulating agents, has intensified the search for new rennet substitutes. Hence the interest in exploring plant biodiversity, a cheap source of many of the natural metabolites praised by scientists today (thistle, fig tree latex, cardoon, melon seeds). Indeed, substitutes of plant origin have attracted great interest. The objective of this study is to show the possibility of extracting a milk-clotting protease from sunflower cake, an available raw material and a potential source of rennet substitutes. The determination of the proteolytic activity of the crude extracts, their purification, the removal of coloring pigments from the enzymatic preparations, and a better knowledge of the coagulating properties through the study of certain factors (temperature, pH, CaCl2 concentration) all contribute to adding value to milk, particularly that produced by the small ruminants of Algerian dairy farms. Although coagulating extracts of plant origin have already made it possible to develop traditional cheeses, of which the Iberian peninsula is the promoter, a manufacturing test of an uncooked pressed-dough cheese was conducted here at semi-pilot scale using the enzymatic extract of sunflower (Helianthus annuus). It gave satisfactory results both in terms of yield and at the sensory level; statistically, no significant difference was found between the studied cheeses. These results confirm the possibility of using this coagulant as a substitute for commercial rennet on an industrial scale.
Keywords: characterization, cheese, rennet, sunflower
Procedia PDF Downloads 351
5396 Transfer Learning for Protein Structure Classification at Low Resolution
Authors: Alexander Hudson, Shaogang Gong
Abstract:
Structure determination is key to understanding protein function at a molecular level. Whilst significant advances have been made in predicting structure and function from amino acid sequence, researchers must still rely on expensive, time-consuming analytical methods to visualise detailed protein conformation. In this study, we demonstrate that it is possible to make accurate (≥80%) predictions of protein class and architecture from structures determined at low (>3 Å) resolution, using a deep convolutional neural network trained on high-resolution (≤3 Å) structures represented as 2D matrices. Thus, we provide proof of concept for high-speed, low-cost protein structure classification at low resolution, and a basis for extension to prediction of function. We investigate the impact of the input representation on classification performance, showing that side-chain information may not be necessary for fine-grained structure predictions. Finally, we confirm that high-resolution, low-resolution, and NMR-determined structures inhabit a common feature space, and thus provide a theoretical foundation for boosting with single-image super-resolution.
Keywords: transfer learning, protein distance maps, protein structure classification, neural networks
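A common 2D matrix representation of a structure is the Cα–Cα distance map (named in the keywords). A minimal sketch, with hypothetical backbone coordinates:

```python
import numpy as np

def distance_map(coords):
    """Pairwise Cα-Cα distance matrix: a 2D, image-like input representation."""
    d = coords[:, None, :] - coords[None, :, :]
    return np.sqrt((d ** 2).sum(-1))

# Toy 'backbone': hypothetical Cα coordinates in ångströms
ca = np.array([[0, 0, 0], [3.8, 0, 0], [7.6, 0, 0], [7.6, 3.8, 0]], float)
D = distance_map(ca)

# Thresholding (e.g., at 8 Å) gives a binary contact map, another common input.
contacts = D < 8.0
```

The resulting symmetric matrix can be fed to a 2D CNN exactly like a grayscale image, which is what makes transfer from image models natural here.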
Procedia PDF Downloads 136
5395 Immediate Effect of Transcutaneous Electrical Nerves Stimulation on Flexibility and Health Status in Patients with Chronic Nonspecific Low Back Pain (A Pilot Study)
Authors: Narupon Kunbootsri, Patpiya Sirasaporn
Abstract:
Low back pain is the most common chief complaint in chronic pain. Low back pain directly affects activities of daily living and carries high socioeconomic costs. The prevalence of low back pain is high in both genders in all populations. The symptoms of low back pain include pain in the low back area, muscle spasm, tender points, and a stiff back. Transcutaneous Electrical Nerve Stimulation (TENS) is a modality mainly used for pain control. TENS is widely used in low back pain, but there are no scientific data about muscle flexibility after TENS in this condition. Thus, the aim of this study was to investigate the immediate effect of TENS on flexibility and health status in patients with chronic nonspecific low back pain. Eight patients with chronic nonspecific low back pain, 1 male and 7 female, were employed in this study. Participants were diagnosed by a doctor based on history and physical examination. Each participant received treatment at a physiotherapy unit. Participants completed the Roland Morris Disability Questionnaire (RMDQ), the numeric rating scale (NRS), and a trunk flexibility measurement before treatment. Each participant received low-frequency TENS set at an asymmetrical waveform, 10 Hz, 20 minutes per point. Immediately after treatment, participants completed the NRS, RMDQ, and trunk flexibility measurement again. All participants were treated by a single physiotherapist. There was a statistically significant increase in flexibility immediately after low-frequency TENS [mean difference -6.37 with 95%CI of (-8.35)-(-4.39)]. There was a statistically significant decrease in the numeric rating scale [mean difference 2.13 with 95%CI of 1.08-3.16]. The Roland Morris Disability Questionnaire showed an average improvement in health status of 44.8% immediately after treatment.
In conclusion, the results of the present study indicate that low-frequency TENS can immediately decrease pain and improve back muscle flexibility in patients with chronic nonspecific low back pain.
Keywords: low back pain, flexibility, TENS, chronic
Procedia PDF Downloads 556
5394 Preparation and Sealing of Polymer Microchannels Using EB Lithography and Laser Welding
Authors: Ian Jones, Jonathan Griffiths
Abstract:
Laser welding offers the potential for making very precise joints in plastics products, both in terms of the joint location and the amount of heating applied. These methods have allowed the production of complex products such as microfluidic devices, where channel and structure resolutions below 100 µm are regularly used. To date, however, the dimension of welds made using lasers has been limited by the achievable focal spot size of the laser source. Theoretically, the minimum spot size possible from a laser is comparable to the wavelength of the radiation emitted. Practically, with reasonable focal-length optics, the spot size achievable is a few times larger than this, and the melt zone in a plastics weld is larger still. The narrowest welds feasible to date have therefore been 10-20 µm wide, using a near-infrared laser source. The aim of this work was to prepare laser-absorber tracks and channels less than 10 µm wide in PMMA thermoplastic using EB lithography, followed by sealing of the channels using laser welding, to produce welds with widths of the order of 1 µm, below the resolution limit of the near-infrared laser used. Welded joints with a width of 1 µm have been achieved, as well as channels with a width of 5 µm. The procedure was based on the principle of transmission laser welding using a thin coating of infrared-absorbent material at the joint interface. The coating was patterned using electron-beam lithography to obtain the required resolution in a reproducible manner, and that resolution was retained after the transmission laser welding process. The joint strength was verified using larger-scale samples. The results demonstrate that plastics products can be made with a high density of structure with resolution below 1 µm, and that welding can be applied without excessively heating regions beyond the weld lines.
This may be applied to smaller-scale sensor and analysis chips, micro bioreactors and chemical reactors, and to microelectronic packaging.
Keywords: microchannels, polymer, EB lithography, laser welding
Procedia PDF Downloads 401
5393 Protein Remote Homology Detection by Using Profile-Based Matrix Transformation Approaches
Authors: Bin Liu
Abstract:
As one of the most important tasks in protein sequence analysis, protein remote homology detection has been studied for decades. Currently, profile-based methods show state-of-the-art performance. The Position-Specific Frequency Matrix (PSFM) is a widely used profile. However, these profiles contain noise introduced by the amino acids with low frequencies. In this study, we propose a method, called the Top Frequency Profile (TFP), to remove this noise from the PSFM by discarding the amino acids with low frequencies. Three matrix transformation methods, Autocross Covariance (ACC), Tri-gram, and K-Separated Bigram (KSB), are applied to these profiles to convert them into fixed-length feature vectors. Combined with Support Vector Machines (SVMs), the predictors are constructed. Evaluated on two benchmark datasets, the experimental results show that the proposed methods outperform other state-of-the-art predictors.
Keywords: protein remote homology detection, protein fold recognition, top frequency profile, support vector machines
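A sketch of the TFP idea and one of the transformations (K-separated bigram) on a toy 4-letter alphabet. Keeping the k most frequent residues per position is one reading of "removing the amino acids with low frequencies"; the paper's exact cutoff rule may differ.

```python
import numpy as np

def top_frequency_profile(psfm, k=2):
    """Keep only the k most frequent amino acids per position, then renormalize.
    psfm: (L, A) position-specific frequency matrix (rows sum to 1)."""
    tfp = np.zeros_like(psfm)
    for i, row in enumerate(psfm):
        top = np.argsort(row)[-k:]       # indices of the k largest frequencies
        tfp[i, top] = row[top]
    return tfp / tfp.sum(axis=1, keepdims=True)

def k_separated_bigram(profile, k=1):
    """KSB features: expected co-occurrence of residue types at positions i and i+k."""
    L, A = profile.shape
    F = np.zeros((A, A))
    for i in range(L - k):
        F += np.outer(profile[i], profile[i + k])
    return F.ravel()                     # fixed-length vector, independent of L

# Toy PSFM over a 4-letter alphabet (real profiles use all 20 amino acids)
psfm = np.array([[0.6, 0.2, 0.1, 0.1],
                 [0.1, 0.7, 0.1, 0.1],
                 [0.3, 0.3, 0.2, 0.2]])
feat = k_separated_bigram(top_frequency_profile(psfm), k=1)
```

The resulting fixed-length vectors are what get fed to the SVM, since sequences (and hence profiles) vary in length L.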
Procedia PDF Downloads 125
5392 Identification of Ideal Plain Sufu (Fermented Soybean Curds) Based on Ideal Profile Method and Assessment of the Consistency of Ideal Profiles Obtained from Consumers
Authors: Yan Ping Chen, Hau Yin Chung
Abstract:
The Ideal Profile Method (IPM) is a newly developed method of descriptive sensory analysis conducted with consumers without previous training. To perform this test, both the perceived and the ideal intensities of the products' attributes, as judged by the consumers, as well as their hedonic ratings, were collected to formulate an ideal product (the most liked one). In addition, Ideal Profile Analysis (IPA) was conducted to check the consistency of the ideal data at both the panel and consumer levels. In this test, 12 commercial plain sufus bought from the Hong Kong local market were evaluated by 113 consumers according to the IPM and rated on 22 attributes. Principal component analysis was used to map the perceived and the ideal spaces of the tested products. The consistency of the ideal data was then checked by IPA. The results showed that most consumers shared a common ideal. The sensory product space and the ideal product space were observed to be structurally similar: their first dimensions both opposed products with an intense fermentation-related aroma to products with a weaker one. The predicted ideal profile (estimated liking score around 7.0 on a 9-point scale) received a higher hedonic score than the tested products (average liking score around 6.0 on a 9-point scale). For the majority of consumers (95.2%), the stated ideal product was considered a potential ideal, as verified through the R² coefficient value. Among all the tested products, sample 6 was the most popular, with a consumer liking percentage around 30%. This product, with a less fermented and moldy flavour but a texture that melts more easily in the mouth, possessed a sensory profile close to that of the ideal product. This experiment validated that data from untrained consumers can provide useful guidance.
The appreciated sensory characteristics could serve as a reference in the optimization of commercial plain sufu.
Keywords: ideal profile method, product development, sensory evaluation, sufu (fermented soybean curd)
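The product maps mentioned above come from principal component analysis of the intensity ratings. A minimal SVD-based sketch; the random 12 × 22 matrix merely stands in for the real panel data (12 products, 22 attributes), which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical panel averages: 12 products x 22 attribute intensities (0-10 scale)
X = rng.uniform(0, 10, (12, 22))

def pca_scores(X, n_components=2):
    """Product map from PCA on column-centered intensities, computed via SVD."""
    Xc = X - X.mean(axis=0)                      # center each attribute
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U[:, :n_components] * s[:n_components]   # product coordinates
    explained = s**2 / (s**2).sum()                   # variance share per axis
    return scores, explained[:n_components]

scores, ev = pca_scores(X)
```

Running the same decomposition on the perceived matrix and on the ideal matrix, then comparing the loadings of the first dimensions, is how the structural similarity of the two spaces can be checked.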
Procedia PDF Downloads 188
5391 Clinical Response of Nuberol Forte® (Paracetamol 650 MG+Orphenadrine 50 MG) For Pain Management with Musculoskeletal Conditions in Routine Pakistani Practice (NFORTE-EFFECT)
Authors: Shahid Noor, Kazim Najjad, Muhammad Nasir, Irshad Bhutto, Abdul Samad Memon, Khurram Anwar, Tehseen Riaz, Mian Muhammad Hanif, Nauman A. Mallik, Saeed Ahmed, Israr Ahmed, Ali Yasir
Abstract:
Background: Musculoskeletal pain is the most common complaint presented to health practitioners. It is well known that untreated or under-treated pain can have a significant negative impact on an individual's quality of life (QoL). Objectives: This study was conducted across 10 sites in six (6) major cities of Pakistan to evaluate the tolerability, safety, and clinical response of Nuberol Forte® (Paracetamol 650 mg + Orphenadrine 50 mg) for musculoskeletal pain in routine Pakistani practice, and its impact on improving the patients' QoL. Design & Methods: This NFORTE-EFFECT observational, prospective, multicenter study was conducted in compliance with Good Clinical Practice guidelines and local regulatory requirements. The study sponsor was The Searle Company Limited, Pakistan. To maintain GCP compliance, the sponsor assigned a CRO for site and data management. Ethical approval was obtained from an independent ethics committee (IEC), and the IEC reviewed the progress of the study. Written informed consent was obtained from the study participants, and their confidentiality was maintained throughout the study. A total of 399 patients with known prescreened musculoskeletal conditions and pain who attended the study sites were recruited as per the inclusion/exclusion criteria (clinicaltrials.gov ID# NCT04765787). The recruited patients were prescribed the Paracetamol (650 mg) and Orphenadrine (50 mg) combination (Nuberol Forte®) for 7 to 14 days at the investigator's discretion, based on pain intensity. After the initial screening (visit 1), a follow-up visit was conducted after 1-2 weeks of treatment (visit 2). Study Endpoints: The primary objective was to assess the pain-management response to Nuberol Forte treatment and the overall safety of the drug. The Visual Analogue Scale (VAS) was used to measure pain severity.
Secondary to pain, the patients' health-related quality of life (HRQoL) was also assessed using the Muscle, Joint Measure (MJM) scale. Safety was monitored from the first dose onward. These assessments were done at each study visit. Results: Of the 399 enrolled patients, 49.4% were male and 50.6% were female, with a mean age of 47.24 ± 14.20 years. Most patients presented with knee osteoarthritis (OA), i.e., 148 (38%), followed by backache, 70 (18.2%). A significant reduction in the mean pain score was observed after treatment with the combination of Paracetamol and Orphenadrine (p<0.05). Furthermore, an overall improvement in the patients' QoL was also observed. During the study, only ten patients reported mild adverse events (AEs). Conclusion: The combination of Paracetamol and Orphenadrine (Nuberol Forte®) provided effective pain management in patients with musculoskeletal conditions and also improved their QoL.
Keywords: musculoskeletal pain, orphenadrine/paracetamol combination, pain management, quality of life, Pakistani population
Procedia PDF Downloads 169
5390 A Study on the Construction Process and Sustainable Renewal Development of High-Rise Residential Areas in Chongqing (1978-2023)
Authors: Xiaoting Jing, Ling Huang
Abstract:
After the reform and opening up, Chongqing has formed far more high-rise residential areas than other cities over its more than 40 years of urban construction. High-rise residential areas have become one of the main modern living models in Chongqing and an important carrier reflecting the city's high quality of life. Reviewing the construction process and renewal work helps in understanding the characteristics of high-rise residential areas in Chongqing at different stages, clarifying current development demands, and anticipating the focus of future renewal work. Based on the socio-economic development and policy background, the article sorts the construction process of high-rise residential areas in Chongqing into four stages: the early experimental construction period of high-rise residential areas (1978-1996), the rapid start-up period of high-rise commodity housing construction (1997-2006), the large-scale construction period of high-rise commodity housing and public rental housing (2007-2014), and the period of renewal and renovation of high-rise residential areas and step-by-step construction of quality commodity housing (2015-present). Based on the construction demands and main construction types of each stage, the article concludes that the construction of high-rise residential areas in Chongqing is characterized by large scale, high speed, and high density. It points out that the large number of high-rise residential areas built after 2000 will become important objects of renewal and renovation in the future. Building on existing renewal experience, it is urgent to explore a path for sustainable renewal and development in terms of policy mechanisms, digital supervision, and renewal and renovation models, leading high-rise living in Chongqing toward high-quality development.
Keywords: high-rise residential areas, construction process, renewal and renovation, Chongqing
Procedia PDF Downloads 67
5389 Registration of Multi-Temporal Unmanned Aerial Vehicle Images for Facility Monitoring
Authors: Dongyeob Han, Jungwon Huh, Quang Huy Tran, Choonghyun Kang
Abstract:
Unmanned Aerial Vehicles (UAVs) have been used for surveillance, monitoring, inspection, and mapping. In this paper, we present a systematic approach to the automatic registration of UAV images for monitoring facilities such as buildings, greenhouses, and civil structures. A two-step process is applied: 1) an image matching technique based on SURF (Speeded Up Robust Features) and RANSAC (Random Sample Consensus); 2) bundle adjustment of the multi-temporal images. Image matching to find corresponding points is one of the most important steps for the precise registration of multi-temporal images. We used the SURF algorithm to find matching points quickly and effectively. The RANSAC algorithm was used both in the process of finding matching points between images and in the bundle adjustment process. Experimental results on UAV images showed that our approach is accurate enough to be applied to facility change detection.
Keywords: building, image matching, temperature, unmanned aerial vehicle
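The RANSAC step of the matching pipeline can be sketched independently of the SURF detector (which requires OpenCV's contrib modules). Here a generic affine model is fitted to synthetic correspondences, 30% of which are deliberately corrupted; the true transform is a rotation plus a shift. This is an illustration of the robust-estimation idea, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(42)

def fit_affine(src, dst):
    """Least-squares 2D affine transform: dst ≈ A @ [x, y, 1]."""
    X = np.hstack([src, np.ones((len(src), 1))])
    W, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return W.T                                   # shape (2, 3)

def ransac_affine(src, dst, iters=200, tol=1.0):
    """RANSAC: fit minimal samples, keep the model with the most inliers."""
    best_inliers = np.zeros(len(src), bool)
    ones = np.ones((len(src), 1))
    for _ in range(iters):
        idx = rng.choice(len(src), 3, replace=False)  # minimal sample for affine
        A = fit_affine(src[idx], dst[idx])
        pred = np.hstack([src, ones]) @ A.T
        inliers = np.linalg.norm(pred - dst, axis=1) < tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_affine(src[best_inliers], dst[best_inliers]), best_inliers

# Synthetic matches: rotation by 0.2 rad plus a shift, with 30% gross outliers
src = rng.uniform(0, 100, (50, 2))
c, s = np.cos(0.2), np.sin(0.2)
dst = src @ np.array([[c, -s], [s, c]]).T + [5.0, -3.0]
dst[:15] += rng.uniform(50, 100, (15, 2))        # corrupt the first 15 matches
A, inliers = ransac_affine(src, dst)
```

The final model is refit on the full inlier set, which is also how RANSAC feeds a cleaned correspondence set into the subsequent bundle adjustment.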
Procedia PDF Downloads 292
5388 A Tutorial on Model Predictive Control for Spacecraft Maneuvering Problem with Theory, Experimentation and Applications
Authors: O. B. Iskender, K. V. Ling, V. Dubanchet, L. Simonini
Abstract:
This paper discusses the recent advances and future prospects of spacecraft position and attitude control using Model Predictive Control (MPC). First, the challenges of space missions are summarized, in particular the errors, uncertainties, and constraints imposed by the mission, the spacecraft, and the onboard processing capabilities. Space mission errors and uncertainties are grouped into categories: initial condition errors, unmodeled disturbances, and sensor and actuator errors. Constraints are classified into two categories: physical and geometric. Real-time implementation capability is then discussed with regard to the required computation time and the impact of sensor and actuator errors, based on Hardware-In-The-Loop (HIL) experiments. The rationale behind the scenarios is also presented in the scope of space applications such as formation flying, attitude control, rendezvous and docking, rover steering, and precision landing. The objectives of these missions are explained, and the generic constrained MPC problem formulations are summarized. Three key design elements of MPC are discussed: the prediction model, the constraint formulation, and the objective cost function. The prediction models can be linear time-invariant or time-varying depending on the geometry of the orbit, whether circular or elliptic. The constraints can be given as linear inequalities on inputs or outputs, which can be written in the same form. Moreover, recent convexification techniques for non-convex geometric constraints (i.e., plume impingement, Field-of-View (FOV)) are presented in detail. Next, the different objectives are provided in a mathematical framework and explained accordingly. Thirdly, because MPC implementation relies on solving constrained optimization problems in real time, computational aspects are also examined.
In particular, high-speed implementation capabilities and HIL challenges are presented for representative space avionics. This covers an analysis of future space processors as well as the requirements imposed by sensors and actuators on the HIL experiment outputs. HIL tests are investigated for kinematic and dynamic tests, where robotic arms and floating robots are used, respectively. Finally, the proposed algorithms and experimental setups are introduced and compared with the authors' previous work and future plans. The paper concludes with a conjecture that the MPC paradigm is a promising framework at the crossroads of space applications and could be further advanced based on the challenges mentioned throughout the paper and the unaddressed gaps.
Keywords: convex optimization, model predictive control, rendezvous and docking, spacecraft autonomy
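A minimal receding-horizon sketch of the unconstrained LQ-MPC core, applied to a double-integrator model of one translational spacecraft axis. The weights, horizon, and sampling time are illustrative assumptions; real formulations add the input/output and geometric constraints discussed above, which turn each step into a constrained QP.

```python
import numpy as np

# Double-integrator axis: state x = [position, velocity], input u = thrust accel.
dt = 0.5
Ad = np.array([[1, dt], [0, 1]])
Bd = np.array([[0.5 * dt**2], [dt]])
Q, R, N = np.diag([10.0, 1.0]), 0.1, 20             # state/input weights, horizon

def mpc_step(x):
    """Condense the horizon into one quadratic cost, solve for the whole input
    sequence, and apply only the first input (receding horizon)."""
    nx, nu = 2, 1
    # Prediction matrices: stacked states X = F x0 + G U
    F = np.vstack([np.linalg.matrix_power(Ad, i + 1) for i in range(N)])
    G = np.zeros((N * nx, N * nu))
    for i in range(N):
        for j in range(i + 1):
            G[i*nx:(i+1)*nx, j*nu:(j+1)*nu] = np.linalg.matrix_power(Ad, i - j) @ Bd
    Qb = np.kron(np.eye(N), Q)                       # block-diagonal weights
    Rb = R * np.eye(N * nu)
    H = G.T @ Qb @ G + Rb                            # condensed Hessian
    f = G.T @ Qb @ F @ x
    U = np.linalg.solve(H, -f)                       # unconstrained QP minimizer
    return U[:nu]

x = np.array([5.0, 0.0])                             # 5 m offset, at rest
for _ in range(40):                                  # closed loop, 20 s
    u = mpc_step(x)
    x = Ad @ x + Bd @ u
```

With constraints, `np.linalg.solve` is replaced by a QP solver, and the real-time question discussed above is whether that solve fits within the control period on space-grade processors.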
Procedia PDF Downloads 110
5387 Anomalous Behaviors of Visible Luminescence from Graphene Quantum Dots
Authors: Hyunho Shin, Jaekwang Jung, Jeongho Park, Sungwon Hwang
Abstract:
For the application of graphene quantum dots (GQDs) to optoelectronic nanodevices, it is of critical importance to understand the mechanisms that give rise to the novel phenomena of their light absorption/emission. Optical transitions are known to be available up to ~6 eV in GQDs, which is especially useful for ultraviolet (UV) photodetectors (PDs). Here, we present size-dependent shape/edge-state variations of GQDs and visible photoluminescence (PL) showing anomalous size dependencies. As the average size (da) of the GQDs varies from 5 to 35 nm, the peak energy of the absorption spectra monotonically decreases, while that of the visible PL spectra unusually shows nonmonotonic behavior with a minimum at a diameter of ∼17 nm. The PL behavior can be attributed to a novel feature of GQDs, namely the circular-to-polygonal-shape and corresponding edge-state variation of GQDs at a diameter of ∼17 nm as the GQD size increases, as demonstrated by high-resolution transmission electron microscopy. We believe that such a comprehensive scheme for designing device architecture and the structural formulation of GQDs will enable the practical realization of environmentally benign, high-performance flexible devices in the future.
Keywords: graphene, quantum dot, size, photoluminescence
Procedia PDF Downloads 295
5386 Unfolding the Social Clash between Online and Non-Online Transportation Providers in Bandung
Authors: Latifah Putti Tiananda, Sasti Khoirunnisa, Taniadiana Yapwito, Jessica Noviena
Abstract:
Innovations are often met with one of two responses: acceptance or rejection. In the past few years, Indonesia has been experiencing a revolution in transportation services that use an online platform for their operation. This improvement is welcomed by consumers but simultaneously challenged by conventional, or ‘non-online’, transportation providers. Conflicts have arisen as the existence of this online transportation mode results in declining income for non-online transportation workers. Physical confrontations and demonstrations demand policing from the central authority. However, the obscurity of legal measures from the government perpetuates the social instability. Bandung, a city in West Java with the highest rate of online transportation usage, has recently issued a recommendation withholding the operation of online transportation services in order to maintain peace and order. This paper therefore seeks to elaborate the social unrest between the two contesting transportation actors in Bandung and to explore community-based approaches to solving this problem. Using a qualitative research method, the paper also features in-depth interviews with directly involved sources from Bandung.
Keywords: Bandung, market competition, online transportation services, social unrest
Procedia PDF Downloads 274
5385 Exploring the Relationships between Cyberbullying Perceptions and Facebook Attitudes of Turkish Students
Authors: Yavuz Erdoğan, Hidayet Çiftçi
Abstract:
Cyberbullying, a phenomenon among adolescents, is defined as actions that use information and communication technologies, such as social media, to support deliberate, repeated, and hostile behaviour by an individual or group. With the advancement of communication and information technology, cyberbullying has expanded its boundaries among students in schools. Thus, parents, psychologists, educators, and lawmakers must become aware of the potential risks of this phenomenon. In the light of these perspectives, this study aims to investigate the relationships between the cyberbullying perceptions and Facebook attitudes of Turkish students. A survey method was used, and the data were collected with the "Cyberbullying Perception Scale", the "Facebook Attitude Scale", and a "Personal Information Form". For this purpose, the study was conducted during the 2014-2015 academic year with a total of 748 students, 493 male (65.9%) and 255 female (34.1%), from randomly selected high schools. In the data analysis, Pearson correlation, multiple regression analysis, multivariate analysis of variance (MANOVA), and the Scheffe post hoc test were used. At the end of the study, the results displayed a negative correlation between Turkish students' Facebook attitudes and cyberbullying perception (r=-.210; p<0.05). In order to identify the predictors of students' cyberbullying perception, multiple regression analysis was used. As a result, significant relations were detected between cyberbullying perception and the independent variables (F=5.102; p<0.05). Together, the independent variables explain 11.0% of the total variance in cyberbullying scores. The variables that significantly predict students' cyberbullying perception are Facebook attitudes (t=-5.875; p<0.05) and gender (t=3.035; p<0.05). In order to calculate the effects of the independent variables on students' Facebook attitudes and cyberbullying perception, a MANOVA was conducted.
The results of the MANOVA indicate that Facebook attitudes and cyberbullying perception differed significantly according to students' gender, age, educational attainment of the mother, educational attainment of the father, family income, and daily internet usage.
Keywords: facebook, cyberbullying, attitude, internet usage
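The multiple regression step (predicting cyberbullying perception and reporting the variance explained) can be sketched with ordinary least squares. The data below are simulated, with effect directions chosen to mirror the reported signs; they are not the study's data.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 200
# Hypothetical predictors: Facebook-attitude score, gender (0/1), daily internet hours
fb = rng.normal(50, 10, n)
gender = rng.integers(0, 2, n)
hours = rng.uniform(0, 8, n)
# Simulated outcome: perception falls with attitude, differs by gender, plus noise
y = 80 - 0.4 * fb + 3.0 * gender + rng.normal(0, 5, n)

X = np.column_stack([np.ones(n), fb, gender, hours])   # design matrix with intercept
beta, *_ = np.linalg.lstsq(X, y, rcond=None)           # OLS coefficients
resid = y - X @ beta
r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))  # variance explained
```

The fitted `r2` plays the role of the 11.0% figure above, and the signs of `beta[1]` (attitudes) and `beta[2]` (gender) correspond to the reported negative and positive t-values.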
Procedia PDF Downloads 402