Search results for: STS benchmark dataset
691 LGG Architecture for Brain Tumor Segmentation Using Convolutional Neural Network
Authors: Sajeeha Ansar, Asad Ali Safi, Sheikh Ziauddin, Ahmad R. Shahid, Faraz Ahsan
Abstract:
The most aggressive form of brain tumor is glioma, a tumor that arises from the glial tissue of the brain and occurs quite often. A fully automatic 2D-CNN model for brain tumor segmentation is presented in this paper. We performed pre-processing steps to remove noise and intensity variances using N4ITK and standard intensity correction, respectively. We used the Keras open-source library with Theano as backend for fast implementation of the CNN model. In addition, we used the BRATS 2015 MRI dataset to evaluate our proposed model, and we used the SimpleITK open-source library to analyze the images. We extracted random 2D patches for the proposed 2D-CNN model for efficient brain segmentation; extracting 2D patches instead of 3D reduces the dimensional information to be processed, which helps in reducing computational time. The Dice Similarity Coefficient (DSC) is used as the performance measure for the evaluation of the proposed method. Our method achieved DSC scores of 0.77 for the complete, 0.76 for the core, and 0.77 for the enhanced tumor regions. These results are comparable with methods that already implemented a 2D CNN architecture.
Keywords: brain tumor segmentation, convolutional neural networks, deep learning, LGG
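The paper's code is not given in the abstract; as a hedged illustration, the sketch below implements the two concrete steps it names — random 2D patch extraction from an MRI volume and the Dice Similarity Coefficient — with NumPy. Patch size, array layout, and function names are assumptions.

```python
import numpy as np

def extract_random_2d_patches(volume, n_patches=100, patch=33, rng=None):
    """Sample random axial 2D patches from a 3D MRI volume (assumed H x W x D)."""
    rng = rng if rng is not None else np.random.default_rng(0)
    h, w, d = volume.shape
    patches = []
    for _ in range(n_patches):
        z = rng.integers(0, d)            # random slice
        y = rng.integers(0, h - patch)    # random top-left corner
        x = rng.integers(0, w - patch)
        patches.append(volume[y:y + patch, x:x + patch, z])
    return np.stack(patches)

def dice_similarity(pred, truth):
    """DSC = 2|P ∩ T| / (|P| + |T|) for binary segmentation masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())
```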
Procedia PDF Downloads 183
690 SiamMask++: More Accurate Object Tracking through Layer Wise Aggregation in Visual Object Tracking
Authors: Hyunbin Choi, Jihyeon Noh, Changwon Lim
Abstract:
In this paper, we propose SiamMask++, an architecture that performs layer-wise aggregation and depth-wise cross-correlation and introduces a multi-RPN module and a multi-MASK module to improve EAO (Expected Average Overlap), a representative performance evaluation metric for the Visual Object Tracking (VOT) challenge. The proposed architecture comes in two versions: bi_SiamMask++, which runs in real time (56 fps) on systems equipped with GPUs (Titan XP), and rf_SiamMask++, which adds mask refinement modules for EAO improvements. Tests are performed on VOT2016, VOT2018 and VOT2019, the representative Visual Object Tracking datasets labeled with rotated bounding boxes. SiamMask++ performs better than SiamMask on all three datasets tested. On the VOT2018 dataset in particular, SiamMask++ achieved 62.6% accuracy, 26.2% robustness and 39.8% EAO; compared to SiamMask, this is an improvement of 4.18%, 37.17% and 23.99%, respectively. In addition, we present an in-depth experimental analysis of how much the features extracted from the backbone and the introduced multi modules affect the performance of our model in the VOT task.
Keywords: visual object tracking, video, deep learning, layer wise aggregation, Siamese network
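Depth-wise cross-correlation is a standard Siamese-tracker block: each channel of the template features is correlated only with the matching channel of the search-region features. A PyTorch sketch is below; the feature shapes are illustrative, not the paper's.

```python
import torch
import torch.nn.functional as F

def depthwise_xcorr(search, kernel):
    """Depth-wise cross-correlation via grouped convolution (groups = B*C)."""
    b, c, h, w = search.shape
    search = search.view(1, b * c, h, w)               # fold batch into channels
    kernel = kernel.view(b * c, 1, *kernel.shape[2:])  # one filter per channel
    out = F.conv2d(search, kernel, groups=b * c)
    return out.view(b, c, *out.shape[2:])

# toy shapes: template features 1x256x7x7, search features 1x256x31x31 (assumed)
resp = depthwise_xcorr(torch.randn(1, 256, 31, 31), torch.randn(1, 256, 7, 7))
print(resp.shape)  # torch.Size([1, 256, 25, 25])
```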
Procedia PDF Downloads 163
689 Tax Treaties between Developed and Developing Countries: Withholding Taxes and Treaty Heterogeneity Content
Authors: Pranvera Shehaj
Abstract:
Unlike any prior analysis of the withholding tax rates negotiated in tax treaties, this study looks at treaty heterogeneity content by investigating the impact of the residence country's double tax relief method and of tax-sparing agreements on the difference between developing countries' domestic withholding taxes on dividends, on one side, and treaty-negotiated withholding taxes at source on portfolio dividends, on the other side. Using a dyadic panel dataset of asymmetric double tax treaties between 2005 and 2019, this study suggests, first, that the difference between domestic and negotiated WHTs on portfolio dividends is higher when the OECD member uses the credit method than when it uses the exemption method. Second, results suggest that the inclusion of tax-sparing provisions eliminates the positive effect of the credit method at home on the difference between domestic and negotiated WHTs on portfolio dividends, incentivizing developing countries to negotiate higher withholding taxes.
Keywords: double tax treaties, asymmetric investments, withholding tax, dividends, double tax relief method, tax sparing
Procedia PDF Downloads 63
688 DISGAN: Efficient Generative Adversarial Network-Based Method for Cyber-Intrusion Detection
Authors: Hongyu Chen, Li Jiang
Abstract:
Ubiquitous anomalies constantly endanger the security of our systems. They may bring irreversible damage to the system and cause leakage of privacy. Thus, it is of vital importance to promptly detect these anomalies. Traditional supervised methods such as Decision Trees and Support Vector Machines (SVM) are used to classify normality and abnormality. However, in some cases, abnormal statuses are far rarer than normal ones, which leads to decision bias in these methods. The generative adversarial network (GAN) has been proposed to handle this case. With its strong generative ability, it only needs to learn the distribution of normal statuses, and it identifies abnormal statuses through their gap from the learned distribution. Nevertheless, existing GAN-based models are not suitable for processing data with discrete values, leading to immense degradation of detection performance. To cope with discrete features, in this paper we propose an efficient GAN-based model with a specifically designed loss function. Experiment results show that our model outperforms state-of-the-art models on a discrete dataset and remarkably reduces the overhead.
Keywords: GAN, discrete feature, Wasserstein distance, multiple intermediate layers
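The abstract names the Wasserstein distance but does not spell out the paper's loss; as a hedged illustration, the sketch below shows the generic WGAN-style objectives this family of detectors builds on, with the critic assumed to be any callable PyTorch module.

```python
import torch

def critic_loss(critic, real, fake):
    # approximate the Wasserstein distance: maximize E[f(real)] - E[f(fake)],
    # written here as a quantity to minimize
    return -(critic(real).mean() - critic(fake).mean())

def generator_loss(critic, fake):
    # push generated samples toward the learned normal-data distribution
    return -critic(fake).mean()

def anomaly_score(critic, x):
    # at detection time, a low critic score signals a large gap from the
    # learned distribution of normal statuses, i.e. a likely anomaly
    return -critic(x)
```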
Procedia PDF Downloads 129
687 Student Loan Debt among Students with Disabilities
Authors: Kaycee Bills
Abstract:
This study determines whether students with disabilities have higher student loan debt payments than other student populations. The hypothesis was that students with disabilities would have significantly higher student loan debt payments than other students due to the length of time they spend in school. Using the Baccalaureate and Beyond Study Wave 2015/017 dataset, quantitative methods were employed. These data analysis methods included linear regression and a correlation matrix. Due to the exploratory nature of the study, the significance levels for the overall model and each variable were set at .05. The correlation matrix demonstrated that students with certain types of disabilities are more likely to fall into higher student loan payment brackets than students without disabilities. These results also varied among the different types of disabilities. The overall linear regression model was statistically significant (p = .04). Despite this, the majority of the significance values for the different types of disabilities were not significant. However, several other variables had statistically significant results, such as veterans, people of minority races, and people who attended private schools. Implications for how this impacts the economy, capitalism, and the financial wellbeing of various students are discussed.
Keywords: disability, student loan debt, higher education, social work
Procedia PDF Downloads 170
686 Evaluation of Random Forest and Support Vector Machine Classification Performance for the Prediction of Early Multiple Sclerosis from Resting State FMRI Connectivity Data
Authors: V. Saccà, A. Sarica, F. Novellino, S. Barone, T. Tallarico, E. Filippelli, A. Granata, P. Valentino, A. Quattrone
Abstract:
The aim of this work was to evaluate how well the Random Forest (RF) and Support Vector Machine (SVM) algorithms could support the early diagnosis of Multiple Sclerosis (MS) from resting-state functional connectivity data. In particular, we wanted to explore their ability to distinguish between controls and patients using the mean signals extracted from ICA components corresponding to 15 well-known networks. Eighteen patients with early MS (mean age 37.42±8.11, 9 females) were recruited according to McDonald and Polman, and matched for demographic variables with 19 healthy controls (mean age 37.55±14.76, 10 females). MRI was acquired on a 3T scanner with an 8-channel head coil: (a) whole-brain T1-weighted; (b) conventional T2-weighted; (c) resting-state functional MRI (rsFMRI), 200 volumes. Estimated total lesion load (ml) and the number of lesions were calculated using the LST toolbox from the corrected T1 and FLAIR. All rsFMRIs were pre-processed using tools from the FMRIB Software Library as follows: (1) discarding of the first 5 volumes to remove T1 equilibrium effects, (2) skull-stripping of images, (3) motion and slice-time correction, (4) denoising with a high-pass temporal filter (128s), (5) spatial smoothing with a Gaussian kernel of FWHM 8mm. No statistically significant differences (t-test, p < 0.05) were found between the two groups in the mean Euclidean distance and the mean Euler angle. WM and CSF signals, together with 6 motion parameters, were regressed out from the time series. We applied an independent component analysis (ICA) with the GIFT toolbox using the Infomax approach with number of components = 21. Fifteen mean components were visually identified by two experts. The resulting z-score maps were thresholded and binarized to extract the mean signal of the 15 networks for each subject. Statistical and machine learning analyses were then conducted on this dataset, composed of 37 rows (subjects) and 15 features (mean signal in the network), with the R language. The dataset was randomly split into training (75%) and test sets, and two different classifiers were trained: RF and RBF-SVM. We used the intrinsic feature selection of RF, based on the Gini index, and recursive feature elimination (RFE) for the SVM, to obtain a ranking of the most predictive variables. We then built two new classifiers using only the most important features and evaluated the accuracies (with and without feature selection) on the test set. The classifiers trained on all the features showed very poor accuracies on the training (RF: 58.62%, SVM: 65.52%) and test sets (RF: 62.5%, SVM: 50%). Interestingly, when feature selection by RF and RFE-SVM was performed, the most important variable was the sensori-motor network I in both cases. Indeed, with only this network, the RF and SVM classifiers reached an accuracy of 87.5% on the test set. More interestingly, the only misclassified patient turned out to have the lowest lesion volume. We showed that, with two different classification algorithms and feature selection approaches, the best discriminant network between controls and early MS was the sensori-motor I. Similar importance values were obtained for the sensori-motor II, cerebellum and working memory networks. These findings, in accordance with the early manifestation of motor/sensory deficits in MS, could represent an encouraging step toward translation to clinical diagnosis and prognosis.
Keywords: feature selection, machine learning, multiple sclerosis, random forest, support vector machine
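The study's analysis was done in R; the sketch below reproduces the same pipeline idea — a Gini-based RF ranking and RFE for the SVM, then retraining on the top feature — in Python/scikit-learn, with synthetic stand-ins for the 37×15 dataset. Note that RFE needs a linear-kernel SVM to expose coefficients, whereas the paper classifies with an RBF-SVM.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# X: 37 subjects x 15 network mean signals, y: 0=control, 1=early MS (toy stand-ins)
rng = np.random.default_rng(0)
X, y = rng.normal(size=(37, 15)), rng.integers(0, 2, 37)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, train_size=0.75, random_state=0)

rf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
rf_rank = np.argsort(rf.feature_importances_)[::-1]   # Gini-based ranking

# RFE requires coefficients, so a linear kernel is used for the ranking step
rfe = RFE(SVC(kernel="linear"), n_features_to_select=1).fit(X_tr, y_tr)
svm_rank = np.argsort(rfe.ranking_)

best = rf_rank[0]                                      # retrain on top feature only
clf = SVC(kernel="rbf").fit(X_tr[:, [best]], y_tr)
print("top feature:", best, "test accuracy:", clf.score(X_te[:, [best]], y_te))
```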
Procedia PDF Downloads 241
685 A Monocular Measurement for 3D Objects Based on Distance Area Number and New Minimize Projection Error Optimization Algorithms
Authors: Feixiang Zhao, Shuangcheng Jia, Qian Li
Abstract:
High-precision measurement of a target's position and size is one of the hotspots in the field of vision inspection. This paper proposes a three-dimensional object positioning and measurement method using a monocular camera and GPS, namely Distance Area Number-New Minimize Projection Error (DAN-NMPE). Our algorithm contains two parts: DAN, a picture-sequence algorithm, and NMPE, a projection-error optimization algorithm; together they greatly improve the measurement accuracy of the target's position and size. Comprehensive experiments validate the effectiveness of our proposed method on a self-made traffic sign dataset. The results show that, with a laser point cloud as the ground truth, the size and position errors of the traffic signs measured by this method are ±5% and 0.48±0.3m, respectively. In addition, we compared it with the current mainstream method, which uses a monocular camera to locate and measure traffic signs. DAN-NMPE attains significant improvements over existing state-of-the-art methods, improving the measurement accuracy of size and position by 50% and 15.8%, respectively.
Keywords: monocular camera, GPS, positioning, measurement
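NMPE's exact formulation is not given in the abstract; the sketch below shows the generic reprojection-error quantity that "minimize projection error" methods optimize over the camera pose, with assumed pinhole intrinsics and toy points.

```python
import numpy as np

def reprojection_error(points_3d, pixels, K, R, t):
    """Mean reprojection error of 3D points under a pinhole camera model.
    Minimizing this over (R, t) is the generic idea behind projection-error
    optimization; the paper's NMPE specifics are not in the abstract."""
    P = R @ points_3d.T + t.reshape(3, 1)   # world -> camera frame
    uv = K @ P                              # project
    uv = (uv[:2] / uv[2]).T                 # normalize to pixel coordinates
    return np.linalg.norm(uv - pixels, axis=1).mean()

K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])  # assumed intrinsics
pts = np.array([[0.0, 0, 5], [0.5, 0, 5], [0, 0.5, 5]])
obs = np.array([[320.0, 240], [400, 240], [320, 320]])
print(reprojection_error(pts, obs, K, np.eye(3), np.zeros(3)))  # 0.0 for exact pose
```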
Procedia PDF Downloads 144
684 Impact of Infrastructural Development on Socio-Economic Growth: An Empirical Investigation in India
Authors: Jonardan Koner
Abstract:
The study attempts to find out the impact of infrastructural investment on state economic growth in India. It further tries to determine the magnitude of the impact of infrastructural investment on an economic indicator, per-capita income (PCI), in Indian states. The study uses the panel regression technique to measure this impact, since panel regression incorporates both the cross-section and time-series aspects of the dataset. In order to analyze the difference in the impact of the explanatory variables on the explained variable across states, the study uses a Fixed Effect Panel Regression Model. We analyze time-series data (annual frequency) ranging from 1991 to 2010. The conclusions of the study are that infrastructural investment has a desirable impact on economic development, that the impact differs across states in India, and that infrastructural investment significantly explains the variation in the economic indicator.
Keywords: infrastructural investment, multiple regression, panel regression techniques, economic development, fixed effect dummy variable model
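A minimal sketch of the fixed-effect dummy-variable (LSDV) form that the keywords name, using statsmodels on a synthetic state-year panel; variable names and values are illustrative assumptions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# toy stand-in for the study's 1991-2010 state panel (values are synthetic)
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "state": np.repeat(["A", "B", "C"], 20),
    "year": np.tile(range(1991, 2011), 3),
    "infra": rng.gamma(2.0, 10.0, 60),
})
df["pci"] = 5 + 0.8 * df["infra"] + rng.normal(0, 2, 60)

# state dummies absorb time-invariant state heterogeneity (fixed effects)
fe = smf.ols("pci ~ infra + C(state)", data=df).fit()
print(fe.params["infra"])  # estimated marginal effect of infrastructure on PCI
```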
Procedia PDF Downloads 373
683 The Role of Social Networks in Promoting Ethics in Iranian Sports
Authors: Tayebeh Jameh-Bozorgi, M. Soleymani
Abstract:
In this research, the role of social networks in promoting ethics in Iranian sports was investigated. The research adopted a descriptive-analytic method, and the survey population consisted of all the athletes invited to the national football, volleyball, wrestling and taekwondo teams. Given the limited population, the entire population was taken as the sample. After the distribution of the questionnaires, 167 respondents answered the questionnaires correctly. The data collection tool was Hamid Ghasemi's standard questionnaire for social networking and mass media, which has 28 questions. Reliability of the questionnaire was calculated using Cronbach's alpha coefficient (94%), and the content validity of the questionnaire was approved by the professors. In this study, descriptive statistics and inferential statistical methods were used to analyze the data using statistical software. The tests used in this research included the binomial test, the Friedman test, Spearman's correlation coefficient, Cramér's V, goodness-of-fit tests and comparative tests. The results showed that athletes believe social networks have a significant role in promoting sport ethics in the community, with Telegram known to play a bigger role than other social networks. Moreover, the respondents' view of the role of social networks in promoting sport ethics differed significantly between men and women: women had a more positive attitude towards this role than men. The respondents' views also differed significantly across the study groups. Additionally, there was a significant inverse relationship between sports experience and the attitude of national athletes regarding the role of social networks in promoting ethics in sports.
Keywords: ethics, social networks, mass media, Iranian sports, internet
Procedia PDF Downloads 289
682 Machine Learning Assisted Prediction of Sintered Density of Binary W(MO) Alloys
Authors: Hexiong Liu
Abstract:
Powder metallurgy is the optimal method for the consolidation and preparation of W(Mo) alloys, which exhibit excellent application prospects at high temperatures. The properties of W(Mo) alloys are closely related to the sintered density; however, controlling the sintered density and porosity of these alloys is still challenging. In the past, regulation methods mainly relied on time-consuming and costly trial-and-error experiments. In this study, the sintering data for more than a dozen W(Mo) alloys constituted a small-scale dataset covering both solid-phase and liquid-phase sintering. Simple descriptors were then used to predict the sintered density of W(Mo) alloys based on a descriptor selection strategy and machine learning (ML) methods, where the ML algorithms included least absolute shrinkage and selection operator (Lasso) regression, k-nearest neighbors (k-NN), random forest (RF), and multi-layer perceptron (MLP). The results showed that the interpretable descriptors extracted by our proposed selection strategy, combined with the MLP neural network, achieved a high prediction accuracy (R > 0.950). By further predicting the sintered density of W(Mo) alloys produced with different sintering processes, the error between the predicted and experimental values was kept below 0.063, confirming the application potential of the model.
Keywords: sintered density, machine learning, interpretable descriptors, W(Mo) alloy
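A compact way to compare the four named regressors on a small tabular dataset is cross-validation; the sketch below does this in scikit-learn with synthetic stand-ins for the sintering descriptors (feature count, target relation, and hyperparameters are assumptions).

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.neighbors import KNeighborsRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import cross_val_score

# synthetic stand-in for the small sintering dataset (descriptors -> density)
rng = np.random.default_rng(0)
X = rng.uniform(size=(60, 5))            # e.g. temperature, time, Mo fraction, ...
y = 0.9 * X[:, 0] + 0.1 * rng.normal(size=60)

models = {
    "Lasso": Lasso(alpha=0.01),
    "k-NN": KNeighborsRegressor(n_neighbors=3),
    "RF": RandomForestRegressor(random_state=0),
    "MLP": MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=5000, random_state=0),
}
for name, model in models.items():
    r2 = cross_val_score(model, X, y, cv=5, scoring="r2").mean()
    print(f"{name}: mean CV R^2 = {r2:.3f}")
```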
Procedia PDF Downloads 82
681 Modified Form of Margin Based Angular Softmax Loss for Speaker Verification
Authors: Jamshaid ul Rahman, Akhter Ali, Adnan Manzoor
Abstract:
Learning-based systems have received increasing interest in recent years; recognition structures, including end-to-end speaker recognition, are one of the hot topics in this area. A well-known work on end-to-end speaker verification using the Angular Softmax Loss gained significant importance and is considered useful for directly training a discriminative model instead of the traditionally adopted i-vector approach. The margin-based strategy in angular softmax is beneficial for learning discriminative speaker embeddings, but the ad hoc selection of margin values is a big issue in both the additive angular margin and the multiplicative angular margin. As a better solution to this matter, we present an alternative approach by introducing a similar form of an additive parameter originally introduced for face recognition; it has the capacity to adjust automatically to the corresponding margin values and is applicable to learning more discriminative features than the Softmax. Experiments are conducted on part of the Fisher dataset, where it is observed that using the additive parameter with angular softmax to train the front-end, with probabilistic linear discriminant analysis (PLDA) in the back-end, boosts the performance of the structure.
Keywords: additive parameter, angular softmax, speaker verification, PLDA
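The additive parameter the authors borrow from face recognition is not written out in the abstract; for orientation, the fixed additive-margin (AM-Softmax/CosFace-style) form of the loss it builds on is, for embedding scale s and margin m:

```latex
\mathcal{L}_{\mathrm{AM}} \;=\; -\frac{1}{N}\sum_{i=1}^{N}
\log \frac{e^{\,s\left(\cos\theta_{y_i} - m\right)}}
{e^{\,s\left(\cos\theta_{y_i} - m\right)} + \sum_{j \neq y_i} e^{\,s\cos\theta_j}}
```

The paper's contribution, as described, replaces the hand-picked constant m with a parameter that adjusts automatically during training.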
Procedia PDF Downloads 104
680 Informal Governance as Response to Institutional Paralysis
Authors: Stefanie Kasparek
Abstract:
The United Nations Security Council (UNSC) is probably the most recognized international security organization. It is also profoundly misunderstood and undervalued in its effort to promote peace and security. With the rising involvement of non-state actors and changes in the way states fight wars, international governance has become increasingly complex. However, the formal UNSC agenda has long remained static, reflecting states' unwillingness to entertain more conflicts. Nevertheless, resolutions remain the scholarly measure of states' interests and policies, neglecting the significant share of issues the Council entertains informally. This project builds on a rational institutionalism framework. It provides a systematic analysis of how, and under what conditions, states use informal governance instead of, or in combination with, formal rules at the agenda-setting stage of the policy process. Data for this project come from elite interviews and a newly created dataset on governance choices. The results show that, contrary to existing arguments, weaker states successfully circumvent formal institutional roadblocks and use informal governance mechanisms to pursue vital interests, thereby countering the institutional restrictions and power asymmetries present in formal governance settings.
Keywords: agenda-setting, decision-making, international governance, UNSC
Procedia PDF Downloads 200
679 Ethics Can Enable Open Source Data Research
Authors: Dragana Calic
Abstract:
The openness, availability and sheer volume of big data have provided what some regard as an invaluable and rich dataset. Researchers, businesses, advertising agencies and medical institutions, to name only a few, collect, share, and analyze this data to enable their processes and decision making. However, there are important ethical considerations associated with the use of big data. The rapidly evolving nature of online technologies has overtaken the many legislative, privacy, and ethical frameworks and principles that exist. For example, should we obtain consent to use people's online data, and under what circumstances can privacy considerations be overridden? Current guidance on how to appropriately and ethically handle big data is inconsistent. Consequently, this paper focuses on two quite distinct but related ethical considerations that are at the core of the use of big data for research purposes: empowering the producers of data and empowering researchers who want to study big data. The first consideration focuses on informed consent, which is at the core of empowering the producers of data. In this paper, we discuss some of the complexities associated with informed consent and consider studies of producers' perceptions to inform research ethics guidelines and practice. The second consideration focuses on the researcher: similarly, we explore studies that focus on researchers' perceptions and experiences.
Keywords: big data, ethics, producers' perceptions, researchers' perceptions
Procedia PDF Downloads 286
678 Good Banks, Bad Banks, and Public Scrutiny: The Determinants of Corporate Social Responsibility in Times of Financial Volatility
Authors: A. W. Chalmers, O. M. van den Broek
Abstract:
This article examines the relationship between the global financial crisis and corporate social responsibility activities of financial services firms. It challenges the general consensus in existing studies that firms, when faced with economic hardship, tend to jettison CSR commitments. Instead, and building on recent insights into the institutional determinants of CSR, it is argued that firms are constrained in their ability to abandon CSR by the extent to which they are subject to intense public scrutiny by regulators and the news media. This argument is tested in the context of the European sovereign debt crisis drawing on a unique dataset of 170 firms in 15 different countries over a six-year period. Controlling for a battery of alternative explanations and comparing financial service providers to firms operating in other economic sectors, results indicate considerable evidence supporting the main argument. Rather than abandoning CSR during times of economic hardship, financial industry firms ramp up their CSR commitments in order to manage their public image and foster public trust in light of intense public scrutiny.
Keywords: corporate social responsibility (CSR), public scrutiny, global financial crisis, financial services firms
Procedia PDF Downloads 307
677 Biases in Macroprudential Supervision and Their Legal Implications
Authors: Anat Keller
Abstract:
Given that macroprudential supervision is a relatively new policy area and its empirical and analytical research is still in its infancy, its theoretical foundations are also lagging behind. This paper contributes to the developing discussion on effective legal and institutional macroprudential supervision frameworks. In the first part of the paper, it is argued that effectiveness as a key benchmark poses some challenges in the context of macroprudential supervision, such as the difficulty in proving causality between supervisory actions and the achievement of the supervisor's mission. The paper suggests that effectiveness in the macroprudential context should therefore be assessed at the supervisory decision-making process (to be differentiated from the supervisory outcomes). The second part of the essay examines whether insights from behavioural economics can point to biases in the macroprudential decision-making process. These biases include, inter alia, preference bias, groupthink bias and inaction bias. It is argued that these biases are exacerbated in the multilateral setting of the macroprudential supervision framework in the EU. The paper then examines how legal and institutional frameworks should be designed to acknowledge and perhaps contain these identified biases. The paper suggests that the effectiveness of macroprudential policy will largely depend on the existence of clear and robust transparency and accountability arrangements. Accountability arrangements can be used as a vehicle for identifying and addressing potential biases in the macroprudential framework, in particular inaction bias. Inclusiveness of the public in the supervisory process, in the form of transparency and awareness of the logic behind policy decisions, may assist in minimising their potential unpopularity, thus promoting their effectiveness. Furthermore, a governance structure which facilitates coordination of the macroprudential supervisor with other policymakers and incorporates outside perspectives and opinions could break down groupthink bias as well as inaction bias.
Keywords: behavioural economics and biases, effectiveness of macroprudential supervision, legal and institutional macroprudential frameworks, macroprudential decision-making process
Procedia PDF Downloads 282
676 Identification of Thermally Critical Zones Based on Inter Seasonal Variation in Temperature
Authors: Sakti Mandal
Abstract:
The varying distribution of land surface temperature in an urbanized environment is a globally addressed phenomenon. It has usually been noticed that the criticality of surface temperature increases from the periphery to the urban centre. As the centre experiences the maximum severity of heat throughout the year, it also represents the most critical zone in terms of thermal condition. In this study, an attempt has been made to propose a quantitative approach to thermal critical zonation (TCZ) on the basis of seasonal temperature variation, where the zonation is done by calculating a thermal critical value (TCV). From Landsat 8 thermal digital data of the summer and winter seasons for the year 2014, land surface temperature maps and a thermal critical zonation have been prepared, and the corresponding dataset has been computed to conduct the overall study of the area. It is shown that TCZ can be clearly identified and analyzed with the help of the inter-seasonal temperature range. The results of this study can be utilized effectively in future urban development and planning projects, as well as a framework for implementing rules and regulations by the authorities for sustainable urban development through an environmentally friendly approach.
Keywords: thermal critical values (TCV), thermally critical zonation (TCZ), land surface temperature (LST), Landsat 8, Kolkata Municipal Corporation (KMC)
Procedia PDF Downloads 197
675 An Empirical Study to Predict Myocardial Infarction Using K-Means and Hierarchical Clustering
Authors: Md. Minhazul Islam, Shah Ashisul Abed Nipun, Majharul Islam, Md. Abdur Rakib Rahat, Jonayet Miah, Salsavil Kayyum, Anwar Shadaab, Faiz Al Faisal
Abstract:
The target of this research is to predict myocardial infarction using unsupervised machine learning algorithms. Myocardial infarction prediction, related to heart disease, is a challenging problem faced by doctors and hospitals, and the accuracy of prediction plays a vital role. With this concern, the authors analyzed a myocardial dataset to predict myocardial infarction using the popular machine learning algorithms K-means and hierarchical clustering. This research includes the collection of data and its classification using machine learning algorithms. The authors collected 345 instances along with 26 attributes from different hospitals in Bangladesh; the data were collected from patients suffering from myocardial infarction along with other symptoms. This model would be able to find and mine hidden facts from historical myocardial infarction cases. The aim of this study is to analyze the accuracy level of predicting myocardial infarction using machine learning techniques.
Keywords: Machine Learning, K-means, Hierarchical Clustering, Myocardial Infarction, Heart Disease
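A minimal scikit-learn sketch of the two clustering algorithms on data shaped like the study's (345 instances × 26 attributes); the values are synthetic, and mapping clusters to MI / non-MI by majority label is an assumption about how such unsupervised outputs are evaluated.

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.preprocessing import StandardScaler

# synthetic stand-in for the 345-instance, 26-attribute clinical dataset
rng = np.random.default_rng(0)
X = StandardScaler().fit_transform(rng.normal(size=(345, 26)))

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
agglo = AgglomerativeClustering(n_clusters=2).fit(X)   # hierarchical clustering

# with unsupervised methods, clusters are typically mapped to MI / non-MI
# afterwards, e.g. by the majority diagnosis within each cluster
print(np.bincount(kmeans.labels_), np.bincount(agglo.labels_))
```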
Procedia PDF Downloads 204
674 Understanding the Influence on Drivers’ Recommendation and Review-Writing Behavior in the P2P Taxi Service
Authors: Liwen Hou
Abstract:
The booming mobile business has been penetrating the taxi industry worldwide, with P2P (peer-to-peer) taxi services, as an emerging business model, transforming the industry. As in other mobile businesses, member recommendations and online reviews are believed to be very effective for acquiring new users of P2P taxi services. Based on an empirical dataset from the taxi industry in China, this study aims to reveal which factors influence users' recommendation and review-writing behaviors. Unlike the existing literature, this paper takes the taxi driver's perspective into consideration and hence selects a group of variables related to the drivers. We built two models to reflect the factors that influence the number of recommendations and reviews posted on the platform (i.e., the app). Our models show that all factors except the driver's score significantly influence recommendation behavior, while only one factor, passengers' bad reviews, is insignificant in generating more driver reviews. In the conclusion, we summarize the findings and limitations of the research.
Keywords: online recommendation, P2P taxi service, review-writing, word of mouth
Procedia PDF Downloads 307
673 Assessment of Heavy Metal Contamination for the Sustainable Management of Vulnerable Mangrove Ecosystem, the Sundarbans
Authors: S. Begum, T. Biswas, M. A. Islam
Abstract:
The present research investigates the distribution and contamination of heavy metals in core sediments collected from three locations in the Sundarbans mangrove forest. The quality of the analysis was evaluated by analyzing the certified reference materials IAEA-SL-1 (lake sediment), IAEA-Soil-7, and NIST-1633b (coal fly ash). Total concentrations of 28 heavy metals (Na, Al, K, Ca, Sc, Ti, V, Cr, Mn, Fe, Co, Zn, Ga, As, Sb, Cs, La, Ce, Sm, Eu, Tb, Dy, Ho, Yb, Hf, Ta, Th, and U) were determined in core sediments of the Sundarbans mangrove by the neutron activation analysis (NAA) technique. When compared with upper continental crust (UCC) values, the mean concentrations of K, Ti, Zn, Cs, La, Ce, Sm, Hf, and Th in the research area are elevated. The assessment of metal contamination levels using different environmental contamination indices (EF, Igeo, CF) indicates that Ti, Sb, Cs, the REEs, and Th show minor enrichment in the sediments of the Sundarbans. The modified degree of contamination (mCd) of the studied samples indicates low contamination, while the pollution load index (PLI) values for the cores suggest that the sampling points are moderately polluted. The possible sources of the deterioration of sediment quality can be attributed to accidents involving chemical-carrying cargo, port activities, ship breaking, and the agricultural and aquaculture run-off of the area. A Pearson correlation matrix (PCM) established relationships among elements, indicating that the distributions of most metals are controlled by the same factors, such as Fe-oxy-hydroxides and clay minerals, and that they have a similar origin. The poor correlations of Ca with most of the elements in the sediment cores indicate that calcium carbonate plays a less significant role in this mangrove sediment. Finally, the data from this research will be used as a benchmark for future research and will help to quantify levels of metal pollution, as well as to manage future ecological risks of the vulnerable mangrove ecosystem, the Sundarbans.
Keywords: contamination, core sediment, trace element, sundarbans, vulnerable
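The contamination indices named in the abstract (EF, Igeo, CF, PLI, mCd) have conventional formulas; the sketch below computes them under the assumption of user-supplied background (crustal) values, which the abstract does not list.

```python
import numpy as np

def contamination_indices(c_sample, c_background, c_al_sample=None, c_al_crust=None):
    """Conventional sediment-contamination indices:
    CF   = C_sample / C_background
    Igeo = log2(C_sample / (1.5 * C_background))
    EF   = (C/Al)_sample / (C/Al)_crust   (Al as the normalizing element)
    """
    cf = c_sample / c_background
    igeo = np.log2(c_sample / (1.5 * c_background))
    ef = None
    if c_al_sample is not None and c_al_crust is not None:
        ef = (c_sample / c_al_sample) / (c_background / c_al_crust)
    return cf, igeo, ef

def pollution_load_index(cfs):
    # PLI = n-th root of the product of the contamination factors
    cfs = np.asarray(cfs, dtype=float)
    return cfs.prod() ** (1.0 / len(cfs))

def modified_degree_of_contamination(cfs):
    # mCd = mean of the contamination factors across the analyzed metals
    return float(np.mean(cfs))
```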
Procedia PDF Downloads 123
672 Challenges to Developing a Trans-European Programme for Health Professionals to Recognize and Respond to Survivors of Domestic Violence and Abuse
Authors: June Keeling, Christina Athanasiades, Vaiva Hendrixson, Delyth Wyndham
Abstract:
Recognition and education in violence, abuse, and neglect for medical and healthcare practitioners (REVAMP) is a trans-European project aiming to introduce a training programme that has been specifically developed by partners across seven European countries to meet the needs of medical and healthcare practitioners. Amalgamating the knowledge and experience of clinicians, researchers, and educators from interdisciplinary and multi-professional backgrounds, REVAMP has tackled the under-resourced and underdeveloped area of domestic violence and abuse. The team designed an online training programme to support medical and healthcare practitioners to recognise and respond appropriately to survivors of domestic violence and abuse at their point of contact with a health provider. The REVAMP partner countries are France, Lithuania, Germany, Greece, Iceland, Norway, and the UK. The training is delivered through a series of interactive online modules, adapting evidence-based pedagogical approaches to learning. Capturing and addressing the complexities of the project impacted the methodological decisions and approaches to evaluation; the challenge was to find an evaluation methodology that captured valid data across all partner languages to demonstrate the extent of the change in knowledge and understanding. Co-development by all team members was a lengthy iterative process, challenged by a lack of consistency in terminology. A mixed-methods approach enabled both qualitative and quantitative data to be collected at the start, during, and at the conclusion of the training for the purposes of evaluation. The module content and evaluation instrument were accessible in each partner country's language. Collecting both types of data provided a high-level snapshot of attainment via the quantitative dataset and an in-depth understanding of the impact of the training from the qualitative dataset. The analysis was mixed methods, with integration at multiple interfaces, and its primary focus was to support the overall project evaluation for the funding agency. A key project outcome was identifying that the trans-European approach posed several challenges. Firstly, the project partners did not share a first language or a legal or professional approach to domestic abuse and neglect; this was negotiated through complex, systematic, and iterative interaction between team members so that consensus could be achieved. Secondly, the context of the data collection in several different cultural, educational, and healthcare systems across Europe challenged the development of a robust evaluation. The participants in the pilot evaluation shared that the training was contemporary, well-designed, and of great relevance to inform practice. Initial results from the evaluation indicated that the participants were drawn from more than eight countries due to the online nature of the training. The primary results indicated a high level of engagement with the content and achievement through the online assessment. The main finding was that the participants perceived the impact of domestic abuse and neglect in very different ways in their individual professional contexts. Most significantly, the participants recognised the need for the training and the gap that existed previously. Notably, a mixed-methods evaluation of a trans-European project is unusual at this scale.
Keywords: domestic violence, e-learning, health professionals, trans-European
Procedia PDF Downloads 85
671 Designing Urban Spaces Differently: A Case Study of the Hercity Herstreets Public Space Improvement Initiative in Nairobi, Kenya
Authors: Rehema Kabare
Abstract:
As urban development initiatives continue to emerge and are implemented amid rapid urbanization and climate-change effects in the global south, the plight of women is only now being noticed. The pandemic exposed the atrocities, violence and unsafety women and girls face daily, both in their homes and in public urban spaces. This is a result of poorly implemented and managed urban structures, which women have been left out of designing and implementing for centuries. The UN-Habitat HerCity toolkit provides a unique opportunity for both governments and civil society actors to change course, onboarding women and girls onto urban development initiatives with their designs and ideas as the focal point. This toolkit proves that when women and girls design, they design for everyone. The HerCity HerStreets Public Space Improvement Initiative resulted in a design focused on two aspects: streets are a shared resource, and streets are public spaces. These two concepts illustrate that for streets to be experienced effectively as cultural spaces, they need to be user-friendly, safe and inclusive. This report demonstrates how HerCity HerStreets, as a pilot project, can be a benchmark for designing urban spaces in African cities. The project focused on five dimensions: improving the air quality of the space, the space allocated to street vending, bodaboda (passenger motorcycle) stops and parking, and the green coverage. The process displays how digital tools such as Minecraft and Kobo Toolbox can be utilized to improve citizens' participation in the development of public spaces, with a special focus on including vulnerable groups such as women, girls and youth.
Keywords: urban space, sustainable development, gender and the city, digital tools and urban development
Procedia PDF Downloads 84
670 Deepnic, A Method to Transform Each Variable into Image for Deep Learning
Authors: Nguyen J. M., Lucas G., Brunner M., Ruan S., Antonioli D.
Abstract:
Deep learning based on convolutional neural networks (CNN) is a very powerful technique for classifying information from an image. We propose a new method, DeepNic, to transform each variable of a tabular dataset into an image in which each pixel represents a set of conditions that allow the variable to make an error-free prediction. The contrast of each pixel is proportional to its prediction performance, and the color of each pixel corresponds to a sub-family of NICs. NICs are probabilities that depend on the number of inputs to each neuron and on the range of coefficients of the inputs. Each variable can therefore be expressed as a function of a matrix of two vectors corresponding to an image whose pixels express predictive capabilities. Our objective is to transform each variable of tabular data into an image that can be analysed by CNNs, unlike other methods, which use all the variables to construct an image. We analyse the NIC information of each variable and express it as a function of the number of neurons and the range of coefficients used. The predictive value and the category of the NIC are expressed by the contrast and the color of the pixel. We have developed a pipeline to implement this technology and have successfully applied it to genomic expressions on an Affymetrix chip.
Keywords: tabular data, deep learning, perfect trees, NICS
Procedia PDF Downloads 91
669 Online Yoga Asana Trainer Using Deep Learning
Authors: Venkata Narayana Chejarla, Nafisa Parvez Shaik, Gopi Vara Prasad Marabathula, Deva Kumar Bejjam
Abstract:
Yoga is an advanced, well-recognized practice with roots in Indian philosophy. Yoga benefits both the body and the psyche: it is a regular exercise that helps people relax and sleep better while also enhancing their balance, endurance, and concentration. Yoga can be learned in a variety of settings, including at home with the aid of books and the internet, as well as in yoga studios under the guidance of an instructor. Self-learning does not teach the proper yoga poses, and performing them without the right instruction could result in significant injuries. We developed 'Online Yoga Asana Trainer using Deep Learning' so that people can practice yoga without a teacher. Our project is developed using TensorFlow, MoveNet, and Keras models. The system makes use of data from Kaggle that includes 25 different yoga poses. The first part of the process applies the MoveNet model to extract the 17 key points of the body from the dataset; the next part involves preprocessing, which includes building a pose classification model using neural networks. The system scores a 98.3% accuracy rate and is developed to work with live videos.
Keywords: yoga, deep learning, movenet, tensorflow, keras, CNN
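The abstract names MoveNet for the 17-keypoint extraction step; below is a minimal sketch of that step using the published TensorFlow Hub model (the Hub URL and 192×192 int32 input follow the "lightning" single-pose variant; the pose-classification stage is omitted).

```python
import tensorflow as tf
import tensorflow_hub as hub

# load the single-pose MoveNet "lightning" model from TensorFlow Hub
model = hub.load("https://tfhub.dev/google/movenet/singlepose/lightning/4")
movenet = model.signatures["serving_default"]

# stand-in for a 192x192 RGB frame; this MoveNet variant expects int32 input
frame = tf.zeros((1, 192, 192, 3), dtype=tf.int32)
outputs = movenet(frame)

# shape (1, 1, 17, 3): 17 body keypoints, each as (y, x, confidence)
keypoints = outputs["output_0"]
print(keypoints.shape)
```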
Procedia PDF Downloads 241
668 Evaluating Contextually Targeted Advertising with Attention Measurement
Authors: John Hawkins, Graham Burton
Abstract:
Contextual targeting is a common advertising strategy that places marketing messages in media locations expected to be aligned with the target audience. There are multiple major challenges to contextual targeting: the ideal categorisation scheme needs to be known, as well as the most appropriate subsections of that scheme for a given campaign or creative. In addition, campaign reach is typically limited when targeting becomes narrow, so a balance must be struck between requirements. Finally, refinement of the process is limited by the use of evaluation methods that are either rapid but non-specific (click-through rates), or reliable but slow and costly (conversions or brand recall studies). In this study, we evaluate the use of attention measurement as a technique for understanding the performance of targeting on the basis of specific contextual topics. We perform the analysis using a large-scale dataset of impressions categorised using the IAB V2.0 taxonomy. We evaluate multiple levels of the categorisation hierarchy, using categories at different positions within an initial creative-specific ranking. The results illustrate that measured attention time is an effective signal for the performance of a specific creative within a specific context, and that performance is sustained across a ranking of categories from one period to another.
Keywords: contextual targeting, digital advertising, attention measurement, marketing performance
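A minimal sketch of the core analysis the abstract describes — ranking contextual categories by mean attention time per creative; column names and values are illustrative assumptions, not the study's data.

```python
import pandas as pd

# toy impression log: one row per ad impression (assumed column names)
df = pd.DataFrame({
    "creative": ["A", "A", "A", "B", "B", "B"],
    "iab_category": ["Sports", "Finance", "Sports", "Finance", "Travel", "Travel"],
    "attention_ms": [1200, 400, 900, 1500, 300, 450],
})

# rank contextual categories per creative by mean measured attention time
ranking = (df.groupby(["creative", "iab_category"])["attention_ms"]
             .mean()
             .sort_values(ascending=False))
print(ranking)
```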
Procedia PDF Downloads 105
667 Belief-Based Games: An Appropriate Tool for Uncertain Strategic Situation
Authors: Saied Farham-Nia, Alireza Ghaffari-Hadigheh
Abstract:
Game theory is a mathematical tool to study the behaviors of rational and strategic decision-makers; it analyzes the equilibria existing in situations of conflicting interests and provides appropriate mechanisms for cooperation between two or more players. Game theory is applicable to any strategic, interest-conflict situation in politics, management, economics, sociology, etc. Real-world decisions are usually made in a state of indeterminacy, and the players often lack information about the other players' payoffs, or even their own, which leads to games in uncertain environments. When historical data for estimating the distribution of decision parameters are unavailable, we may have no choice but to use expert belief degrees, which represent the strength with which we believe an event will happen. To deal with belief degrees, we use uncertainty theory, which was introduced and developed by Liu based on the normality, duality, subadditivity and product axioms for modeling personal belief degrees. As we know, a personal belief degree heavily depends on personal knowledge concerning the event; when personal knowledge changes, the belief degree changes too. Uncertainty theory is not only theoretically self-consistent but also the best among other theories for modeling belief degrees in practical problems. In this work, we first reintroduce the expected utility function in an uncertain environment according to the axioms of uncertainty theory in order to extract payoffs. Then, we employ Nash equilibrium to investigate the solutions. For more practical issues, the Stackelberg leader-follower game and the Bertrand game are discussed as benchmark models. Compared to existing articles on similar topics, the game models and solution concepts introduced in this article can be a framework for problems in uncertain competitive situations based on experienced experts' belief degrees.
Keywords: game theory, uncertainty theory, belief degree, uncertain expected value, Nash equilibrium
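For reference, the expected value operator of Liu's uncertainty theory, on which the paper's uncertain expected utility is built, is defined for an uncertain variable ξ with uncertain measure 𝓜 as:

```latex
E[\xi] \;=\; \int_{0}^{+\infty} \mathcal{M}\{\xi \ge r\}\,\mathrm{d}r
\;-\; \int_{-\infty}^{0} \mathcal{M}\{\xi \le r\}\,\mathrm{d}r
```

provided that at least one of the two integrals is finite.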
Procedia PDF Downloads 416
666 Land Use Dynamics of Ikere Forest Reserve, Nigeria Using Geographic Information System
Authors: Akintunde Alo
Abstract:
The incessant encroachments into the forest ecosystem by farmers and local contractors constitute a major threat to the conservation of genetic resources and biodiversity in Nigeria. To propose a viable monitoring system, this study employed Geographic Information System (GIS) technology to assess the changes that occurred over a period of five years (between 2011 and 2016) in the Ikere forest reserve. Landsat imagery of the forest reserve was obtained, and ground-truth coordinates of benchmark places within the forest reserve were used to geo-reference the acquired satellite imagery. A supervised classification algorithm, image processing, vectorization and map production were carried out in ArcGIS. The various land-use systems within the forest ecosystem were digitized into polygons of different types and colours for 2011 and 2016, and roads were represented with lines of different thicknesses and colours. Of the six land uses delineated, grassland increased from 26.50% of the total land area in 2011 to 45.53% in 2016, a percentage change of 71.81%. Plantations of Gmelina arborea and Tectona grandis, on the other hand, reduced from 62.16% in 2011 to 27.41% in 2016. Farmland and degraded land recorded percentage changes of about 176.80% and 8.70%, respectively, from 2011 to 2016. Overall, the rate of deforestation in the study area is increasing and becoming severe. About 72.59% of the total land area has been converted to non-forestry uses, while the remaining 27.41% is occupied by plantations of Gmelina arborea and Tectona grandis. Interestingly, over 55% of the 2011 plantation area had changed to grassland or been converted to farmland and degraded land by 2016. The rate of change over time was about 9.79% annually. Based on the results, rapid actions to prevail on the encroachers to stop deforestation, and to encourage re-afforestation in the study area, are recommended.
Keywords: land use change, forest reserve, satellite imagery, geographical information system
Procedia PDF Downloads 357
665 Risk Screening in Digital Insurance Distribution: Evidence and Explanations
Authors: Finbarr Murphy, Wei Xu, Xian Xu
Abstract:
The embedding of digital technologies in the global economy has attracted increasing attention from economists. Using a large and detailed dataset, this study examines the specific case where consumers have a choice between offline and digital channels in the context of insurance purchases. We find that digital channels screen consumers with lower unobserved risk: for the term life, endowment, and disease insurance products, the average risk of the policies purchased through digital channels was 75%, 21%, and 31% lower, respectively, than that of policies purchased offline. As a consequence, the lower unobserved risk leads to weaker information asymmetry and higher profitability for digital channels. We highlight three mechanisms of the risk-screening effect: the heterogeneous marginal influence of channel features on insurance demand, the channel features directly related to risk control, and the link between the digital divide and risk. We also find that the risk-screening effect comes mainly from the extensive margin, i.e., from new consumers. This paper contributes to three connected areas in the insurance context: the heterogeneous economic impacts of digital technology adoption, insurer-side risk selection, and insurance marketing.
Keywords: digital economy, information asymmetry, insurance, mobile application, risk screening
Procedia PDF Downloads 75
664 Dynamic Interaction between Renewable Energy Consumption and Sustainable Development: Evidence from the ECOWAS Region
Authors: Maman Ali M. Moustapha, Qian Yu, Benjamin Adjei Danquah
Abstract:
This paper investigates the dynamic interaction between renewable energy consumption (REC) and economic growth using a dataset from the Economic Community of West African States (ECOWAS) covering 2002 to 2016. The Autoregressive Distributed Lag bounds-test approach (ARDL) was used to examine the long-run relationship between real gross domestic product and REC, while a VECM-based Granger causality test was used to examine the direction of causality. Our empirical findings indicate that REC has a significant and positive impact on real gross domestic product. In addition, we found that REC and the percentage of access to electricity have unidirectional Granger causality to economic growth, while carbon dioxide emissions have bidirectional Granger causality with economic growth. Our findings also indicate that a 1 per cent increase in REC leads to an increase in real GDP of 0.009 in the long run. Thus, REC can be a means to ensure sustainable economic growth in the ECOWAS sub-region. However, it is necessary to further increase support for, and investment in, renewable energy production in order to speed up sustainable economic development throughout the region.
Keywords: Economic Growth, Renewable Energy, Sustainable Development, Sustainable Energy
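The ARDL workflow in the abstract can be sketched with statsmodels' ARDL tooling; the series below are synthetic stand-ins for the study's REC and real-GDP data, lag orders are chosen by AIC, and the full bounds test and VECM causality steps are omitted.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.ardl import ardl_select_order

# synthetic stand-in time series (e.g. log real GDP vs renewable consumption)
rng = np.random.default_rng(0)
rec = pd.Series(np.cumsum(rng.normal(0.02, 0.05, 60)), name="rec")
gdp = (0.009 * rec + np.cumsum(rng.normal(0.01, 0.02, 60))).rename("gdp")

# select ARDL lag orders by information criterion, then fit the chosen model
sel = ardl_select_order(gdp, maxlag=4, exog=rec.to_frame(), maxorder=4, ic="aic")
res = sel.model.fit()
print(res.summary())
```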
Procedia PDF Downloads 211
663 Advancing in Cricket Analytics: Novel Approaches for Pitch and Ball Detection Employing OpenCV and YOLOV8
Authors: Pratham Madnur, Prathamkumar Shetty, Sneha Varur, Gouri Parashetti
Abstract:
To overcome conventional obstacles, this research paper investigates novel approaches to cricket pitch and ball detection that make use of cutting-edge technologies. The research integrates OpenCV for pitch inspection and adapts the YOLOv8 model for cricket ball detection in order to overcome the shortcomings of manual pitch assessment and traditional ball detection techniques. To ensure flexibility across a range of pitch environments, the pitch detection method leverages OpenCV's color-space transformation, contour extraction, and precise color-range definition features. For ball detection, the YOLOv8 model emphasizes the preservation of small-object details to improve accuracy and is specifically trained on the unique properties of cricket balls. The methods are made more reliable by the careful preparation of the datasets, which include novel ball and pitch information. These cutting-edge methods not only improve cricket analytics but also set the stage for flexible methods in more general sports technology applications.
Keywords: OpenCV, YOLOv8, cricket, custom dataset, computer vision, sports
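A minimal OpenCV sketch of the pitch-inspection steps the abstract lists — color-space transformation, color-range thresholding, and contour extraction; the HSV bounds are illustrative assumptions, not the paper's tuned values.

```python
import cv2
import numpy as np

def detect_pitch(frame_bgr):
    """Rough pitch segmentation: HSV color-range threshold + largest contour."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # illustrative tan-ish range for a dry pitch strip; tuned per environment
    lower, upper = np.array([10, 40, 80]), np.array([30, 255, 255])
    mask = cv2.inRange(hsv, lower, upper)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    # assume the pitch is the largest matching region in the frame
    return max(contours, key=cv2.contourArea)
```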
Procedia PDF Downloads 84
662 Improving Fake News Detection Using K-means and Support Vector Machine Approaches
Authors: Kasra Majbouri Yazdi, Adel Majbouri Yazdi, Saeid Khodayi, Jingyu Hou, Wanlei Zhou, Saeed Saedy
Abstract:
Fake news and false information are big challenges for all types of media, especially social media. There is a lot of false information, fake likes, views and duplicated accounts, as big social networks such as Facebook and Twitter have admitted. Most information appearing on social media is doubtful and in some cases misleading. It needs to be detected as soon as possible to avoid a negative impact on society. The dimensions of fake news datasets are growing rapidly, so to detect false information more effectively, with less computation time and complexity, the dimensionality needs to be reduced. One of the best techniques for reducing data size is feature selection, which aims to choose a feature subset from the original set to improve classification performance. In this paper, a feature selection method is proposed that integrates the K-means clustering and Support Vector Machine (SVM) approaches and works in four steps. First, the similarities between all features are calculated. Then, features are divided into several clusters. Next, the final feature set is selected from all clusters, and finally, fake news is classified based on the final feature subset using the SVM method. The proposed method was evaluated by comparing its performance with other state-of-the-art methods on several benchmark datasets, and the outcome showed better classification of false information. The detection performance was improved in two aspects: on the one hand, the detection runtime decreased, and on the other hand, the classification accuracy increased because of the elimination of redundant features and the reduction of dataset dimensions.
Keywords: clustering, fake news detection, feature selection, machine learning, social media, support vector machine
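The four-step method reads directly as code; the sketch below clusters the features (columns) with K-means, keeps the feature closest to each centroid as that cluster's representative, and classifies with an SVM. The data, cluster count, and representative-selection rule are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# toy stand-in: 500 documents x 50 features (e.g. TF-IDF stats), binary labels
rng = np.random.default_rng(0)
X, y = rng.normal(size=(500, 50)), rng.integers(0, 2, 500)

# steps 1-2: group similar features by clustering the feature vectors (columns)
k = 10
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(X.T)

# step 3: keep one representative feature per cluster (closest to its centroid)
selected = [
    int(np.argmin(np.where(km.labels_ == c,
                           np.linalg.norm(X.T - km.cluster_centers_[c], axis=1),
                           np.inf)))
    for c in range(k)
]

# step 4: classify on the reduced feature set with an SVM
X_tr, X_te, y_tr, y_te = train_test_split(X[:, selected], y, random_state=0)
clf = SVC().fit(X_tr, y_tr)
print("accuracy on reduced features:", clf.score(X_te, y_te))
```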
Procedia PDF Downloads 177