Search results for: digital business models
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 11435

7445 A Supervised Approach for Detection of Singleton Spam Reviews

Authors: Atefeh Heydari, Mohammadali Tavakoli, Naomie Salim

Abstract:

In recent years, online reviews have become the most important source of customer opinion. They are increasingly used by individuals and organisations to make purchase and business decisions. Unfortunately, for profit or fame, fraudsters produce deceptive reviews to hoodwink potential customers. Their activities not only mislead potential customers into poor purchasing decisions and organisations into misguided business reshaping, but also undermine opinion mining techniques by preventing them from reaching accurate results. Spam reviews can be divided into two main groups: multiple and singleton spam reviews. Detecting a singleton spam review, i.e., the only review written by a given user ID, is extremely challenging due to the lack of clues available for detection. Singleton spam reviews are very harmful, and the various features and evidence used in multiple spam review detection are not applicable in this case. This research proposes a novel supervised technique to detect singleton spam reviews. To achieve this, various features are proposed in this study and combined with the most appropriate features extracted from the literature for use in a classifier. To compare the performance of different classifiers, SVM and naive Bayes classification algorithms were used for model building. The results revealed that SVM was more accurate than naive Bayes and that the proposed technique can detect singleton spam reviews effectively.
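The abstract does not disclose the actual feature set or trained model. As a rough illustration of the approach it describes, a singleton-review detector built on hand-crafted features might look like the sketch below; the feature names, weights, and threshold are hypothetical stand-ins for the features and SVM decision function used in the study.

```python
# Hypothetical features for singleton spam detection; NOT the paper's actual set.
def extract_features(review, product_avg_rating):
    words = review["text"].split()
    return {
        "length": len(words),  # very short reviews tend to be suspicious
        "rating_deviation": abs(review["rating"] - product_avg_rating),
        "exclamation_ratio": review["text"].count("!") / max(len(words), 1),
    }

def spam_score(features):
    # Toy linear scoring rule standing in for a trained SVM decision function.
    return (0.5 * features["rating_deviation"]
            + 2.0 * features["exclamation_ratio"]
            - 0.01 * features["length"])

def is_spam(review, product_avg_rating, threshold=0.8):
    return spam_score(extract_features(review, product_avg_rating)) > threshold
```

In practice the weights would be learned from labelled data rather than set by hand, which is exactly what the classifier-comparison step in the abstract provides.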

Keywords: classification algorithms, Naïve Bayes, opinion review spam detection, singleton review spam detection, support vector machine

Procedia PDF Downloads 300
7444 Feasibility of an Extreme Wind Risk Assessment Software for Industrial Applications

Authors: Francesco Pandolfi, Georgios Baltzopoulos, Iunio Iervolino

Abstract:

The impact of extreme winds on industrial assets and the built environment is gaining increasing attention from stakeholders, including the corporate insurance industry. This has led to progressively more in-depth study of building vulnerability and fragility to wind. Wind vulnerability models are used in probabilistic risk assessment to relate a loss metric to an intensity measure of the natural event, usually a gust or a mean wind speed. Vulnerability models can be integrated with the wind hazard, which consists of associating a probability with each intensity level in a time interval (e.g., by means of return periods), to provide an assessment of future losses due to extreme wind. This has also spurred world- and regional-scale wind hazard studies. Another approach often adopted for the probabilistic description of building vulnerability to wind is the use of fragility functions, which provide the conditional probability that selected building components will exceed certain damage states, given wind intensity. In fact, in the wind engineering literature, it is more common to find structural system- or component-level fragility functions than wind vulnerability models for an entire building. Loss assessment based on component fragilities requires logical combination rules that define the building’s damage state given the damage state of each component, and the availability of a consequence model that provides the losses associated with each damage state. When risk calculations are based on numerical simulation of a structure’s behavior during extreme wind scenarios, the interaction of component fragilities is intertwined with the computational procedure. However, simulation-based approaches are usually computationally demanding and case-specific. In this context, the present work introduces the ExtReMe wind risk assESsment prototype Software, ERMESS, which is being developed at the University of Naples Federico II.
ERMESS is a wind risk assessment tool for insurance applications to industrial facilities, collecting a wide assortment of available wind vulnerability models and fragility functions to facilitate their incorporation into risk calculations based on in-built or user-defined wind hazard data. This software implements an alternative method for building-specific risk assessment based on existing component-level fragility functions and on a number of simplifying assumptions for their interactions. The applicability of this alternative procedure is explored by means of an illustrative proof-of-concept example, which considers four main building components, namely: the roof covering, roof structure, envelope wall and envelope openings. The application shows that, despite the simplifying assumptions, the procedure can yield risk evaluations that are comparable to those obtained via more rigorous building-level simulation-based methods, at least in the considered example. The advantage of this approach is shown to lie in the fact that a database of building component fragility curves can be put to use for the development of new wind vulnerability models to cover building typologies not yet adequately covered by existing works and whose rigorous development is usually beyond the budget of portfolio-related industrial applications.
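A minimal sketch of the kind of component-fragility combination described above, assuming lognormal fragility curves and independence between component failures given wind intensity (one of the simplifying assumptions the abstract alludes to). The medians and dispersions below are illustrative values, not parameters from ERMESS.

```python
import math

def lognormal_fragility(v, median, beta):
    """P(component damage state exceeded | wind speed v), lognormal fragility."""
    return 0.5 * (1.0 + math.erf(math.log(v / median) / (beta * math.sqrt(2.0))))

# Illustrative medians (m/s) and dispersions for the four components considered.
components = {
    "roof_covering":     (35.0, 0.30),
    "roof_structure":    (55.0, 0.30),
    "envelope_wall":     (60.0, 0.25),
    "envelope_openings": (45.0, 0.35),
}

def p_any_component_damaged(v):
    # Simplifying assumption: component failures independent given intensity,
    # so P(at least one damaged) = 1 - prod_i (1 - p_i).
    p_none = 1.0
    for median, beta in components.values():
        p_none *= 1.0 - lognormal_fragility(v, median, beta)
    return 1.0 - p_none
```

Convolving such a building-level damage probability with a hazard curve (annual exceedance rates of wind speed) and a consequence model would yield the loss metric used in the risk assessment.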

Keywords: component wind fragility, probabilistic risk assessment, vulnerability model, wind-induced losses

Procedia PDF Downloads 177
7443 Flicker Detection with Motion Tolerance for Embedded Camera

Authors: Jianrong Wu, Xuan Fu, Akihiro Higashi, Zhiming Tan

Abstract:

CMOS image sensors with a rolling shutter are widely used in the digital cameras embedded in mobile devices. The rolling shutter suffers from easily observable flicker artifacts under fluorescent lamps. In this paper, the characteristics of illumination flicker under camera motion are analyzed, and two efficient detection methods based on matching fragment selection are proposed. According to the experimental results, our methods achieve as high as 100% accuracy in static scenes and at least 97% in motion scenes.

Keywords: illumination flicker, embedded camera, rolling shutter, detection

Procedia PDF Downloads 414
7442 Predicting Wearable Technology Readiness in a South African Government Department: Exploring the Influence of Wearable Technology Acceptance and Positive Attitude

Authors: Henda J Thomas, Cornelia PJ Harmse, Cecile Schultz

Abstract:

Wearables are one of the technologies that will flourish within the fourth industrial revolution and digital transformation arenas, allowing employers to integrate collected data into organisational information systems. The study aimed to investigate whether wearable technology readiness can predict employees’ acceptance of wearables in the workplace. The factors of technology readiness predisposition that predict acceptance and positive attitudes towards wearable use in the workplace were examined. A quantitative research approach was used. The population consisted of 8 081 employees of the South African Department of Employment and Labour (DEL). Census sampling was used; questionnaires were sent electronically to all 8 081 employees, and 351 were returned. The measuring instrument used was the Technology Readiness and Acceptance Model (TRAM). Four hypotheses were formulated to investigate the relationship between readiness and acceptance of wearables in the workplace. The technology readiness (TR) scales of optimism and eagerness were consistent positive predictors of the technology acceptance (TA) scales, while discomfort proved to be a negative predictor for two of the three TA scales. Insecurity was not found to be a predictor of TA. It was recommended that the digital transformation policy of the DEL be revised. Wearables in the workplace should be embraced from the viewpoint of convenience, automation, and seamless integration with the DEL information systems. The empirical contribution of this study lies in the fact that positive attitude emerged as a factor that extends the TRAM. In this study, positive attitude is identified as a new dimension of the TRAM not found in the original TA model and subsequent studies of the TRAM.
Furthermore, this study found that Perceived Usefulness (PU) and Behavioural Intention to Use (BIU) could not be separated but formed a single factor. The methodological contribution of this study can lead to the development of a Wearable Readiness and Acceptance Model (WRAM). To the best of our knowledge, no author has yet introduced the WRAM into the body of knowledge.
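As a toy illustration of the reported predictor signs (optimism and eagerness positive, discomfort negative), one can inspect Pearson correlations between TR subscale scores and a TA score. The six-respondent data below are synthetic and purely for demonstration; the study itself used regression on 351 responses.

```python
def pearson_r(x, y):
    # Sample Pearson correlation between two equal-length score lists.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Synthetic Likert-style subscale scores (illustrative only).
optimism   = [2, 3, 3, 4, 5, 5]
discomfort = [5, 4, 4, 3, 2, 1]
acceptance = [2, 3, 4, 4, 5, 5]
```

With data shaped like the study's findings, `pearson_r(optimism, acceptance)` comes out strongly positive and `pearson_r(discomfort, acceptance)` strongly negative.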

Keywords: technology acceptance model, technology readiness index, technology readiness and acceptance model, wearable devices, wearable technology, fourth industrial revolution

Procedia PDF Downloads 79
7441 Photocatalytic Eco-Active Ceramic Slabs to Abate Air Pollution under LED Light

Authors: Claudia L. Bianchi, Giuseppina Cerrato, Federico Galli, Federica Minozzi, Valentino Capucci

Abstract:

When industrial production began, porcelain gres tiles were regarded as a merely technical material with little aesthetic appeal. Today, thanks to new industrial production methods, both the properties and the beauty of these materials fully meet market demands. In particular, the possibility of preparing slabs of large size is the new frontier of building materials. Besides these noteworthy architectural features, new surface properties have been introduced in the latest generation of these materials. In particular, deposition of TiO₂ transforms the traditional ceramic into a photocatalytic eco-active material able to degrade polluting molecules present in air and water, to eliminate bacteria, and to reduce surface dirt thanks to its self-cleaning property. The drawback of photocatalytic materials is that a UV light source is needed to activate the oxidation processes on the surface of the material; these processes are inexorably switched off when the material is illuminated by LED lights and, even more so, in darkness. First, a thorough study was needed to adapt the existing plants to deposit the photocatalyst very evenly; this was achieved thanks to the advent of digital printing and the development of a custom-made ink that stabilizes powdered TiO₂ in its formulation. In addition, the commercial TiO₂ used for the traditional photocatalytic coating has been doped with metals in order to activate it in the visible region as well, and thus in the presence of sunlight or LED light. Thanks to this active coating, ceramic slabs can purify air, eliminating odors and VOCs, and can be cleaned with very mild detergents due to the self-cleaning properties given by the TiO₂ present at the ceramic surface.
Moreover, the presence of dopant metals (patent WO2016157155) also allows the material to act as an antibacterial in the dark, eliminating one of the negative features of photocatalytic building materials that has so far limited their use on a large scale. This matters because we are constantly in contact with bacteria, some of which are dangerous to health. Active tiles are 99.99% effective against all tested bacteria, from the most common, such as Escherichia coli, to the most dangerous, such as methicillin-resistant Staphylococcus aureus (MRSA). DIGITALIFE project LIFE13 ENV/IT/000140 – award for best project of October 2017.

Keywords: Ag-doped microsized TiO₂, eco-active ceramic, photocatalysis, digital coating

Procedia PDF Downloads 217
7440 Fluid Catalytic Cracking: Zeolite Catalyzed Chemical Industry Processes

Authors: Mithil Pandey, Ragunathan Bala Subramanian

Abstract:

One of the major conversion technologies in the oil refinery industry is fluid catalytic cracking (FCC), which produces the majority of the world’s gasoline. Useful products are generated from vacuum gas oil, heavy gas oil, and residue feedstocks by the FCC unit in an oil refinery. Moreover, zeolite catalysts (zeo-catalysts) have found widespread application and have proved substantial and paradigmatic in oil refining and petrochemical processes such as FCC because of their porous features. Several famous zeo-catalysts have been fabricated and applied in industrial processes as milestones in history and have brought about huge changes in petrochemicals. So far, more than twenty types of zeolites have been applied industrially, and their versatile porous architectures with their essential features have helped determine catalytic efficiency. This poster depicts the evolution of pore models in zeolite catalysts, accompanied by increasing environmental and industrial demands. The crucial role of modulating pore models is outlined for zeo-catalysts to enhance their catalytic performance in various industrial processes. The development of industrial processes for FCC, aromatic conversions, and olefin production makes it obvious that pore architecture plays a very important role in zeo-catalysis. Given the different requirements of industrial processes, rational construction of the pore model is critically essential. Besides, the pore structure of the zeolite has a substantial and direct effect on the utilization efficiency of the zeo-catalyst.

Keywords: catalysts, fluid catalytic cracking, industrial processes, zeolite

Procedia PDF Downloads 345
7439 Detection and Quantification of Active Pharmaceutical Ingredients as Adulterants in Garcinia cambogia Slimming Preparations Using NIR Spectroscopy Combined with Chemometrics

Authors: Dina Ahmed Selim, Eman Shawky Anwar, Rasha Mohamed Abu El-Khair

Abstract:

A rapid, simple, and efficient method with minimal sample treatment was developed for authentication of Garcinia cambogia fruit peel powder and for determining undeclared active pharmaceutical ingredients (APIs) in its herbal slimming dietary supplements using near-infrared spectroscopy combined with chemometrics. Five featured adulterants, including sibutramine, metformin, orlistat, ephedrine, and theophylline, were selected as target compounds. The near-infrared spectral data matrix of authentic Garcinia cambogia fruit peel and of specimens degraded by intentional contamination with the five selected APIs was subjected to hierarchical clustering analysis to investigate its clustering pattern. SIMCA models were established to verify the genuineness of Garcinia cambogia fruit peel, resulting in perfect classification of all tested specimens. Adulterated samples were used for the construction of PLSR models based on different API contents at minute levels of fraudulent practice (LOQ < 0.2% w/w). The suggested approach can be applied to enhance and guarantee the safety and quality of Garcinia fruit peel powder as a raw material and in dietary supplements.
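PLSR itself requires a multivariate decomposition of the full spectrum, but the underlying idea, quantifying an adulterant from spectral intensity via a calibration model, can be sketched with a univariate calibration line. The absorbance values below are synthetic and the single-band setup is a simplified stand-in for the actual PLSR models.

```python
def fit_line(x, y):
    # Ordinary least squares for a univariate calibration line y = a + b*x.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    return a, b

# Hypothetical calibration: absorbance at one NIR band vs. adulterant % w/w.
conc   = [0.0, 0.1, 0.2, 0.4, 0.8]          # adulterant content, % w/w
absorb = [0.02, 0.07, 0.12, 0.22, 0.42]     # synthetic linear signal
a, b = fit_line(conc, absorb)

def predict_content(signal):
    # Invert the calibration to estimate adulterant content from a new signal.
    return (signal - a) / b
```

In the real workflow, the full NIR spectrum replaces the single band and PLS latent variables replace the slope and intercept, but the calibrate-then-invert logic is the same.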

Keywords: Garcinia cambogia, quality control, NIR spectroscopy, chemometrics

Procedia PDF Downloads 72
7438 Real-Time Control of Grid-Connected Inverter Based on labVIEW

Authors: L. Benbaouche, H. E., F. Krim

Abstract:

In this paper, we propose real-time control of a grid-connected single-phase inverter that is flexible and efficient. The first step is devoted to the study and design of the controller through simulation, conducted with the LabVIEW software on the 'host' computer. The second step is running the application from the PXI 'target'. LabVIEW, combined with NI-DAQmx, gives the tools to easily build applications that use the digital-to-analog converter to generate the PWM control signals. Experimental results show the effectiveness of LabVIEW applied to power electronics.

Keywords: real-time control, LabVIEW, inverter, PWM

Procedia PDF Downloads 499
7437 Suitability of Satellite-Based Data for Groundwater Modelling in Southwest Nigeria

Authors: O. O. Aiyelokun, O. A. Agbede

Abstract:

Numerical modelling of groundwater flow can be susceptible to calibration errors due to the lack of adequate ground-based hydro-meteorological stations in river basins. Groundwater resources management in Southwest Nigeria is currently challenged by overexploitation, lack of planning and monitoring, urbanization, and climate change; hence, to adopt models as decision support tools for sustainable management of groundwater, they must be adequately calibrated. Since river basins in Southwest Nigeria are characterized by missing data and lack adequate ground-based hydro-meteorological stations, the need to adopt satellite-based data for constructing distributed models is crucial. This study seeks to evaluate the suitability of satellite-based data as a substitute for ground-based data for computing boundary conditions, by determining whether ground- and satellite-based meteorological data fit well in the Ogun and Oshun River basins. The Climate Forecast System Reanalysis (CFSR) global meteorological dataset was first obtained in daily form and converted to monthly form for a period of 432 months (January 1979 to June 2014). Afterwards, ground-based meteorological data for Ikeja (1981-2010), Abeokuta (1983-2010), and Oshogbo (1981-2010) were compared with CFSR data using goodness-of-fit (GOF) statistics. The study revealed that, based on mean absolute error (MAE), coefficient of correlation (r), and coefficient of determination (R²), all meteorological variables except wind speed fit well. It was further revealed that maximum and minimum temperature, relative humidity, and rainfall had a high index of agreement (d) and ratio of standard deviation (rSD), implying that the CFSR dataset could be used to compute boundary conditions such as groundwater recharge and potential evapotranspiration.
The study concluded that satellite-based data such as the CFSR should be used as input when constructing groundwater flow models in river basins in Southwest Nigeria, where the majority of river basins are only partially gauged and are characterized by long gaps in hydro-meteorological records.
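The goodness-of-fit statistics named above (MAE, r, R², Willmott's index of agreement d, and the ratio of standard deviations rSD) can be computed directly from paired observed and simulated series; a compact sketch:

```python
def gof_stats(obs, sim):
    """Goodness-of-fit statistics between observed and simulated series."""
    n = len(obs)
    mo, ms = sum(obs) / n, sum(sim) / n
    mae = sum(abs(o - s) for o, s in zip(obs, sim)) / n
    so = (sum((o - mo) ** 2 for o in obs) / n) ** 0.5   # std of observations
    ss = (sum((s - ms) ** 2 for s in sim) / n) ** 0.5   # std of simulations
    r = sum((o - mo) * (s - ms) for o, s in zip(obs, sim)) / (n * so * ss)
    # Willmott's index of agreement d.
    d = 1.0 - sum((o - s) ** 2 for o, s in zip(obs, sim)) / \
              sum((abs(s - mo) + abs(o - mo)) ** 2 for o, s in zip(obs, sim))
    return {"MAE": mae, "r": r, "R2": r * r, "d": d, "rSD": ss / so}
```

Applied to, e.g., monthly ground-station rainfall (`obs`) versus CFSR rainfall (`sim`), values of r, d near 1 and rSD near 1 indicate a good fit, which is the acceptance criterion the abstract describes.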

Keywords: boundary condition, goodness of fit, groundwater, satellite-based data

Procedia PDF Downloads 122
7436 An Evaluation of the Artificial Neural Network and Adaptive Neuro Fuzzy Inference System Predictive Models for the Remediation of Crude Oil-Contaminated Soil Using Vermicompost

Authors: Precious Ehiomogue, Ifechukwude Israel Ahuchaogu, Isiguzo Edwin Ahaneku

Abstract:

Vermicompost is the product of a decomposition process that uses various species of worms to break down a mixture of decomposing vegetable or food waste and bedding materials into vermicast. This process is called vermicomposting, while the rearing of worms for this purpose is called vermiculture. Several works have verified the adsorption of toxic metals using vermicompost, but its application for the retention of organic compounds is still scarce. This research demonstrates the effectiveness of earthworm waste (vermicompost) for the remediation of crude oil-contaminated soils. The remediation methods adopted in this study were two soil washing methods, namely batch and column processes, which represent laboratory and in-situ remediation, respectively. Characterization of the vermicompost and crude oil-contaminated soil was performed before and after soil washing using Fourier transform infrared spectroscopy (FTIR), scanning electron microscopy (SEM), X-ray fluorescence (XRF), X-ray diffraction (XRD), and atomic absorption spectrometry (AAS). Optimization of the washing parameters, using response surface methodology (RSM) based on a Box-Behnken design, was performed on the laboratory experimental results. This study also investigated the application of two machine learning models, an artificial neural network (ANN) and an adaptive neuro-fuzzy inference system (ANFIS), evaluated using the coefficient of determination (R²) and mean square error (MSE). Removal efficiency obtained from the Box-Behnken design experiment ranged from 29% to 98.9% for the batch process. Optimization of the experimental factors, carried out using the desirability function method of RSM, produced the highest removal efficiency of 98.9% at an adsorbent dosage of 34.53 g, adsorbate concentration of 69.11 g/ml, contact time of 25.96 min, and pH of 7.71.
Removal efficiency obtained from the multilevel general factorial design experiment ranged from 56% to 92% for the column process. The coefficient of determination (R²) for ANN was 0.9974 and 0.9852 for the batch and column processes, respectively, showing agreement between experimental and predicted results. For the batch and column processes, respectively, the R² for RSM was 0.9712 and 0.9614, which also demonstrates agreement between experimental and predicted findings. For the batch and column processes, the ANFIS R² was 0.7115 and 0.9978, respectively. It can be concluded that machine learning models can predict the removal of crude oil from polluted soil using vermicompost, and their use for this purpose is therefore recommended.
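The desirability-style numerical optimization over a fitted response surface can be sketched as a brute-force search. The quadratic surface below is a hypothetical stand-in for the fitted Box-Behnken model, constructed so that it peaks near the reported optimum (dosage about 34.5 g, pH about 7.7); the coefficients are not from the study.

```python
def removal_efficiency(dosage, ph):
    # Hypothetical quadratic response surface (NOT the paper's fitted model),
    # built to peak near the reported optimum of ~34.5 g dosage and pH ~7.7.
    return 98.9 - 0.05 * (dosage - 34.5) ** 2 - 2.0 * (ph - 7.7) ** 2

def grid_optimize(f, dosages, phs):
    # Brute-force grid search standing in for RSM's numerical
    # desirability-function optimization over two of the four factors.
    return max((f(d, p), d, p) for d in dosages for p in phs)

dosages = [30.0 + 0.5 * i for i in range(20)]   # 30.0 ... 39.5 g
phs = [7.0 + 0.1 * i for i in range(15)]        # pH 7.0 ... 8.4
best_eff, best_dose, best_ph = grid_optimize(removal_efficiency, dosages, phs)
```

The real optimization also varies adsorbate concentration and contact time; the two-factor grid keeps the sketch readable.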

Keywords: ANFIS, ANN, crude oil, contaminated soil, remediation, vermicompost

Procedia PDF Downloads 101
7435 Supply Chain Technology Adoption in Textile and Apparel Industry

Authors: Zulkifli Mohamed Udin, Lee Khai-Loon, Mohamad Ghozali Hassan

Abstract:

In today’s dynamic business environment, competition is no longer between firms but between supply chains vying for competitive advantage. The global manufacturing sector, especially the textile and apparel industry, is essentially known for its supply chain dependency. The delicate nature of its business places emphasis on the smooth movement of the upstream and downstream supply chain. The nature of this industry, however, results in highly dynamic physical, information, and financial flows. The dynamic management of these flows requires the adoption of supply chain technologies. Even though technology is widely implemented and studied in many industries, research on the adoption of supply chain technologies in the Malaysian textile and apparel industry is limited. Only a handful of academic studies have examined recent developments in the Malaysian textile and apparel industry, and the scarcity of work on supply chain technology adoption indicates a major gap in supply chain performance studies. Considering the importance given to the Third Industrial Master Plan by the government of Malaysia, it is necessary to understand the power of supply chain technology adoption. This study aims to investigate supply chain technology adoption by textile and apparel companies in Malaysia. The results highlight the benefits perceived by textile and apparel companies from supply chain technologies. The indifference of small and medium enterprises to operations management acts as a major inhibitor to the adoption of supply chain technologies, since they have resource limitations. This study could serve as a precursor for further detailed studies on this issue.

Keywords: supply chain technology adoption, supply chain performance, textile, apparel industry

Procedia PDF Downloads 483
7434 Ecosystem Carbon Stocks Vary in Reference to the Models Used, Socioecological Factors and Agroforestry Practices in Central Ethiopia

Authors: Gadisa Demie, Mesele Negash, Zerihun Asrat, Lojka Bohdan

Abstract:

Deforestation and forest degradation in the tropics have led to significant carbon (C) emissions. Agroforestry (AF) is a suitable land-use option for tackling such declines in ecosystem services, including climate change mitigation. However, it is unclear how biomass models, AF practices, and socio-ecological factors determine these roles, which hinders the implementation of climate change mitigation initiatives. This study aimed to estimate the ecosystem C stocks of the studied AF practices in relation to socio-ecological variables in central Ethiopia. Out of 243 AF farms inventoried, 108 were chosen at random from three AF practices to estimate their biomass and soil organic carbon (SOC). A total of 432 soil samples were collected from the 0–30 and 30–60 cm soil depths; 216 samples were taken for each of the SOC fraction (%C) and bulk density computations. The study found that the currently developed allometric equations were the most accurate for estimating biomass C of trees growing in the landscape when compared to previous models. Overall biomass C was higher in woodlots (165.62 Mg ha⁻¹) than in homegardens (134.07 Mg ha⁻¹) and parklands (19.98 Mg ha⁻¹). Conversely, overall SOC was higher for homegardens (143.88 Mg ha⁻¹) but lower for parklands (53.42 Mg ha⁻¹). The ecosystem C stock was comparable between homegardens (277.95 Mg ha⁻¹) and woodlots (275.44 Mg ha⁻¹). Elevation, wealth level, and AF farm age and size had positive and significant (P < 0.05) effects on overall biomass and ecosystem C stocks, while the effect of slope was non-significant (P > 0.05). Similarly, SOC increased with increasing elevation, AF farm age, and wealth status but decreased with slope and was not significantly related to AF farm size. The study also showed that species diversity had a positive (P < 0.05) effect on overall biomass C stocks in homegardens.
Overall, the study highlights that AF practices have great potential to lock up carbon in biomass and soils; however, this potential is shaped by socio-ecological variables. These factors should therefore be considered in management strategies that preserve trees in agricultural landscapes in order to mitigate climate change and support farmers’ livelihoods.
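The SOC stocks reported above are derived from %C and bulk density measured per depth layer. The standard layer-wise stock computation can be sketched as follows; the bulk densities and carbon fractions in the example are illustrative, not the study's measurements.

```python
def soc_stock(bulk_density_g_cm3, depth_cm, carbon_pct):
    """Soil organic carbon stock (Mg C per hectare) for one depth layer.

    Derivation: BD (g/cm^3) * depth (cm) = g soil per cm^2;
    * (%C / 100) = g C per cm^2; * 1e8 cm^2/ha / 1e6 g/Mg = * 100.
    """
    return bulk_density_g_cm3 * depth_cm * (carbon_pct / 100.0) * 100.0

# Two layers matching the sampling design (0-30 and 30-60 cm); values illustrative.
layers = [(1.2, 30.0, 2.5),   # (BD, depth, %C) for 0-30 cm
          (1.4, 30.0, 1.0)]   # (BD, depth, %C) for 30-60 cm
total = sum(soc_stock(*layer) for layer in layers)
```

Summing the layers gives the per-farm SOC stock in Mg ha⁻¹, which is then combined with biomass C to obtain the ecosystem C stock.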

Keywords: agricultural landscape, biomass, climate change, soil organic carbon

Procedia PDF Downloads 42
7433 Dynamic Reliability for a Complex System and Process: Application on Offshore Platform in Mozambique

Authors: Raed KOUTA, José-Alcebiades-Ernesto HLUNGUANE, Eric Châtele

Abstract:

The search for and exploitation of new fossil energy resources is taking place in the context of the gradual depletion of existing deposits. Despite the adoption of international targets to combat global warming, the demand for fuels continues to grow, contradicting the movement towards an energy-efficient society. The increase in the share of offshore in global hydrocarbon production tends to compensate for the depletion of terrestrial reserves, thus constituting a major challenge for the players in the sector. Through the economic potential it represents and the energy independence it provides, offshore exploitation is also a challenge for states such as Mozambique, which have large maritime areas and whose environmental wealth must be considered. The exploitation of new reserves on economically viable terms depends on available technologies. The development of deep and ultra-deep offshore requires significant research and development efforts. Progress has also been made in managing the multiple risks inherent in this activity. Our study proposes a reliability approach to developing products and processes designed to operate at sea. Indeed, the context of an offshore platform requires highly reliable solutions that overcome the difficulty of accessing the system for regular maintenance and quick repairs and that resist deterioration and degradation processes. One characteristic of the failures we consider is that the actual conditions of use are 'extreme.' These conditions depend on time and on the interactions between the different causes. These are the two factors that give the degradation process its dynamic character, hence the need to develop dynamic reliability models. Our work highlights mathematical models that can explicitly manage interactions between components and process variables.
These models are accompanied by numerical resolution methods that help to structure a dynamic reliability approach in a physical and probabilistic context. The application developed makes it possible to evaluate the reliability, availability, and maintainability of a floating storage and unloading platform for liquefied natural gas production.
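One simple way to make the component interactions described above explicit is a Monte Carlo sketch in which one component's failure accelerates another's degradation, a minimal dynamic-reliability mechanism. The failure rates and acceleration factor below are illustrative, not parameters of the platform studied.

```python
import random

def simulate_system(t_mission, lam_a=0.01, lam_b=0.02,
                    interaction=3.0, n=20000, seed=42):
    """Monte Carlo reliability of a two-component system in which the failure
    of support component A accelerates degradation of critical component B."""
    rng = random.Random(seed)
    survived = 0
    for _ in range(n):
        ta = rng.expovariate(lam_a)   # life of support component A
        tb = rng.expovariate(lam_b)   # life of critical component B
        if ta < t_mission and tb > ta:
            # After A fails, B's remaining life is re-drawn at a higher rate:
            # the interaction that makes the degradation process dynamic.
            tb = ta + rng.expovariate(lam_b * interaction)
        survived += tb > t_mission
    return survived / n
```

With `interaction=1.0` the coupling vanishes and, by the memoryless property, the estimate recovers the static exponential result exp(-lam_b * t); larger factors lower the mission reliability, quantifying the cost of the interaction.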

Keywords: dynamic reliability, offshore platform, stochastic process, uncertainties

Procedia PDF Downloads 115
7432 How Virtualization, Decentralization, and Network-Building Change the Manufacturing Landscape: An Industry 4.0 Perspective

Authors: Malte Brettel, Niklas Friederichsen, Michael Keller, Marius Rosenberg

Abstract:

The German manufacturing industry has to withstand increasing global competition on product quality and production costs. As labor costs are high, several industries have suffered severely from the relocation of production facilities towards aspiring countries, which have managed to close the productivity and quality gap substantially. Established manufacturing companies have recognized that customers are not willing to pay large price premiums for incremental quality improvements. As a consequence, many companies in the German manufacturing industry adjust their production to focus on customized products and fast time to market. Leveraging the advantages of novel production strategies such as Agile Manufacturing and Mass Customization, manufacturing companies transform into integrated networks in which companies unite their core competencies. Here, virtualization of the process and supply chain ensures smooth inter-company operations, providing real-time access to relevant product and production information for all participating entities. Company boundaries blur as autonomous systems exchange data gathered by embedded systems throughout the entire value chain. By including Cyber-Physical Systems, advanced communication between machines becomes tantamount to their dialogue with humans. The increasing utilization of information and communication technology allows digital engineering of products and production processes alike. Modular simulation and modeling techniques allow decentralized units to flexibly alter products and thereby enable rapid product innovation. The present article describes the developments of Industry 4.0 within the literature and reviews the associated research streams. To this end, we analyze eight scientific journals with regard to the following research fields: individualized production, end-to-end engineering in a virtual process chain, and production networks.
We employ cluster analysis to assign sub-topics to the respective research fields. To assess the practical implications, we conducted face-to-face interviews with managers from industry as well as from the consulting business, using a structured interview guideline. The results reveal reasons for the adoption and rejection of Industry 4.0 practices from a managerial point of view. Our findings contribute to the emerging research stream of Industry 4.0 and support decision-makers in assessing their need for transformation towards Industry 4.0 practices.

Keywords: Industry 4.0, mass customization, production networks, virtual process chain

Procedia PDF Downloads 269
7431 An Inquiry into the Usage of Complex Systems Models to Examine the Effects of the Agent Interaction in a Political Economic Environment

Authors: Ujjwall Sai Sunder Uppuluri

Abstract:

Group theory is a powerful tool that researchers can use to provide a structural foundation for their agent-based models, which this paper argues are the future of the social science disciplines. More specifically, researchers can use them to apply evolutionary theory to the study of complex social systems. This paper illustrates how an agent-based model can be formulated theoretically from the application of group theory, system dynamics, and evolutionary biology to analyze the strategies pursued by states to mitigate risk and maximize the use of resources in pursuit of economic growth. The example can be applied to other social phenomena, and this generality is what makes group theory so useful to the analysis of complex systems: as the paper discusses, the theory provides the mathematical, formulaic proof for validating the complex system models that researchers build. The aim of this research is also to provide researchers with a framework that can be used to model political entities such as states on a three-dimensional plane, with x representing the resources (tangible and intangible) available to them, y the risks, and z the objective. There also exist other states with different constraints pursuing different strategies to climb the mountain. This mountain’s environment is made up of the risks the state faces and its resource endowments. The mountain is also layered, in the sense that it has multiple peaks that must be overcome to reach the tallest peak. A state that sticks to a single strategy, or pursues a strategy not conducive to climbing the specific peak it has reached, cannot continue its advancement. To overcome the obstacle in its path, the state must innovate. Based on the definition of a group, we can categorize each state as its own group.
Each state is a closed system, one which is made up of micro level agents who have their own vectors and pursue strategies (actions) to achieve some sub objectives. The state also has an identity, the inverse being anarchy and/or inaction. Finally, the agents making up a state interact with each other through competition and collaboration to mitigate risks and achieve sub objectives that fall within the primary objective. Thus, researchers can categorize the state as an organism that reflects the sum of the output of the interactions pursued by agents at the micro level. When states compete, they employ a strategy and that state which has the better strategy (reflected by the strategies pursued by her parts) is able to out-compete her counterpart to acquire some resource, mitigate some risk or fulfil some objective. This paper will attempt to illustrate how group theory combined with evolutionary theory and systems dynamics can allow researchers to model the long run development, evolution, and growth of political entities through the use of a bottom up approach.
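The bottom-up dynamic described above can be sketched in code. This toy simulation is our own illustrative construction (the paper is purely theoretical): states are hill-climbing agents on a hypothetical multi-peaked landscape, and a state "innovates" by redrawing its strategy whenever the current one stops producing progress.

```python
import random

# Illustrative sketch only, not the paper's model. States are agents climbing
# a multi-peaked "objective" landscape; a state's strategy is its step
# size/direction, and "innovation" means drawing a new strategy when the
# current one no longer yields improvement.

def landscape(x):
    # Two peaks: a state must innovate to keep climbing past a local summit.
    return -((x - 3) ** 2) * ((x - 7) ** 2) + 40 * x

class State:
    def __init__(self, x, rng):
        self.x = x                               # current position/progress
        self.step = rng.uniform(-1, 1)           # current strategy
        self.rng = rng

    def act(self):
        trial = self.x + self.step
        if landscape(trial) > landscape(self.x):
            self.x = trial                       # strategy works: keep climbing
        else:
            self.step = self.rng.uniform(-1, 1)  # innovate: try a new strategy

rng = random.Random(0)
states = [State(rng.uniform(0, 10), rng) for _ in range(5)]
init_best = max(landscape(s.x) for s in states)
for _ in range(200):
    for s in states:
        s.act()
best = max(landscape(s.x) for s in states)
```

Because agents only ever accept improving moves, the best objective value across states is non-decreasing over the run.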

Keywords: complex systems, evolutionary theory, group theory, international political economy

Procedia PDF Downloads 125
7430 Simplified Modelling of Visco-Elastic Fluids for Use in Recoil Damping Systems

Authors: Prasad Pokkunuri

Abstract:

Visco-elastic materials combine the stress-response properties of both solids and fluids and have found use in a variety of damping applications, both vibrational and acoustic. Defense and automotive applications, in particular, are subject to high impact and shock loading, for example in aircraft landing gear, firearms, and shock absorbers. Field-responsive fluids, a class of smart materials, are the preferred choice of energy absorbents because of their controllability: their stress response can be controlled by the application of a magnetic or electric field in a closed loop, and their rheological properties (elasticity, plasticity, and viscosity) can be varied all the way from those of a liquid such as water to those of a hard solid. This work presents a simplified model to study the impulse response behavior of such fluids for use in recoil damping systems. The well-known Burgers' equation, in conjunction with various visco-elastic constitutive models, is used to represent fluid behavior. The Kelvin-Voigt, Upper Convected Maxwell (UCM), and Oldroyd-B constitutive models are implemented in this study. Using these models in a one-dimensional framework eliminates additional complexities due to geometry, pressure, body forces, and other source terms. Using a finite-difference formulation to numerically solve the governing equation(s), the response to an initial impulse is studied. The disturbance is confined within the problem domain with no-inflow, no-outflow boundary conditions, and its decay characteristics are studied. Visco-elastic fluids typically involve a time-dependent stress relaxation, which gives rise to interesting behavior when subjected to an impulsive load: for particular values of viscous damping and elastic modulus, the fluid settles into a stable oscillatory state, absorbing and releasing energy without much decay. The simplified formulation enables a comprehensive study of different modes of system response by varying the relevant parameters.
Using the insights gained from this study, extension to a more detailed multi-dimensional model is considered.
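As a rough companion to the abstract, the sketch below solves the one-dimensional viscous Burgers equation with an explicit finite-difference scheme, an initial Gaussian impulse, and zero boundary values standing in for the no-inflow/no-outflow condition. The grid, viscosity, and impulse shape are illustrative choices, not the authors' parameters, and the visco-elastic constitutive terms are omitted.

```python
import numpy as np

# Minimal sketch (not the authors' code): explicit finite differences for
# u_t + u u_x = nu u_xx on a closed 1-D domain, with an initial Gaussian
# "impulse" and the disturbance confined to the domain.

nx, nu = 201, 0.05
x = np.linspace(0.0, 1.0, nx)
dx = x[1] - x[0]
dt = 0.2 * dx**2 / nu                    # step chosen for diffusive stability
u = np.exp(-((x - 0.5) / 0.05) ** 2)     # initial impulse
e0 = np.sum(u**2) * dx                   # initial "energy" of the disturbance

for _ in range(2000):
    un = u.copy()
    u[1:-1] = (un[1:-1]
               - dt * un[1:-1] * (un[2:] - un[:-2]) / (2 * dx)
               + nu * dt * (un[2:] - 2 * un[1:-1] + un[:-2]) / dx**2)
    u[0], u[-1] = 0.0, 0.0               # no-inflow, no-outflow ends

e_final = np.sum(u**2) * dx              # viscosity dissipates impulse energy
```

Tracking the energy-like quantity before and after the run is one simple way to quantify the decay characteristics the abstract mentions.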

Keywords: Burgers' equation, impulse response, recoil damping systems, visco-elastic fluids

Procedia PDF Downloads 287
7429 Expression of Stance in Lower- and Upper-Level Students' Writing in Business Administration at an English-Medium University in Burundi

Authors: Clement Ndoricimpa

Abstract:

The expression of stance is highly expected in writing at the tertiary level. Through a selection of linguistic and rhetorical elements, writers express commitment, maintain critical distance, and build a critically discerning reader into their texts. Despite many studies on patterns of stance in students' academic writing, little is known about how English as a Foreign Language students learn to build a critically discerning reader into their texts. Therefore, this study examines patterns of stance in essays written as classroom assignments by students majoring in business administration at an English-medium university in Burundi. It draws on systemic functional linguistics to analyse the data qualitatively and quantitatively; the quantitative analysis is used to identify differences in the frequency of stance patterns in the essays. The results show a significant difference in the use of boosters by lower- and upper-level students: lower-level students' writing contains more boosters and many idiosyncratic sentence structures, whereas upper-level students' essays contain more hedging and fewer grammatical mistakes. There was no significant difference in the use of attitude markers and concessive and contrastive expressions, and students at both levels do not use attitude markers and disclaimer markers appropriately and accurately. These findings suggest that students should be taught the use of stance patterns in academic writing.
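The counting step behind such a stance analysis can be illustrated with a toy tally of hedges and boosters. The marker lists below are tiny placeholders (real metadiscourse taxonomies, such as Hyland's, are far larger), and the two sentences are invented examples.

```python
# Illustrative sketch of the frequency-counting step, not the study's tooling.
# HEDGES and BOOSTERS are placeholder marker lists.

HEDGES = {"may", "might", "perhaps", "possibly", "suggest", "appear"}
BOOSTERS = {"clearly", "definitely", "certainly", "obviously", "must", "always"}

def stance_counts(text):
    """Return (hedge_count, booster_count) for a text."""
    tokens = [w.strip(".,;:!?").lower() for w in text.split()]
    return (sum(t in HEDGES for t in tokens),
            sum(t in BOOSTERS for t in tokens))

lower = "The results clearly show that firms must always adapt."
upper = "The results suggest that firms may perhaps need to adapt."
```

On these invented examples, the booster-heavy sentence patterns with the lower-level profile and the hedged one with the upper-level profile.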

Keywords: academic writing, metadiscourse, stance, student corpora

Procedia PDF Downloads 133
7428 Analysis of Operating Speed on Four-Lane Divided Highways under Mixed Traffic Conditions

Authors: Chaitanya Varma, Arpan Mehar

Abstract:

The present study demonstrates a procedure for analysing speed data collected on various four-lane divided sections in India. Field data for the study were collected at straight and curved sections on rural highways with the help of a radar speed gun and a video camera. The data collected at the sections were analysed and parameters pertaining to speed distributions were estimated. Different statistical distributions were fitted to the vehicle-type speed data and to the mixed-traffic speed data. It was found that vehicle-type speed data follow either the normal or the log-normal distribution, whereas mixed-traffic speed data follow more than one type of statistical distribution, the most common fits being the beta and Weibull distributions. Separate operating speed models based on traffic and roadway geometric parameters were proposed in the present study, including models with traffic parameters and curve geometry parameters. Two different operating speed models, with the variables 1/R and ln(R), were proposed and found to be realistic over different ranges of curve radius. The models developed in the present study are simple and realistic and can be used for forecasting operating speed on four-lane highways.
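The two curve-speed model forms mentioned (an intercept plus a 1/R term, and an intercept plus a ln(R) term) can be fitted by ordinary least squares, as sketched below on synthetic radius/speed pairs rather than the paper's field data.

```python
import numpy as np

# Hedged sketch: fitting V85 = a + b/R and V85 = a + b*ln(R) by OLS.
# The radii and operating speeds are invented illustrative values.

R = np.array([100.0, 150, 250, 400, 600, 900])   # curve radius (m)
V85 = np.array([52.0, 58, 66, 72, 76, 79])       # operating speed (km/h)

def ols_fit(feature):
    """Fit V85 on [1, feature]; return (coefficients, R^2)."""
    X = np.column_stack([np.ones_like(feature), feature])
    coef, *_ = np.linalg.lstsq(X, V85, rcond=None)
    pred = X @ coef
    ss_res = np.sum((V85 - pred) ** 2)
    ss_tot = np.sum((V85 - V85.mean()) ** 2)
    return coef, 1 - ss_res / ss_tot

(_, r2_inv) = ols_fit(1.0 / R)      # model with variable 1/R
(_, r2_log) = ols_fit(np.log(R))    # model with variable ln(R)
```

Comparing the two R² values over subsets of radii is one way to see why each form suits a different range of curve radius.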

Keywords: highway, mixed traffic flow, modeling, operating speed

Procedia PDF Downloads 458
7427 Forecasting Future Demand for Energy Efficient Vehicles: A Review of Methodological Approaches

Authors: Dimitrios I. Tselentis, Simon P. Washington

Abstract:

Considerable literature has focused over the last few decades on forecasting consumer demand for Energy Efficient Vehicles (EEVs). The methodological issues range from how to capture recent purchase decisions in revealed preference (RP) studies, to how to set up experiments in stated preference (SP) studies, to the choice of method for analysing such data. This paper reviews the plethora of studies published on forecasting demand for EEVs since 1980, and provides a review and annotated bibliography of that literature as it pertains to this particular demand-forecasting problem. This detailed review addresses not only the transportation literature but specifically the problem of, and methodologies for, forecasting over the time horizons of planning studies, which may represent 10- to 20-year forecasts. The objectives of the paper are to identify gaps in the existing literature and to articulate which promising methodologies might guide longer-term forecasting. One of the key findings of this review is that many techniques are common to the field of new-product demand forecasting and the field of predicting future demand for EEVs. Apart from SP and RP methods, techniques that have emerged in the literature in the last few decades include survey-based approaches, product diffusion models, time-series modelling, computational intelligence models, and other holistic approaches.
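One of the techniques the review covers, the product diffusion model, can be illustrated with the classic Bass formulation, in which adopters arrive through innovation (coefficient p) and imitation (coefficient q). The parameter values below are textbook-style illustrations, not EEV estimates.

```python
# Sketch of a Bass product-diffusion model in discrete time; p, q and the
# market potential m are illustrative placeholder values.

def bass_adoption(p, q, m, periods):
    """Return the cumulative-adopter path for market potential m."""
    cumulative, path = 0.0, []
    for _ in range(periods):
        # new adopters: innovators (p) plus imitators (q * adoption share)
        new = (p + q * cumulative / m) * (m - cumulative)
        cumulative += new
        path.append(cumulative)
    return path

path = bass_adoption(p=0.03, q=0.38, m=1_000_000, periods=20)
```

With these coefficients the S-shaped path approaches the market potential within the 20-period horizon, which is the kind of long-horizon trajectory planning studies need.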

Keywords: demand forecasting, Energy Efficient Vehicles (EEVs), forecasting methodologies review, methodological approaches

Procedia PDF Downloads 483
7426 Competitive Adsorption of Al, Ga and In by Gamma Irradiation Induced Pectin-Acrylamide-(Vinyl Phosphonic Acid) Hydrogel

Authors: Md Murshed Bhuyan, Hirotaka Okabe, Yoshiki Hidaka, Kazuhiro Hara

Abstract:

Pectin-acrylamide-(vinyl phosphonic acid) hydrogels were prepared from their blend using gamma radiation at various doses. It was found that the gel fraction of the hydrogel increases with increasing radiation dose, reaches a maximum, and then decreases as the dose increases further. The optimum radiation dose and the composition of raw materials were determined on the basis of equilibrium swelling, which resulted in a 20 kGy absorbed dose and a 1:2:4 (pectin:AAm:VPA) composition. Differential scanning calorimetry reveals the gel strength relevant to their use as adsorbents. The FTIR spectrum confirmed the grafting/crosslinking of the monomers on the backbone of the pectin chain. The hydrogels were applied to the adsorption of Al, Ga, and In from a multi-element solution, where the adsorption capacity order for the three elements was found to be In > Ga > Al. SEM images of the hydrogels and the metal-adsorbed hydrogels indicate the gel network and the adherence of metal ions in the interpenetrating network of the hydrogel, which were supported by EDS spectra. Adsorption isotherm models were studied, and the Langmuir isotherm was found to fit the data well. Adsorption data were also fitted to different adsorption kinetic and diffusion models. Desorption of the metal-adsorbed hydrogels was performed in 5% nitric acid, where the desorption efficiency was found to be around 90%.
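The Langmuir fit reported above can be sketched via the model's standard linearized form, Ce/qe = Ce/qm + 1/(qm*KL). The concentrations and parameters below are synthetic, chosen only to show that the regression recovers qm and KL.

```python
import numpy as np

# Sketch of the isotherm-fitting step: Langmuir model
# qe = qm*KL*Ce / (1 + KL*Ce), fitted via its linearised form.
# All data here are synthetic, not the Al/Ga/In measurements.

qm_true, KL_true = 120.0, 0.05                    # assumed "true" parameters
Ce = np.array([5.0, 10, 25, 50, 100, 200])        # equilibrium conc. (mg/L)
qe = qm_true * KL_true * Ce / (1 + KL_true * Ce)  # adsorbed amount (mg/g)

# Regressing Ce/qe on Ce gives slope = 1/qm and intercept = 1/(qm*KL).
slope, intercept = np.polyfit(Ce, Ce / qe, 1)
qm_fit = 1.0 / slope
KL_fit = slope / intercept
```

The same linearization is a common first check before a full nonlinear fit of isotherm parameters.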

Keywords: hydrogel, gamma radiation, vinyl phosphonic acid, metal adsorption

Procedia PDF Downloads 148
7425 Multimedia Design in Tactical Play Learning and Acquisition for Elite Gaelic Football Practitioners

Authors: Michael McMahon

Abstract:

The use of media (video, animation, graphics) has long been employed by athletes, coaches, and sports scientists to analyse and improve performance in technical skills and team tactics. Sports educators are increasingly open to the use of technology to support coach and learner development; however, an overreliance on it is a concern. This paper is part of a larger PhD study looking into these new challenges for sports educators: most notably, how to exploit the deep-learning potential of digital media among expert learners, how to instruct sports educators to create effective media content that fosters deep learning, and how to make the process manageable and cost-effective. Central to the study is Richard Mayer's cognitive theory of multimedia learning, which proposes twelve principles that shape the design and organisation of multimedia presentations to improve learning and reduce cognitive load. For example, the prior-knowledge principle suggests different learning outcomes for novice and non-novice learners, respectively. Little research, however, is available to support this principle in modified domains (e.g., sports tactics and strategy). As a foundation for further research, this paper compares and contrasts a range of contemporary multimedia sports coaching content and assesses how it performs as a learning tool for strategic and tactical play acquisition among elite sports practitioners. The stress tests applied are guided by Mayer's twelve multimedia learning principles. The focus is on elite athletes and whether current coaching digital media content fosters improved sports learning among this cohort. The sport of Gaelic football was selected because it has high strategic and tactical play content, a wide range of practitioner skill levels (novice to elite), and a significant volume of multimedia coaching content available for analysis.
It is hoped the resulting data will help identify and inform future instructional content design and delivery for sports practitioners and help promote design practices optimal for different levels of expertise.

Keywords: multimedia learning, e-learning, design for learning, ICT

Procedia PDF Downloads 95
7424 Exploration of Building Information Modelling Software to Develop Modular Coordination Design Tool for Architects

Authors: Muhammad Khairi bin Sulaiman

Abstract:

The utilization of Building Information Modelling (BIM) in the construction industry has provided an opportunity for designers in the Architecture, Engineering and Construction (AEC) industry to move from the conventional method of manual drafting to a way of working that creates alternative designs quickly and produces more accurate, reliable, and consistent outputs. Using BIM software, designers can create digital content that manipulates data through BIM's parametric model: more alternative designs can be created quickly, and design problems can be explored further to produce a better design faster than with conventional design methods. Generally, however, BIM is used as a documentation mechanism, and its capabilities as a design tool have not been fully explored and utilised. Relative to the current issue, Modular Coordination (MC) design is encouraged as a sustainable design practice, since MC design reduces material wastage through standard dimensioning, prefabrication, and repetitive, modular construction and components. However, MC design involves a complex process of rules and dimensions, so a tool is needed to make this process easier. Since the parameters in BIM can easily be manipulated to follow MC rules and dimensioning, the integration of BIM software with MC design is proposed for architects during the design stage. With this tool, there will be an improvement in the acceptance and practice of MC design. Consequently, this study will analyse and explore the function and customization of BIM objects and the capability of BIM software to expedite the application of MC design during the design stage for architects. With this application, architects will be able to create building models and locate objects within reference modular grids that adhere to MC rules and dimensions.
The parametric modeling capabilities of BIM will also act as a visual tool that further automates the three-dimensional space-planning modeling process. The study will first analyse and explore the parametric modeling capabilities of rule-based BIM objects and eventually customize a reference grid within the rules and dimensioning of MC. This approach will enhance the architect's overall design process and enable architects to automate complex modeling that was nearly impossible before. A prototype based on a residential quarter will be modeled, and a set of reference grids guided by specific MC rules and dimensions will be used to develop a variety of space-planning configurations. The tool will thus expedite the design process and encourage the use of MC design in the construction industry.
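A minimal sketch of the modular check such a tool would automate, assuming the common MC basic module of M = 100 mm; the helper names are our own illustration, not BIM API calls.

```python
# Hypothetical sketch of a modular-coordination dimension check, assuming the
# basic module M = 100 mm. Function names are our own, not a BIM software API.

M = 100  # basic module in millimetres

def snap_to_module(dim_mm, module=M):
    """Round a design dimension to the nearest multiple of the module."""
    return module * round(dim_mm / module)

def is_modular(dim_mm, module=M):
    """True if the dimension already sits on the modular grid."""
    return dim_mm % module == 0
```

A grid-aware tool would apply such a check to every object placement, flagging or snapping dimensions that fall off the reference modular grid.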

Keywords: building information modeling, modular coordination, space planning, customization, BIM application, MC space planning

Procedia PDF Downloads 80
7423 Evaluating Models Through Feature Selection Methods Using Data Driven Approach

Authors: Shital Patil, Surendra Bhosale

Abstract:

Cardiac diseases are the leading cause of mortality and morbidity in the world and have, over recent decades, emerged as the most life-threatening disorder globally, accounting for a large number of deaths. Machine learning and artificial intelligence have been playing a key role in predicting heart diseases, and a relevant set of features can be very helpful in predicting disease accurately. In this study, we propose a comparative analysis of four different feature selection methods and evaluate their performance with both the raw (unbalanced) dataset and a sampled (balanced) dataset. The publicly available Z-Alizadeh Sani dataset has been used for this study. Four feature selection methods are used: data analysis, minimum redundancy maximum relevance (mRMR), recursive feature elimination (RFE), and chi-squared. These methods are tested with eight different classification models to obtain the best possible accuracy. Using the balanced and unbalanced datasets, the study shows promising results on various performance metrics for accurately predicting heart disease. Experimental results obtained by the proposed method with the raw data achieve a maximum AUC of 100%, a maximum F1 score of 94%, a maximum recall of 98%, and a maximum precision of 93%; with the balanced dataset, the results are a maximum AUC of 100%, an F1 score of 95%, a maximum recall of 95%, and a maximum precision of 97%.
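One of the four selection methods, chi-squared scoring, can be sketched from first principles on toy binary data; the actual study used the Z-Alizadeh Sani features and also compared mRMR, RFE, and data analysis.

```python
import numpy as np

# Sketch of chi-squared feature scoring on invented binary data, not the
# study's dataset: higher scores indicate stronger feature/label association.

def chi2_score(feature, y):
    """Chi-squared statistic of a binary feature against a binary label."""
    score, n = 0.0, len(y)
    for f in (0, 1):
        for c in (0, 1):
            observed = np.sum((feature == f) & (y == c))
            expected = np.sum(feature == f) * np.sum(y == c) / n
            if expected > 0:
                score += (observed - expected) ** 2 / expected
    return score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 200)          # toy class labels
informative = y.copy()               # perfectly informative feature
noisy = rng.integers(0, 2, 200)      # irrelevant feature
```

Ranking features by this score and keeping the top k is the essence of chi-squared selection; a perfectly associated binary feature scores n (here 200), while an independent one scores near zero.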

Keywords: cardio vascular diseases, machine learning, feature selection, SMOTE

Procedia PDF Downloads 109
7422 Continuous Fixed Bed Reactor Application for Decolourization of Textile Effluent by Adsorption on NaOH Treated Eggshell

Authors: M. Chafi, S. Akazdam, C. Asrir, L. Sebbahi, B. Gourich, N. Barka, M. Essahli

Abstract:

Fixed-bed adsorption has become a frequently used industrial application in wastewater treatment processes, and various low-cost adsorbents have been studied for their applicability in treating different types of effluents. The intention of this work was to explore the efficacy and feasibility of adsorbing the azo dye Acid Orange 7 (AO7) onto a fixed-bed column of NaOH-treated eggshell (TES). The effects of various parameters, such as flow rate, initial dye concentration, and bed height, were explored in this study. The studies confirmed that the breakthrough curves depend on the flow rate, the initial AO7 concentration, and the bed depth. The Thomas, Yoon-Nelson, and Bohart-Adams models were analysed to evaluate the column adsorption performance, and the adsorption capacity, rate constant, and correlation coefficient associated with each model were calculated and reported. The column experimental data fitted well with the Thomas model, with coefficients of correlation R2 ≥ 0.93 under different conditions, whereas the Yoon-Nelson, BDST, and Bohart-Adams models (R2 = 0.911) predicted poorer performance of the fixed-bed column. TES was shown to be a suitable adsorbent for the adsorption of AO7 in a fixed-bed adsorption column.
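The Thomas-model fit can be sketched through its linearized form, ln(C0/Ct - 1) = kTh*q0*m/Q - kTh*C0*t, which turns breakthrough data into a straight line in t. The breakthrough curve below is generated from assumed parameters, not the AO7/eggshell measurements.

```python
import numpy as np

# Sketch of a Thomas-model breakthrough fit on synthetic data. Parameter
# values and units are illustrative assumptions.

kTh, q0, m, Q, C0 = 0.002, 50.0, 1.0, 0.01, 100.0  # rate const., capacity,
                                                   # bed mass, flow, inlet conc.
t = np.linspace(10, 180, 18)                       # sampling times (min)
Ct = C0 / (1 + np.exp(kTh * q0 * m / Q - kTh * C0 * t))  # breakthrough curve

y = np.log(C0 / Ct - 1)            # linearised response
slope, intercept = np.polyfit(t, y, 1)
kTh_fit = -slope / C0              # slope = -kTh*C0
q0_fit = intercept * Q / (kTh_fit * m)  # intercept = kTh*q0*m/Q
```

Fitting the same line to measured Ct/C0 data is how the rate constant and capacity reported for each condition are typically obtained.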

Keywords: adsorption models, acid orange 7, bed depth, breakthrough, dye adsorption, fixed-bed column, treated eggshell

Procedia PDF Downloads 367
7421 Causal Estimation for the Left-Truncation Adjusted Time-Varying Covariates under the Semiparametric Transformation Models of a Survival Time

Authors: Yemane Hailu Fissuh, Zhongzhan Zhang

Abstract:

In biomedical research and randomized clinical trials, the outcomes of most common interest are time-to-event, so-called survival data. The importance of robust models in this context is to compare the effect of randomly controlled experimental groups in a way that carries a sense of causality. Causal estimation is the scientific concept of comparing the pragmatic effect of treatments conditional on the given covariates, rather than assessing a simple association between response and predictors. Hence, a causal-effect semiparametric transformation model is proposed to estimate the effect of treatment in the presence of possibly time-varying covariates. Due to its high flexibility and robustness, the semiparametric transformation model applied in this paper has received much attention for estimating causal effects when modelling left-truncated and right-censored survival data. Despite its wide application and popularity for estimating unknown parameters, the maximum likelihood estimation technique is quite complex and burdensome when estimating unknown parameters and an unspecified transformation function in the presence of possibly time-varying covariates; thus, to ease the complexity, we propose modified estimating equations. After the estimation procedures are set out, the consistency and asymptotic properties of the estimators are derived, and the finite-sample performance of the proposed model is illustrated via simulation studies and the Stanford heart transplant data. To sum up the study, the bias from truncation is adjusted by estimating the density function of the truncation variable, which is also incorporated in the model as a covariate in order to relax the assumption that failure time and truncation time are independent. Moreover, the expectation-maximization (EM) algorithm is described for the iterative estimation of unknown parameters and the unspecified transformation function.
In addition, the causal effect is derived as the ratio of the cumulative hazard functions of the active and passive experiments after adjusting for the bias raised in the model due to the truncation variable.

Keywords: causal estimation, EM algorithm, semiparametric transformation models, time-to-event outcomes, time-varying covariate

Procedia PDF Downloads 119
7420 An Empirical Analysis of the Effects of Corporate Derivatives Use on the Underlying Stock Price Exposure: South African Evidence

Authors: Edson Vengesai

Abstract:

Derivative products have become essential instruments in portfolio diversification, price discovery, and, most importantly, risk hedging. Derivatives are complex instruments, and their valuation, volatility implications, and real impact on the behaviour of the underlying assets are not well understood; little is documented empirically, and conclusions on how these instruments affect firm risk exposures conflict. Given the growing interest in using derivatives in risk management and portfolio engineering, this study examines the practical impact of derivative usage on underlying stock price exposure and systematic risk, using data from South African listed firms. The study employs GARCH models to understand the effect of derivatives use on conditional stock volatility, and GMM models to estimate the effect of derivatives use on stocks' systematic risk, as measured by beta, and on total risk, as measured by the standard deviation of returns. The results provide evidence on whether derivatives use is instrumental in reducing the systematic and total risk of stock returns. The results are subjected to numerous robustness controls, including financial leverage, firm size, growth opportunities, and macroeconomic effects.
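The conditional-volatility recursion behind a GARCH(1,1) specification can be sketched directly: sigma²_t = omega + alpha·r²_{t-1} + beta·sigma²_{t-1}. The parameters and simulated returns below are illustrative, not estimates from the South African sample.

```python
import numpy as np

# Sketch of GARCH(1,1) conditional variance dynamics with assumed parameters;
# returns are simulated from the model itself, purely for illustration.

omega, alpha, beta = 1e-5, 0.08, 0.90
rng = np.random.default_rng(1)

n = 1000
sigma2 = np.empty(n)                        # conditional variances
r = np.empty(n)                             # returns
sigma2[0] = omega / (1 - alpha - beta)      # unconditional variance
r[0] = np.sqrt(sigma2[0]) * rng.standard_normal()
for t in range(1, n):
    # variance responds to last period's shock and its own lag
    sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    r[t] = np.sqrt(sigma2[t]) * rng.standard_normal()
```

In an empirical study, omega, alpha, and beta would be estimated by maximum likelihood from observed returns rather than assumed as here.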

Keywords: derivatives use, hedging, volatility, stock price exposure

Procedia PDF Downloads 102
7419 Efficient Principal Components Estimation of Large Factor Models

Authors: Rachida Ouysse

Abstract:

This paper proposes a constrained principal components (CnPC) estimator for efficient estimation of large-dimensional factor models when errors are cross-sectionally correlated and the number of cross-sections (N) may be larger than the number of observations (T). Although the principal components (PC) method is consistent for any path of the panel dimensions, it is inefficient because the errors are treated as homoskedastic and uncorrelated. The new CnPC exploits the assumption of bounded cross-sectional dependence, which defines Chamberlain and Rothschild's (1983) approximate factor structure, as an explicit constraint, and solves a constrained PC problem. The CnPC method is computationally equivalent to the PC method applied to a regularized form of the data covariance matrix. Unlike maximum-likelihood-type methods, the CnPC method does not require inverting a large covariance matrix and is thus valid for panels with N ≥ T. The paper derives a convergence rate and an asymptotic normality result for the CnPC estimators of the common factors. We provide feasible estimators and show in a simulation study that they are more accurate than the PC estimator, especially for panels with N larger than T, and than generalized PC-type estimators, especially for panels with N almost as large as T.
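The computational equivalence noted above (PC applied to a regularized covariance matrix) can be sketched as follows; the ridge-style shrinkage target used here is a generic stand-in, not the paper's exact bounded-dependence constraint.

```python
import numpy as np

# Sketch: principal components extracted from a shrinkage-regularised
# covariance matrix, on simulated factor data with N > T. The shrinkage
# target and delta are illustrative, not the CnPC construction.

rng = np.random.default_rng(0)
T, N, k = 50, 80, 2                     # N > T, as the estimator allows
F = rng.standard_normal((T, k))         # latent common factors
L = rng.standard_normal((N, k))         # factor loadings
X = F @ L.T + 0.5 * rng.standard_normal((T, N))   # panel of observations

S = X.T @ X / T                         # N x N sample covariance
delta = 0.1
S_reg = (1 - delta) * S + delta * (np.trace(S) / N) * np.eye(N)

eigvals, eigvecs = np.linalg.eigh(S_reg)    # eigenvalues in ascending order
loadings_hat = eigvecs[:, -k:]              # top-k principal directions
factors_hat = X @ loadings_hat              # estimated common factors (T x k)
```

No large-matrix inversion is needed, which mirrors the validity of the approach for N ≥ T panels.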

Keywords: high dimensionality, unknown factors, principal components, cross-sectional correlation, shrinkage regression, regularization, pseudo-out-of-sample forecasting

Procedia PDF Downloads 144
7418 Automated Feature Extraction and Object-Based Detection from High-Resolution Aerial Photos Based on Machine Learning and Artificial Intelligence

Authors: Mohammed Al Sulaimani, Hamad Al Manhi

Abstract:

With the development of remote sensing technology, the resolution of optical remote sensing images has greatly improved and images have become widely available. Numerous detectors have been developed for detecting different types of objects. In the past few years, remote sensing has benefited greatly from deep learning, particularly deep convolutional neural networks (CNNs). Deep learning holds great promise for fulfilling the challenging needs of remote sensing and solving various problems within different fields and applications. The use of Unmanned Aerial Systems (UAS) for acquiring aerial photos has become widespread and preferred by most organizations to support their activities because of the photos' high resolution and accuracy, which make the identification and detection of very small features much easier than with satellite images. This has opened a new era of deep learning in different applications, not only in feature extraction and prediction but also in analysis. This work addresses the capacity of machine learning and deep learning to detect and extract oil leaks from onshore flowlines using high-resolution aerial photos acquired by a UAS fitted with an RGB sensor, in order to support early detection of these leaks and prevent the company's losses from leaks and, most importantly, environmental damage. Two different approaches, using different deep learning methods, are demonstrated. The first approach focuses on detecting oil leaks from raw (unprocessed) aerial photos using the Single Shot Detector (SSD); the model draws bounding boxes around the leaks, and the results were extremely good. The second approach focuses on detecting oil leaks from orthomosaiced (georeferenced) images by developing three deep learning models (Mask R-CNN, U-Net, and a PSPNet classifier).
Post-processing is then performed to combine the results of these three models to achieve better detection and improved accuracy. Although a relatively small amount of data was available for training, the trained models have shown good results in extracting the extent of the oil leaks, with excellent and accurate detection.
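The fusion step can be as simple as a per-pixel majority vote over the three models' binary masks, sketched below; the abstract does not specify the actual combination rule, so this is one plausible choice.

```python
import numpy as np

# Sketch of one possible post-processing rule: fuse binary leak masks from
# several segmentation models by per-pixel majority vote. The tiny masks
# below are invented examples.

def majority_vote(masks):
    """masks: list of equally shaped 0/1 arrays -> fused 0/1 array."""
    stacked = np.stack(masks)
    needed = (len(masks) + 1) // 2          # votes required for a leak pixel
    return (stacked.sum(axis=0) >= needed).astype(np.uint8)

m1 = np.array([[1, 1, 0], [0, 1, 0]])
m2 = np.array([[1, 0, 0], [0, 1, 1]])
m3 = np.array([[1, 1, 0], [0, 0, 1]])
fused = majority_vote([m1, m2, m3])
```

Voting suppresses pixels that only one model flags, which is one way combining models can improve detection accuracy.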

Keywords: GIS, remote sensing, oil leak detection, machine learning, aerial photos, unmanned aerial systems

Procedia PDF Downloads 26
7417 Social Media Data Analysis for Personality Modelling and Learning Styles Prediction Using Educational Data Mining

Authors: Srushti Patil, Preethi Baligar, Gopalkrishna Joshi, Gururaj N. Bhadri

Abstract:

In designing learning environments, instructional strategies can be tailored to suit the learning style of an individual to ensure effective learning. In this study, the information shared on social media such as Facebook is used to predict the learning style of a learner. Previous research has shown that Facebook data can be used to predict user personality: users with a particular personality exhibit an inherent pattern in their digital footprint on Facebook. The proposed work aims to correlate users' personality, predicted from Facebook data, with their learning styles, predicted through questionnaires. For millennial learners, Facebook has become a primary means of information sharing and interaction with peers; thus, it can serve as a rich bed for research and can direct the design of learning environments. The authors conducted this study in an undergraduate freshman engineering course. Data from 320 freshman Facebook users were collected, and the same users also participated in the learning style and personality prediction survey. Kolb's learning style questionnaire and the Big Five personality inventory were adopted for the survey. The users agreed to participate in this research and signed individual consent forms. A specific page was created on Facebook to collect user data such as personal details, status updates, comments, demographic characteristics, and egocentric network parameters; these data were captured by an application written in Python. The data captured from Facebook were subjected to text analysis using the Linguistic Inquiry and Word Count (LIWC) dictionary, and analysis of the questionnaire data reveals each student's personality and learning style. The results obtained from the analysis of the Facebook, learning style, and personality data were then fed into automatic classifiers trained using data mining techniques such as rule-based classifiers and decision trees.
These classifiers predict user personality and learning styles by analysing common patterns. Rule-based classifiers applied to the text analysis help categorize Facebook data into positive, negative, and neutral. In total, two models were trained: one to predict personality from Facebook data, and another to predict learning styles from the personalities. The results show that the classifier models have high accuracy, which makes the proposed method a reliable one for predicting user personality and learning styles.
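The rule-based polarity step can be illustrated with a toy lexicon vote; the word lists below are placeholders for the LIWC categories the study actually used, and the sample posts are invented.

```python
# Toy sketch of rule-based positive/negative/neutral categorisation of posts.
# POSITIVE and NEGATIVE are placeholder lexicons, not LIWC.

POSITIVE = {"great", "happy", "love", "excellent", "fun"}
NEGATIVE = {"bad", "sad", "hate", "terrible", "boring"}

def polarity(text):
    """Classify a post by the balance of positive and negative words."""
    tokens = [w.strip(".,!?").lower() for w in text.split()]
    score = (sum(t in POSITIVE for t in tokens)
             - sum(t in NEGATIVE for t in tokens))
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"
```

The resulting polarity labels would then join the personality and learning-style features fed into the downstream classifiers.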

Keywords: educational data mining, Facebook, learning styles, personality traits

Procedia PDF Downloads 224
7416 From Clients to Colleagues: Supporting the Professional Development of Survivor Social Work Students

Authors: Stephanie Jo Marchese

Abstract:

This oral presentation is a reflective piece on current social work teaching methods that value and devalue the lived experiences of survivor students. The presentation grounds the term 'survivor' in feminist frameworks: a survivor-defined approach to feminist advocacy assumes an individual's agency, considers each case and its needs independently of generalizations, and provides resources and support to empower victims. Feminist ideologies are ripe arenas for updating and influencing the rapport that schools of social work build with these students. Survivor-based frameworks are rooted in nuanced understandings of intersectional realities, staunchly combat both conscious and unconscious deficit lenses wielded against victims, elevate lived experience to the realm of experiential expertise, and offer alternatives to traditional power structures and knowledge exchanges. Actively importing a survivor framework into the methodology of social work teaching breaks open barriers many survivor students have faced in institutional settings, this author included. The profession of social work is at an important crux of change, both in the United States and globally. The United States is currently undergoing a radical change in its citizenry, and outlier communities have taken to the streets again in opposition to their othered-ness. New waves of students are entering this field, emboldened by their survival of personal and systemic oppressions and heavily influenced by third-wave feminism, critical race theory, and queer theory, among other post-structuralist ideologies. Traditional models of sociological and psychological study are being actively challenged. The profession of social work was founded not on the diagnosis of disorders but on grassroots-level activism that heralded and demanded resources for oppressed communities.
Institutional and classroom acceptance and celebration of survivor narratives can catapult the resurgence of these values in the profession's service-delivery models and put social workers back in the driver's seat of social change (a combined advocacy and policy perspective), moving away from outsider-based intervention models. Survivor students should be viewed as agents of change, not solely as former victims and clients. The ideas of this presentation proposal are supported by various qualitative interviews, as well as reviews of 'best practices' in education that incorporate feminist methods of inclusion and empowerment. Curriculum and policy recommendations are also offered.

Keywords: deficit lens bias, empowerment theory, feminist praxis, inclusive teaching models, strengths-based approaches, social work teaching methods

Procedia PDF Downloads 282