Search results for: dynamic bending
251 Landing Performance Improvement Using Genetic Algorithm for Electric Vertical Take Off and Landing Aircraft
Authors: Willian C. De Brito, Hernan D. C. Munoz, Erlan V. C. Carvalho, Helder L. C. De Oliveira
Abstract:
In order to improve commute times for short trips and relieve traffic in large cities, a new transport category has become the subject of research and new designs worldwide. The air taxi travel market promises to change the way people live and commute, using vehicles able to take off and land vertically and to provide passenger transport equivalent to a car, with mobility within and between large cities. Today’s civil air transport remains costly and accounts for 2% of man-made CO₂ emissions. Taking advantage of this scenario, many companies have developed their own Vertical Take Off and Landing (VTOL) designs, seeking to meet comfort, safety, low-cost and flight-time requirements in a sustainable way. Thus, green power supplies, especially batteries, and fully electric power plants are the most common choice for these emerging aircraft. However, finding a feasible way to handle batteries rather than conventional petroleum-based fuels remains a challenge. Batteries are heavy, and their energy density is still below that of gasoline, diesel or kerosene. Therefore, despite all the clear advantages, all-electric aircraft (AEA) still have low flight autonomy and high operational cost, since the batteries must be recharged or replaced. In this sense, this paper addresses a way to optimize the energy consumption in a typical mission of an air taxi aircraft. The approach and landing procedure was chosen as the subject of a genetic algorithm optimization, while the final program can be adapted for take-off and flight-level changes as well. Data from a real tilt-rotor aircraft with a fully electric power plant were used to fit the derived dynamic equations of motion. Although a tilt-rotor design is used as a proof of concept, the optimization can be adapted to other design concepts, even those with independent motors for hover and cruise flight phases. 
For a given trajectory, the best set of control variables is calculated to provide the time-history response for the aircraft's attitude, rotor RPM and thrust direction (or vertical and horizontal thrust, for independent-motor designs) that, if followed, results in the minimum electric power consumption along that landing path. Safety, comfort and design constraints are assumed to give representativeness to the solution. Results are highly dependent on these constraints. For the tested cases, performance improvement ranged from 5 to 10% when changing initial airspeed, altitude, flight path angle, and attitude.
Keywords: air taxi travel, all electric aircraft, batteries, energy consumption, genetic algorithm, landing performance, optimization, performance improvement, tilt rotor, VTOL design
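The optimization loop described in the abstract can be illustrated with a minimal genetic-algorithm sketch. Everything here is hypothetical: the energy_cost surrogate simply penalizes high control effort and abrupt changes between trajectory segments, standing in for the paper's derived dynamic equations of motion, and the operators (elitist selection, one-point crossover, Gaussian mutation) are generic choices, not the authors' implementation.

```python
import random

def energy_cost(controls):
    # Toy surrogate for electric power use along the landing path:
    # penalize high control settings (effort) and abrupt changes
    # between consecutive trajectory segments (smoothness).
    smooth = sum((a - b) ** 2 for a, b in zip(controls, controls[1:]))
    effort = sum(c ** 2 for c in controls)
    return effort + 10.0 * smooth

def genetic_minimize(n_segments=8, pop_size=40, generations=200, seed=1):
    rng = random.Random(seed)
    # Each individual is one candidate control schedule over the path.
    pop = [[rng.uniform(0.0, 1.0) for _ in range(n_segments)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=energy_cost)
        elite = pop[: pop_size // 4]            # elitist selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_segments)  # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n_segments)       # Gaussian mutation, clamped
            child[i] = min(1.0, max(0.0, child[i] + rng.gauss(0.0, 0.1)))
            children.append(child)
        pop = elite + children
    return min(pop, key=energy_cost)

best = genetic_minimize()
```

In the paper's setting, the fitness would instead integrate battery power drawn from the fitted equations of motion, subject to the safety and comfort constraints mentioned above.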
Procedia PDF Downloads 115
250 Distribution, Seasonal Phenology and Infestation Dispersal of the Chickpea Leafminer Liriomyza cicerina (Diptera: Agromyzidae) on Two Winter and Spring Chickpea Varieties
Authors: Abir Soltani, Moez Amri, Jouda Mediouni Ben Jemâa
Abstract:
In North Africa, the chickpea leafminer Liriomyza cicerina (Rondani) (Diptera: Agromyzidae) is one of the major damaging pests affecting both spring- and winter-planted chickpea. Damage is caused by the larvae, which feed in the leaf mesophyll tissue, resulting in desiccation and premature leaf fall that can cause severe yield losses. In the present work, the distribution and seasonal phenology of L. cicerina were studied on two chickpea varieties: a winter variety, Beja 1, which is the most cultivated variety in Tunisia, and a spring-sown variety, Amdoun 1. The experiment was conducted during the 2015-2016 cropping season at the Oued Beja experimental research station in the Beja region (36°44’N; 9°13’E). To determine the distribution and seasonal phenology of L. cicerina in both varieties, 100 leaf samples (50 from the top and 50 from the base) were collected from 10 chickpea plants randomly chosen from each field. Sampling was done during three development stages: (i) 20-25 days before flowering (BFL), (ii) at flowering (FL) and (iii) at pod setting (PS). For each plant, leaves were checked from the base to the top to track the progress of infestation through the plant in correlation with chickpea growth stages. Adult fly populations were monitored using 8 yellow sticky traps together with weekly leaf sampling in each field. The traps were placed 70 cm above ground, and trap catches were collected once a week over the cropping season. Results showed that the distribution of L. cicerina varied with chickpea variety, crop development stage and seasonal phenology. For the winter variety Beja 1, infestation levels of 2%, 10.3% and 20.3% were recorded on the basal leaves at the BFL, FL and PS stages, respectively, against 0%, 8.1% and 45.8% recorded on the upper leaves at the same stages. 
For the spring-sown variety Amdoun 1, the infestation level reached 71.5% during the flowering stage. The population dynamics study revealed that on the Beja 1 variety, L. cicerina completed three annual generations over the cropping season, the third being the most important, with a capture level of 85 adults/trap by mid-May, against 139 adults/trap at the end of May recorded for cv. Amdoun 1. Results also showed that field infestation dispersal of L. cicerina depends on the field area and on the crop growth stage: plants in the border areas were more infested than those inside the plots. For cv. Beja 1, border-area infestations were 11%, 28% and 91.2% at the BFL, FL and PS stages, respectively, against 2%, 10.73% and 69.2% recorded on plants inside the plot at the same growth stages. For cv. Amdoun 1, an infestation level of 90% was observed on border plants at the FL and PS stages, against less than 65% inside the plot.
Keywords: leaf miner, Liriomyza cicerina, chickpea, distribution, seasonal phenology, Tunisia
Procedia PDF Downloads 282
249 Exploring the Neural Correlates of Different Interaction Types: A Hyperscanning Investigation Using the Pattern Game
Authors: Beata Spilakova, Daniel J. Shaw, Radek Marecek, Milan Brazdil
Abstract:
Hyperscanning affords a unique insight into the brain dynamics underlying human interaction by simultaneously scanning the brain responses of two or more individuals while they engage in dyadic exchange. This provides an opportunity to observe dynamic brain activations in all individuals participating in an interaction, and possible inter-brain effects among them. The present research aims to provide an experimental paradigm for hyperscanning research capable of delineating among different forms of interaction. Specifically, the goal was to distinguish between two dimensions: (1) interaction structure (concurrent vs. turn-based) and (2) goal structure (competition vs. cooperation). Dual-fMRI was used to scan 22 pairs of participants - each pair matched on gender, age, education and handedness - as they played the Pattern Game. In this simple interactive task, one player attempts to recreate a pattern of tokens while the second player must either help (cooperation) or prevent the first from achieving the pattern (competition). Each pair played the game iteratively, alternating their roles every round. The game was played in two consecutive sessions: in the first, the players took sequential turns (turn-based); in the second, they placed their tokens concurrently (concurrent). Conventional general linear model (GLM) analyses revealed activations throughout a diffuse collection of brain regions: the cooperative condition engaged medial prefrontal cortex (mPFC) and posterior cingulate cortex (PCC); in the competitive condition, significant activations were observed in frontal and prefrontal areas, insula cortices and the thalamus. Comparisons between the turn-based and concurrent conditions revealed greater precuneus engagement in the former. Interestingly, mPFC, PCC and the insulae are linked repeatedly to social cognitive processes. Similarly, the thalamus is often associated with cognitive empathy; thus its activation may reflect the need to predict the opponent’s upcoming moves. 
Frontal and prefrontal activation most likely represents the higher attentional and executive demands of the concurrent condition, whereby subjects must simultaneously observe their co-player and place their own tokens accordingly. The activation of the precuneus in the turn-based condition may be linked to self-other distinction processes. Finally, by performing intra-pair correlations of brain responses, we demonstrate condition-specific patterns of brain-to-brain coupling in mPFC and PCC. Moreover, the degree of synchronicity in these neural signals related to performance on the game. The present results, then, show that different types of interaction recruit different brain systems implicated in social cognition, and that the degree of inter-player synchrony within these brain systems is related to the nature of the social interaction.
Keywords: brain-to-brain coupling, hyperscanning, pattern game, social interaction
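At its core, the intra-pair correlation step amounts to correlating the two players' regional time series. A minimal sketch with made-up signals follows; the study's actual pipeline (fMRI preprocessing, ROI extraction, condition-wise modeling) is of course far richer, and the example values are purely illustrative.

```python
def pearson(x, y):
    # Pearson correlation coefficient between two equal-length signals.
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5

# Hypothetical ROI time series (e.g. mPFC signal) for the two players
# of one pair during a single condition:
player_a = [0.2, 0.5, 0.9, 0.4, 0.1, 0.6]
player_b = [0.1, 0.6, 0.8, 0.5, 0.2, 0.5]
coupling = pearson(player_a, player_b)
```

Comparing such coupling values across conditions, and relating them to game performance, is the kind of analysis the abstract describes.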
Procedia PDF Downloads 339
248 The Inclusive Human Trafficking Checklist: A Dialectical Measurement Methodology
Authors: Maria C. Almario, Pam Remer, Jeff Resse, Kathy Moran, Linda Theander Adam
Abstract:
The identification of victims of human trafficking, and the consequent provision of services, is characterized by a significant disconnect between the estimated prevalence of the issue and the number of cases identified. This poses a tremendous problem for human rights advocates, as it prevents data collection, information sharing, allocation of resources and opportunities for international dialogue. The current paper introduces the Inclusive Human Trafficking Checklist (IHTC) as a measurement methodology with theoretical underpinnings derived from dialectic theory. The presence of human trafficking in a person’s life is conceptualized as a dynamic and dialectic interaction between vulnerability and exploitation. The current paper explores the operationalization of exploitation and vulnerability, evaluates the metric qualities of the instrument, evaluates whether there are differences in assessment based on the participant’s profession, level of knowledge, and training, and assesses whether users of the instrument perceive it as useful. A total of 201 participants were asked to rate three vignettes predetermined by experts to qualify as human trafficking cases or not. The participants were placed in three conditions: business as usual, and utilization of the IHTC with and without training. The results revealed a statistically significant level of agreement between the experts’ diagnosis and the application of the IHTC, with a 40% improvement in identification when compared with the business-as-usual condition. While there was an improvement in identification in the group with training, the difference was found to have a small effect size. Participants who utilized the IHTC showed an increased ability to identify elements of identity-based vulnerability as well as elements of fraud, which, according to the results, are distinctive variables in cases of human trafficking. 
In terms of perceived utility, the results revealed higher mean scores for the groups utilizing the IHTC when compared to the business-as-usual condition. These findings suggest that the IHTC improves appropriate identification of cases and that it is perceived as a useful instrument. The application of the IHTC as a multidisciplinary instrument that can be utilized in legal and human-services settings is discussed as a pivotal piece of helping victims restore their sense of dignity and of advocating for legal, physical and psychological reparations. It is noteworthy that this study was conducted with a sample in the United States and later re-tested in Colombia. The implications of the instrument for treatment conceptualization and intervention in human trafficking cases are discussed as opportunities for enhancement of victim well-being, restoration engagement and activism. With the idea that what is personal is also political, we believe that careful observation and data collection in specific cases can inform new areas of human rights activism.
Keywords: exploitation, human trafficking, measurement, vulnerability, screening
Procedia PDF Downloads 330
247 Neoliberalism and Environmental Justice: A Critical Examination of Corporate Greenwashing
Authors: Arnav M. Raval
Abstract:
This paper critically examines the neoliberal economic model and its role in enabling corporate greenwashing, a practice where corporations deceptively market themselves as environmentally responsible while continuing harmful environmental practices. Through a rigorous focus on the neoliberal emphasis on free markets, deregulation, and minimal government intervention, this paper explores how these policies have set the stage for corporations to externalize environmental costs and engage in superficial sustainability initiatives. Within this framework, companies often bypass meaningful environmental reform, opting for strategies that enhance their public image without addressing their actual environmental impacts. The paper also draws on the work of critical theorists Theodor Adorno, Max Horkheimer, and Herbert Marcuse, particularly their critiques of capitalist society and its tendency to commodify social values. This paper argues that neoliberal capitalism has commodified environmentalism, transforming genuine ecological responsibility into a marketable product. Through corporate social responsibility initiatives, corporations have created the illusion of sustainability while masking deeper environmental harm. Under neoliberalism, these initiatives often serve as public relations tools rather than genuine commitments to environmental justice and sustainability. This commodification is particularly dangerous because it manipulates consumer perceptions and diverts attention away from the structural causes of environmental degradation. The analysis also examines how greenwashing practices have disproportionately affected marginalized communities, particularly in the global South, where environmental costs are often externalized. As these corporations promote their “sustainability” in wealthier markets, these marginalized communities bear the brunt of their pollution, resource depletion, and other forms of environmental degradation. 
This dynamic underscores the inherent injustice within neoliberal environmental policies: those most vulnerable to environmental risks are often neglected, while companies reap the benefits of corporate sustainability efforts at their expense. Finally, this paper calls for a fundamental transition away from neoliberal market-driven solutions, which prioritize corporate profit over genuine ecological reform. It advocates for stronger regulatory frameworks, transparent third-party certifications, and a more collective approach to environmental governance. To ensure genuine corporate accountability, governments and institutions must move beyond superficial green initiatives and market-based solutions, shifting toward policies that enforce real environmental responsibility and prioritize environmental justice for all communities. Through its critique of the neoliberal system and its commodification of environmentalism, this paper highlights the urgent need to rethink how environmental responsibility is defined and enacted in the corporate world. Without systemic change, greenwashing will continue to undermine both ecological sustainability and social justice, leaving the most vulnerable populations to suffer the consequences.
Keywords: critical theory, environmental justice, greenwashing, neoliberalism
Procedia PDF Downloads 17
246 A Generalized Framework for Adaptive Machine Learning Deployments in Algorithmic Trading
Authors: Robert Caulk
Abstract:
A generalized framework for adaptive machine learning deployments in algorithmic trading is introduced, tested, and released as open-source code. The presented software aims to test the hypothesis that recent data contains enough information to form a probabilistically favorable short-term price prediction. Further, the framework contains various adaptive machine learning techniques that are geared toward generating profit during strong trends and minimizing losses during trend changes. Results demonstrate that this adaptive machine learning approach is capable of capturing trends and generating profit. The presentation also discusses the importance of defining the parameter space associated with the dynamic training dataset and using that parameter space to identify and remove outliers from prediction data points. Meanwhile, the generalized architecture enables common users to exploit the powerful machinery while focusing on high-level feature engineering and model testing. The presentation also highlights common strengths and weaknesses associated with the presented technique and presents a broad range of well-tested starting points for feature set construction, target setting, and statistical methods for enforcing risk management and maintaining probabilistically favorable entry and exit points. The presentation also describes the end-to-end data processing tools associated with FreqAI, including automatic data fetching, data aggregation, feature engineering, safe and robust data pre-processing, outlier detection, custom machine learning and statistical tools, data post-processing, adaptive-training backtest emulation, and deployment of adaptive training in live environments. Finally, the generalized user interface is discussed. Feature engineering is simplified so that users can seed their feature sets with common indicator libraries (e.g. TA-lib, pandas-ta). 
The user also feeds data-expansion parameters to fill out a large feature set for the model, which can contain as many as 10,000+ features. The presentation describes the various object-oriented programming techniques employed to make FreqAI agnostic to third-party libraries and external data sources. In other words, the back-end is constructed in such a way that users can leverage a broad range of common regression libraries (CatBoost, LightGBM, Sklearn, etc.) as well as common neural network libraries (TensorFlow, PyTorch) without worrying about the logistical complexities associated with data handling and API interactions. The presentation finishes by drawing conclusions about the most important parameters associated with a live deployment of the adaptive learning framework and provides the road map for future development in FreqAI.
Keywords: machine learning, market trend detection, open-source, adaptive learning, parameter space exploration
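One step highlighted above, using the training parameter space to discard outlier prediction points, can be sketched with a simple per-feature z-score rule. This is an illustrative stand-in, not FreqAI's actual implementation; the function names, the threshold, and the data are all hypothetical.

```python
def fit_bounds(train_rows):
    # Record per-feature mean and standard deviation over the
    # training window; this defines the trained "parameter space".
    cols = list(zip(*train_rows))
    stats = []
    for col in cols:
        m = sum(col) / len(col)
        var = sum((v - m) ** 2 for v in col) / len(col)
        stats.append((m, var ** 0.5))
    return stats

def is_outlier(row, stats, z_max=3.0):
    # A prediction point lying outside the trained parameter space
    # (any feature more than z_max standard deviations from its
    # training mean) is flagged and removed before inference.
    for v, (m, s) in zip(row, stats):
        if s > 0 and abs(v - m) / s > z_max:
            return True
    return False

# Hypothetical feature rows: a small training window, then two
# incoming prediction points, one in-distribution and one not.
train = [[1.0, 10.0], [1.2, 11.0], [0.8, 9.0], [1.1, 10.5], [0.9, 9.5]]
stats = fit_bounds(train)
keep = [r for r in [[1.05, 10.2], [5.0, 10.0]] if not is_outlier(r, stats)]
```

Real deployments typically use richer schemes (e.g. distance in a reduced feature space) for the same purpose: refusing to predict on points the model was never trained near.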
Procedia PDF Downloads 89
245 Impact of Agricultural Infrastructure on Diffusion of Technology of the Sample Farmers in North 24 Parganas District, West Bengal
Authors: Saikat Majumdar, D. C. Kalita
Abstract:
The agriculture sector plays an important role in the rural economy of India. It is the backbone of the Indian economy and the dominant sector in terms of employment and livelihood. Agriculture still contributes significantly to export earnings and is an important source of raw materials, as well as of demand for many industrial products, particularly fertilizers, pesticides, agricultural implements and a variety of consumer goods. The performance of the agricultural sector influences the growth of the Indian economy. According to the 2011 Agricultural Census of India, an estimated 61.5 percent of the rural population is dependent on agriculture. Proper agricultural infrastructure has the potential to transform existing traditional agriculture into a modern, commercial and dynamic farming system in India through the diffusion of technology. The rate of adoption of modern technology reflects the progress of development in the agricultural sector, and the adoption of any improved agricultural technology also depends on the development of the road infrastructure or road network. The present study consisted of 300 sample farmers, of which 150 were drawn from a developed area and the remaining 150 from an underdeveloped area, selected using a multistage random sampling procedure. In the first stage, North 24 Parganas District was selected purposively. From the district, one developed and one underdeveloped block were then selected randomly. In the third stage, 10 villages were selected randomly from each block. Finally, 15 sample farmers were selected randomly from each village. The extent of adoption of technology in the different areas was calculated through various parameters. 
These are the percentage of area under High Yielding Variety (HYV) cereals, percentage of area under HYV pulses, area under hybrid vegetables, irrigated area, mechanically operated area, amount spent on fertilizer and pesticides, etc., in both the developed and underdeveloped areas of North 24 Parganas District, West Bengal. The percentage of area under HYV cereals in the developed and underdeveloped areas was 34.86 and 22.59, respectively, and 42.07 and 31.46 percent for HYV pulses. For area under irrigation it was 57.66 and 35.71 percent, while for mechanically operated area it was 10.60 and 3.13 percent, respectively, in the developed and underdeveloped areas. This clearly showed that the extent of adoption of technology was significantly higher in the developed area than in the underdeveloped area. A better road network system helps farmers increase their farm income, farm assets, cropping intensity, marketed surplus and rate of adoption of new technology. With this background, an attempt is made in this paper to study the impact of agricultural infrastructure on the adoption of modern technology in agriculture in North 24 Parganas District, West Bengal.
Keywords: agricultural infrastructure, adoption of technology, farm income, road network
Procedia PDF Downloads 101
244 Use of Socially Assistive Robots in Early Rehabilitation to Promote Mobility for Infants with Motor Delays
Authors: Elena Kokkoni, Prasanna Kannappan, Ashkan Zehfroosh, Effrosyni Mavroudi, Kristina Strother-Garcia, James C. Galloway, Jeffrey Heinz, Rene Vidal, Herbert G. Tanner
Abstract:
Early immobility affects motor, cognitive, and social development. Current pediatric rehabilitation lacks the technology that would provide the dosage needed to promote mobility for young children at risk. The addition of socially assistive robots to early interventions may help increase the mobility dosage. The aim of this study is to examine the feasibility of an early intervention paradigm in which non-walking infants experience independent mobility while socially interacting with robots. A dynamic environment was developed in which both the child and the robot interact and learn from each other. The environment involves: 1) a range of physical activities that are goal-oriented, age-appropriate, and ability-matched for the child to perform; 2) automatic functions that perceive the child’s actions through novel activity-recognition algorithms and decide appropriate actions for the robot; and 3) a networked visual data acquisition system that enables real-time assessment and provides the means to connect child behavior with robot decision-making in real time. The environment was tested by bringing a two-year-old boy with Down syndrome for eight sessions. The child presented delays throughout his motor development, the current one being in the acquisition of walking. During the sessions, the child performed physical activities that required complex motor actions (e.g. climbing an inclined platform and/or staircase). During these activities, a (wheeled or humanoid) robot was either performing the action or was at its end point 'signaling' for interaction. From these sessions, information was gathered to develop algorithms to automate the perception of the activities on which the robot bases its actions. A Markov Decision Process (MDP) is used to model the intentions of the child. A 'smoothing' technique is used to help identify the model’s parameters, a critical step when dealing with small data sets such as in this paradigm. 
The child engaged in all activities and socially interacted with the robot across sessions. Over time, the child’s mobility increased, and the frequency and duration of complex and independent motor actions also increased (e.g. taking independent steps). Simulation results on the combination of the MDP and smoothing support the use of this model in human-robot interaction: smoothing facilitates learning MDP parameters from small data sets. This paradigm is feasible and provides insight into how social interaction may elicit mobility actions, suggesting a new early intervention paradigm for very young children with motor disabilities. Acknowledgment: This work has been supported by NIH under grant #5R01HD87133.
Keywords: activity recognition, human-robot interaction, machine learning, pediatric rehabilitation
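The role of smoothing when estimating MDP parameters from a small data set can be illustrated with additive (Laplace) smoothing of transition counts, a common choice; the abstract does not specify the authors' exact technique, so this sketch and its data are assumptions for illustration only.

```python
def smoothed_transitions(counts, alpha=1.0):
    # Additive (Laplace) smoothing: every transition receives alpha
    # pseudo-counts, so transitions never observed in a short session
    # still get non-zero probability instead of an unusable zero.
    n = len(counts)
    probs = []
    for row in counts:
        total = sum(row) + alpha * n
        probs.append([(c + alpha) / total for c in row])
    return probs

# Hypothetical counts of observed child actions (rows: current state,
# columns: next state) gathered over a few short sessions. Note the
# middle state was never visited, yet its row stays well-defined.
counts = [[4, 0, 1],
          [0, 0, 0],
          [2, 2, 2]]
P = smoothed_transitions(counts)
```

With eight sessions of data, almost every state-action pair is sparsely observed, which is exactly the regime where such pseudo-count regularization matters.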
Procedia PDF Downloads 292
243 Seismic Response of Reinforced Concrete Buildings: Field Challenges and Simplified Code Formulas
Authors: Michel Soto Chalhoub
Abstract:
Building-code-related literature provides recommendations on normalizing approaches to the calculation of the dynamic properties of structures. Most building codes make a distinction among types of structural systems, construction materials, and configurations through a numerical coefficient in the expression for the fundamental period. The period is then used in normalized response spectra to compute base shear. The typical parameter used in simplified code formulas for the fundamental period is overall building height raised to a power determined from analytical and experimental results. However, reinforced concrete buildings, which constitute the majority of built space in less developed countries, pose additional challenges compared to those built with a homogeneous material such as steel, or with concrete under stricter quality control. In the present paper, the particularities of reinforced concrete buildings are explored and related to current methods of equivalent static analysis. A comparative study is presented between the Uniform Building Code, commonly used for buildings within and outside the USA, and data from the Middle East used to model 151 reinforced concrete buildings of varying number of bays, number of floors, overall building height, and individual story height. The fundamental period was calculated using eigenvalue matrix computation. The results were also used in a separate regression analysis where the computed period serves as the dependent variable, while five building properties serve as independent variables. The statistical analysis shed light on important parameters that simplified code formulas need to account for, including individual story height, overall building height, floor plan, number of bays, and concrete properties. Such inclusions are important for reinforced concrete buildings in special conditions due to the level of concrete damage, aging, or materials quality control during construction. 
Overall, the results of the present analysis show that simplified code formulas for fundamental period and base shear may be applied, but they require revisions to account for multiple parameters. This conclusion is confirmed by the analytical model, where fundamental periods were computed using numerical techniques and eigenvalue solutions. The recommendation is particularly relevant to code upgrades in less developed countries, where it is customary to adopt, and mildly adapt, international codes. We also note the necessity of further research using empirical data from buildings in Lebanon that were subjected to severe damage due to impulse loading or accelerated aging. However, we excluded this study from the present paper and left it for future research, as it has its own peculiarities and requires a different type of analysis.
Keywords: seismic behaviour, reinforced concrete, simplified code formulas, equivalent static analysis, base shear, response spectra
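The eigenvalue computation of the fundamental period mentioned above can be sketched for the simplest case, a two-story shear-building idealization, where det(K - w²M) = 0 reduces to a quadratic in w². The masses and stiffnesses below are illustrative values, not taken from the paper's 151-building data set.

```python
import math

def fundamental_period_2dof(m1, m2, k1, k2):
    # Two-story shear frame: M = diag(m1, m2),
    # K = [[k1 + k2, -k2], [-k2, k2]].
    # Expanding det(K - L*M) = 0 with L = w^2 gives the quadratic
    #   m1*m2*L^2 - (m1*k2 + m2*(k1 + k2))*L + k1*k2 = 0,
    # whose smaller root is the fundamental eigenvalue.
    a = m1 * m2
    b = -(m1 * k2 + m2 * (k1 + k2))
    c = k1 * k2
    lam = (-b - math.sqrt(b * b - 4 * a * c)) / (2 * a)  # smaller root
    return 2 * math.pi / math.sqrt(lam)                  # T1 = 2*pi/w1

# Illustrative story masses (kg) and stiffnesses (N/m):
T1 = fundamental_period_2dof(m1=2.0e5, m2=2.0e5, k1=3.0e8, k2=3.0e8)
```

For the full multi-story models in the paper, the same det(K - w²M) = 0 problem is solved numerically rather than in closed form; the regression then relates the resulting T1 to story height, building height, bays, and material properties.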
Procedia PDF Downloads 232
242 Gender Quotas in Italy: Effects on Corporate Performance
Authors: G. Bruno, A. Ciavarella, N. Linciano
Abstract:
The proportion of women in boardrooms has traditionally been low around the world. Over the last decades, several jurisdictions opted for active intervention, which triggered tangible progress in female representation. In Europe, many countries have implemented boardroom diversity policies in the form of legal quotas (Norway, Italy, France, Germany) or governance code amendments (United Kingdom, Finland). Policy actions rest, among other things, on the assumption that gender-balanced boards result in improved corporate governance and performance. The investigation of the relationship between female boardroom representation and firm value is therefore key on policy grounds. The evidence gathered so far, however, has not produced conclusive results, partly because empirical studies on the impact of voluntary female board representation had to tackle endogeneity, due either to differences in unobservable characteristics across firms that may affect their gender policies and governance choices, or to potential reverse causality. In this paper, we study the relationship between the presence of female directors and corporate performance in Italy, where Law 120/2011, envisaging mandatory quotas, introduced an exogenous shock in board composition which may make it possible to overcome reverse causality. Our sample comprises firms listed on the Italian Stock Exchange and the members of their boards of directors over the period 2008-2016. The study relies on two different databases, both drawn from CONSOB, referring respectively to directors’ and companies’ characteristics. On methodological grounds, information on directors is treated at the individual level, by matching each company with its directors every year. This allows identifying all time-invariant, possibly correlated, elements of latent heterogeneity that vary across firms and board members, such as the firm’s immaterial assets and the directors’ skills and commitment. 
Moreover, we estimate dynamic panel data specifications, so accommodating non-instantaneous adjustments of firm performance and gender diversity to institutional and economic changes. In all cases, robust inference is carried out taking into account the two-dimensional clustering of observations over companies and over directors. The study shows the existence of a U-shaped impact of the percentage of women in the boardroom on profitability, as measured by Return on Equity (ROE) and Return on Assets. Female representation yields a positive impact when it exceeds a certain threshold, ranging between about 18% and 21% of the board members, depending on the specification. Given the average board size, i.e., around ten members over the time period considered, this would imply that a significant effect of gender diversity on corporate performance starts to emerge when at least two women hold a seat. This evidence supports the idea underpinning critical mass theory, i.e., the hypothesis that women may influence board decisions only once their number reaches a critical mass.
Keywords: gender diversity, quotas, firms performance, corporate governance
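The threshold implied by a U-shaped (quadratic) specification is simply the turning point of the fitted parabola. Here is a minimal sketch with hypothetical coefficients chosen to land near the 18-21% range reported above; the paper's actual estimates are not reproduced here.

```python
def turning_point(b1, b2):
    # For ROE = b0 + b1*w + b2*w**2, with w the share of women on the
    # board, the U-shaped curve bottoms out where the derivative is
    # zero: b1 + 2*b2*w = 0, hence w* = -b1 / (2*b2).
    return -b1 / (2.0 * b2)

# Hypothetical coefficients with the sign pattern a U-shape implies
# (negative linear term, positive quadratic term):
w_star = turning_point(b1=-0.8, b2=2.0)  # share of board seats
```

With a ten-member board, a turning point of about 0.2 corresponds to roughly two seats, matching the critical-mass reading of the results.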
Procedia PDF Downloads 170
241 Calculation of a Sustainable Quota Harvesting of Long-tailed Macaque (Macaca fascicularis Raffles) in Their Natural Habitats
Authors: Yanto Santosa, Dede Aulia Rahman, Cory Wulan, Abdul Haris Mustari
Abstract:
The global demand for long-tailed macaques for medical experimentation has continued to increase. Fulfillment of Indonesian export demands has been mostly from natural habitats, based on a harvesting quota. This quota has been determined according to the total catch for a given year, and not based on consideration of any demographic parameters or physical environmental factors relevant to the animal, hence threatening the sustainability of the various populations. It is therefore necessary to formulate a method for calculating a sustainable harvesting quota, based on population parameters in natural habitats. Considering the possibility of variations in habitat characteristics and population parameters, a time series observation of demographic and physical/biotic parameters, in various habitats, was performed on 13 groups of long-tailed macaques, distributed throughout the West Java, Lampung and Yogyakarta areas of Indonesia. These provinces were selected for comparison of the influence of human/tourism activities. Data collected on population parameters included life expectancy by age class, numbers of individuals by sex and age class, and the ratio of infants to reproductive females. The estimation of population growth was based on a population dynamic growth model: the Leslie matrix. The harvesting quota was calculated as the difference between the actual population size and the MVP (minimum viable population) for each sex and age class. Observation indicated that there were variations in group size (24-106 individuals), sex ratio (1:1 to 1:1.3), life expectancy value (0.30 to 0.93), and ratio of infants to reproductive females (0.23 to 1.56). Results of subsequent calculations showed that sustainable harvesting quotas for each studied group of long-tailed macaques ranged from 29 to 110 individuals.
An estimation model of the MVP for each age class was formulated as Log Y = 0.315 + 0.884 Log Ni (Ni = number of individuals in the ith age class). This study also found that life expectancy for the juvenile age class was affected by the humidity under tree stands and by the density of dietary plants at sapling, pole and tree stages (equation: Y = 2.296 – 1.535 RH + 0.002 Kpcg – 0.002 Ktg – 0.001 Kphn, R² = 89.6%, significance 0.001). By contrast, for the sub-adult/adult age class, life expectancy was significantly affected by slope (equation: Y = 0.377 + 0.012 Kml, R² = 50.4%, significance 0.007). The infant to reproductive female ratio was affected by humidity under tree stands and by dietary plant density at sapling and pole stages (equation: Y = -1.432 + 2.172 RH – 0.004 Kpcg + 0.003 Ktg, R² = 82.0%, significance 0.001). This research confirmed the importance of population parameters in determining the minimum viable population, and that MVP varied according to habitat characteristics (especially food availability). It would therefore be difficult to formulate a general mathematical equation model for determining a harvesting quota for the species as a whole.
Keywords: harvesting, long-tailed macaque, population, quota
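The Leslie-matrix projection and the excess-over-MVP quota rule described above can be sketched numerically. All demographic rates and group sizes below are hypothetical illustrations, not the study's data; only the MVP regression Log Y = 0.315 + 0.884 Log Ni is taken from the abstract:

```python
import numpy as np

# Hypothetical sketch of the quota rule: project a group one year ahead with
# a Leslie matrix, then harvest only the excess over the minimum viable
# population (MVP). Fecundities, survival rates and group sizes are invented.
fecundity = [0.0, 0.2, 0.6]        # offspring per female per year, by age class
survival = [0.7, 0.85]             # probability of surviving into the next class

L = np.zeros((3, 3))
L[0, :] = fecundity                # top row: reproduction into the infant class
L[1, 0] = survival[0]              # sub-diagonal: infants -> juveniles
L[2, 1] = survival[1]              # juveniles -> sub-adults/adults

n0 = np.array([30.0, 25.0, 45.0])  # current numbers per age class
n1 = L @ n0                        # projected numbers after one year

mvp = 10 ** (0.315 + 0.884 * np.log10(n1))   # MVP per age class (regression)
quota = np.maximum(n1 - mvp, 0.0)            # harvest only the excess over MVP
```

With these invented numbers the projected classes sit below the regression's MVP, so the quota comes out as zero; the study's group-specific parameter sets yielded quotas of 29-110 individuals.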
Procedia PDF Downloads 424
240 Exploring Nature and Pattern of Mentoring Practices: A Study on Mentees' Perspectives
Authors: Nahid Parween Anwar, Sadia Muzaffar Bhutta, Takbir Ali
Abstract:
Mentoring is a structured activity designed to facilitate engagement between mentor and mentee in order to enhance the mentee's professional capability as an effective teacher. Both mentor and mentee are important elements of the 'mentoring equation' and play important roles in nourishing this dynamic, collaborative and reciprocal relationship. The Cluster-Based Mentoring Programme (CBMP) provides an indigenous example of a project focused on the development of primary school teachers in selected clusters, with a particular focus on their classroom practice. A study was designed to examine the efficacy of CBMP as part of the Strengthening Teacher Education in Pakistan (STEP) project. This paper presents results from one component of this study. As part of the larger study, a cross-sectional survey was employed to explore the nature and patterns of the mentoring process from mentees' perspectives in selected districts of Sindh and Balochistan. This paper focuses on the results related to the question: What are mentees' perceptions of their mentors' support for enhancing their classroom practice during the mentoring process? Data were collected from mentees (n=1148) using a 5-point scale, 'Mentoring for Effective Primary Teaching' (MEPT). MEPT covers seven factors of mentoring: personal attributes, pedagogical knowledge, modelling, feedback, system requirement, development and use of material, and gender equality. Data were analysed using SPSS 20. Mentees' perceptions of their mentors' practices were summarized using means and standard deviations. Results showed that mean scale scores fell between 3.58 (system requirement) and 4.55 (personal attributes). Mentees perceived the personal attributes of the mentor as the most significant factor (M=4.55) in streamlining the mentoring process by building a good relationship between mentor and mentees.
Furthermore, mentees shared positive views about their mentors' efforts towards promoting gender impartiality (M=4.54) during workshops and follow-up visits. By contrast, mentees felt that more could have been done by their mentors in sharing knowledge about system requirements (e.g. school policies, national curriculum). In addition, some aspects within high-scoring factors were highlighted by the mentees as areas for further improvement (e.g. assistance in timetabling, written feedback, encouragement to develop learning corners). Mentees' perceptions of their mentors' practices may assist in determining mentoring needs. The results may prove useful for professional development programmes for mentors and mentees within specific mentoring programmes, in order to enhance practices in primary classrooms in Pakistan. The results would also contribute to the body of much-needed knowledge from a developing context.
Keywords: cluster-based mentoring programme, mentoring for effective primary teaching (MEPT), professional development, survey
Procedia PDF Downloads 233
239 Effects of Virtual Reality Treadmill Training on Gait and Balance Performance of Patients with Stroke: Review
Authors: Hanan Algarni
Abstract:
Background: Impairment of walking and balance skills has a negative impact on functional independence and community participation after stroke. Gait recovery is considered a primary goal in rehabilitation by both patients and physiotherapists. Treadmill training coupled with virtual reality technology is a newly emerging approach that offers patients feedback, as well as open and random skills practice, while walking and interacting with virtual environmental scenes. Objectives: To synthesize the evidence on the effects of VR treadmill training on gait speed and balance primarily, and on functional independence and community participation secondarily, in stroke patients. Methods: A systematic review was conducted; the search strategy included the electronic databases MEDLINE, AMED, Cochrane, CINAHL, EMBASE, PEDro, Web of Science, and unpublished literature. Inclusion criteria: Participants: adults >18 years, stroke, ambulatory, without severe visual or cognitive impairments. Intervention: VR treadmill training alone or with physiotherapy. Comparator: any other intervention. Outcomes: gait speed, balance, function, community participation. Characteristics of included studies were extracted for analysis. Risk of bias assessment was performed using Cochrane's ROB tool. A narrative synthesis of findings was undertaken, and a summary of findings for each outcome was reported using GRADEpro. Results: Four studies were included, involving 84 stroke participants with chronic hemiparesis. Intervention intensity ranged from 6 to 12 sessions of 20 minutes to 1 hour each. Three studies investigated the effects on gait speed and balance, two studies investigated functional outcomes, and one study assessed community participation. ROB assessment showed 50% unclear risk of selection bias and 25% unclear risk of detection bias across the studies. Heterogeneity was identified in the intervention effects at post-training and follow-up.
Outcome measures, training intensity and durations also varied across the studies; the grade of evidence was low for balance, moderate for speed and function outcomes, and high for community participation. However, it is important to note that grading was done on a small number of studies for each outcome. Conclusions: The summary of findings suggests positive and statistically significant effects (p<0.05) of VR treadmill training compared to other interventions on gait speed, dynamic balance skills, function and participation directly after training. However, the effects were not sustained at follow-up in two studies (2 weeks-1 month), and the other studies did not perform follow-up measurements. More RCTs with larger sample sizes and higher methodological quality are required to examine the long-term effects of VR treadmill training on functional independence and community participation after stroke, in order to draw conclusions and produce more robust evidence.
Keywords: virtual reality, treadmill, stroke, gait rehabilitation
Procedia PDF Downloads 274
238 A New Model to Perform Preliminary Evaluations of Complex Systems for the Production of Energy for Buildings: Case Study
Authors: Roberto de Lieto Vollaro, Emanuele de Lieto Vollaro, Gianluca Coltrinari
Abstract:
The building sector is responsible, in many industrialized countries, for about 40% of total energy requirements, so it seems necessary to devote some effort to this area in order to achieve a significant reduction of energy consumption and of greenhouse gas emissions. The paper presents a study aiming to provide a design methodology able to identify the best configuration of the building/plant system from a technical, economic and environmental point of view. Normally, the classical approach involves an analysis of the building's energy loads under steady-state conditions and subsequent selection of measures aimed at improving energy performance, based on the previous experience of the architects and engineers in the design team. Instead, the proposed approach uses a sequence of two well-known, scientifically validated calculation methods (TRNSYS and RETScreen) that allow quite a detailed feasibility analysis. To assess the validity of the calculation model, an existing historical building in Central Italy, which will be the object of restoration and preservative redevelopment, was selected as a case study. The building consists of a basement and three floors, with a total floor area of about 3,000 square meters. The first step was the determination of the heating and cooling energy loads of the building in a dynamic regime by means of TRNSYS, which makes it possible to simulate the real energy needs of the building as a function of its use. Traditional methodologies, based as they are on steady-state conditions, cannot faithfully reproduce the effects of varying climatic conditions and of the inertial properties of the structure. With TRNSYS it is possible to obtain quite accurate and reliable results that allow effective building-HVAC combinations to be identified.
The second step consisted of using the output data obtained with TRNSYS as input to the calculation model RETScreen, which enables different system configurations to be compared from the energy, environmental and financial points of view, with an analysis of investment, operation and maintenance costs, thus allowing the economic benefit of possible interventions to be determined. The classical methodology often leads to the choice of conventional plant systems, while RETScreen provides a financial-economic assessment for innovative energy systems with low environmental impact. Computational analysis can help in the design phase, particularly in the case of complex structures with centralized plant systems, by comparing the data returned by the calculation model RETScreen for different design options. For example, the analysis performed on the building taken as a case study found that the most suitable plant solution, taking into account technical, economic and environmental aspects, is one based on a CCHP system (Combined Cooling, Heating, and Power) using an internal combustion engine.
Keywords: energy, system, building, cooling, electrical
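The two-step workflow (a dynamic load simulation feeding a financial screening) can be illustrated with a minimal sketch. All load profiles, costs and plant options below are invented for illustration; this does not use the TRNSYS or RETScreen tools themselves:

```python
# Schematic of the two-step workflow: hourly loads, as a dynamic simulator such
# as TRNSYS would produce, are aggregated and fed into a RETScreen-style
# financial screening of two plant options. All figures are hypothetical.
hourly_heating = [120.0] * 2000 + [0.0] * 6760   # kWh, crude winter profile
hourly_cooling = [0.0] * 6000 + [90.0] * 2760    # kWh, crude summer profile

annual_heat = sum(hourly_heating)                # kWh/year delivered heat
annual_cool = sum(hourly_cooling)                # kWh/year delivered cooling
served = annual_heat + annual_cool

options = {
    # name: (capital cost EUR, running cost EUR per kWh served)
    "boiler + electric chiller": (150_000, 0.11),
    "CCHP (gas engine trigeneration)": (400_000, 0.06),
}

# Simple payback of the costlier, cheaper-to-run option over the conventional one
d_capex = options["CCHP (gas engine trigeneration)"][0] - options["boiler + electric chiller"][0]
d_opex = (0.11 - 0.06) * served                  # yearly running-cost saving
payback_years = d_capex / d_opex
```

The real RETScreen analysis layers financing, emission factors and sensitivity analysis on top of this kind of screening; the sketch only shows how dynamic-simulation output becomes the input to an economic comparison.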
Procedia PDF Downloads 573
237 Computer-Aided Discrimination of Benign and Malignant Thyroid Nodules by Ultrasound Imaging
Authors: Akbar Gharbali, Ali Abbasian Ardekani, Afshin Mohammadi
Abstract:
Introduction: Thyroid nodules have an incidence of 33-68% in the general population. More than 5-15% of these nodules are malignant. Early detection and treatment of thyroid nodules increase the cure rate and provide optimal treatment. Among medical imaging methods, ultrasound is the technique of choice for assessment of thyroid nodules. Confirming the diagnosis usually demands repeated fine-needle aspiration biopsy (FNAB), so current management carries morbidity and non-zero mortality. Objective: To explore the diagnostic potential of automatic texture analysis (TA) methods in differentiating benign and malignant thyroid nodules in ultrasound imaging, in order to support reliable diagnosis and monitoring of thyroid nodules in their early stages without the need for biopsy. Material and Methods: The thyroid ultrasound image database consisted of 70 patients (26 benign and 44 malignant), reported by a radiologist and proven by biopsy. Two slices per patient were loaded in Mazda software version 4.6 for automatic texture analysis. Regions of interest (ROIs) were defined within the abnormal part of the thyroid nodule ultrasound images. Gray levels within each ROI were normalized according to three schemes: N1: default or original gray levels; N2: dynamic intensity limited to µ ± 3σ; and N3: intensity limited to the 1%-99% range. Up to 270 multiscale texture feature parameters per ROI per normalization scheme were computed using well-known statistical methods employed in the Mazda software. From a statistical point of view, not all calculated texture feature parameters are useful for texture analysis, so the set was reduced to the 10 best and most effective features per normalization scheme, based on the maximum Fisher coefficient and the minimum probability of classification error combined with average correlation coefficients (POE+ACC).
These features were analyzed under two standardization states (standard (S) and non-standard (NS)) with Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA) and Non-Linear Discriminant Analysis (NDA). A 1-NN classifier was used to distinguish between benign and malignant tumors. The confusion matrix and receiver operating characteristic (ROC) curve analysis were used to formulate more reliable criteria for the performance of the employed texture analysis methods. Results: The results demonstrated the influence of the normalization schemes and reduction methods on the effectiveness of the obtained features as descriptors of discrimination power and on classification results. The subset of features selected under 1%-99% normalization, POE+ACC reduction and NDA texture analysis yielded high discrimination performance, with an area under the ROC curve (Az) of 0.9722 in distinguishing benign from malignant thyroid nodules, corresponding to a sensitivity of 94.45%, specificity of 100%, and accuracy of 97.14%. Conclusions: Our results indicate that computer-aided diagnosis is a reliable method and can provide useful information to help radiologists in the detection and classification of benign and malignant thyroid nodules.
Keywords: ultrasound imaging, thyroid nodules, computer aided diagnosis, texture analysis, PCA, LDA, NDA
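The µ ± 3σ normalization and the 1-NN classification step can be sketched in a few lines. The feature vectors below are synthetic stand-ins sized like the study's sample; this mimics the shape of the pipeline only and is not the Mazda software or the study's features:

```python
import numpy as np

# Illustrative sketch of the "mu +/- 3 sigma" gray-level normalization (N2)
# and a leave-one-out 1-NN classifier on synthetic feature vectors.
rng = np.random.default_rng(0)

def normalize_3sigma(img):
    """Clip gray levels to mu +/- 3*sigma, then rescale to [0, 1]."""
    mu, sigma = img.mean(), img.std()
    lo, hi = mu - 3 * sigma, mu + 3 * sigma
    clipped = np.clip(img, lo, hi)
    return (clipped - lo) / (hi - lo)

def one_nn_predict(train_X, train_y, x):
    """Label x with the class of its nearest (Euclidean) training sample."""
    d = np.linalg.norm(train_X - x, axis=1)
    return train_y[np.argmin(d)]

# Two synthetic "texture feature" clusters standing in for benign/malignant,
# sized like the study's sample (26 benign, 44 malignant), 10 features each
benign = rng.normal(0.0, 1.0, size=(26, 10))
malignant = rng.normal(2.0, 1.0, size=(44, 10))
X = np.vstack([benign, malignant])
y = np.array([0] * 26 + [1] * 44)

# Leave-one-out accuracy of the 1-NN classifier
correct = sum(
    one_nn_predict(np.delete(X, i, 0), np.delete(y, i), X[i]) == y[i]
    for i in range(len(y))
)
accuracy = correct / len(y)
```

A full replication would substitute the actual Mazda texture features and the POE+ACC-selected subsets, and report ROC curves rather than raw accuracy.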
Procedia PDF Downloads 279
236 Planckian Dissipation in Bi₂Sr₂Ca₂Cu₃O₁₀₋δ
Authors: Lalita, Niladri Sarkar, Subhasis Ghosh
Abstract:
Since the discovery of high temperature superconductivity (HTSC) in cuprates, several aspects of this phenomenon have fascinated the physics community. The most debated one is the linear temperature dependence of the normal state resistivity over a wide range of temperatures, in violation of Fermi liquid theory. The linear-in-T resistivity (LITR) is the signature of a strongly correlated metallic state, known as a "strange metal", attributed to non-Fermi-liquid (NFL) physics. The proximity of superconductivity to LITR suggests that there may be a common underlying origin. The LITR has been shown to be due to an unknown dissipative phenomenon, restricted by quantum mechanics and commonly known as "Planckian dissipation", a term first coined by Zaanen, with the associated inelastic scattering time τ given by 1/τ = αkBT/ℏ, where ℏ, kB and α are the reduced Planck constant, the Boltzmann constant and a dimensionless constant of order unity, respectively. Since the first report, experimental support for α ~ 1 has been appearing in the literature. There are several striking issues which remain to be resolved if we wish to find, or at least obtain a clue towards, the microscopic origin of maximal dissipation in cuprates. (i) Universality of α ~ 1; recently some doubts have been raised in some cases. (ii) So far, Planckian dissipation has been demonstrated in overdoped cuprates, but if the proximity to quantum criticality is important, then Planckian dissipation should also be observed in optimally doped and marginally underdoped cuprates. The link between Planckian dissipation and quantum criticality still remains an open problem. (iii) The validity of Planckian dissipation in all cuprates is an important issue. Here, we report a reversible change in the superconducting behavior of the high temperature superconductor Bi2Sr2Ca2Cu3O10+δ (Bi-2223) under dynamic doping induced by photo-excitation. Two doped Bi-2223 samples, x = 0.16 (optimally doped) and x = 0.145 (marginally underdoped), have been used for this investigation.
It is found that steady state photo-excitation converts magnetic Cu²⁺ ions to nonmagnetic Cu¹⁺ ions, which reduces the superconducting transition temperature (Tc) by killing superfluid density. In Bi-2223, one would expect the maximum suppression of Tc to occur at the charge transfer gap. We have observed that the suppression of Tc starts at 2 eV, which is the charge transfer gap in Bi-2223. We attribute this to the Cu-3d⁹ (Cu²⁺) to Cu-3d¹⁰ (Cu⁺) transition, known as the d⁹ − d¹⁰L transition; photo-excitation turns some Cu ions in the CuO₂ planes into spinless non-magnetic potential perturbations, as Zn²⁺ does in the CuO₂ plane of Zn-doped cuprates. The resistivity varies linearly with temperature with or without photo-excitation. Tc can be varied by almost 40 K by photo-excitation. Superconductivity can be destroyed completely by introducing ≈ 2% of Cu¹⁺ ions for this range of doping. With this controlled variation of Tc and resistivity, a detailed investigation has been carried out to reveal Planckian dissipation from underdoped to optimally doped Bi-2223. The most important aspect of this investigation is that we could vary Tc dynamically and reversibly, so that LITR and the associated Planckian dissipation can be studied over wide ranges of Tc without changing the doping chemically.
Keywords: linear resistivity, HTSC, Planckian dissipation, strange metal
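The Planckian bound 1/τ = αkBT/ℏ fixes a concrete timescale at any temperature. A quick numerical check, assuming α = 1 (a generic estimate from the formula, not data from this experiment):

```python
# Back-of-envelope check of the Planckian bound 1/tau = alpha*kB*T/hbar with
# alpha = 1, using CODATA constants.
hbar = 1.054571817e-34   # reduced Planck constant, J s
kB = 1.380649e-23        # Boltzmann constant, J/K

def planckian_tau(T, alpha=1.0):
    """Planckian scattering time (seconds) at absolute temperature T (kelvin)."""
    return hbar / (alpha * kB * T)

tau_100K = planckian_tau(100.0)   # ~7.6e-14 s: tens of femtoseconds at 100 K
```

Reading α off measured resistivity then reduces to comparing the experimentally extracted scattering time against this ℏ/kBT scale at each temperature.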
Procedia PDF Downloads 59
235 Evaluation of Coupled CFD-FEA Simulation for Fire Determination
Authors: Daniel Martin Fellows, Sean P. Walton, Jennifer Thompson, Oubay Hassan, Ella Quigley, Kevin Tinkham
Abstract:
Fire performance is a crucial aspect to consider when designing cladding products, and testing this performance is extremely expensive. Appropriate use of numerical simulation of fire performance has the potential to reduce the total number of fire tests required when designing a product, by eliminating poor-performing design ideas early in the design phase. Due to the complexity of fire and the large spectrum of failures it can cause, multi-disciplinary models are needed to capture the complex fire behavior and its structural effects on the surroundings. Working alongside Tata Steel U.K., the authors have focused on completing a coupled CFD-FEA simulation model suited to testing polyisocyanurate (PIR) based sandwich panel products, to gain confidence before costly experimental standards testing. The sandwich panels are part of a thermally insulating façade system primarily for large non-domestic buildings. The work presented in this paper compares two coupling methodologies against a replicated physical standards test, LPS 1181-1, carried out by Tata Steel U.K. The two coupling methodologies considered within this research are one-way and two-way coupling. A one-way coupled analysis consists of importing thermal data from the CFD solver into the FEA solver. A two-way coupled analysis consists of continuously importing the updated changes in thermal data, due to the fire's behavior, into the FEA solver throughout the simulation; likewise, the mechanical changes are updated back to the CFD solver to include geometric changes within the solution. For the CFD calculations, a solver called Fire Dynamics Simulator (FDS) has been chosen due to its numerical scheme, adapted to focus solely on fire problems. Validation of FDS applicability has been achieved in past benchmark cases.
In addition, an FEA solver called ABAQUS has been chosen to model the structural response to the fire due to its crushable foam plasticity model, which can accurately model the compressibility of PIR foam. An open-source code called FDS-2-ABAQUS is used to couple the two solvers, using several Python modules to complete the process, including failure checks. The coupling methodologies and the experimental data acquired from Tata Steel U.K. are compared using several variables, including gas temperatures, surface temperatures, and mechanical deformation of the panels. Conclusions are drawn, noting improvements to be made to the current open-source coupling code FDS-2-ABAQUS to make it more applicable to Tata Steel U.K. sandwich panel products. Future directions for reducing the computational cost of the simulation are also considered.
Keywords: fire engineering, numerical coupling, sandwich panels, thermofluids
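The one-way scheme described above amounts to handing thermal histories from the CFD side to the FEA side as boundary conditions. A minimal sketch of that data flow follows; the temperature history, time steps and single-point mapping are hypothetical, and this is not the FDS-2-ABAQUS code itself:

```python
import bisect

# Schematic of one-way coupling: a thermal history exported by the CFD solver
# is interpolated and applied to the FEA model as a boundary condition at each
# structural time step. Times and temperatures are invented.
cfd_times = [0.0, 60.0, 120.0, 180.0]     # s, CFD output instants
cfd_temps = [20.0, 250.0, 600.0, 820.0]   # deg C at one panel surface point

def surface_temperature(t):
    """Linearly interpolate the CFD temperature history at time t."""
    if t <= cfd_times[0]:
        return cfd_temps[0]
    if t >= cfd_times[-1]:
        return cfd_temps[-1]
    i = bisect.bisect_right(cfd_times, t)
    t0, t1 = cfd_times[i - 1], cfd_times[i]
    T0, T1 = cfd_temps[i - 1], cfd_temps[i]
    return T0 + (T1 - T0) * (t - t0) / (t1 - t0)

# One-way loop: temperatures flow CFD -> FEA only; a two-way scheme would also
# push the FEA's geometric changes back into the CFD domain each step.
fea_steps = [0.0, 30.0, 90.0, 150.0, 180.0]
applied = [surface_temperature(t) for t in fea_steps]
```

In the real pipeline this interpolation happens per node over the whole panel surface, and the two-way variant additionally deforms the CFD mesh between steps, which is where most of the extra computational cost comes from.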
Procedia PDF Downloads 89
234 Business Intelligence to a Decision Support Tool for Green Entrepreneurship: Meso and Macro Regions
Authors: Anishur Rahman, Maria Areias, Diogo Simões, Ana Figeuiredo, Filipa Figueiredo, João Nunes
Abstract:
The circular economy (CE) has gained increased awareness among academics, businesses, and decision-makers, as it stimulates resource circularity in production and consumption systems. A large body of work has explored the principles of CE, but scant attention has focused on analysing how CE is evaluated, agreed upon, and enforced using economic metabolism data and a business intelligence framework. Economic metabolism involves the ongoing exchange of materials and energy within and across socio-economic systems and requires the assessment of vast amounts of data to provide quantitative analysis for effective resource management. To address this concern, the present work focuses on regional flows in a pilot region of Portugal. By addressing this gap, this study aims to promote eco-innovation and sustainability in the regions of the Intermunicipal Communities Região de Coimbra, Viseu Dão Lafões and Beiras e Serra da Estrela, using these data to find precise synergies in terms of material flows and to give companies a competitive advantage in the form of valuable waste destinations, access to new resources and new markets, cost reduction and risk-sharing benefits. In our work, emphasis is placed on applying artificial intelligence (AI) and, more specifically, on implementing state-of-the-art deep learning algorithms, contributing to the construction of a business intelligence approach. With the emergence of new approaches generally highlighted under the sub-headings of AI and machine learning (ML), the methods for statistical analysis of complex and uncertain production systems are facing significant changes. Therefore, various definitions of AI and its differences from traditional statistics are presented; furthermore, ML is introduced to identify its place in data science, and differences in topics such as big data analytics and production problems that use AI and ML are identified.
A lifecycle-based approach is then taken to analyse the use of different methods in each phase, in order to identify the most useful technologies and unifying attributes of AI in manufacturing. Most macroeconomic metabolism models are directed mainly at large metropolitan contexts, neglecting rural territories; within this project, therefore, a dynamic decision support model coupled with artificial intelligence tools and information platforms will be developed, focused on the reality of these transition zones between the rural and the urban. Thus, a real decision support tool is under development, which will surpass the scientific developments carried out to date and will make it possible to overcome limitations related to the availability and reliability of data.
Keywords: circular economy, artificial intelligence, economic metabolisms, machine learning
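The "precise synergies in terms of material flows" that the tool is meant to surface boil down to pairing one firm's waste output with another firm's input need. A toy sketch of that core matching step; all firms, streams and tonnages are invented for illustration:

```python
# Toy sketch of a material-flow synergy search: match one firm's waste stream
# to another firm's input need. Firms, streams and tonnages (t/year) are
# hypothetical.
waste_outputs = {
    "dairy plant": {"organic sludge": 120, "plastic film": 15},
    "sawmill": {"wood chips": 300},
}
input_needs = {
    "biogas unit": {"organic sludge": 200},
    "particleboard factory": {"wood chips": 250},
}

synergies = []
for seller, streams in waste_outputs.items():
    for material, available in streams.items():
        for buyer, needs in input_needs.items():
            if material in needs:
                matched = min(available, needs[material])   # tradable tonnage
                synergies.append((seller, buyer, material, matched))
```

A real decision support tool would layer transport distances, prices and regulatory constraints on top of this raw matching, but the core idea of pairing outputs with inputs across the regional flow data is the same.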
Procedia PDF Downloads 72
233 Bacterial Diversity in Vaginal Microbiota in Patients with Different Levels of Cervical Lesions Related to Human Papillomavirus Infection
Authors: Michelle S. Pereira, Analice C. Azevedo, Julliane D. Medeiros, Ana Claudia S. Martins, Didier S. Castellano-Filho, Claudio G. Diniz, Vania L. Silva
Abstract:
The vaginal microbiota is a complex ecosystem, composed of aerobic and anaerobic bacteria living in a dynamic equilibrium. Lactobacillus spp. are predominant in the vaginal ecosystem, and factors such as immunity and hormonal variations may lead to disruptions, resulting in the proliferation of opportunistic pathogens. Bacterial vaginosis (BV) is a polymicrobial syndrome caused by an increase in anaerobic bacteria replacing Lactobacillus spp. Microorganisms such as Gardnerella vaginalis, Mycoplasma hominis, Mobiluncus spp., and Atopobium vaginae can be found in BV, which may also be associated with other infections, such as by Human Papillomavirus (HPV). HPV is highly prevalent in sexually active women and is considered a risk factor for the development of cervical cancer. Since few data are available on the vaginal microbiota of women with HPV-associated cervical lesions, our objective was to evaluate the diversity of the vaginal ecosystem in these women. For all patients, clinical and socio-demographic data were collected after gynecological examination. This study was approved by the Ethics Committee of the Federal University of Juiz de Fora, Minas Gerais, Brazil. Vaginal secretion and cervical scrapings were collected. Gram-stained smears were evaluated to establish the Nugent score for BV determination. The viral and bacterial DNA obtained was used as template for HPV genotyping (PCR) and bacterial fingerprinting (REP-PCR). In total, 31 patients were included (mean age 35; 93.6% sexually active). The Nugent score showed that 38.7% had BV. Medical records of Pap smear tests showed that 32.3% had low-grade squamous intraepithelial lesions (LSIL), 29% had high-grade squamous intraepithelial lesions (HSIL), 25.8% had atypical squamous cells of undetermined significance (ASC-US), and 12.9% had atypical squamous cells that could not exclude a high-grade lesion (ASC-H). All participants were HPV+. HPV-16 was the most frequent type (87.1%), followed by HPV-18 (61.3%).
HPV-31, HPV-52 and HPV-58 were also detected. HPV-16/HPV-18 coinfection was observed in 75%. In the 18-30 age group, HPV-16 was detected in 40%, and HPV-16/HPV-18 coinfection in 35%. HPV-16 was associated with 30% of ASC-H and 20% of HSIL patients. BV was observed in 50% of HPV-16+ participants and in 45% of HPV-16/HPV-18+ participants. Fingerprints of bacterial communities showed clusters with low similarity, suggesting high heterogeneity in the vaginal microbiota within the sampled group. Overall, the data are worrisome, since HPV types highly associated with cervical cancer risk were identified. The high microbial diversity observed may be related to the different levels of cellular lesions and to the different physiological conditions of the participants (age, social behavior, education). Further prospective studies are needed to better address correlations between BV and microbial imbalance in vaginal ecosystems and the different cellular lesions in women with HPV infection. Supported by FAPEMIG, CNPq, CAPES, PPGCBIO/UFJF.
Keywords: human papillomavirus, bacterial vaginosis, bacterial diversity, cervical cancer
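The Nugent scoring used to read the Gram-stained smears can be rendered as a small function. This is a simplified sketch of the standard 0-10 scheme, not code or grading data from the study:

```python
# Simplified rendering of Nugent scoring: three morphotypes are each graded
# 0 ("absent") to 4 ("4+") per oil-immersion field; the sub-scores sum to a
# 0-10 total, with 7-10 read as bacterial vaginosis.
def nugent_score(lactobacillus, gardnerella, mobiluncus):
    """Total score from morphotype quantitation grades (each 0..4)."""
    lacto_score = 4 - lactobacillus            # abundant lactobacilli lower the score
    gard_score = gardnerella                   # Gardnerella/Bacteroides raise it
    mobil_score = (0, 1, 1, 2, 2)[mobiluncus]  # curved rods contribute only 0-2
    return lacto_score + gard_score + mobil_score

def interpret(score):
    if score <= 3:
        return "normal"
    if score <= 6:
        return "intermediate"
    return "bacterial vaginosis"

# Depleted lactobacilli with abundant Gardnerella and curved rods -> BV
reading = interpret(nugent_score(0, 4, 3))
```

The 38.7% BV figure reported above corresponds to smears falling in the 7-10 band of this scale.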
Procedia PDF Downloads 195
232 Synthesis, Computational Studies, Antioxidant and Anti-Inflammatory Bio-Evaluation of 2,5-Disubstituted-1,3,4-Oxadiazole Derivatives
Authors: Sibghat Mansoor Rana, Muhammad Islam, Hamid Saeed, Hummera Rafique, Muhammad Majid, Muhammad Tahir Aqeel, Fariha Imtiaz, Zaman Ashraf
Abstract:
The 1,3,4-oxadiazole derivatives Ox-6a-f have been synthesized by incorporating a flurbiprofen moiety, with the aim of exploring the potential of the target molecules to decrease oxidative stress. The title compounds Ox-6a-f were prepared by simple reactions: the flurbiprofen –COOH group was esterified with methanol in an acid-catalyzed medium and then reacted with hydrazine to afford the corresponding hydrazide. The acid hydrazide was then cyclized into the 1,3,4-oxadiazole-2-thiol by reaction with CS₂ in the presence of KOH. The title compounds Ox-6a-f were obtained by reaction of the –SH group with various alkyl/aryl chlorides via S-alkylation. The structures of the synthesized Ox-6a-f derivatives were ascertained by spectroscopic data. In silico molecular docking was performed against the target proteins cyclooxygenase-2 (COX-2, PDB ID 5KIR) and cyclooxygenase-1 (COX-1, PDB ID 6Y3C) to determine the binding affinity of the synthesized compounds with these structures. It was inferred that most of the synthesized compounds bind well to the active binding site of 5KIR compared to 6Y3C; in particular, compound Ox-6f showed excellent binding affinity (7.70 kcal/mol) among all synthesized compounds Ox-6a-f. Molecular dynamics (MD) simulation was also performed to check the stability of the docking complexes of the ligands with COX-2 by determining their root mean square deviation and root mean square fluctuation. Little fluctuation was observed in the case of Ox-6f, which forms the most stable complex with COX-2. The comprehensive antioxidant potential of the synthesized compounds was evaluated by determining their free radical scavenging activity, including DPPH, OH, nitric oxide (NO), and iron chelation assays. The derivative Ox-6f showed promising results, with 80.23% radical scavenging potential at a dose of 100 μg/mL, while ascorbic acid exhibited 87.72% inhibition at the same dose.
The anti-inflammatory activity of the final products was also assessed, and inflammatory markers were assayed, such as thiobarbituric acid-reactive substances, nitric oxide, interleukin-6 (IL-6), and COX-2. The derivatives Ox-6d and Ox-6f displayed higher anti-inflammatory activity, exhibiting 70.56% and 74.16% activity, respectively. The results were compared with standard ibuprofen, which showed 84.31% activity at the same dose, 200 μg/mL. The anti-inflammatory potential was also evaluated following the carrageenan-induced hind paw edema model, and the results showed that derivative Ox-6f produced a 79.83% reduction in edema volume, compared to standard ibuprofen, which reduced edema volume by 84.31%. As the dry lab and wet lab results confirm each other, it is deduced that derivative Ox-6f may serve as a lead structure for designing potent compounds to address oxidative stress.
Keywords: synthetic chemistry, pharmaceutical chemistry, oxadiazole derivatives, anti-inflammatory, anti-cancer compounds
Procedia PDF Downloads 152
231 Investigation of Ground Disturbance Caused by Pile Driving: Case Study
Authors: Thayalan Nall, Harry Poulos
Abstract:
Piling is the most widely used foundation method for heavy structures in poor soil conditions. The geotechnical engineer can choose among a variety of piling methods, but in most cases, driving piles by impact hammer is the most cost-effective alternative. Under unfavourable conditions, pile driving can cause environmental problems, such as noise, ground movements and vibrations, with the risk of ground disturbance leading to potential damage to proposed structures. At one of the project sites in which the authors were involved, three offshore container terminals, namely CT1, CT2 and CT3, were constructed over thick compressible marine mud. The seabed was around 6 m deep, and the soft clay thickness within the project site varied between 9 m and 20 m. CT2 and CT3 were connected together, rectangular in shape, and 2600 m x 800 m in size. CT1 was 400 m x 800 m in size and was located south of CT2, opposite its eastern end. CT1 was constructed first and, due to time and environmental limitations, was supported on a “forest” of large-diameter driven piles. CT2 and CT3 are now under construction using a traditional dredging and reclamation approach, with ground improvement by surcharging with vertical drains. A few months after the installation of the CT1 piles, a 2600 m long sand bund rising to 2 m above mean sea level was constructed along the southern perimeter of CT2 and CT3 to contain the dredged mud that was expected to be pumped. The sand bund was constructed by sand spraying and pumping using a dredging vessel. About 2000 m of the sand bund in the west section was constructed without any major stability issues or noticeable distress. However, as the sand bund approached the section parallel to CT1, it underwent a series of deep-seated failures, causing the displaced soft clay to heave above the standing water level. The crest of the sand bund was about 100 m away from the last row of piles.
There were no plausible geological reasons to conclude that the marine mud across the CT1 region alone was weaker than over the rest of the site. Hence it was suspected that pile driving by impact hammer may have caused ground movements and vibrations, leading to the generation of excess pore pressures and cyclic softening of the marine mud. This paper investigates the probable cause of failure by reviewing: (1) all ground investigation data within the region; (2) soil displacement caused by pile driving, using theories similar to spherical cavity expansion; (3) the transfer of stresses and vibrations through the entire system, including vibrations transmitted from the hammer to the pile, and the dynamic properties of the soil; and (4) the generation of excess pore pressure due to ground vibration and the resulting cyclic softening. The evidence suggests that the problems encountered at the site were primarily caused by the “side effects” of the pile driving operations.
Keywords: pile driving, ground vibration, excess pore pressure, cyclic softening
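The fourth review item, excess pore pressure generation under cyclic loading, can be illustrated with a standard empirical relation. The sketch below uses the Seed-Booker pore-pressure generation model; the cycle counts and the shape constant theta are illustrative assumptions, not data from the case study.

```python
import math

# Seed-Booker empirical relation: pore-pressure ratio r_u as a function of the
# cycle ratio N/N_L (cycles applied over cycles to liquefaction). theta is an
# empirical shape constant (~0.7 is a commonly quoted value). All numbers here
# are illustrative, not site data.
def pore_pressure_ratio(n_cycles, n_liq, theta=0.7):
    ratio = min(n_cycles / n_liq, 1.0)
    return (2 / math.pi) * math.asin(ratio ** (1 / (2 * theta)))

for n in (5, 10, 20):
    print(round(pore_pressure_ratio(n, n_liq=20), 2))  # 0.24, 0.42, 1.0
```

As the cycle ratio approaches 1, r_u approaches 1 (full liquefaction); a softened bund foundation would correspond to intermediate values sustained over many hammer blows.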
Procedia PDF Downloads 235
230 Comparative Investigation of Two Non-Contact Prototype Designs Based on a Squeeze-Film Levitation Approach
Authors: A. Almurshedi, M. Atherton, C. Mares, T. Stolarski, M. Miyatake
Abstract:
Transportation and handling of delicate and lightweight objects is currently a significant issue in some industries. Two common contactless movement prototype designs, an ultrasonic transducer design and a vibrating plate design, are compared. Both designs are based on the method of squeeze-film levitation, and this study aims to identify the limitations and challenges of each. The designs are evaluated in terms of their levitation capabilities and characteristics. To this end, theoretical and experimental explorations are made. It is demonstrated that the ultrasonic transducer prototype design is better suited in terms of levitation capabilities, although it presents operational and mechanical design difficulties. For making accurate industrial products in micro-fabrication and nanotechnology contexts, such as semiconductor silicon wafers, micro-components and integrated circuits, non-contact, oil-free, ultra-precision and low-wear transport along the production line is crucial. One of the designs (design A), called the ultrasonic chuck, is built around an ultrasonic transducer (Langevin, FBI 28452 HS). The other (design B) is a vibrating plate design, consisting of a plain rectangular aluminium plate firmly fastened at both ends. The rectangular plate measures 200 x 100 x 2 mm. In addition, four round piezoelectric actuators, 28 mm in diameter and 0.5 mm thick, are glued to the underside of the plate. The vibrating plate is clamped at both ends in the horizontal plane through a steel supporting structure. The dynamics of levitation using designs A and B have been investigated based on squeeze-film levitation (SFL). The input apparatus used with both designs consists of a sine wave signal generator connected to an amplifier of type ENP-1-1U (Echo Electronics), which magnifies the sine wave voltage produced by the signal generator.
The measured maximum levitation heights for three semiconductor wafers weighing 52, 70 and 88 g for design A are 240, 205 and 187 μm, respectively, whereas the physical results show that the average separation distance for a disk of 5 g weight for design B reaches 70 μm. By using the methodology of squeeze-film levitation, it is possible to hold an object in a non-contact manner. The analyses of the investigation outcomes signify that the non-contact levitation of design A performs better than that of design B. However, design A is more complicated than design B in terms of its manufacturing. In order to identify an adequate non-contact SFL design, a comparison between these two common designs has been adopted for the current investigation. Specifically, the study involves making comparisons in terms of the following issues: floating component geometries and material type constraints; the final created pressure distributions; dangerous interactions with the surrounding space; working environment constraints; and the complication and compactness of the mechanical design. Considering all these matters is essential for proficiently distinguishing the better SFL design.
Keywords: ANSYS, floating, piezoelectric, squeeze-film
Procedia PDF Downloads 149
229 Seismic Assessment of Flat Slab and Conventional Slab System for Irregular Building Equipped with Shear Wall
Authors: Muhammad Aji Fajari, Ririt Aprilin Sumarsono
Abstract:
Particular instability of a structural building under lateral load (e.g., earthquake) will arise due to irregularity in the vertical and horizontal directions, as stated in SNI 03-1726-2012. The conventional slab has been considered to contribute less to the stability of the structure, unlike special slab systems such as the flat slab, which are taken into account. In this paper, the flat slab system of Sequis Tower, located in South Jakarta, will be assessed for its performance under earthquake. The building consists of 6 basement floors where the flat slab system is applied. The flat slab system is the main focus of this paper, to be compared for its performance with a conventional slab system under earthquake. Regarding the floor plan of the Sequis Tower basement, the re-entrant corner of this building is 43.21%, which exceeds the allowable re-entrant corner of 15% stated in ASCE 7-05. Based on that, horizontal irregularity is a further concern for the analysis, whereas vertical irregularity does not exist for this building. A flat slab system is a system in which the slabs are supported by drop panels with shear heads instead of beams. The major advantages of flat slab application are a reduced dead load of the structure, the removal of beams so that the clear height can be maximized, and the provision of lateral resistance under lateral load. However, deflection at the middle strip and punching shear are problems to be considered in detail. Torsion usually appears when a structural member under flexure, such as a beam or column, has an improper dimension ratio. Considering the flat slab as an alternative slab system will keep collapse due to torsion down. A common seismic load resisting system applied in buildings is the shear wall. Installation of shear walls makes the structural system stronger and stiffer, resulting in reduced displacement under earthquake.
The eccentricity of the shear wall locations in this building resolved the instability due to horizontal irregularity so that the earthquake load can be absorbed. Performing linear dynamic analyses such as response spectrum and time history analysis under earthquake load is suitable where irregularity arises, so that the performance of the structure can be significantly observed. Response spectrum data for South Jakarta, with a PGA of 0.389 g, is the basis for the earthquake load idealization to be involved in several load combinations stated in SNI 03-1726-2012. The analysis will result in some basic seismic parameters, such as the period, displacement, and base shear of the system; in addition, the internal forces of the critical members will be presented. The predicted period of the structure under earthquake load is 0.45 second, but as different slab systems are applied in the analysis, the period will show different values. The flat slab system will probably result in better performance for the displacement parameter compared to the conventional slab system due to its higher contribution of stiffness to the whole system of the building. In line with displacement, the deflection of the slab will be smaller for the flat slab than for the conventional slab. Hence, the shear wall will be more effective in strengthening the conventional slab system than the flat slab system.
Keywords: conventional slab, flat slab, horizontal irregularity, response spectrum, shear wall
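As a rough companion to the dynamic analyses described above, an equivalent-lateral-force base-shear estimate can be sketched. The formula Cs = S_DS / (R / Ie) follows the ASCE 7 style that SNI 03-1726-2012 mirrors; all numeric inputs below are illustrative assumptions, not the Sequis Tower values.

```python
# Minimal equivalent-lateral-force sketch: the seismic response coefficient Cs
# scales the seismic weight W to give the design base shear V = Cs * W.
# s_ds is the short-period design spectral acceleration, r_factor the response
# modification coefficient, importance the importance factor Ie (all assumed).
def base_shear(weight_kn, s_ds, r_factor, importance=1.0):
    cs = s_ds / (r_factor / importance)   # seismic response coefficient
    return cs * weight_kn                 # V = Cs * W, in kN

v = base_shear(weight_kn=100_000, s_ds=0.6, r_factor=7.0)
print(round(v, 1))  # 8571.4
```

In a full code check, Cs is also capped and floored by period-dependent limits; the sketch shows only the basic proportionality.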
Procedia PDF Downloads 191
228 Electrophoretic Light Scattering Based on Total Internal Reflection as a Promising Diagnostic Method
Authors: Ekaterina A. Savchenko, Elena N. Velichko, Evgenii T. Aksenov
Abstract:
The development of pathological processes, such as cardiovascular and oncological diseases, is accompanied by changes in molecular parameters in cells, tissues, and serum. The study of the behavior of protein molecules in solutions is of primary importance for the diagnosis of such diseases. Various physical and chemical methods are used to study molecular systems. With the advent of the laser and advances in electronics, optical methods, such as scanning electron microscopy, sedimentation analysis, nephelometry, and static and dynamic light scattering, have become the most universal, informative and accurate tools for estimating the parameters of nanoscale objects. Electrophoretic light scattering is the most effective such technique. It has a high potential in the study of biological solutions and their properties. This technique allows one to investigate the processes of aggregation and dissociation of different macromolecules and obtain information on their shapes, sizes and molecular weights. Electrophoretic light scattering is an analytical method for registering the motion of microscopic particles under the influence of an electric field by means of quasi-elastic light scattering in a homogeneous solution, with subsequent registration of the spectral or correlation characteristics of the light scattered from the moving object. We modified the technique by using the regime of total internal reflection with the aim of increasing its sensitivity and reducing the volume of the sample to be investigated, which opens the prospect of automating simultaneous multiparameter measurements. In addition, the method of total internal reflection allows one to study biological fluids at the level of single molecules, which also makes it possible to increase the sensitivity and informativeness of the results, because the data obtained from an individual molecule are not averaged over an ensemble, which is important in the study of biomolecular fluids.
To the best of our knowledge, the study of electrophoretic light scattering in the regime of total internal reflection is proposed for the first time. Latex microspheres 1 μm in size were used as test objects. In this study, the total internal reflection regime was realized on a quartz prism on which the free electrophoresis regime was set. A semiconductor laser with a wavelength of 655 nm was used as the radiation source, and the light scattering signal was registered by a PIN photodiode. The signal from the photodetector was then transmitted to a digital oscilloscope and to a computer. The autocorrelation functions and the fast Fourier transform in the regime of Brownian motion and under the action of the field were calculated to obtain the parameters of the object investigated. The main result of the study was the dependence of the autocorrelation function on the concentration of microspheres and the applied field magnitude. The effect of heating became more pronounced with increasing sample concentration and electric field. The results obtained in our study demonstrate the applicability of the method to the examination of liquid solutions, including biological fluids.
Keywords: light scattering, electrophoretic light scattering, electrophoresis, total internal reflection
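The signal-processing step described above, computing the autocorrelation function and FFT of the photodetector signal, can be sketched as follows. The sampling rate, Doppler shift, and noise level are illustrative assumptions; the point is only how the spectral peak recovers the drift-induced frequency.

```python
import numpy as np

# Hypothetical scattered-light signal whose Doppler shift is proportional to
# the particle drift velocity under the applied field (all values assumed).
fs = 10_000          # sampling rate, Hz
t = np.arange(0, 1.0, 1 / fs)
f_doppler = 120.0    # Doppler shift from electrophoretic drift, Hz
rng = np.random.default_rng(0)
signal = np.cos(2 * np.pi * f_doppler * t) + 0.5 * rng.standard_normal(t.size)

# Autocorrelation via the Wiener-Khinchin theorem: inverse FFT of the power
# spectrum, normalized so that acf[0] == 1.
spectrum = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)
acf = np.fft.irfft(spectrum)
acf /= acf[0]

# The dominant non-DC spectral peak estimates the Doppler shift.
f_est = freqs[np.argmax(spectrum[1:]) + 1]
print(int(f_est))  # 120
```

In the actual experiment the mobility would then follow from the recovered Doppler shift, the scattering geometry, and the applied field strength.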
Procedia PDF Downloads 214
227 Assessing the Outcomes of Collaboration with Students on Curriculum Development and Design on an Undergraduate Art History Module
Authors: Helen Potkin
Abstract:
This paper presents a practice-based case study of a project in which the student group designed and planned the curriculum content, classroom activities and assessment briefs in collaboration with the tutor. It focuses on the co-creation of the curriculum within a history and theory module, Researching the Contemporary, which runs for BA (Hons) Fine Art and Art History and for BA (Hons) Art Design History Practice at Kingston University, London. The paper analyses the potential of collaborative approaches to engender students’ investment in their own learning and to encourage reflective and self-conscious understandings of themselves as learners. It also addresses some of the challenges of working in this way, attending to the risks involved and the feelings of uncertainty produced in experimental, fluid and open situations of learning. Alongside this, it acknowledges the tensions inherent in adopting such practices within the framework of the institution and within the wider context of the commodification of higher education in the United Kingdom. The concept underpinning the initiative was to test out co-creation as a creative process and to explore the possibilities of altering the traditional hierarchical relationship between teacher and student in a more active, participatory environment. In other words, the project asked: what kind of learning could be imagined if we were all in it together? It considered co-creation as producing different ways of being, or becoming, as learners, involving us in reconfiguring multiple relationships: to learning, to each other, to research, to the institution and to our emotions. The project provided the opportunity for students to bring their own research and wider interests into the classroom, take ownership of sessions, collaborate with each other and define the criteria against which they would be assessed.
Drawing on students’ reflections on their experience of co-creation, alongside theoretical considerations engaging with the processual nature of learning, concepts of equality and the generative qualities of interrelationships in the classroom, the paper suggests that the dynamic nature of collaborative and participatory modes of engagement has the potential to foster relevant and significant learning experiences. The project’s findings could be quantified in terms of the high level of student engagement, specifically investment in the assessment, alongside the ambition and high quality of the student work produced. However, reflection on the outcomes of the experiment prompts a further set of questions about the nature of positionality in connection to learning, the ways our identities as learners are formed in and through our relationships in the classroom, and the potential and productive nature of creative practice in education. Overall, the paper interrogates questions of what it means to work with students to invent and assemble the curriculum, and it assesses the benefits and challenges of co-creation. Underpinning it is the argument that, particularly in the current climate of higher education, it is increasingly important to ask what it means to teach and to envisage what kinds of learning can be possible.
Keywords: co-creation, collaboration, learning, participation, risk
Procedia PDF Downloads 120
226 The Touch Sensation: Ageing and Gender Influences
Authors: A. Abdouni, C. Thieulin, M. Djaghloul, R. Vargiolu, H. Zahouani
Abstract:
A decline in the main sensory modalities (vision, hearing, taste, and smell) is well reported to occur with advancing age, and a similar change is expected to occur with touch sensation and perception. In this study, we have focused on touch sensations, highlighting ageing and gender influences with in vivo systems. The touch process can be divided into two main phases. The first phase is the first contact between the finger and the object; during this contact, an adhesive force develops, which is the force needed to permit an initial movement of the finger. In the second phase, the finger's mechanical properties, together with its surface topography, play an important role in the obtained sensation. In order to understand the age and gender effects on the touch sense, we developed different ideas and systems for each phase. To better characterize the contact, the mechanical properties and the surface topography of the human finger, in vivo studies on the pulp of 40 subjects (20 of each gender) in four age groups of 26±3, 35±3, 45±2 and 58±6 have been performed. To understand the first touch phase, a classical indentation system has been adapted to measure the finger contact properties. The normal force load, the indentation speed, the contact time, the penetration depth and the indenter geometry have been optimized. The penetration depth of a glass indenter is recorded as a function of the applied normal force. The main assessed parameter is the adhesive force F_ad. For the second phase, an innovative approach is first proposed to characterize the dynamic finger mechanical properties: a contactless indentation test inspired by techniques used in ophthalmology. The test principle is to direct an air blast at the finger and measure the resulting deformation with a linear laser. The advantage of this test is the real observation of the skin's free return without any outside influence. The main obtained parameters are the wave propagation speed and the Young's modulus E.
Second, negative silicone replicas of the subjects' fingerprints have been analyzed by probe laser defocusing. A laser diode transmits a light beam onto the surface to be measured, and the reflected signal is returned to a set of four photodiodes. This technology allows three-dimensional images to be reconstructed. In order to study the age and gender effects on the roughness properties, a multi-scale characterization of roughness has been realized by applying the continuous wavelet transform. After determining the decomposition of the surface, the method consists of quantifying the arithmetic mean of the surface topography at each scale (SMA). Significant differences in the main parameters are shown with ageing and gender. The comparison between the men's and women's groups reveals that the adhesive force is higher for women. The results for the mechanical properties show a Young's modulus that is higher for women and also increases with age. The roughness analysis shows a significant difference as a function of age and gender.
Keywords: ageing, finger, gender, touch
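The multi-scale roughness step described above can be sketched with a plain-numpy continuous wavelet decomposition; the profile, the wavelet choice (Ricker/Mexican hat), and the scale range below are illustrative assumptions, not the study's measurement data.

```python
import numpy as np

def ricker(points, a):
    # Mexican-hat (Ricker) wavelet of width parameter a.
    t = np.arange(points) - (points - 1) / 2
    return (1 - (t / a) ** 2) * np.exp(-((t / a) ** 2) / 2)

# Hypothetical roughness trace standing in for a profile measured on a
# fingerprint replica (amplitudes and sampling are assumed).
rng = np.random.default_rng(1)
x = np.linspace(0, 10, 2048)
profile = 0.02 * np.sin(2 * np.pi * 2 * x) + 0.005 * rng.standard_normal(x.size)

# Continuous wavelet decomposition: convolve the profile with the wavelet at
# each scale, then take the arithmetic mean of the absolute coefficients (SMA)
# as a scale-by-scale roughness amplitude.
scales = np.arange(1, 65)
sma = np.array([
    np.mean(np.abs(np.convolve(profile, ricker(10 * a + 1, a), mode="same")))
    for a in scales
])
print(sma.shape)  # (64,)
```

Comparing the resulting SMA curves between age and gender groups is the kind of scale-by-scale contrast the abstract describes.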
Procedia PDF Downloads 265
225 Killing for the Great Peace: An Internal Perspective on the Anti-Manchu Theme in the Taiping Movement
Authors: Zihao He
Abstract:
The majority of existing studies on the Taiping Movement (1851-1864) view its anti-Manchu attitudes as a nationalist agenda: the Taiping aimed to revolt against the Manchu government and establish a new political regime. To explain these aggressive and violent attitudes towards the Manchu, these studies mainly cite socio-economic factors and stress the status of “being deprived”. Even the ‘demon-slaying’ narrative that the Taiping used to dehumanize the Manchu tends to be viewed as a “religious tool” for achieving their political, nationalist aim. This paper argues that such studies of the Taiping's anti-Manchu attitudes and behaviors are conducted from an external angle and have two major problems. Firstly, they distinguish “religion” from the “nationalist” or “political”, focusing on the “political” nature of the movement; “religion” and the religious experience within the Taiping are largely ignored. This paper argues that there was no separable and independent “religion” in the Taiping Movement, as opposed to secular, nationalist politics. Secondly, these analyses hold an external perspective on the Taiping's anti-Manchu agenda: demonizing and killing the Manchu are viewed as purely political actions. On the contrary, this paper focuses on the internal perspective of the anti-Manchu narratives in the Taiping Movement. The method of this paper is mainly textual analysis, focusing on the official documents, edicts, and proclamations of the Taiping movement. It views the writing of the Taiping as a coherent narrative and rhetoric, which was attractive and convincing for its followers. In terms of the main findings, firstly, the internal and external perspectives on anti-Manchu violence differ. Externally, violence was viewed as a tool and a necessary process to achieve the political goal. Internally, however, in the Taiping's writing, violence was a result of Godlessness, which would be resolved once faith in God was restored in China.
Within a framework of universal love among human beings as sons and daughters of the Heavenly Father, in which killing was forbidden, the Taiping excluded the Manchus from the family of human beings and demonized them. “Demon-slaying” was not violence; it was constructed as a necessary process to achieve the Great Peace. Moreover, the Taiping's anti-Manchu violence was not merely “political.” Rather, the category “religion” and its binary opposition, “secular,” are not suitable for the Taiping. A key point related to this argument is that the revolutionary violence against the Manchu government inherited the traditional “Heavenly Mandate” model. From an internal, theological perspective, anti-Manchu action was ordained and commanded by the Heavenly Father. The Manchu, as a regime, stood as a hindrance on the path toward God. Besides, the Manchus were not only viewed as a regime; they were also “demons.” Therefore, the paper examines how the Manchus were dehumanized in the Taiping's writings and situated outside the scope of nonviolence and love. Manchu as a regime and Manchu as demons are in a dynamic relationship: as a regime, the Manchu government was preventing Chinese people from worshipping the Heavenly Father, so they were demonized; as demons, killing Manchus during the revolt was justified and not viewed as contradicting the universal love among human beings.
Keywords: anti-manchu, demon-slaying, heavenly mandate, religion and violence, the taiping movement
Procedia PDF Downloads 71
224 Enhancing the Performance of Automatic Logistic Centers by Optimizing the Assignment of Material Flows to Workstations and Flow Racks
Authors: Sharon Hovav, Ilya Levner, Oren Nahum, Istvan Szabo
Abstract:
In modern large-scale logistic centers (e.g., big automated warehouses), complex logistic operations performed by human staff (pickers) need to be coordinated with the operations of automated facilities (robots, conveyors, cranes, lifts, flow racks, etc.). The efficiency of advanced logistic centers strongly depends on optimizing picking technologies in sync with the facility/product layout, as well as on the optimal distribution of material flows (products) in the system. The challenge is to develop a mathematical operations research (OR) tool that will optimize system cost-effectiveness. In this work, we propose a model that describes an automatic logistic center consisting of a set of workstations located in several galleries (floors), with each station containing a known number of flow racks. The requirements of each product and the working capacity of the stations served by a given set of workers (pickers) are assumed to be predetermined. The goal of the model is to maximize system efficiency. The proposed model includes two echelons. The first is the setting of the (optimal) number of workstations needed to create the total processing/logistic system, subject to picker capacities. The second echelon deals with the assignment of the products to the workstations and flow racks, aimed at achieving maximal throughput of picked products over the entire system, given picker capacities and budget constraints. The solutions to the problems at the two echelons interact to balance the overall load in the flow racks and maximize overall efficiency. We have developed an operations research model within each echelon. In the first echelon, the problem of calculating the optimal number of workstations is formulated as a non-standard bin-packing problem with capacity constraints for each bin.
The problem arising in the second echelon is presented as a constrained product-workstation-flow rack assignment problem with a non-standard min-max criterion, in which the workload maximum is calculated across all workstations in the center and the exterior minimum is calculated across all possible product-workstation-flow rack assignments. The OR problems arising in each echelon are proved to be NP-hard. Consequently, we find and develop heuristic and approximation solution algorithms based on exploiting and improving local optimums. The logistic center (LC) model considered in this work is highly dynamic and is recalculated periodically based on updated demand forecasts that reflect market trends, technological changes, seasonality, and the introduction of new items. The suggested two-echelon approach and the min-max balancing scheme are shown to work effectively on illustrative examples and real-life logistic data.
Keywords: logistics center, product-workstation, assignment, maximum performance, load balancing, fast algorithm
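The first-echelon idea, estimating how many workstations (bins) are needed under a per-station picker capacity, can be sketched with the classical first-fit-decreasing bin-packing heuristic, a simple stand-in for the paper's non-standard variant; the capacity value and product loads below are illustrative assumptions.

```python
# First-fit-decreasing sketch: place each product load (sorted largest first)
# into the first opened workstation with enough remaining capacity, opening a
# new workstation when none fits. Returns the number of workstations used.
def first_fit_decreasing(loads, capacity):
    stations = []  # remaining capacity per opened workstation
    for load in sorted(loads, reverse=True):
        for i, free in enumerate(stations):
            if load <= free:
                stations[i] = free - load
                break
        else:
            stations.append(capacity - load)  # open a new workstation
    return len(stations)

print(first_fit_decreasing([40, 10, 30, 20, 50, 25], capacity=60))  # 3
```

First-fit-decreasing is a well-known approximation for bin packing; the paper's own algorithm additionally handles per-bin capacity constraints and interacts with the second-echelon min-max assignment.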
Procedia PDF Downloads 228
223 Geovisualisation for Defense Based on a Deep Learning Monocular Depth Reconstruction Approach
Authors: Daniel R. dos Santos, Mateus S. Maldonado, Estevão J. R. Batista
Abstract:
Military commanders are increasingly dependent on spatial awareness: knowing where the enemy is, understanding how battle scenarios change over time, and visualizing these trends in ways that offer insights for decision-making. Thanks to advancements in geospatial technologies and artificial intelligence algorithms, commanders are now able to modernize military operations on a universal scale. Thus, geovisualisation has become an essential asset in the defense sector. It has become indispensable for better decision-making in dynamic/temporal scenarios, operation planning and management for the war field, situational awareness, effective planning, monitoring, and more. For example, a 3D visualization of war field data contributes to intelligence analysis, evaluation of post-mission outcomes, and the creation of predictive models to enhance decision-making and strategic planning capabilities. However, old-school visualization methods are slow, expensive, and unscalable. Despite modern technologies for generating 3D point clouds, such as LiDAR and stereo sensors, monocular depth values based on deep learning can offer a faster and more detailed view of the environment, transforming single images into visual information for valuable insights. We propose a dedicated monocular depth reconstruction approach via deep learning techniques for the 3D geovisualisation of satellite images. It introduces scalability in terrain reconstruction and data visualization. First, a dataset with more than 7,000 satellite images and associated digital elevation models (DEM) is created. It is based on high-resolution optical and radar imagery collected from Planet and Copernicus, with which we fuse high-resolution topographic data obtained using technologies such as LiDAR, along with the associated geographic coordinates. Second, we developed an imagery-DEM fusion strategy that combines feature maps from two encoder-decoder networks.
One network is trained with radar and optical bands, while the other is trained with DEM features to compute dense 3D depth. Finally, we constructed a benchmark with sparse depth annotations to facilitate future research. To demonstrate the proposed method's versatility, we evaluated its performance on non-annotated satellite images and implemented an enclosed environment useful for geovisualisation applications. The algorithms were developed in Python 3, employing open-source computing libraries, i.e., Open3D, TensorFlow, and PyTorch3D. The proposed method supports fast and accurate decision-making with GIS for the localization of troops, the position of the enemy, and terrain and climate conditions. This analysis enhances situational awareness, enabling commanders to fine-tune strategies and distribute resources proficiently.
Keywords: depth, deep learning, geovisualisation, satellite images
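The fusion strategy described above, concatenating feature maps from the two encoder-decoder branches and projecting them to a dense depth map, can be sketched in plain numpy; the shapes and weights below are illustrative assumptions, and a 1x1 convolution stands in for the learned fusion head.

```python
import numpy as np

# Hypothetical decoder outputs: 32-channel feature maps from two branches,
# one fed with radar/optical bands and one with DEM features.
rng = np.random.default_rng(0)
h, w = 64, 64
feat_img = rng.standard_normal((32, h, w))   # branch 1: imagery features
feat_dem = rng.standard_normal((32, h, w))   # branch 2: DEM features

# Channel-wise concatenation of the two branches' feature maps.
fused = np.concatenate([feat_img, feat_dem], axis=0)   # (64, h, w)

# A 1x1 convolution is a per-pixel linear projection over channels; here with
# random stand-in weights it maps the fused features to a dense depth map.
w1x1 = rng.standard_normal(64) / 8.0
depth = np.tensordot(w1x1, fused, axes=(0, 0))         # (h, w)
print(depth.shape)  # (64, 64)
```

In the actual system the projection weights would be learned end-to-end together with the two encoder-decoder networks.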
Procedia PDF Downloads 102
222 Elastoplastic Modified Stillinger Weber-Potential Based Discretized Virtual Internal Bond and Its Application to the Dynamic Fracture Propagation
Authors: Dina Kon Mushid, Kabutakapua Kakanda, Dibu Dave Mbako
Abstract:
The failure of material usually involves elastoplastic deformation and fracturing. Continuum mechanics can effectively deal with plastic deformation by using a yield function and a flow rule. At the same time, it has limitations in dealing with fracture problems, since it is a theory based on the continuous-field hypothesis. The lattice model can simulate fracture problems very well, but it is inadequate for dealing with plastic deformation. Based on the discretized virtual internal bond model (DVIB), this paper proposes a lattice model that can account for plasticity. DVIB is a lattice method that considers material to comprise bond cells. Each bond cell may have any geometry with a finite number of bonds. A two-body or multi-body potential can characterize the strain energy of a bond cell. The two-body potential leads to a fixed Poisson ratio, while the multi-body potential can overcome this limitation. In the present paper, the modified Stillinger-Weber (SW) potential, a multi-body potential, is employed to characterize the bond cell energy. The SW potential is composed of two parts. One part is the two-body potential that describes the interatomic interactions between particles. The other is the three-body potential that represents the bond-angle interactions between particles. Because the SW interaction can represent both the bond stretch and the bond-angle contribution, the SW-potential-based DVIB (SW-DVIB) can represent various Poisson ratios. To embed plasticity in the SW-DVIB, plasticity is considered in the two-body part of the SW potential. This is done by reducing the bond stiffness to a lower level once the bond reaches the yielding point. Before the bond reaches the yielding point, the bond is elastic. When the bond deformation exceeds the yielding point, the bond stiffness is softened to a lower value. When unloaded, irreversible deformation occurs.
When the bond length increases to a critical value, termed the failure bond length, the bond fails. The critical failure bond length is related to the cell size and the macro fracture energy. By this means, the fracture energy is conserved, so that the cell-size sensitivity problem is relieved to a great extent. In addition, plasticity and fracture are also unified at the bond level. To make the DVIB able to simulate different Poisson ratios, the three-body part of the SW potential is kept elasto-brittle. The bond angle can bear a moment as long as the bond-angle increment remains smaller than a critical value. By this method, the SW-DVIB can simulate the plastic deformation and fracturing process of materials with various Poisson ratios. The elastoplastic SW-DVIB is used to simulate the plastic deformation of a material, the plastic fracturing process, and tunnel plastic deformation. It has been shown that the current SW-DVIB method is straightforward in simulating both elastoplastic deformation and plastic fracture.
Keywords: lattice model, discretized virtual internal bond, elastoplastic deformation, fracture, modified stillinger-weber potential
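The bond-level elastoplastic law described above can be sketched as a bilinear force-stretch relation with a softened post-yield stiffness and failure at a critical stretch; all parameter values below are illustrative assumptions, not the paper's calibration.

```python
# Bilinear elastoplastic bond law sketch: elastic stiffness k_e up to the yield
# stretch s_yield, softened stiffness k_p beyond it, and zero force (failure)
# once the stretch reaches the failure value s_fail. All values are assumed.
def bond_force(stretch, k_e=1.0, k_p=0.2, s_yield=0.01, s_fail=0.05):
    if stretch >= s_fail:
        return 0.0                          # bond has failed
    if stretch <= s_yield:
        return k_e * stretch                # elastic branch
    # plastic branch: reduced stiffness past the yield point
    return k_e * s_yield + k_p * (stretch - s_yield)

print(bond_force(0.005))  # elastic branch
print(bond_force(0.02))   # plastic branch (softened stiffness)
print(bond_force(0.06))   # failed: 0.0
```

In the full model, unloading from the plastic branch would follow the elastic stiffness and leave a residual stretch, which is how the irreversible deformation mentioned above arises at the bond level.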
Procedia PDF Downloads 98