394 Empirical Decomposition of Time Series of Power Consumption
Authors: Noura Al Akkari, Aurélie Foucquier, Sylvain Lespinats
Abstract:
Load monitoring is an energy consumption management process aimed at energy savings and energy efficiency. Non-Intrusive Load Monitoring (NILM) is one load monitoring method used for disaggregation purposes. NILM is a technique for identifying individual appliances based on the analysis of whole-residence data retrieved from the main power meter of the house. Our NILM framework starts with data acquisition, followed by data preprocessing, event detection, and feature extraction, with general appliance modeling and identification at the final stage. The event detection stage is a core component of the NILM process, since event detection techniques lead to the extraction of appliance features, which are required for the accurate identification of household devices. In this research work, we aim to develop a new event detection methodology with accurate load disaggregation to extract appliance features. The extracted time-domain features are used for tuning general appliance models in the appliance identification and classification steps. We use unsupervised algorithms such as Dynamic Time Warping (DTW). The proposed method relies on detecting the areas of operation of each residential appliance based on its power demand, and then detecting the times at which each selected appliance changes its state. In order to fit the capabilities of existing smart meters, we work on low-sampling-rate data with a frequency of (1/60) Hz. The data is simulated with the Load Profile Generator (LPG) software, which had not previously been considered for NILM purposes in the literature. LPG is a numerical tool that simulates the behaviour of people inside the house to generate residential energy consumption data. The proposed event detection method targets low-consumption loads that are difficult to detect, and it facilitates the extraction of the specific features used for general appliance modeling.
In addition, the identification process includes unsupervised techniques such as DTW. To the best of our knowledge, few unsupervised techniques have been employed with low-sampling-rate data, in comparison to the many supervised techniques used for such cases. We extract a power interval within which the selected appliance operates, along with a time vector of the values delimiting the state transitions of the appliance. After this, appliance signatures are formed from the extracted power, geometrical, and statistical features. Those signatures are then used to tune general model types for appliance identification using unsupervised algorithms. The method is evaluated using both data simulated with LPG and the real-world Reference Energy Disaggregation Dataset (REDD). For that, we compute confusion-matrix-based performance metrics, considering accuracy, precision, recall, and error rate. The performance of our methodology is then compared with other detection techniques previously used in the literature, such as techniques based on statistical variations and abrupt changes (Variance Sliding Window and Cumulative Sum).
Keywords: general appliance model, non-intrusive load monitoring, event detection, unsupervised techniques
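As a minimal sketch of the identification step described above, the snippet below classifies a detected power-event segment by comparing it against general appliance model signatures with classic dynamic-programming DTW. The appliance names, signature shapes, and the sample segment are hypothetical 1/60 Hz power traces invented for illustration, not data from the study.

```python
def dtw_distance(a, b):
    """Classic dynamic-programming DTW distance between two sequences."""
    n, m = len(a), len(b)
    INF = float("inf")
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            d[i][j] = cost + min(d[i - 1][j], d[i][j - 1], d[i - 1][j - 1])
    return d[n][m]

def identify(segment, signatures):
    """Return the appliance whose model signature is nearest under DTW."""
    return min(signatures, key=lambda name: dtw_distance(segment, signatures[name]))

signatures = {
    "fridge": [0, 120, 120, 120, 0],   # hypothetical short, low-power cycle
    "kettle": [0, 2000, 2000, 0],      # hypothetical brief high-power burst
}
segment = [0, 110, 125, 118, 0]        # unlabeled event from the meter
print(identify(segment, signatures))   # -> "fridge"
```

Because DTW warps the time axis, the match tolerates the differing segment lengths and state-transition timings that low-sampling-rate meters produce.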
Procedia PDF Downloads 82
393 Embedded Test Framework: A Solution Accelerator for Embedded Hardware Testing
Authors: Arjun Kumar Rath, Titus Dhanasingh
Abstract:
Embedded product development requires software to test hardware functionality during development and to find issues during manufacturing at larger volumes. As components get more integrated, devices are tested for their full functionality using advanced software tools, and benchmarking tools are used to measure and compare the performance of product features. At present, these tests are based on a variety of methods involving varying hardware and software platforms. Typically, they are custom built for every product and remain unusable for other variants; a majority of the tests go undocumented, are not updated, and become unusable once the product is released. To bridge this gap, a solution accelerator in the form of a framework can address these issues by running all these tests from one place, using an off-the-shelf test library in a continuous integration environment. There are many open-source test frameworks and tools (Fuego, LAVA, Autotest, KernelCI, etc.) designed for testing embedded system devices, each with several unique strengths, but no single tool or framework satisfies all the testing needs of embedded systems; hence the need for an extensible framework that integrates a multitude of tools. Embedded product testing includes board bring-up testing, testing during manufacturing, firmware testing, application testing, and assembly testing. Traditional test methods involve developing test libraries and support components for every new hardware platform that belongs to the same domain with identical hardware architecture. This approach has drawbacks such as non-reusability, where platform-specific libraries cannot be reused; the need to maintain source infrastructure for individual hardware platforms; and, most importantly, the time taken to re-develop test cases for new hardware platforms. These limitations create challenges in test environment setup, scalability, and maintenance.
A desirable strategy is one focused on maximizing reusability, continuous integration, and leveraging artifacts across the complete development cycle, across the phases of testing and across a family of products. To overcome the stated challenges of the conventional method and to deliver the benefits of embedded testing, an embedded test framework (ETF), a solution accelerator, is designed that can be deployed in embedded-system-related products with minimal customization and maintenance to accelerate hardware testing. The embedded test framework supports testing different hardware, including microprocessor- and microcontroller-based platforms. It offers benefits such as (1) time to market: it accelerates board bring-up time with prepackaged test suites supporting all necessary peripherals, which can speed up the design and development stages (board bring-up, manufacturing, and device drivers); (2) reusability: framework components are isolated from the platform-specific hardware initialization and configuration, which makes adapting test cases across various platforms quick and simple; (3) an effective build and test infrastructure with multiple test interface options, pre-integrated with the Fuego framework; (4) continuous integration: it is pre-integrated with Jenkins, which enables continuous testing and an automated software update feature. Applying the embedded test framework accelerator throughout the design and development phase enables the development of well-tested systems before functional verification and improves time to market to a large extent.
Keywords: board diagnostics software, embedded system, hardware testing, test frameworks
Procedia PDF Downloads 145
392 Addressing Housing Issue at Regional Level Planning: A Case Study of Mumbai Metropolitan Region
Authors: Bhakti Chitale
Abstract:
Mumbai, the business capital of India and one of the most crowded cities in the world, holds the biggest slum in Asia. The Mumbai Metropolitan Region (MMR) occupies an area of 4035 sq. km with a population of 22.8 million people. This population is mostly urban, with 91% living in areas of Municipal Corporations and Councils and another 3% in Census Towns. The region has 9 Municipal Corporations, 8 Municipal Councils, and around 1000 villages. On the one hand, MMR makes the highest contribution to the nation's overall economy; on the other hand, it presents the intolerable picture of about 2 million people living in slums, or without even that, in utterly unhygienic conditions and with little hope. Coming generations will be adversely affected if a solution is not worked out, and this study is an attempt at working out that solution. The Mumbai Metropolitan Region Development Authority (MMRDA) is a state government authority formed specifically to govern the development of MMR. MMRDA is engaged in long-term planning, promotion of new growth centres, implementation of strategic projects, and financing of infrastructure development. While preparing the master plan for MMR for the next 20 years, MMRDA conducted a detailed study of the housing scenario in MMR and of possible options for improvement; the author was the officer in charge of that assignment. This paper sheds light on the outcomes of the research study, which range from the adverse effects of government policies, the automatic responses of the housing market, and effects on planning processes, to the changing needs of housing patterns worldwide due to changes in social mechanisms. It alerts urban planners, who usually focus on smart infrastructure development, to allied future dangers. This housing study explains the complexities, realities, and need for innovation in housing policies all over the world.
The paper further describes a few success stories and failure stories of government initiatives, with reasons. It gives a clear idea of the differences in housing needs among people from different economic groups, and of the direct and indirect market pressures on low-cost housing. A striking phenomenon emerged: a large percentage of houses stand vacant in spite of the huge need. The housing market is affected by developments or other physical and financial changes taking place in nearby areas or cities, by changes in cities located far from the region, and even by international investments or policy changes. Rather than depending solely on government action to generate affordable housing, the aim of the whole effort is to make housing markets generate such stock automatically, and still keep it sustainable. In summary, the paper sequentially elaborates the complete dynamics of housing in one of the most crowded urban areas in the world, the Mumbai Metropolitan Region, with extensive data, analysis, case studies, and recommendations.
Keywords: Mumbai India, slum housing, region planning, market recommendations
Procedia PDF Downloads 280
391 Attracting Tourists: Architecture for Tourism during the Period of Korean Empire, 1897–1910
Authors: Lina Shinhwa Koo
Abstract:
The Korean Empire, or Daehanjeguk, was proclaimed by King Gojong (1852–1919) in 1897 with the aim of asserting its sovereignty as a nation-state amid threats from neighbouring countries such as Japan and Russia. The Korean Empire period (1897–1910), which lasted until Japan annexed Korea in 1910, is a pivotal time in the modern history of Korea. It was also the period when much of the infrastructure for tourism, including transportation and lodging systems, was established. Throughout the Korean Empire period, tourists from Japan and Euro-American countries frequently visited Korea, which had opened its doors relatively recently, and the government of the Korean Empire actively engaged with foreign officials and professionals. Train stations were built to connect Busan, where foreigners first arrived through the port of Jemulpo, with Seoul, the capital of Korea. In addition, hotels were built to accommodate the increasing number of tourists. Shedding new light on the modern architectural history of Korea, this paper discusses buildings that were made for tourism during the Korean Empire period, examining the historical background of tourism development in Korea and the concept of travelling in relation to architectural history. Foreigners came to Korea for varying reasons, from ethnographic research and diplomacy to business and missionary work. They also played a key role in the transportation and hotel businesses. For instance, the American entrepreneur James R. Morse received a concession to construct a railway between Busan and Seoul in 1896, which was later granted to a Japanese firm. Japanese entrepreneurs came to Korea and built hotels, such as the Daebul Hotel in Incheon and Paseonggwan in Seoul. The Sontag Hotel, Station Hotel and Hotel du Palais, all located in central areas of Seoul, were owned by German, British and French entrepreneurs, respectively. Each building showed distinctive architectural elements.
For example, the Sontag Hotel was built in a Russian architectural style, whereas Paseonggwan combined Japanese and European styles. Such varied architectural designs reflected the multicultural urban scenes of the Korean Empire at the time. The existing scholarship has paid more attention to the royal buildings of the Korean Empire period, such as Seokjojeon at Duksu Palace. However, it is important to study the tourism-related architecture that reflected the societal situation of the Korean Empire, in which contrasting ideologies, landscapes, historical narratives and political tensions intertwined and co-existed. Examining both textual and visual resources, such as news articles and photographs, this paper surveys the architectural styles and trajectories of selected hotels and train stations within a discussion of temporality and spatiality in the social sciences. In doing so, one can re-assess the history of the Korean Empire as an intersection of the modern and traditional, the intrinsic and extrinsic, and the national and international.
Keywords: Korean empire, modern Korean architecture, tourism, hotel, train station
Procedia PDF Downloads 73
390 The United States Film Industry and Its Impact on Latin American Identity Rationalizations
Authors: Alfonso J. García Osuna
Abstract:
Background and Significance: The objective of this paper is to analyze the inception and development of identity archetypes in early twentieth-century Latin America, to explore their roots in United States culture, to discuss the influences that came to bear upon Latin Americans as the United States began to export images of standard identity paradigms through its film industry, and to survey how these images evolved and impacted Latin Americans' ideas of national distinctiveness from the early 1900s to the present. The general hypothesis of this work is therefore that United States film in many ways influenced national identity patterning in its neighbors, especially the nations closest to its borders, Cuba and Mexico. Very little research has been done on the social impact of the United States film industry on the country's southern neighbors. From a historical perspective, the US's influence has been examined as the projection of political and economic power; that is to say, American influence is seen as a catalyst to align the forces that the US wants to see wield the power of the State. But the subtle yet powerful cultural influence exercised by film, the eminent medium for exporting ideas and ideals in the twentieth century, has not been significantly explored. Basic Methodologies and Description: Gramscian Marxist theory underpins the study, where it is argued that film, as an exceptional vehicle for culture, is an important site of political and social struggle; in this context, the study aims to show how United States capitalist structures of power not only use brute force to generate and maintain control of overseas markets, but also promote their ideas through artistic products such as film in order to infiltrate the popular culture of subordinated peoples. In the same vein, the work of neo-Marxist theoreticians of popular culture is employed to contextualize the agency of subordinated peoples in the process of cultural assimilation.
Indication of the Major Findings of the Study: The study has yielded much data of interest. The salient finding is that each nation receives United States film according to its own particular social and political context, regardless of the amount of pressure exerted upon it. An example of this is the unmistakable dissimilarity between Cuban and Mexican reception of US films. The positive reception given in Cuba to American film has to do with the seamless acceptance of identity paradigms that, for historical reasons discussed herein, were incorporated into the national identity grid quite unproblematically. Such is not the case with Mexico, whose express rejection of the identity paradigms offered by the United States reflects not only past conflicts with its northern neighbor, but an enduring recognition of the country's indigenous roots, one that precluded such paradigms. Concluding Statement: This paper is an endeavor to elucidate the ways in which US film contributed to the outlining of Latin American identity blueprints, offering archetypes that would be accepted or rejected according to each nation's particular social requirements, constraints and ethnic makeup.
Keywords: film studies, United States, Latin America, identity studies
Procedia PDF Downloads 298
389 Vortex Generation to Model the Airflow Downstream of a Piezoelectric Fan Array
Authors: Alastair Hales, Xi Jiang, Siming Zhang
Abstract:
Numerical methods are used to generate vortices in a domain. Through considered design, two counter-rotating vortices may interact and effectively drive one another downstream. This phenomenon is comparable to the vortex interaction that occurs in the region immediately downstream of two counter-oscillating piezoelectric (PE) fan blades. PE fans are small blades clamped at one end and driven to oscillate at their first natural frequency by an extremely low-powered actuator. In operation, the high oscillation amplitude and frequency generate sufficient blade tip speed through the surrounding air to create downstream airflow. PE fans are considered an ideal solution for low-power hot-spot cooling in a range of small electronic devices, but a single blade does not typically induce enough airflow to be considered a direct alternative to conventional air movers, such as axial fans. The development of face-to-face PE fan arrays containing multiple blades oscillating in counter-phase with one another is essential for expanding the range of potential PE fan applications in the cooling of power electronics. Even in an unoptimised state, these arrays are capable of moving air volumes comparable to axial fans with less than 50% of the power demand. Replicating the airflow generated by face-to-face PE fan arrays without including the actual blades in the model reduces the computational demands of the process and enhances the rate of innovation and development in the field. Vortices are generated at a defined inlet using a time-dependent velocity profile function, which pulsates the inlet air velocity magnitude. This induces vortex generation in the considered domain, and these vortices are shown to separate and propagate downstream in a regular manner. The generation and propagation of a single vortex are compared to an equivalent vortex generated from a PE fan blade in a previous experimental investigation.
Vortex separation is found to be accurately replicated in the present numerical model. Additionally, the downstream trajectories of the vortices' centres vary by just 10.5%, and the size and strength of the vortices differ by a maximum of 10.6%. Through non-dimensionalisation, the numerical method is shown to be valid for PE fan blades with parameters differing from those of the specific case investigated. These thorough validation methods verify that the numerical model may be used to replicate vortex formation from an oscillating PE fan blade. An investigation is carried out to evaluate the effects of varying the distance between the two PE fan blades, i.e., the pitch. At small pitch, the vorticity in the domain is maximised, along with turbulence in the near vicinity of the inlet zones. It is proposed that face-to-face PE fan arrays oscillating in counter-phase should have a minimal pitch to optimally cool nearby heat sources. On the other hand, downstream airflow is maximised at a larger pitch, at which the vortices can fully form and effectively drive one another downstream; this should be implemented when bulk airflow generation is the desired result.
Keywords: piezoelectric fans, low energy cooling, vortex formation, computational fluid dynamics
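The time-dependent inlet velocity profile described above can be sketched as a sinusoidal pulsation about a mean inlet speed. The mean velocity, amplitude, and frequency below are illustrative placeholders, not values from the study.

```python
import math

def inlet_velocity(t, u_mean=1.0, amplitude=0.8, freq=60.0):
    """Pulsating inlet velocity magnitude: u(t) = u_mean + A*sin(2*pi*f*t)."""
    return u_mean + amplitude * math.sin(2.0 * math.pi * freq * t)

# One oscillation period sampled at 8 points:
period = 1.0 / 60.0
samples = [round(inlet_velocity(i * period / 8), 3) for i in range(8)]
print(samples)
```

In a CFD setting, a function of this form would be applied as the inlet boundary condition at each time step, so that each pulse sheds a vortex into the domain in place of a blade stroke.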
Procedia PDF Downloads 182
388 Optimization and Coordination of Organic Product Supply Chains under Competition: An Analytical Modeling Perspective
Authors: Mohammadreza Nematollahi, Bahareh Mosadegh Sedghy, Alireza Tajbakhsh
Abstract:
The last two decades have witnessed substantial attention to organic and sustainable agricultural supply chains. Motivated by real-world practices, this paper addresses two main challenges observed in organic product supply chains: the decentralized decision-making process between farmers and their retailers, and the competition between organic products and their conventional counterparts. To this aim, an agricultural supply chain is considered consisting of two farmers: a conventional farmer and an organic farmer who offers an organic version of the same product. Both farmers distribute their products through a single retailer, and there is competition between the organic and the conventional product. The retailer, as the market leader, sets the wholesale price, and afterward the farmers make their production quantity decisions. The paper first models the demand functions of the conventional and organic products by incorporating the effect of asymmetric brand equity, which captures the fact that consumers usually pay a premium for organic products due to positive perceptions regarding their health and environmental benefits. Then, profit functions are modeled that take into account characteristics of organic farming, including the crop yield gap and an organic cost factor. The research also considers both economies and diseconomies of scale in farming production, as well as the effects of an organic subsidy paid by the government to support organic farming. The investigated supply chain is explored in three scenarios: decentralized, centralized, and coordinated decision-making structures. In the decentralized scenario, the conventional and organic farmers and the retailer maximize their own profits individually; the interaction between the farmers is modeled as Bertrand competition, while the interaction between the retailer and the farmers is analyzed under a Stackelberg game structure.
In the centralized model, the optimal production strategies are obtained from the perspective of the entire supply chain. Analytical models are developed to derive closed-form optimal solutions, and analytical sensitivity analyses are conducted to explore the effects of the main parameters, such as the crop yield gap, organic cost factor, organic subsidy, and percentage price premium of the organic product, on the farmers' and retailer's optimal strategies. Afterward, a coordination scenario is proposed to convince the three supply chain members to shift from the decentralized to the centralized decision-making structure. The results indicate that the proposed coordination scenario provides a win-win-win situation for all three members compared to the decentralized model. The paper also demonstrates that the coordinated model increases the production and decreases the price of organic produce, which in turn encourages the consumption of organic products in the market, and that the proposed coordination model helps the organic farmer better handle the challenges of organic farming, including the additional cost and the crop yield gap. Last but not least, the results highlight the active role of the government-paid organic subsidy as a means of promoting sustainable organic product supply chains: although the amount of the organic subsidy plays a significant role in the production and sales price of organic products, the method of allocating the subsidy between the organic farmer and the retailer is not as important.
Keywords: analytical game-theoretic model, product competition, supply chain coordination, sustainable organic supply chain
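The decentralized leader-follower structure described above can be illustrated with a toy numerical sketch: the retailer (leader) picks wholesale prices, then each farmer (follower) best-responds with a production quantity. The linear demand intercepts, cost coefficients, and substitution parameter below are invented assumptions standing in for the paper's calibrated model, not its actual functional forms.

```python
import itertools

def follower_qty(w, c):
    """Farmer best response to wholesale price w with quadratic cost c*q**2:
    max_q  w*q - c*q**2  ->  q* = w / (2c)."""
    return max(w / (2.0 * c), 0.0)

def retail_prices(qc, qo, ac=100.0, ao=120.0, b=1.0, d=0.4):
    """Linear substitutable demand; the larger organic intercept ao stands in
    for the premium consumers pay for organic (asymmetric brand equity)."""
    return ac - b * qc - d * qo, ao - b * qo - d * qc

def retailer_profit(wc, wo, cc=1.0, co=1.6):
    """Leader's payoff, anticipating both followers' best responses;
    co > cc stands in for the organic cost factor / crop yield gap."""
    qc, qo = follower_qty(wc, cc), follower_qty(wo, co)
    pc, po = retail_prices(qc, qo)
    return (pc - wc) * qc + (po - wo) * qo

# Leader's problem solved by grid search over candidate wholesale prices:
grid = [0.5 * k for k in range(1, 161)]
best = max(itertools.product(grid, grid), key=lambda ww: retailer_profit(*ww))
print("wholesale prices (conventional, organic):", best)
```

Backward induction appears here as the structure of `retailer_profit`: the leader evaluates each price pair only through the followers' anticipated responses, which is the Stackelberg logic the abstract describes; the paper itself derives closed-form solutions rather than searching a grid.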
Procedia PDF Downloads 111
387 Mapping the State of the Art of European Companies Doing Social Business at the Base of the Economic Pyramid as an Advanced Form of Strategic Corporate Social Responsibility
Authors: Claudio Di Benedetto, Irene Bengo
Abstract:
The objective of the paper is to study how large European companies develop social business (SB) at the base of the economic pyramid (BoP). BoP markets are defined as the four billion people living on an annual income below $3,260 in local purchasing power. Although heterogeneous in geographic range, they share some common characteristics: the presence of significant unmet (social) needs, a high level of informal economy, and the so-called 'poverty penalty'. As a result, most people living at the BoP are excluded from the value created by the global market economy. It is worth noting, however, that the BoP population, with an aggregate purchasing power of around $5 trillion a year, represents a huge opportunity for companies that want to enhance their long-term profitability perspective. We suggest that in this context the development of SB is, for companies, an innovative and promising way to satisfy unmet social needs and to experience new forms of value creation. Indeed, SB can be considered a strategic model for developing CSR programs that fully integrate the social dimension into the business to create economic and social value simultaneously. Although many studies in the literature have examined social business, only a few have explicitly analyzed the phenomenon from a company perspective, and the role of companies in the development of such initiatives remains understudied, with fragmented results. To fill this gap, the paper analyzes the key characteristics of the social business initiatives developed by European companies at the BoP. The study was performed by analyzing 1475 European companies participating in the United Nations Global Compact, the world's leading corporate social responsibility program. Through the analysis of corporate websites, the study identifies companies that actually do SB at the BoP.
For the SB initiatives identified, information was collected according to an adapted SB framework. Preliminary results show that more than one hundred European companies have already implemented social businesses at the BoP, accounting for 6.5% of the total; this percentage increases to 15% among companies with more than 10,440 employees. In terms of geographic distribution, 80% of the companies doing SB at the BoP are located in western and southern Europe. The companies most active in promoting SB belong to the financial sector (20%), the energy sector (17%), and the food and beverage sector (12%). In terms of the social needs addressed, almost 30% of the companies develop SB to provide access to energy and WASH, 25% to reduce local unemployment or promote local entrepreneurship, and 21% to promote financial inclusion of the poor. In developing SB, companies implement different configurations, ranging from forms of outsourcing to internal development models. The study identifies seven main configurations through which companies develop social business, each with distinguishing characteristics with respect to the involvement of the company in management, the resources provided, and the benefits achieved. By performing different analyses on the data collected, the paper provides detailed insights into how European companies develop SB at the BoP.
Keywords: base of the economic pyramid, corporate social responsibility, social business, social enterprise
Procedia PDF Downloads 226
386 Explanation of Sentinel-1 Sigma 0 by Sentinel-2 Products in Terms of Crop Water Stress Monitoring
Authors: Katerina Krizova, Inigo Molina
Abstract:
The ongoing climate change affects various natural processes, resulting in significant changes in human life. Since there is a still-growing human population on a planet with more or less limited resources, agricultural production has become an issue, and a satisfactory amount of food has to be assured. To achieve this, agriculture is being studied in a very wide context, with the main aim of increasing primary production per spatial unit while consuming as few resources as possible. In Europe, the principal issue nowadays comes from significant changes in the spatial and temporal distribution of precipitation. Recent growing seasons have been considerably affected by long drought periods that have led to quantitative as well as qualitative yield losses. To cope with such conditions, new techniques and technologies are being introduced into current practice. However, choosing the right management always requires a set of necessary information about plot properties. Remotely sensed data have gained attention in recent decades since they provide spatial information about the studied surface based on its spectral behavior. A number of space platforms have been launched carrying various types of sensors. Spectral indices based on reflectance in the visible and NIR bands are nowadays quite commonly used to describe crop status. However, this kind of data has a major limitation: cloudiness. The relatively frequent revisits of modern satellites cannot be fully utilized when the information is hidden under clouds. Therefore, microwave remote sensing, which can penetrate the atmosphere, is on the rise today. The scientific literature describes the potential of radar data to estimate key soil (roughness, moisture) and vegetation (LAI, biomass, height) properties.
Although all of these are in high demand for agricultural monitoring, crop moisture content is the most important parameter for agricultural drought monitoring. The idea behind this study was to exploit the combination of SAR (Sentinel-1) and optical (Sentinel-2) data from one provider (ESA) to describe potential crop water stress during the dry cropping season of 2019 at six winter wheat plots in the central Czech Republic. Sentinel-1 and Sentinel-2 images were obtained and processed for the period from January to August. Sentinel-1 imagery carries information about C-band backscatter in two polarisations (VV, VH). Sentinel-2 was used to derive vegetation properties (LAI, FCV, NDWI, and SAVI) in support of the Sentinel-1 results. For each term and plot, summary statistics were computed, including precipitation data and soil moisture content obtained from data loggers. Results are presented as summary layouts of the VV and VH polarisations together with plots describing the other properties. All plots behaved in accordance with the basic SAR backscatter equation. Considering the needs of practical applications, vegetation moisture content may be assessed using SAR data to predict the impact of drought on final product quality and yields, independently of cloud cover over the studied scene.
Keywords: precision agriculture, remote sensing, Sentinel-1, SAR, water content
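As a minimal sketch of the NDWI derivation mentioned above, the snippet below computes the index from Sentinel-2 reflectances. The Gao (NIR/SWIR) formulation with bands B8A and B11 is assumed here rather than confirmed by the abstract, and the per-plot reflectance values are invented examples, not data from the study plots.

```python
def ndwi(nir, swir):
    """NDWI = (NIR - SWIR) / (NIR + SWIR); higher values -> wetter canopy."""
    return (nir - swir) / (nir + swir)

# Hypothetical per-plot mean reflectances (B8A, B11):
plots = {"plot_1": (0.35, 0.20), "plot_2": (0.30, 0.28)}
for name, (b8a, b11) in plots.items():
    print(name, round(ndwi(b8a, b11), 3))
```

Tracking such a per-plot index through the season alongside the VV/VH backscatter is one way the optical product can support interpretation of the SAR signal under cloud-free terms.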
Procedia PDF Downloads 125
385 Understanding Responses of the Bee Community to an Urbanizing Landscape in Bengaluru, South India
Authors: Chethana V. Casiker, Jagadishakumara B., Sunil G. M., Chaithra K., M. Soubadra Devy
Abstract:
A majority of the world's food crops depend on insects for pollination, among which bees are the most dominant taxon. Bees pollinate vegetables, fruits and oilseeds, which are rich in essential micronutrients. Besides being a prerequisite for a nutritionally secure diet, pollination is critical to agrarian economies such as India's, which depend heavily on it for good yield and quality of produce. As cities all over the world expand rapidly, large tracts of green space are being built up. This, along with the heavy use of agricultural chemicals, has reduced floral diversity and shrunk bee habitats. Indeed, pollinator decline is being reported from various parts of the world. Further, the FAO has reported a huge increase in the area of land under cultivation of pollinator-dependent crops. In light of the increasing demand for pollination and disappearing natural habitats, it is critical to understand whether and how urban spaces can support pollinators. To this end, this study investigates the influence of landscape and local habitat quality on bee community dynamics. To capture the dynamics of expanding cityscapes, the study employs a space-for-time substitution, wherein a transect along the gradient of urbanization substitutes for a timeframe of increasing urbanization. This will help in understanding how pollinators would respond to changes induced by increasing intensities of urbanization in the future. Bengaluru, one of the fastest-growing cities of southern India, is an excellent site to study the impacts associated with urbanization. With sites moving from Bengaluru's centre towards its peripheries, this study captures the changes in bee species diversity and richness along a gradient of urbanization. Bees were sampled under different land use types as well as in different types of vegetation, including plantations, croplands, fallow land, parks, lake embankments, and private gardens.
The relationship between bee community metrics and key drivers such as percentage of built-up area, land use practices, and floral resources was examined. Additionally, data collected through questionnaire interviews were used to understand people’s perceptions of, and level of dependence on, pollinators. Our results showed that urban areas are capable of supporting bees. In fact, a greater diversity of bees was recorded in urban sites than in adjoining rural areas. This suggests that bees are able to seek out patchy resources and survive in small fragments of habitat. Bee abundance and species richness correlated positively with floral abundance and richness, indicating the role of vegetation in providing the forage and nesting sites that are crucial to their survival. Bee numbers decreased with increasing built-up area, demonstrating that impervious surfaces could act as deterrents. Findings from this study challenge the popular notion of cities as biodiversity-bare spaces. There is indeed scope for conserving bees in urban landscapes, provided that there is city-scale planning and local initiative. Bee conservation can go hand in hand with efforts such as urban gardening and terrace farming that could help cities urbanize sustainably.
Keywords: bee, landscape ecology, urbanization, urban pollination
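The species diversity and richness measures reported along the urbanization gradient follow standard community-ecology definitions; a minimal sketch with invented counts (not the study's data):

```python
import math

def richness_and_shannon(abundances):
    """Species richness (number of species present) and Shannon diversity
    H' = -sum(p_i * ln p_i), computed from per-species abundance counts."""
    counts = [c for c in abundances if c > 0]
    total = sum(counts)
    richness = len(counts)
    shannon = -sum((c / total) * math.log(c / total) for c in counts)
    return richness, shannon

# Invented per-species bee counts for two hypothetical sites
urban = [12, 8, 5, 3, 1]
rural = [25, 2, 1]
r_u, h_u = richness_and_shannon(urban)
r_r, h_r = richness_and_shannon(rural)
# the richer, more even urban sample yields a higher H'
```

Comparing such per-site indices against covariates like percentage built-up area and floral abundance is the kind of correlation analysis the abstract describes.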
Procedia PDF Downloads 167
384 Determination of Gross Alpha and Gross Beta Activity in Water Samples by iSolo Alpha/Beta Counting System
Authors: Thiwanka Weerakkody, Lakmali Handagiripathira, Poshitha Dabare, Thisari Guruge
Abstract:
The determination of gross alpha and beta activity in water is important in a wide array of environmental studies, and these parameters are considered in international legislation on water quality. The technique is commonly applied as a screening method in radioecology, environmental monitoring, industrial applications, etc. Measuring gross alpha and beta emitters with the iSolo alpha/beta counting system is an adequate nuclear technique for assessing radioactivity levels in natural and waste water samples due to its simplicity and low cost compared with other methods. Twelve water samples (six samples of commercially available bottled drinking water and six samples of industrial waste water) were measured by standard method EPA 900.0 using the gas-less, firmware-based, single-sample, manual iSolo alpha/beta counter (Model: SOLO300G) with a solid-state silicon PIPS detector. Am-241 and Sr-90/Y-90 calibration standards were used to calibrate the detector. The minimum detectable activities are 2.32 mBq/L and 406 mBq/L for alpha and beta activity, respectively. Each 2 L water sample was evaporated (at low heat) to a small volume, transferred evenly (for homogenization) into a 50 mm stainless steel counting planchet, heated by an IR lamp, and a constant-weight residue was obtained. The samples were then counted for gross alpha and beta. Sample density on the planchet area was maintained below 5 mg/cm². Large quantities of solid waste, sludges and waste water are generated every year by various industries. This water can be reused for different applications. Therefore, implementing water treatment plants and measuring water quality parameters in industrial waste water discharge is very important before releasing it into the environment. This waste may contain different types of pollutants, including radioactive substances.
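For context, a screening-level gross activity concentration is conventionally obtained from the net planchet count rate, counting efficiency, and evaporated sample volume; a minimal sketch with invented values (the study's calibration factors and correction terms are not given here):

```python
def activity_bq_per_l(gross_counts, count_time_s, background_cps,
                      efficiency, volume_l):
    """Screening-level activity concentration (Bq/L) from a planchet count:
    net count rate divided by detector efficiency and sample volume.
    Illustrative only; full EPA 900.0 work also applies self-absorption
    and crosstalk corrections."""
    net_cps = gross_counts / count_time_s - background_cps
    return net_cps / (efficiency * volume_l)

# Invented alpha-channel numbers: 360 counts in 3600 s, 0.01 cps background,
# 25% counting efficiency, 2 L evaporated sample
alpha = activity_bq_per_l(360, 3600, 0.01, 0.25, 2.0)
# net rate 0.09 cps over 0.25 x 2 L gives 0.18 Bq/L
```

A value computed this way would then be compared against the screening limits (e.g., 0.5 Bq/L gross alpha for drinking water) discussed below.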
All measured waste water samples had gross alpha and beta activities lower than the maximum tolerance limits for discharge of industrial waste into inland surface waters, i.e., 10⁻⁹ µCi/mL and 10⁻⁸ µCi/mL for gross alpha and beta, respectively (National Environmental Act, No. 47 of 1980), according to the extraordinary gazette of the Democratic Socialist Republic of Sri Lanka of February 2008. The measured water samples were below the recommended radioactivity levels and do not pose any radiological hazard when released into the environment. Drinking water is an essential requirement of life. All the drinking water samples were below the permissible levels of 0.5 Bq/L for gross alpha activity and 1 Bq/L for gross beta activity, values proposed by the World Health Organization in 2011; the water is therefore acceptable for human consumption without any further clarification with respect to its radioactivity. As these screening levels are very low, the individual dose criterion (IDC) of 0.1 mSv y⁻¹ would usually not be exceeded. The IDC is a criterion for evaluating health risks from long-term exposure to radionuclides in drinking water; the recommended level of 0.1 mSv/y expresses a very low level of health risk. This monitoring work will be continued for environmental protection purposes.
Keywords: drinking water, gross alpha, gross beta, waste water
Procedia PDF Downloads 198
383 Gas-Phase Noncovalent Functionalization of Pristine Single-Walled Carbon Nanotubes with 3D Metal(II) Phthalocyanines
Authors: Vladimir A. Basiuk, Laura J. Flores-Sanchez, Victor Meza-Laguna, Jose O. Flores-Flores, Lauro Bucio-Galindo, Elena V. Basiuk
Abstract:
Noncovalent nanohybrid materials combining carbon nanotubes (CNTs) with phthalocyanines (Pcs) are the subject of increasing research effort, with particular emphasis on the design of new heterogeneous catalysts, efficient organic photovoltaic cells, lithium batteries, gas sensors and field-effect transistors, among other possible applications. The possibility of using unsubstituted Pcs for CNT functionalization is very attractive due to their moderate cost and easy commercial availability. Unfortunately, however, the deposition of unsubstituted Pcs onto nanotube sidewalls through traditional liquid-phase protocols turns out to be very problematic due to the extremely poor solubility of Pcs. On the other hand, the unsubstituted free-base H₂Pc phthalocyanine ligand, as well as many of its transition metal complexes, exhibits very high thermal stability and considerable volatility under reduced pressure, which opens the possibility of their physical vapor deposition onto solid surfaces, including nanotube sidewalls. In the present work, we show the possibility of simple, fast and efficient noncovalent functionalization of single-walled carbon nanotubes (SWNTs) with a series of 3d metal(II) phthalocyanines Me(II)Pc, where Me = Co, Ni, Cu, and Zn. The functionalization can be performed in a temperature range of 400-500 °C under moderate vacuum and requires only about 2-3 h. The functionalized materials obtained were characterized by means of Fourier-transform infrared (FTIR), Raman, UV-visible and energy-dispersive X-ray spectroscopy (EDS), scanning and transmission electron microscopy (SEM and TEM, respectively) and thermogravimetric analysis (TGA). TGA suggested a Me(II)Pc weight content of 30%, 17% and 35% for NiPc, CuPc, and ZnPc, respectively (CoPc exhibited anomalous thermal decomposition behavior). The above values are consistent with those estimated from EDS spectra, namely 24-39%, 27-36% and 27-44% for CoPc, CuPc, and ZnPc, respectively.
A strong increase in the intensity of the D band in the Raman spectra of SWNT‒Me(II)Pc hybrids, as compared to that of pristine nanotubes, implies very strong interactions between Pc molecules and SWNT sidewalls. Very high absolute binding energies of 32.46-37.12 kcal/mol and the distribution patterns of the highest occupied and lowest unoccupied molecular orbitals (HOMO and LUMO, respectively), calculated with density functional theory using the Perdew-Burke-Ernzerhof general gradient approximation correlation functional in combination with Grimme’s empirical dispersion correction (PBE-D) and the double numerical basis set (DNP), also suggest that the interactions between Me(II) phthalocyanines and nanotube sidewalls are very strong. The authors thank the National Autonomous University of Mexico (grant DGAPA-IN200516) and the National Council of Science and Technology of Mexico (CONACYT, grant 250655) for financial support. The authors are also grateful to Dr. Natalia Alzate-Carvajal (CCADET of UNAM), Eréndira Martínez (IF of UNAM) and Iván Puente-Lee (Faculty of Chemistry of UNAM) for technical assistance with FTIR, TGA measurements, and TEM imaging, respectively.
Keywords: carbon nanotubes, functionalization, gas-phase, metal(II) phthalocyanines
Procedia PDF Downloads 129
382 International Indigenous Employment Empirical Research: A Community-Based Participatory Research Content Analysis
Authors: Melanie Grier, Adam Murry
Abstract:
Objective: Worldwide, Indigenous Peoples experience underemployment and poverty at disproportionately higher rates than non-Indigenous people, despite similar rates of employment seeking. Euro-colonial conquest and genocidal assimilation policies are implicated as perpetuating poverty, which research consistently links to health and wellbeing disparities. Many of the contributors to poverty, such as inadequate income and lack of access to medical care, can be directly or indirectly linked to underemployment. Calls have been made to prioritize Indigenous perspectives in Industrial-Organizational (I/O) psychology research, yet the literature on Indigenous employment remains scarce. What does exist is disciplinarily diverse, topically scattered, and lacking evidence of community-based participatory research (CBPR) practices, a research approach which prioritizes community leadership, partnership, and betterment and reduces the potential for harm. Due to the harmful colonial legacy of extractive scientific inquiry "on" rather than "with" Indigenous groups, Indigenous leaders and research funding agencies advocate for academic researchers to adopt reparative research methodologies such as CBPR when studying issues pertaining to Indigenous Peoples or individuals. However, the frequency and consistency of CBPR implementation within scholarly discourse are unknown. Therefore, this project’s goal is two-fold: (1) to understand what comprises CBPR in Indigenous research and (2) to determine if CBPR has been historically used in Indigenous employment research. Method: Using a systematic literature review process, sixteen articles about CBPR use with Indigenous groups were selected, and their content was analyzed to identify the key components comprising CBPR usage. An Indigenous CBPR components framework was constructed and subsequently utilized to analyze the Indigenous employment empirical literature.
A similar systematic literature review process was followed to search for relevant empirical articles on Indigenous employment. A total of 120 articles were identified in six global regions: Australia, New Zealand, Canada, America, the Pacific Islands, and Greenland/Norway. Each empirical study was procedurally examined and coded for criteria inclusion using content analysis directives. Results: Analysis revealed that, in total, CBPR elements were used 14% of the time in Indigenous employment research. Most studies (n=69; 58%) neglected to mention using any CBPR components, while just two studies discussed implementing all sixteen (2%). The most significant determinant of overall CBPR use was community member partnership (CP) in the research process. Studies from New Zealand were most likely to use CBPR components, followed by Canada, Australia, and America. While CBPR use did increase slowly over time, meaningful temporal trends were not found. Further, CBPR use did not directly correspond with the total number of topical articles published that year. Conclusions: Community-initiated and engaged research approaches must be better utilized in employment studies involving Indigenous Peoples. Future research efforts must be particularly attentive to community-driven objectives and research protocols, emphasizing specific areas of concern relevant to the field of I/O psychology, such as organizational support, recruitment, and selection.
Keywords: community-based participatory research, content analysis, employment, indigenous research, international, reconciliation, recruitment, reparative research, selection, systematic literature review
Procedia PDF Downloads 74
381 Stent Surface Functionalisation via Plasma Treatment to Promote Fast Endothelialisation
Authors: Irene Carmagnola, Valeria Chiono, Sandra Pacharra, Jochen Salber, Sean McMahon, Chris Lovell, Pooja Basnett, Barbara Lukasiewicz, Ipsita Roy, Xiang Zhang, Gianluca Ciardelli
Abstract:
Thrombosis and restenosis after stenting procedures can be prevented by promoting fast stent wall endothelialisation. It is well known that surface functionalisation with antifouling molecules combined with extracellular matrix proteins is a promising strategy to design biomimetic surfaces able to promote fast endothelialisation. In particular, REDV has gained much attention for its ability to enhance rapid endothelialisation due to its specific affinity for endothelial cells (ECs). In this work, a two-step plasma treatment was performed to polymerize a thin layer of acrylic acid, used subsequently to graft PEGylated-REDV and polyethylene glycol (PEG) at different molar ratios with the aim of selectively promoting endothelial cell adhesion while avoiding platelet activation. PEGylated-REDV was provided by Biomatik and is formed by 6 PEG monomer repetitions (Chempep Inc.), with an NH2 terminal group. PEG polymers were purchased from Chempep Inc. with two different chain lengths: m-PEG6-NH2 (295.4 Da) with 6 monomer repetitions and m-PEG12-NH2 (559.7 Da) with 12 monomer repetitions. Plasma activation was obtained by operating at 50 W power for 5 min at an Ar flow rate of 20 sccm. Pure acrylic acid (99%, AAc) vapors were diluted in Ar (flow = 20 sccm) and polymerized by a pulsed plasma discharge applying a discharge RF power of 200 W and a duty cycle of 10% (on time = 10 ms, off time = 90 ms) for 10 min. After plasma treatment, samples were dipped into a 1-(3-dimethylaminopropyl)-3-ethylcarbodiimide (EDC)/N-hydroxysuccinimide (NHS) solution (ratio 4:1, pH 5.5) for 1 h at 4 °C and subsequently dipped in PEGylated-REDV and PEGylated-REDV:PEG solutions at different molar ratios (100 μg/mL in PBS) for 20 h at room temperature. Surface modification was characterized through physico-chemical analyses and in vitro cell tests. The PEGylated-REDV peptide and PEG were successfully bound to the carboxylic groups that form on the polymer surface after the plasma reaction.
FTIR-ATR spectroscopy, X-ray photoelectron spectroscopy (XPS) and contact angle measurements gave a clear indication of the presence of the grafted molecules. The use of PEG as a spacer increased the wettability of the surface, and the effect was more evident with increasing amounts of PEG. Endothelial cells adhered and spread well on the surfaces functionalized with the REDV sequence. In conclusion, a selective coating able to promote a new endothelial cell layer on polymeric stent surfaces was developed. In particular, a thin AAc film was polymerised on the polymeric surface in order to expose –COOH groups, and PEGylated-REDV and PEG were successfully grafted onto the polymeric substrates. The REDV peptide was shown to encourage cell adhesion, with a consequent expected improvement in the hemocompatibility of these polymeric surfaces in vivo. Acknowledgements: This work was funded by the European Commission 7th Framework Programme under grant agreement number 604251-ReBioStent (Reinforced Bioresorbable Biomaterials for Therapeutic Drug Eluting Stents). The authors thank all the ReBioStent partners for their support in this work.
Keywords: endothelialisation, plasma treatment, stent, surface functionalisation
Procedia PDF Downloads 311
380 A Conceptual Model of Sex Trafficking Dynamics in the Context of Pandemics and Provisioning Systems
Authors: Brian J. Biroscak
Abstract:
In the United States (US), “sex trafficking” is defined at the federal level, in the Trafficking Victims Protection Act of 2000, as encompassing a number of processes, such as recruitment, transportation, and provision of a person for the purpose of a commercial sex act, in which the act is induced by force, fraud, or coercion, or in which the person induced to perform the act has not attained 18 years of age. Accumulating evidence suggests that sex trafficking is exacerbated by social and environmental stressors (e.g., pandemics). Given that “provision” is a key part of the definition, “provisioning systems” may offer a useful lens through which to study sex trafficking dynamics. Provisioning systems are the social systems connecting individuals, small groups, entities, and embedded communities as they seek to satisfy their needs and wants for goods, services, experiences and ideas through value-based exchange in communities. This project presents a conceptual framework for understanding sex trafficking dynamics in the context of the COVID pandemic. The framework is developed as a system dynamics simulation model based on published evidence, social and behavioral science theory, and key informant interviews with stakeholders from the Protection, Prevention, Prosecution, and Partnership sectors in one US state. This “4 P Paradigm” has been described as fundamental to the US government’s anti-trafficking strategy. The present research question is: “How do sex trafficking systems (e.g., supply, demand and price) interact with other provisioning systems (e.g., networks of organizations that help sexually exploited persons) to influence trafficking over time vis-à-vis the COVID pandemic?” Semi-structured interviews with stakeholders (n = 19) were analyzed based on grounded theory and combined for computer simulation. The first step (Problem Definition) was completed by open coding video-recorded interviews, supplemented by a literature review.
The model depicts provision of sex trafficking services for victims and survivors as declining in March 2020, coincidental with COVID, but eventually rebounding. The second modeling step (Dynamic Hypothesis Formulation) was completed by open- and axial coding of interview segments, as well as consulting peer-reviewed literature. Part of the hypothesized explanation for changes over time is that the sex trafficking system behaves somewhat like a commodities market, with each of the other subsystems exhibiting delayed responses but collectively keeping trafficking levels below what they would be otherwise. The next steps (Model Building & Testing) led to a ‘proof of concept’ model that can be used to conduct simulation experiments and test various action ideas, by letting model users step outside the entire system and see it whole. If sex trafficking dynamics unfold as hypothesized, e.g., oscillate post-COVID, then one potential leverage point is to address the lack of information feedback loops between the actual occurrence and consequences of sex trafficking and those who seek to prevent its occurrence, prosecute the traffickers, protect the victims and survivors, and partner with the other anti-trafficking advocates. Implications for researchers, administrators, and other stakeholders are discussed.
Keywords: pandemics, provisioning systems, sex trafficking, system dynamics modeling
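The commodity-market-like hypothesis, a stock that grows toward demand while a delayed countermeasure capacity damps it, can be illustrated by a toy stock-and-flow simulation (hypothetical structure and parameters throughout, not the authors' calibrated model):

```python
def simulate(steps=120, dt=1.0):
    """Minimal Euler-integrated stock-and-flow sketch: a 'trafficking level'
    stock grows toward demand but is damped by a countermeasure capacity
    that tracks the stock with a first-order delay. All coefficients are
    invented for illustration."""
    level, capacity = 50.0, 10.0
    demand, response_delay = 100.0, 12.0
    history = []
    for _ in range(steps):
        inflow = 0.1 * (demand - level)           # growth toward demand
        suppression = 0.05 * capacity             # countermeasure strength
        level += dt * (inflow - suppression * level / 100.0)
        # capacity adjusts toward the observed level with a delay
        capacity += dt * (level - capacity) / response_delay
        history.append(level)
    return history

traj = simulate()
# the stock settles below demand: the delayed feedback keeps the level
# lower than it would be without the countermeasure loop
```

Shortening the response delay in such a sketch is one way to explore the information-feedback leverage point mentioned above.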
Procedia PDF Downloads 79
379 Predictive Analytics for Theory Building
Authors: Ho-Won Jung, Donghun Lee, Hyung-Jin Kim
Abstract:
Predictive analytics (data analysis) uses a subset of measurements (the features, predictors, or independent variables) to predict another measurement (the outcome, target, or dependent variable) for a single person or unit. It applies empirical methods from statistics, operations research, and machine learning to predict future or otherwise unknown events or outcomes for a single person or unit, based on patterns in data. Most analyses of metabolic syndrome are not predictive analytics but statistical explanatory studies that build a proposed model (theory building) and then validate hypothesized metabolic syndrome predictors (theory testing). A proposed theoretical model is formed from causal hypotheses that specify how and why certain empirical phenomena occur. Predictive analytics and explanatory modeling have their own territories in analysis. However, predictive analytics can perform vital roles in explanatory studies, i.e., scientific activities such as theory building, theory testing, and relevance assessment. In this context, this study demonstrates how to use predictive analytics to support theory building (i.e., hypothesis generation). For this purpose, the study utilized a big-data predictive analytics platform™ based on a co-occurrence graph. The co-occurrence graph is depicted with nodes (e.g., items in a basket) and arcs (direct connections between two nodes), where items in a basket are fully connected. A cluster is a collection of fully connected items, where a specific group of items has co-occurred in several rows of a data set. Clusters can be ranked using importance metrics such as node size (number of items), frequency, and surprise (observed frequency vs. expected frequency), among others. The size of a graph can be represented by the numbers of nodes and arcs. Since the size of a co-occurrence graph does not depend directly on the number of observations (transactions), huge amounts of transactions can be represented and processed efficiently.
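The co-occurrence ranking described above can be sketched in a few lines; the "surprise" metric, observed pair frequency versus the frequency expected under independence, is computed for item pairs (illustrative item names; the commercial platform's actual algorithms are not public):

```python
from itertools import combinations
from collections import Counter

def cooccurrence_surprise(transactions):
    """Count item and pair frequencies across baskets, then score each pair
    by 'surprise': observed co-occurrence count divided by the count
    expected if the two items appeared independently."""
    n = len(transactions)
    item_counts = Counter()
    pair_counts = Counter()
    for basket in transactions:
        items = sorted(set(basket))         # each basket is fully connected
        item_counts.update(items)
        pair_counts.update(combinations(items, 2))
    scores = {}
    for (a, b), observed in pair_counts.items():
        expected = item_counts[a] * item_counts[b] / n  # independence baseline
        scores[(a, b)] = observed / expected
    return scores

# Invented predictor "baskets" (one per observation row)
baskets = [["smoker", "hypertension"], ["smoker", "hypertension"],
           ["smoker", "exercise"], ["exercise"]]
scores = cooccurrence_surprise(baskets)
# pairs scoring above 1 co-occur more often than chance would predict
```

Pairs with high surprise are the kind of candidate association rules (potential hypotheses) that would later be subjected to statistical testing.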
For the demonstration, a total of 13,254 metabolic syndrome training observations were fed into the analytics platform to generate rules (potential hypotheses). Each observation includes 31 predictors, associated with, for example, sociodemographics, habits, and activities. Some, such as cancer examination, house type, and vaccination, were intentionally included to gain predictive analytics insight into variable selection. The platform automatically generates plausible hypotheses (rules) without statistical modeling. The rules were then validated with an external testing dataset of 4,090 observations. The results, as a kind of inductive reasoning, show potential hypotheses extracted as a set of association rules. Most statistical models generate just one estimated equation. By contrast, a set of rules (many estimated equations, from a statistical perspective) in this study may imply heterogeneity in the population (i.e., different subpopulations with unique features are aggregated). The next step of theory development, theory testing, statistically tests whether a proposed theoretical model is a plausible explanation of the phenomenon of interest. If the hypotheses generated are tested statistically with several thousand observations, most of the variables will become significant as the p-values approach zero. Thus, theory validation needs statistical methods that utilize a subset of observations, such as bootstrap resampling with an appropriate sample size.
Keywords: explanatory modeling, metabolic syndrome, predictive analytics, theory building
Procedia PDF Downloads 276
378 Sublethal Effects of Industrial Effluents on Fish Fingerlings (Clarias gariepinus) from Ologe Lagoon Environs, Lagos, Nigeria
Authors: Akintade O. Adeboyejo, Edwin O. Clarke, Oluwatoyin Aderinola
Abstract:
The present study concerns the sub-lethal toxicity of industrial effluents (IE) from the environs of Ologe Lagoon, Lagos, Nigeria on fingerlings of the African catfish Clarias gariepinus. The fish were cultured in varying concentrations of industrial effluent: 0% (control), 5%, 15%, 25%, and 35%. Trials were carried out in triplicate for twelve (12) weeks. The culture system was a static renewable bioassay carried out in the fisheries laboratory of Lagos State University, Ojo, Lagos. Weekly physico-chemical parameters, temperature (°C), pH, conductivity (ppm) and dissolved oxygen (DO, in mg/l), were measured in each treatment tank. Length (cm) and weight (g) data were obtained weekly and used to calculate various growth parameters: mean weight gain (MWG), percentage weight gain (PWG), daily weight gain (DWG), specific growth rate (SGR) and survival. Haematological parameters (packed cell volume (PCV), red blood cells (RBC), white blood cells (WBC), neutrophils, lymphocytes, etc.) and histological alterations were measured after 12 weeks. The physico-chemical measurements showed that pH ranged from 7.82±0.25 to 8.07±0.02 and DO from 1.92±0.66 to 4.43±1.24 mg/l. Conductivity values increased with increasing concentration of IE, while temperature remained stable, with mean values between 26.08±2.14 and 26.38±2.28 °C. DO showed significant differences at P<0.05. There was a progressive increase in the length and weight of fish during the culture period. Fish in the control had the highest increase in both weight and length, while fish at 35% had the least. MWG ranged from 16.59 to 35.96, DWG from 0.3 to 0.48, SGR from 1.0 to 1.86, and survival was 100%. Haematological results showed that C. gariepinus had PCV ranging from 13.0±1.7 to 27.7±0.6, RBC from 4.7±0.6 to 9.1±0.1, and neutrophils from 26.7±4.6 to 61.0±1.0, among others. The highest values of these parameters were obtained in the control and the lowest at 35%.
The reverse effect was observed for WBC and lymphocytes. This study has shown that effluents may affect the health status of the test organism and impair vital processes if exposure continues for a long period of time. Histological examination revealed several lesions in the gills and livers. The histopathology of the gills in the control tanks showed normal tissue with no visible lesions, but at higher concentrations there were lifting of the epithelium, swollen lamellae, gill arch infiltration, necrosis and gill arch destruction. In the liver, the control (0%) showed normal liver cells, while at higher toxicant levels there were vacuolation, destruction of the hepatic parenchyma, tissue becoming eosinophilic (i.e. tending towards carcinogenicity) and severe disruption of the hepatic cord architecture. The study has shown that industrial effluents from the study area may affect fish health status and impair vital processes if exposure continues for a long period of time, even at lower (sublethal) concentrations.
Keywords: sublethal toxicity, industrial effluents, Clarias gariepinus, Ologe Lagoon
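The growth indices reported above (MWG, PWG, DWG, SGR) follow standard aquaculture definitions; a minimal sketch with invented weights (not the study's measurements):

```python
import math

def growth_metrics(w_initial, w_final, days):
    """Standard aquaculture growth indices from mean initial and final
    wet weights (g) over the culture period (days)."""
    mwg = w_final - w_initial                      # mean weight gain (g)
    pwg = 100.0 * mwg / w_initial                  # percentage weight gain
    dwg = mwg / days                               # daily weight gain (g/day)
    sgr = 100.0 * (math.log(w_final) - math.log(w_initial)) / days  # %/day
    return mwg, pwg, dwg, sgr

# Invented weights for a 12-week (84-day) trial
mwg, pwg, dwg, sgr = growth_metrics(8.0, 40.0, 84)
```

With these invented inputs the DWG and SGR land in the same general range as the values reported in the abstract, which is how such indices are compared across treatment concentrations.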
Procedia PDF Downloads 610
377 Electromagnetic Simulation Based on Drift and Diffusion Currents for Real-Time Systems
Authors: Alexander Norbach
Abstract:
The script described in this paper provides an advanced simulation environment using electronic systems (microcontrollers, operational amplifiers, and FPGAs). The simulation may be used for all dynamic systems, including those with diffusion and ionisation behaviour. With an additionally required observer structure, the system performs parallel real-time simulation based on a diffusion model, with a state-space representation for the other dynamics. The proposed model may be used for electrodynamic effects, including ionising effects and eddy current distributions. With the script and the proposed method, it is possible to calculate the spatial distribution of the electromagnetic fields in real time. The spatial temperature distribution may also be used for further purposes. With this system, uncertainties, unknown initial states and disturbances may be determined. This provides more precise estimates of the system states and, additionally, estimates of the ionising disturbances that occur due to radiation effects. The results have shown that a system can also be developed and adopted specifically for space systems, with real-time calculation of the radiation effects alone. Electronic systems can be damaged by impacts with charged-particle flux in a space or radiation environment. In order to be able to react to these processes, the presence of ionising radiation and the dose must be calculated within a short time. All available sensors shall be used to observe the spatial distributions. From the measured values and the known locations of the sensors, the entire distribution can be calculated retroactively or more accurately. From the form and type of ionisation and its direct effect on the system, possible preventive processes can be activated, up to shutdown.
The results show that faster and higher-quality simulations can be performed independently of the kind of system, including space systems and radiation environments. The paper additionally gives an overview of the diffusion effects and their mechanisms. For the modelling and derivation of equations, the extended current equation is used. The quantity K represents the proposed charge-density drift vector. The extended diffusion equation was derived; it shows a quantising character and follows a law similar to the Klein-Gordon equation. These kinds of PDEs (partial differential equations) are analytically solvable given an initial distribution (Cauchy problem) and boundary conditions (Dirichlet boundary condition). For a simpler structure, a transfer function for the B- and E-fields was calculated analytically. With the known discretised responses g₁(k·Ts) and g₂(k·Ts), the electric current or voltage may be calculated using a convolution; g₁ is the direct function and g₂ is a recursive function. The analytical results are good enough for the calculation of fields with diffusion effects. Within the scope of this work, a model for the consideration of the electromagnetic diffusion effects of arbitrary current waveforms has been developed. The advantage of the proposed calculation of diffusion is its real-time capability, which is not really possible with the FEM programs available today. It makes sense, in the further course of research, to use these methods and to investigate them thoroughly.
Keywords: advanced observer, electrodynamics, systems, diffusion, partial differential equations, solver
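The convolution step mentioned, a direct response g₁ plus a recursive response g₂ applied to the sampled input, can be sketched as a discrete difference equation (placeholder coefficients; the field responses derived in the paper are not reproduced here):

```python
def response(u, g1, g2):
    """Discrete response via convolution:
    y[n] = sum_k g1[k]*u[n-k] + sum_{k>=1} g2[k]*y[n-k].
    g1 acts as the direct (FIR) part and g2 as the recursive part,
    mirroring the g1/g2 split described in the text."""
    y = []
    for n in range(len(u)):
        acc = sum(g1[k] * u[n - k] for k in range(len(g1)) if n - k >= 0)
        acc += sum(g2[k] * y[n - k] for k in range(1, len(g2)) if n - k >= 0)
        y.append(acc)
    return y

# Unit impulse through a first-order lag: y[n] = 0.5*u[n] + 0.8*y[n-1]
out = response([1.0, 0.0, 0.0, 0.0], [0.5], [0.0, 0.8])
# -> [0.5, 0.4, 0.32, 0.256]
```

Because each output sample needs only a short sum over past samples, this form is cheap enough for the real-time evaluation the abstract emphasizes, in contrast to a full FEM solve per time step.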
Procedia PDF Downloads 130
376 Natural Dyes: A Global Perspective on Commercial Solutions and Industry Players
Authors: Laura Seppälä, Ana Nuutinen
Abstract:
Environmental concerns are increasing interest in the potential uses of natural dyes. Natural dyes are a safer and more environmentally friendly option than synthetic dyes. However, one must also be cautious with natural dyes because, for example, some dyestuffs, such as certain plants and mushrooms, as well as some mordants, are poisonous. By natural dyes we mean dyes that are derived from plants, fungi, bark, lichens, algae, insects, and minerals. Different plant parts, such as stems, leaves, flowers, roots, bark, berries, fruits, and cones, can be utilized for textile dyeing and printing, pigment manufacture, and other processes, depending on the season. They may be utilized to produce distinctive colour tones that are challenging to achieve with synthetic dyes. This adds value to textiles and makes them stand out. Synthetic dyes, developed in the middle of the 19th century, quickly replaced natural dyes, but natural dyes remained, until recently, mainly the preserve of crafters. This research examines the commercial solutions for natural dyes in many parts of the world, including Europe, the United States, South America, Africa, Asia, New Zealand, and Australia. The study aims to determine the commercial status of natural dyes. Each continent has its own traditions and specific dyestuffs. The availability of natural dyes can vary depending on several aspects, including plant species, temperature, and harvesting techniques, which poses a challenge to the work of designers and crafters. While certain plants may only provide dyes during specific seasons, others may do so continuously. To find the ideal time to collect natural dyes, it is critical to research the various plant species and their harvesting techniques. Furthermore, to guarantee the quality and colour of the dye, plant material must be handled and processed properly. The research was conducted via an internet search, and the results were searched systematically for commercial stakeholders in the field.
The research question examined commercial players in the field of natural dyes. This qualitative case study interpreted the data using thematic analysis. Each webpage was screenshotted and analyzed in relation to the research question. Online content analysis means systematically coding and analyzing qualitative data. The most evident result was the interest in natural dyes in different parts of the world. There are clothing collections dyed with natural dyes, dyestuff stores, and courses for natural dyeing. This article presents the designers who work with natural dyes and the actors who are involved with the natural dye industry. Several websites emphasized the safety and environmental benefits of natural dyes. Many of them included eye-catching images of naturally dyed textiles, and the colours of such dyes are thought to be attractive since they are beautiful and natural hues. The search did not find large-scale industrial solutions for natural dyes, but there were several instances of dyeing with natural dyes. Understanding the players, designers, and stakeholders in the natural dye business is the purpose of this article. Comprehension of the current state of the art illustrates the direction that the natural dye business is currently taking.
Keywords: commercial solutions, environmental issues, key stakeholders, natural dyes, sustainability, textile dyeing
Procedia PDF Downloads 65
375 Prospective Analytical Cohort Study to Investigate a Physically Active Classroom-Based Wellness Programme to Propose a Mechanism to Meet Societal Need for Increased Physical Activity Participation and Positive Subjective Well-Being amongst Adolescents
Authors: Aileen O'loughlin
Abstract:
‘Is Everybody Going WeLL?’ (IEGW?) is a 33-hour classroom-based initiative created to a) explore values and how they impact well-being, b) encourage adolescents to connect with their community, and c) provide them with the education to encourage and maintain a lifetime love of physical activity (PA) to ensure beneficial effects on their personal well-being. This initiative is also aimed at achieving sustainable education and aligning with the United Nations’ Sustainable Development Goals 3 and 4. The classroom is a unique setting in which adolescents’ PA participation can be positively influenced through fun PA policies and initiatives. The primary purpose of this research is to evaluate a range of psychosocial and PA outcomes following the 33-hour education programme. This research examined the impact of a PA and well-being programme consisting of either a 60-minute or an 80-minute class, depending on the timetable structure of the school, delivered once a week. Participant outcomes were measured using validated questionnaires regarding self-esteem, mental health literacy (MHL), and daily physical activity participation. These questionnaires were administered at three separate time points: baseline, mid-intervention, and post-intervention. Semi-structured interviews with participating teachers regarding adherence and participants’ attitudes were completed post-intervention. These teachers were randomly selected for interview. This prospective analytical cohort study included 235 post-primary school students between 11 and 13 years of age (100 boys and 135 girls) from five public Irish post-primary schools. Three schools received the intervention only (a 33-hour interactive well-being learning unit), one school formed a control group, and one school had participants in both the intervention and control groups. Participating schools were a convenience sample. Data presented outlines baseline data collected pre-participation (0 hours completed).
N = 18 junior certificate students returned all three questionnaires fully completed, for a 56.3% return rate from one school, Intervention School #3. 94.4% (n = 17) of participants enjoy taking part in some form of PA; however, only 5.5% (n = 1) of the participants took part in PA every day of the previous 7 days, and only 5.5% (n = 1) of those surveyed participated in PA every day during a normal week. 55% (n = 11) had a low level of self-esteem, 50% (n = 9) fell within the normal range of self-esteem, and none of those surveyed demonstrated a high level of self-esteem. Female participants’ mean score was higher than their male counterparts’ when MHL was compared. Correlation analyses revealed an association between self-esteem and happiness (r = 0.549). Positive correlations were also revealed between MHL and happiness, MHL and self-esteem, and self-esteem and 60+ minutes of PA completed daily. IEGW? is a classroom-based initiative with simple methods that are easy to implement and replicate and financially viable for both public and private schools. Its unique dataset will allow for the evaluation of a societal approach to the psycho-social well-being and PA participation levels of adolescents. This research is a work in progress, and future work is required to learn how to best support the implementation of ‘Is Everybody Going WeLL?’ as part of the school curriculum.
Keywords: education, life-long learning, physical activity, psychosocial well-being
Procedia PDF Downloads 115
374 On the Utility of Bidirectional Transformers in Gene Expression-Based Classification
Authors: Babak Forouraghi
Abstract:
A genetic circuit is a collection of interacting genes and proteins that enables individual cells to implement and perform vital biological functions such as cell division, growth, death, and signaling. In cell engineering, synthetic gene circuits are engineered networks of genes specifically designed to implement functionalities that are not evolved by nature. These engineered networks enable scientists to tackle complex problems such as engineering cells to produce therapeutics within the patient's body, altering T cells to target cancer-related antigens for treatment, improving antibody production using engineered cells, tissue engineering, and the production of genetically modified plants and livestock. Construction of computational models to realize genetic circuits is an especially challenging task since it requires the discovery of the flow of genetic information in complex biological systems. Building synthetic biological models is also a time-consuming process with relatively low prediction accuracy for highly complex genetic circuits. The primary goal of this study was to investigate the utility of a pre-trained bidirectional encoder transformer that can accurately predict gene expressions in genetic circuit designs. The main reason behind using transformers is their innate ability (the attention mechanism) to take into account the semantic context present in long DNA chains, which is heavily dependent on the spatial representation of their constituent genes. Previous approaches to gene circuit design, such as CNN and RNN architectures, are unable to capture semantic dependencies in long contexts, as required in most real-world applications of synthetic biology. For instance, RNN models (LSTM, GRU), although able to learn long-term dependencies, greatly suffer from vanishing gradients and low efficiency when they sequentially process past states and compress contextual information into a bottleneck with long input sequences.
In other words, these architectures are not equipped with the necessary attention mechanisms to follow a long chain of genes with thousands of tokens. To address the above-mentioned limitations, a transformer model was built in this work as a variation of the existing DNA Bidirectional Encoder Representations from Transformers (DNABERT) model. It is shown that the proposed transformer is capable of capturing contextual information from long input sequences with an attention mechanism. In previous works on genetic circuit design, traditional approaches to classification and regression, such as Random Forest, Support Vector Machine, and Artificial Neural Networks, were able to achieve reasonably high R² accuracy levels of 0.95 to 0.97. However, the transformer model utilized in this work, with its attention-based mechanism, was able to achieve a perfect accuracy level of 100%. Further, it is demonstrated that the efficiency of the transformer-based gene expression classifier is not dependent on the presence of large amounts of training examples, which may be difficult to compile in many real-world gene circuit designs.
Keywords: machine learning, classification and regression, gene circuit design, bidirectional transformers
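The abstract omits implementation details, but the DNABERT family of encoders conventionally represents DNA as overlapping k-mer tokens before self-attention is applied. A minimal sketch of that tokenization step is given below; the function name and the choice of k = 6 are illustrative assumptions, not values taken from the paper.

```python
def kmer_tokenize(sequence: str, k: int = 6) -> list[str]:
    """Split a DNA string into overlapping k-mer tokens, the input
    representation used by DNABERT-style encoders."""
    if len(sequence) < k:
        return []
    return [sequence[i:i + k] for i in range(len(sequence) - k + 1)]

# A 9-base sequence with k = 6 yields 4 overlapping tokens.
tokens = kmer_tokenize("ATCGATTGC", k=6)
```

Each token is then mapped to a vocabulary id, and self-attention lets every k-mer attend to every other one, which is how long-range context along the gene chain can be captured.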
Procedia PDF Downloads 61
373 Ascribing Identities and Othering: A Multimodal Discourse Analysis of a BBC Documentary on YouTube
Authors: Shomaila Sadaf, Margarethe Olbertz-Siitonen
Abstract:
This study looks at identity and othering in discourses around sensitive issues in social media. More specifically, the study explores the multimodal resources and narratives through which the other is formed and identities are ascribed in online spaces. As an integral part of social life, media spaces have become an important site for negotiating and ascribing identities. In line with recent research, identity is seen here as constructions of belonging that go hand in hand with processes of in- and out-group formation that in some cases may lead to othering. Previous findings underline that identities are neither fixed nor limited but rather contextual, intersectional, and interactively achieved. The goal of this study is to explore and develop an understanding of how people co-construct the ‘other’ and ascribe certain identities in social media using multiple modes. At the beginning of 2018, the British government decided to include relationships, sexual orientation, and sex education in the curriculum of state-funded primary schools. However, the addition of information related to LGBTQ+ to the curriculum has been met with resistance, particularly from religious parents. For example, the British Muslim community has voiced its concerns and protested against the actions taken by the British government. YouTube has been used by news companies to air video stories covering the protest and the narratives of the protestors, along with the position of school officials. The analysis centers on a YouTube video dealing with the protest of a local group of parents against the addition of information about LGBTQ+ to the curriculum in the UK. The video was posted in 2019. By the time of this study, the video had approximately 169,000 views and around 6,000 comments. In deference to the multimodal nature of YouTube videos, this study utilizes multimodal discourse analysis as the method of choice. The study is still ongoing and therefore has not yet yielded any final results.
However, the initial analysis indicates a hierarchy of ascribing identities in the data. Drawing on multimodal resources, the media works with social categorizations throughout the documentary, presenting and classifying the involved conflicting parties in the light of their own visible and audible identifications. The protesters can be seen to construct a strong group identity as Muslim parents (e.g., clothing and references to shared values). While the video appears to be designed as a documentary that puts forward facts, the media does not seem to succeed in taking a neutral position consistently throughout the video. At times, the use of images, sounds, and language contributes to the formation of “us” vs. “them”, where the audience is implicitly encouraged to pick a side. Only towards the end of the documentary is this problematic opposition addressed and critically reflected upon, through an expert interview that is, interestingly, visually located outside the previously presented ‘battlefield’. This study contributes to the growing understanding of the discursive construction of the ‘other’ in social media. Videos available online are a rich source for examining how different social actors ascribe multiple identities and form the other.
Keywords: identity, multimodal discourse analysis, othering, youtube
Procedia PDF Downloads 113
372 Investigation of Processing Conditions on Rheological Features of Emulsion Gels and Oleogels Stabilized by Biopolymers
Authors: M. Sarraf, J. E. Moros, M. C. Sánchez
Abstract:
Oleogels are self-standing systems able to trap edible liquid oil in a three-dimensional network, and they help to reduce fat content through the crystallization of oleogelators. There are different ways to achieve oleogelation and oil structuring, including direct dispersion, structured biphasic systems, oil sorption, and the indirect (emulsion-template) method. The selection of processing conditions, as well as the composition of the oleogel, is essential to obtain a stable oleogel with characteristics suitable for its purpose. In this sense, polysaccharides are among the ingredients widely used in food products to produce oleogels and emulsions. Basil seed gum (BSG), obtained from Ocimum basilicum, is a novel native polysaccharide for the food industry with high viscosity and pseudoplastic behavior owing to its high molecular weight. Also, proteins can stabilize oil in water due to the presence of amino and carboxyl moieties that result in surface activity. Whey proteins are widely used in the food industry because they are readily available, cheap ingredients with nutritional and functional characteristics such as emulsifying and gelling ability, thickening, and water-binding capacity. In general, the interaction of proteins and polysaccharides has a significant effect on food structures and their stability, such as the texture of dairy products, by controlling the interactions in macromolecular systems. Using edible oleogels for oil structuring helps in the targeted delivery of a component trapped in the structural network. Therefore, the development of an efficient oleogel is important for the food industry. A complete understanding of the key factors that affect the formation and stability of the emulsion, such as the oil-phase ratio, processing conditions, and concentrations of biopolymers, can provide crucial information for the production of a suitable oleogel.
In this research, the effects of oil concentration and of the pressure used in the manufacture of the emulsion prior to obtaining the oleogel have been evaluated through the analysis of droplet size and the rheological properties of the obtained emulsions and oleogels. The results show that emulsions prepared in the high-pressure homogenizer (HPH) at higher pressure values have smaller droplet sizes and higher uniformity in the size distribution curve. On the other hand, in relation to the rheological characteristics of the emulsions and oleogels obtained, the predominantly elastic character of the systems must be noted, as they present storage modulus values higher than loss modulus values, also showing an important plateau zone, typical of structured systems. In the same way, when steady-state viscous flow tests on both emulsions and oleogels are analyzed, the result is that, once again, the pressure used in the homogenizer is an important factor for obtaining emulsions with adequate droplet size and the subsequent oleogel. Thus, various routes for trapping oil inside a biopolymer matrix with adjustable mechanical properties could be applied to create the three-dimensional network for oil absorption and oleogel formation.
Keywords: basil seed gum, particle size, viscoelastic properties, whey protein
Procedia PDF Downloads 66
371 Understanding the Perceived Barriers and Facilitators to Exercise Participation in the Workplace
Authors: Jayden R. Hunter, Brett A. Gordon, Stephen R. Bird, Amanda C. Benson
Abstract:
The World Health Organisation recognises the workplace as an important setting for exercise promotion, with potential benefits including improved employee health and fitness and reduced worker absenteeism and presenteeism. Despite these potential benefits to both employee and employer, there is a lack of evidence supporting the long-term effectiveness of workplace exercise programs. There is, therefore, a need for better-informed programs that cater to employee exercise preferences. Specifically, workplace exercise programs should address any time, motivation, internal, and external barriers to participation reported by sub-groups of employees. This study sought to compare exercise participation with the perceived barriers and facilitators to workplace exercise engagement among university employees. This information is needed to design and implement wider-reaching programs aiming to maximise long-term employee exercise adherence and the subsequent health, fitness, and productivity benefits. An online survey was advertised at an Australian university with the potential to reach 3,104 full-time employees. Along with exercise participation (International Physical Activity Questionnaire) and behaviour (stage of behaviour change in relation to physical activity questionnaire), perceived barriers (Corporate Exercise Barriers Scale) and facilitators to workplace exercise participation were identified. The survey response rate was 8.1% (252 full-time employees; 95% white-collar; 60% female; 79.4% aged 30–59 years; 57% professional and 38% academic). Most employees reported meeting (43.7%) or exceeding (42.9%) exercise guidelines over the previous week (i.e. ⩾30 min of moderate-intensity exercise on most days or ⩾25 min of vigorous-intensity exercise on at least three days per week).
Reported exercise behaviour over the previous six months showed that 64.7% of employees were in maintenance, 8.3% were in action, 10.9% were in preparation, 12.4% were in contemplation, and 3.8% were in the pre-contemplation stage of change. Perceived barriers towards workplace exercise participation were significantly higher in employees not attaining weekly exercise guidelines compared to employees meeting or exceeding guidelines, including a lack of time or reduced motivation (p < 0.001; partial eta squared = 0.24 (large effect)), exercise attitude (p < 0.05; partial eta squared = 0.04 (small effect)), internal (p < 0.01; partial eta squared = 0.10 (moderate effect)) and external (p < 0.01; partial eta squared = 0.06 (moderate effect)) barriers. The most frequently reported exercise facilitators were personal training (particularly for insufficiently active employees; 33%) and group exercise classes (20%). The most frequently cited preferred modes of exercise were walking (70%), swimming (50%), gym (48%), and cycling (45%). In conclusion, providing additional means of support such as individualised gym, swimming and cycling programs with personal supervision and guidance may be particularly useful for employees not meeting recommended moderate-vigorous volumes of exercise, to help overcome reported exercise barriers in order to improve participation, health, and fitness. While individual biopsychosocial factors should be considered when making recommendations for interventions, the specific barriers and facilitators to workplace exercise participation identified by this study can inform the development of workplace exercise programs aiming to broaden employee engagement and promote greater ongoing exercise adherence. 
This is especially important for the uptake of less active employees, who perceive greater barriers to workplace exercise participation than their more active colleagues.
Keywords: exercise barriers, exercise facilitators, physical activity, workplace health
Procedia PDF Downloads 146
370 Supercritical Water Gasification of Organic Wastes for Hydrogen Production and Waste Valorization
Authors: Laura Alvarez-Alonso, Francisco Garcia-Carro, Jorge Loredo
Abstract:
Population growth and industrial development imply an increase in energy demands and in the problems caused by greenhouse gas emissions, which has inspired the search for clean sources of energy. Hydrogen (H₂) is expected to play a key role in the world’s energy future by replacing fossil fuels. The properties of H₂ make it a green fuel that does not generate pollutants and supplies sufficient energy for power generation, transportation, and other applications. Supercritical Water Gasification (SCWG) represents an attractive alternative for the recovery of energy from wastes. SCWG allows the conversion of a wide range of raw materials into a fuel gas with a high content of hydrogen and light hydrocarbons through their treatment at conditions above the critical point of water (temperature of 374°C and pressure of 221 bar). Methane, used as a transport fuel, is another important gasification product. The range of gases and energy forms that can be produced, depending on the kind of material gasified and the type of technology used to process it, shows the flexibility of SCWG. This feature allows it to be integrated with several industrial processes, as well as with power generation systems or waste-to-energy production systems. The final aim of this work is to study which conditions and equipment are the most efficient and advantageous for exploring the possibilities of obtaining streams rich in H₂ from oily wastes, which represent a major problem for both the environment and human health throughout the world. In this paper, the relative complexity of the technology needed for feasible gasification process cycles is discussed, with particular reference to the different feedstocks that can be used as raw material, the different reactors, and the energy recovery systems.
For this purpose, a review of the current status of SCWG technologies has been carried out by means of different classifications based on key features such as the feed treated or the type of reactor and other apparatus. This analysis makes it possible to improve the efficiency of the technology through the study of model calculations and their comparison with experimental data, the establishment of kinetics for chemical reactions, the analysis of how the main reaction parameters affect the yield and composition of products, and the determination of the most common problems and risks that can occur. The results of this work show that SCWG is a promising method for the production of both hydrogen and methane. The most significant design choices are the reactor type and process cycle, which can be conveniently adopted according to waste characteristics. Regarding the future of the technology, the design of SCWG plants is still to be optimized to include energy recovery systems in order to reduce the costs of equipment and operation derived from the high temperature and pressure conditions that are necessary to bring water to the supercritical state, as well as to find solutions to prevent corrosion and clogging of reactor components.
Keywords: hydrogen production, organic wastes, supercritical water gasification, system integration, waste-to-energy
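As a small illustration of the operating window cited above, the supercritical condition can be expressed as a simple check against water's critical point; the function name and the strict inequalities are illustrative assumptions, not part of the authors' methodology.

```python
# Critical point of water, as cited in the abstract.
T_CRIT_C = 374.0      # temperature, degrees Celsius
P_CRIT_BAR = 221.0    # pressure, bar

def is_supercritical_water(temp_c: float, pressure_bar: float) -> bool:
    """Return True when both temperature and pressure exceed the
    critical point of water, i.e. the regime required for SCWG."""
    return temp_c > T_CRIT_C and pressure_bar > P_CRIT_BAR

# Typical SCWG operating conditions vs. ordinary subcritical steam.
scwg_ok = is_supercritical_water(600.0, 250.0)
steam = is_supercritical_water(300.0, 85.0)
```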
Procedia PDF Downloads 147
369 High Speed Motion Tracking with Magnetometer in Nonuniform Magnetic Field
Authors: Jeronimo Cox, Tomonari Furukawa
Abstract:
Magnetometers have become more popular in inertial measurement units (IMUs) for their ability to correct estimations using the earth's magnetic field. Accelerometer- and gyroscope-based packages fail due to dead-reckoning errors accumulated over time. Localization in robotic applications with magnetometer-inclusive IMUs has become popular as a way to track the odometry of slower-speed robots. With high-speed motions, the accumulated error increases over smaller periods of time, making them difficult to track with an IMU. Tracking a high-speed motion is especially difficult with limited observability. Visual obstruction of motion leaves motion-tracking cameras unusable. When motions are too dynamic for estimation techniques reliant on the observability of the gravity vector, the use of magnetometers is further justified. As available magnetometer calibration methods are limited by the assumption that background magnetic fields are uniform, estimation in nonuniform magnetic fields is problematic. Hard iron distortion is a distortion of the magnetic field by other objects that produce magnetic fields. This kind of distortion is often observed as an offset of the center of the data points from the origin when a magnetometer is rotated. The magnitude of hard iron distortion depends on proximity to distortion sources. Soft iron distortion is more related to the scaling of the axes of the magnetometer sensors. Hard iron distortion is the larger contributor to the error of attitude estimation with magnetometers. Indoor environments or spaces inside ferrite-based structures, such as building reinforcements or a vehicle, often cause distortions that vary with proximity. As positions correlate to areas of distortion, methods of magnetometer localization include producing spatial maps of the magnetic field and collecting distortion signatures to better aid location tracking.
The goal of this paper is to compare magnetometer methods that do not need pre-produced magnetic field maps, since mapping the magnetic field in some spaces can be costly and inefficient. Dynamic measurement fusion is used to track the motion of a multi-link system with IMUs. Conventional calibration by collecting data while rotating at a static point, real-time estimation of calibration parameters at each time step, and the use of two magnetometers for determining local hard iron distortion are compared to confirm the robustness and accuracy of each technique. With opposite-facing magnetometers, hard iron distortion can be accounted for regardless of position, rather than assuming that hard iron distortion is constant under positional change. The motion measured is a repeatable planar motion of a two-link system connected by revolute joints. The links are translated on a moving base to impulse rotation of the links. The joints are equipped with absolute encoders, and the motion is recorded with cameras, to enable ground-truth comparison with each of the magnetometer methods. While the two-magnetometer method accounts for local hard iron distortion, the method fails where the magnetic field direction in space is inconsistent.
Keywords: motion tracking, sensor fusion, magnetometer, state estimation
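The conventional static calibration mentioned above (rotating the sensor at one point and locating the centre of the resulting cloud of readings) can be sketched as follows. The min/max-midpoint estimator and the synthetic field values are illustrative assumptions, not the authors' implementation; production calibrations typically use a full ellipsoid fit to also capture soft iron scaling.

```python
import numpy as np

def hard_iron_offset(samples: np.ndarray) -> np.ndarray:
    """Estimate the hard-iron offset as the centre of the measured
    field vectors, using the simple min/max midpoint per axis.
    samples: (N, 3) array of readings taken while rotating the
    magnetometer at a static point."""
    return (samples.max(axis=0) + samples.min(axis=0)) / 2.0

# Synthetic check: readings on a sphere of radius 50 uT (the ambient
# field magnitude) shifted by a known hard-iron bias should yield
# approximately that bias back.
rng = np.random.default_rng(0)
dirs = rng.normal(size=(1000, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)   # uniform directions
bias = np.array([12.0, -7.5, 3.0])                    # assumed bias, uT
readings = 50.0 * dirs + bias
est = hard_iron_offset(readings)
```

Subtracting the estimated offset recentres the data cloud on the origin, which is exactly the picture of hard iron distortion described in the abstract.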
Procedia PDF Downloads 84
368 Sensor Network Structural Integration for Shape Reconstruction of Morphing Trailing Edge
Authors: M. Ciminello, I. Dimino, S. Ameduri, A. Concilio
Abstract:
Improving aircraft efficiency is one of the key goals of aeronautics. Modern aircraft possess many advanced capabilities, such as good transportation capacity, high Mach number, high flight altitude, and a high rate of climb. However, no aircraft can reach all of this optimized performance in a single airframe configuration. The aerodynamic efficiency of an aircraft varies considerably depending on the specific mission and on the environmental conditions within which the aircraft must operate. Structures that morph their shape in response to their surroundings may at first seem like the stuff of science fiction, but a look at nature reveals many examples of plants and animals that adapt to their environment. In order to ensure both the controllability and the static robustness of such complex structural systems, a monitoring network is aimed at verifying the effectiveness of the given control commands together with the elastic response. In order to obtain this kind of information, the use of an FBG sensor network is proposed in this project. The sensor network is able to measure the shape of morphing structures, which may show large, global displacements due to the non-standard architectures and materials adopted. Chord-wise variations may allow setting and chasing the best layout as a function of the particular and transforming reference state, always targeting the best aerodynamic performance. The reason why an optical sensor solution has been selected is that, while keeping a few of the contraindications of classical systems (like cabling, continuous deployment, and so on), fibre optic sensors may lead to a dramatic reduction of wiring mass and weight thanks to an extreme multiplexing capability. Furthermore, the use of ‘light’ as the ‘information carrier’ permits dealing with nimbler, non-shielded wires and avoids any kind of interference with the on-board instrumentation.
The FBG-based transducers presented herein aim at monitoring the actual shape of an adaptive trailing edge (ATE). Compared to conventional systems, these transducers allow more fail-safe measurements by taking advantage of a supporting structure hosting the FBGs, whose properties may be tailored depending on the architectural requirements and structural constraints, acting as a strain modulator. Direct strain measurement may, in fact, be difficult because of the large deformations occurring in morphing elements. A modulation transducer is then necessary to keep the measured strain inside the allowed range. In this application, the chord-wise transducer device is a cantilevered beam sliding through the spars and copying the camber line of the ATE ribs. The positions of the FBG sensor array are dimensioned and integrated along the path. A theoretical model describing the system behavior is implemented. To validate the design, experiments are then carried out with the purpose of estimating the functions between rib rotation and measured strain.
Keywords: fiber optic sensor, morphing structures, strain sensor, shape reconstruction
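The abstract does not give the measurement equation, but FBG strain sensing conventionally relies on the linear relation between the Bragg wavelength shift and axial strain, Δλ/λ₀ = (1 − pₑ)·ε, when temperature cross-sensitivity is neglected. A minimal sketch follows; the function name and the silica photo-elastic coefficient of ~0.22 are standard textbook assumptions, not values from the paper.

```python
def fbg_strain(lambda_measured_nm: float, lambda0_nm: float,
               p_e: float = 0.22) -> float:
    """Convert a measured Bragg wavelength into axial strain using the
    standard FBG relation  delta_lambda / lambda0 = (1 - p_e) * strain,
    ignoring temperature effects. p_e is the effective photo-elastic
    coefficient (~0.22 for silica fibre)."""
    return (lambda_measured_nm - lambda0_nm) / (lambda0_nm * (1.0 - p_e))

# A 1.209 nm shift on a 1550 nm grating corresponds to ~1000 microstrain.
eps = fbg_strain(1551.209, 1550.0)
```

Strains recovered this way at the known sensor positions along the cantilevered beam are the raw inputs from which the camber-line shape is reconstructed.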
Procedia PDF Downloads 329
367 Neonatology Clinical Routine in Cats and Dogs: Cases, Main Conditions and Mortality
Authors: Maria L. G. Lourenço, Keylla H. N. P. Pereira, Viviane Y. Hibaru, Fabiana F. Souza, João C. P. Ferreira, Simone B. Chiacchio, Luiz H. A. Machado
Abstract:
The neonatal care of cats and dogs represents a challenge to veterinarians due to the small size of the newborns and their physiological particularities. In addition, many veterinary medicine colleges around the world do not include neonatology in the curriculum, which makes it less likely for veterinarians to have basic knowledge regarding neonatal care and worsens the clinical care these patients receive. Therefore, lack of assistance and negligence have become frequent in the field, which contributes towards the high mortality rates. This study aims at describing the cases and main conditions pertaining to the neonatology clinical routine in cats and dogs, highlighting the importance of specialized care in this field of veterinary medicine. The study included 808 neonates admitted to the São Paulo State University (UNESP) Veterinary Hospital, Botucatu, São Paulo, Brazil, between January 2018 and November 2019. Of these, 87.3% (705/808) were dogs and 12.7% (103/808) were cats. Among the neonates admitted, 57.3% (463/808) came from emergency c-sections due to dystocia, 8.7% (71/808) came from vaginal deliveries with obstetric maneuvers due to dystocia, and 34% (274/808) were admitted for clinical care due to neonatal conditions. Among the neonates that came from emergency c-sections and vaginal deliveries, 47.3% (253/534) were born in respiratory distress due to severe hypoxia or persistent apnea and required resuscitation procedures, such as stimulation of the Jen Chung acupuncture point (VG26), oxygen therapy with a mask, pulmonary expansion with a resuscitator, heart massage, and the administration of emergency medication, such as epinephrine.
On the other hand, in neonatal clinical care, the main conditions and alterations observed in the newborns were omphalophlebitis, toxic milk syndrome, neonatal conjunctivitis, swimmer puppy syndrome, neonatal hemorrhagic syndrome, pneumonia, trauma, low weight at birth, prematurity, congenital malformations (cleft palate, cleft lip, hydrocephaly, anasarca, vascular anomalies of the heart, anal atresia, gastroschisis, omphalocele, among others), neonatal sepsis and other local and systemic bacterial infections, viral infections (feline respiratory complex, parvovirus, canine distemper, canine infectious tracheobronchitis), parasitic infections (Toxocara spp., Ancylostoma spp., Strongyloides spp., Cystoisospora spp., Babesia spp. and Giardia spp.), and fungal infections (dermatophytosis by Microsporum canis). The most common clinical presentation observed was the neonatal triad (hypothermia, hypoglycemia, and dehydration), affecting 74.6% (603/808) of the patients. The mortality rate among the neonates was 10.5% (85/808). Being knowledgeable about neonatology is essential for veterinarians to provide adequate care for these patients in the clinical routine. Adding neonatology to college curriculums, improving the dissemination of information on the subject, and providing annual training in neonatology for veterinarians and employees are important to improve immediate care and reduce mortality rates.
Keywords: neonatal care, puppies, neonatal conditions
Procedia PDF Downloads 228
366 Study of Biomechanical Model for Smart Sensor Based Prosthetic Socket Design System
Authors: Wei Xu, Abdo S. Haidar, Jianxin Gao
Abstract:
A prosthetic socket is the component that connects the residual limb of an amputee with an artificial prosthesis. It is widely recognized as the most critical component in determining the comfort of a patient wearing the prosthesis in his/her daily activities. Through the socket, the body weight and its associated dynamic load are distributed and transmitted to the prosthesis during walking, running or climbing. In order to achieve a good-fit socket for an individual amputee, it is essential to obtain the biomechanical properties of the residual limb. In current clinical practice, this is achieved by a touch-and-feel approach, which is highly subjective. Although there have been significant advances in prosthetic technologies such as microprocessor-controlled knee and ankle joints in the last decade, progress in designing a comfortable socket has been rather limited. This means that the current process of socket design is still very time-consuming and highly dependent on the expertise of the prosthetist. Supported by state-of-the-art sensor technologies and numerical simulations, a new socket design system is being developed to help prosthetists achieve rapid design of comfortable sockets for above knee amputees. This paper reports the research work related to establishing biomechanical models for socket design. Through numerical simulation using the finite element method, comprehensive relationships between pressure on the residual limb and socket geometry were established. This allowed local topological adjustment of the socket so as to optimize the pressure distribution across the residual limb. When the full body weight of a patient is exerted on the residual limb, high pressures and shear forces between the residual limb and the socket occur. 
During numerical simulations, various hyperelastic models, namely Ogden, Yeoh and Mooney-Rivlin, were used, and their effectiveness in representing the biomechanical properties of the soft tissues of the residual limb was evaluated. This also involved reverse engineering, which resulted in an optimal representative model under compression testing. To validate the simulation results, a range of silicone models was fabricated. They were tested with an indentation device, which yielded force-displacement relationships. Comparisons of results obtained from FEA simulations and experimental tests showed that the Ogden model did not fit the soft tissue indentation data well, while the Yeoh model gave the best representation of the soft tissue mechanical behavior under indentation. Compared with the hyperelastic models, a linear elastic model also showed significant errors. In addition, normal and shear stress distributions on the surface of the soft tissue model were obtained. The effect of friction in compression testing and the influence of soft tissue stiffness and testing boundary conditions were also analyzed. All these have contributed to the overall goal of designing a good-fit socket for individual above knee amputees.
Keywords: above knee amputee, finite element simulation, hyperelastic model, prosthetic socket
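To make the model-calibration step concrete, the sketch below fits the coefficients of a Yeoh strain-energy function to uniaxial compression data by least squares. The coefficient values and the synthetic stress-stretch data are illustrative placeholders, not values from the study, which calibrated its models against silicone indentation measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

def yeoh_uniaxial_stress(stretch, c10, c20, c30):
    """Nominal (engineering) stress of an incompressible Yeoh solid in
    uniaxial loading: P = 2(l - l^-2) * dW/dI1, where
    W = c10(I1-3) + c20(I1-3)^2 + c30(I1-3)^3 and I1 = l^2 + 2/l."""
    i1 = stretch ** 2 + 2.0 / stretch
    dw_di1 = c10 + 2.0 * c20 * (i1 - 3.0) + 3.0 * c30 * (i1 - 3.0) ** 2
    return 2.0 * (stretch - stretch ** -2) * dw_di1

# Illustrative compression data (stretch < 1); coefficients are
# placeholder soft-tissue-order values in Pa, not fitted results.
true_params = (4.0e3, 1.5e3, 0.5e3)
stretch = np.linspace(0.70, 0.99, 30)
stress = yeoh_uniaxial_stress(stretch, *true_params)

# Least-squares calibration of (c10, c20, c30) against the data.
fitted, _ = curve_fit(yeoh_uniaxial_stress, stretch, stress,
                      p0=(1e3, 1e3, 1e3))
```

Because the stress expression is linear in the three coefficients, the fit is well-conditioned; with real indentation data, the same routine would be applied to the measured force-displacement curve after converting it to stress and stretch.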
Procedia PDF Downloads 205
365 Phorbol 12-Myristate 13-Acetate (PMA)-Differentiated THP-1 Monocytes as a Validated Microglial-Like Model in Vitro
Authors: Amelia J. McFarland, Andrew K. Davey, Shailendra Anoopkumar-Dukie
Abstract:
Microglia are the resident macrophage population of the central nervous system (CNS), contributing to both innate and adaptive immune responses and to brain homeostasis. Activation of microglia occurs in response to a multitude of pathogenic stimuli in their microenvironment; this induces morphological and functional changes, resulting in a state of acute neuroinflammation which facilitates injury resolution. Adequate microglial function is essential for the health of the neuroparenchyma, with microglial dysfunction implicated in numerous CNS pathologies. Given the critical role that these macrophage-derived cells play in CNS homeostasis, there is a high demand for microglial models suitable for use in neuroscience research. The isolation of primary human microglia, however, is both difficult and costly, with microglial activation an unwanted but inevitable result of the extraction process. Consequently, there is a need for alternative experimental models which exhibit the morphological, biochemical and functional characteristics of human microglia without the difficulties associated with primary cell lines. In this study, our aim was to evaluate whether THP-1 human peripheral blood monocytes would display microglial-like qualities following induced differentiation and, therefore, be suitable for use as surrogate microglia. To achieve this aim, THP-1 human peripheral blood monocytes derived from acute monocytic leukaemia were differentiated with a range of phorbol 12-myristate 13-acetate (PMA) concentrations (50-200 nM) using two different protocols: a 5-day continuous PMA exposure, or a 3-day continuous PMA exposure followed by a 5-day rest in normal media. In each protocol and at each PMA concentration, microglial-like cell morphology was assessed through crystal violet staining and the presence of the CD14 microglial/macrophage cell surface marker. 
Lipopolysaccharide (LPS) from Escherichia coli (O55:B5) was then added at a range of concentrations from 0-10 mcg/mL to activate the PMA-differentiated THP-1 cells. Functional microglial-like behavior was evaluated by quantifying the release of prostaglandin (PG)-E2 and the pro-inflammatory cytokines interleukin (IL)-1β and tumour necrosis factor (TNF)-α using mediator-specific ELISAs. Furthermore, production of global reactive oxygen species (ROS) and nitric oxide (NO) was determined fluorometrically using dichlorodihydrofluorescein diacetate (DCFH-DA) and diaminofluorescein diacetate (DAF-2-DA), respectively. Following PMA treatment, it was observed that both differentiation protocols resulted in cells displaying distinct microglial morphology from 10 nM PMA. Activation of differentiated cells using LPS significantly augmented IL-1β, TNF-α and PGE2 release at all LPS concentrations under both differentiation protocols. Similarly, a significant increase in DCFH-DA and DAF-2-DA fluorescence was observed, indicative of increases in ROS and NO production. For all endpoints, the 5-day continuous PMA treatment protocol yielded significantly higher mediator levels than the 3-day treatment and 5-day rest protocol. Our data, therefore, suggest that the differentiation of THP-1 human monocyte cells with PMA yields a homogeneous microglial-like population which, following stimulation with LPS, undergoes activation to release a range of pro-inflammatory mediators associated with microglial activation. Thus, PMA-differentiated THP-1 cells represent a suitable microglial model for in vitro research.
Keywords: differentiation, lipopolysaccharide, microglia, monocyte, neuroscience, THP-1
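As an aside on the quantification step, sandwich ELISAs of the kind used here are commonly read against a four-parameter logistic (4PL) standard curve, with sample concentrations interpolated from absorbance. The sketch below fits such a curve and inverts it; the TNF-α standard concentrations and absorbance values are hypothetical, not data from this study.

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ec50, hill):
    """Four-parameter logistic: absorbance as a function of standard
    concentration, the usual model for sandwich-ELISA standard curves."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** hill)

def od_to_conc(od, bottom, top, ec50, hill):
    """Invert the 4PL model to interpolate a sample concentration
    from its absorbance reading (valid for bottom < od < top)."""
    return ec50 * ((top - od) / (od - bottom)) ** (-1.0 / hill)

# Hypothetical TNF-alpha standard curve (pg/mL vs. absorbance units).
std_conc = np.array([15.6, 31.25, 62.5, 125.0, 250.0, 500.0, 1000.0])
std_od = four_pl(std_conc, 0.05, 2.4, 180.0, 1.1)

# Fit the standard curve, then interpolate a sample well reading.
params, _ = curve_fit(four_pl, std_conc, std_od,
                      p0=(0.1, 2.0, 100.0, 1.0), maxfev=5000)
sample_conc = od_to_conc(1.2, *params)   # pg/mL for a 1.2 AU well
```

Readings outside the standard range would need dilution and re-assay rather than extrapolation, since the 4PL curve flattens at both asymptotes.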
Procedia PDF Downloads 388