Search results for: small baseline subset algorithm
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 9116

716 Investigations of Effective Marketing Metric Strategies: The Case of St. George Brewery Factory, Ethiopia

Authors: Mekdes Getu Chekol, Biniam Tedros Kahsay, Rahwa Berihu Haile

Abstract:

The main objective of this study is to investigate marketing strategy practice at the St. George Brewery Factory in Addis Ababa. A well-developed marketing strategy is one of the core requirements for a business to stay in operation. The study assessed how marketing strategies were practiced in the company to achieve its goals, in terms of segmentation, target market, positioning, and the marketing mix elements used to satisfy customer requirements. Using primary and secondary data, the study was conducted with both qualitative and quantitative approaches. The primary data was collected through open- and closed-ended questionnaires. Because the population was small, respondents were selected by census. The findings show that the company used all 4 Ps of the marketing mix in its marketing strategies and provided quality products at affordable prices, promoting its products through extensive and effective advertising. Product availability and accessibility are admirable, with both direct and indirect distribution channels in use. The company has identified its target customers, and its market segmentation practice is based on geographical location. Communication between the marketing department and other departments is very good. The regression model explains 61.6% (adjusted R²) of the variance in marketing strategy practice through product, price, promotion, and place; the remaining 38.4% of the variation in the dependent variable is explained by factors not included in this study. All four independent variables (product, price, promotion, and place) have positive beta coefficients, indicating that each predictor has a positive effect on the dependent variable, marketing strategy practice.
Even though the company's marketing strategies are effectively practiced, it faces some problems in implementing them: infrastructure problems, economic problems, intense market competition, shortage of raw materials, seasonality of consumption, socio-cultural problems, and the time and cost of creating customer awareness. Finally, the authors suggest that the company develop a long-range view and adopt a more structured approach to gathering information about potential customers, competitors' actions, and market intelligence within the industry. In addition, we recommend extending the study with a larger sample size and additional marketing factors.
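
The adjusted R² cited above follows directly from the ordinary R², the number of observations n, and the number of predictors p. A minimal sketch of the computation (the raw R² and sample size below are hypothetical illustrations, not figures from the study):

```python
def adjusted_r_squared(r2: float, n: int, p: int) -> float:
    """Adjusted R-squared for a regression with n observations and p predictors.

    Penalizes R-squared for the number of predictors, so adding uninformative
    variables no longer inflates the fit statistic.
    """
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# Hypothetical example: with p = 4 predictors (product, price, promotion,
# place), a raw R² of 0.63 and n = 140 respondents would give an adjusted
# R² close to the 0.616 reported in the abstract.
adj = adjusted_r_squared(r2=0.63, n=140, p=4)
```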

Keywords: marketing strategy, market segmentation, target marketing, market positioning, marketing mix

Procedia PDF Downloads 37
715 Field Synergy Analysis of Combustion Characteristics in the Afterburner of Solid Oxide Fuel Cell System

Authors: Shing-Cheng Chang, Cheng-Hao Yang, Wen-Sheng Chang, Chih-Chia Lin, Chun-Han Li

Abstract:

The solid oxide fuel cell (SOFC) is a promising green technology that can achieve high electrical efficiency. Due to the high operating temperature of the SOFC stack, the high-temperature off-gases from the anode and cathode outlets are introduced into an afterburner, where combustion converts their chemical energy into thermal energy. During operation of the SOFC power generation system, this heat is recovered to preheat the fresh air and fuel gases before they pass through the stack. For an afterburner of an SOFC system, temperature control with good thermal uniformity is important, and a burner with well-designed geometry usually achieves satisfactory performance. Computational fluid dynamics (CFD) simulation is a suitable tool for designing such an afterburner. In this paper, the hydrogen combustion characteristics in an afterburner of simple geometry are studied using CFD. The burner consists of a cylindrical chamber with a fuel gas inlet, an air inlet, and an exhaust outlet. The flow field and temperature distributions inside the afterburner under different fuel and air flow rates are analyzed. To improve the temperature uniformity of the afterburner during SOFC system operation, the flow paths of the anode/cathode off-gases are varied by changing the positions of the fuel and air inlet channels, improving the synergy between the heat and flow fields in the burner furnace. Because the air flow rate is much larger than that of the fuel gas, the flow structure and heat transfer in the afterburner are dominated by the air flow path. The present work studies the effects of fluid flow structures on the combustion characteristics of an SOFC afterburner with three simulation models, each featuring a cylindrical combustion chamber and a tapered outlet. All walls in the afterburner are assumed to be no-slip and adiabatic. In each case, two sets of parameters are simulated to study the transport phenomena of hydrogen combustion.
The equivalence ratios are in the range of 0.08 to 0.1. Finally, the pattern factor for the simulation cases is calculated to investigate the effect of gas inlet locations on the temperature uniformity of the SOFC afterburner. The results show that the temperature uniformity of the exhaust gas can be improved by simply adjusting the position of the gas inlet. The field synergy analysis indicates that the fluid flow paths should be designed to contribute significantly to heat transfer, i.e., the field synergy angle should be as small as possible. In the study cases, the averaged synergy angles of the three burners are about 85°, 84°, and 81°, respectively.
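
The field synergy angle used above is the local angle between the velocity vector and the temperature gradient; field synergy theory holds that convective heat transfer improves as this angle shrinks. A minimal sketch of the pointwise computation (the sample vectors are illustrative, not values from the simulations):

```python
import math

def synergy_angle(u, grad_t):
    """Local field synergy angle in degrees: the angle between the velocity
    vector u and the temperature gradient grad_t at a point.

    0° means flow perfectly aligned with the temperature gradient (best
    synergy); 90° means flow perpendicular to it (worst synergy).
    """
    dot = sum(a * b for a, b in zip(u, grad_t))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_g = math.sqrt(sum(b * b for b in grad_t))
    # Clamp to guard against round-off pushing the cosine outside [-1, 1].
    cos_theta = max(-1.0, min(1.0, dot / (norm_u * norm_g)))
    return math.degrees(math.acos(cos_theta))

# Illustrative: a velocity nearly perpendicular to the temperature gradient
# yields an angle near 90°, i.e., poor synergy, as in the ~85° cases above.
angle = synergy_angle((1.0, 0.1, 0.0), (0.0, 1.0, 0.0))
```

In a CFD post-processing step, this would be evaluated per cell and then averaged (often velocity-weighted) over the burner volume to obtain the averaged synergy angles quoted in the abstract.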

Keywords: afterburner, combustion, field synergy, solid oxide fuel cell

Procedia PDF Downloads 127
714 An Analysis of LoRa Networks for Rainforest Monitoring

Authors: Rafael Castilho Carvalho, Edjair de Souza Mota

Abstract:

As the largest contributor to the biogeochemical functioning of the Earth system, the Amazon Rainforest has the greatest biodiversity on the planet, harboring about 15% of all the world's flora. Recognition and preservation are the focus of research that seeks to mitigate drastic changes, especially anthropic ones, which irreversibly affect this biome. Functional and low-cost monitoring alternatives to reduce these impacts are a priority, such as those using Low Power Wide Area Network (LPWAN) technologies. Promising, reliable, secure, and low in energy consumption, LPWANs can connect thousands of IoT devices; in particular, LoRa is considered one of the most successful solutions for forest monitoring applications. Despite this, the forest environment, in particular the Amazon Rainforest, is a challenge for these technologies, requiring work to identify and validate their use in a real environment. To investigate the feasibility of deploying LPWAN for remote water quality monitoring of rivers in the Amazon region, a LoRa-based test bed consisting of a LoRa transmitter and a LoRa receiver was set up, both implemented with Arduino boards and the SX1276 LoRa chip. The experiment was carried out at the Federal University of Amazonas, which contains one of the largest urban forests in Brazil. There are several springs inside the forest, and the main goal is to collect water quality parameters and transmit the data through the forest in real time to the gateway at the university. In all, there are nine water quality parameters of interest. Even with a high collection frequency, the amount of information that must be sent to the gateway is small. However, for this application, the battery of the transmitter device is a concern, since in the real application the device must run without maintenance for long periods of time.
With these constraints in mind, parameters such as Spreading Factor (SF) and Coding Rate (CR), as well as different antenna heights and distances, were tuned to improve connectivity quality, measured by RSSI and loss rate. A handheld RF Explorer spectrum analyzer was used to obtain the RSSI values. At distances exceeding 200 m, communication soon proved difficult to establish due to the dense foliage and high humidity. The optimal SF-CR combinations were 8-5 and 9-5, showing the lowest packet loss rates, 5% and 17%, respectively, with a signal strength of approximately -120 dBm; these are the best settings for this study so far. Rain and climate conditions imposed limitations on the equipment, and more tests are already being conducted. Subsequently, the range of the LoRa deployment must be extended using a mesh topology, especially because at least three different collection points in the same water body are required.
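
One practical reason the SF and CR settings matter for a battery-powered transmitter is airtime: each step up in spreading factor roughly doubles the time, and hence energy, spent per packet. A sketch of the standard LoRa time-on-air calculation from the Semtech SX127x datasheet (the payload size and radio options below are assumptions for illustration, not the testbed's exact configuration):

```python
import math

def lora_time_on_air(payload_len, sf, bw=125_000, cr=1, preamble=8,
                     explicit_header=True, crc=True, low_dr_opt=False):
    """Approximate LoRa packet time-on-air in seconds, per the SX127x
    datasheet formula. cr=1..4 corresponds to coding rates 4/5..4/8."""
    t_sym = (2 ** sf) / bw                     # symbol duration
    de = 1 if low_dr_opt else 0                # low data rate optimization
    ih = 0 if explicit_header else 1           # implicit header flag
    num = 8 * payload_len - 4 * sf + 28 + 16 * (1 if crc else 0) - 20 * ih
    n_payload = 8 + max(math.ceil(num / (4 * (sf - 2 * de))) * (cr + 4), 0)
    t_preamble = (preamble + 4.25) * t_sym
    return t_preamble + n_payload * t_sym

# Hypothetical 10-byte sensor packet: the jump from SF8 to SF9 (both at
# coding rate 4/5, i.e., cr=1) roughly doubles the airtime, which is why
# the loss-rate gain of SF9 comes at an energy cost.
toa_sf8 = lora_time_on_air(10, sf=8, cr=1)
toa_sf9 = lora_time_on_air(10, sf=9, cr=1)
```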

Keywords: IoT, LPWAN, LoRa, coverage, loss rate, forest

Procedia PDF Downloads 69
713 Optimization of Heat Source Assisted Combustion on Solid Rocket Motors

Authors: Minal Jain, Vinayak Malhotra

Abstract:

Solid propellant ignition consists of rapid and complex events comprising heat generation and heat transfer, with flames spreading over the entire burning surface area. Proper combustion, and thus propulsion, depends heavily on the modes of heat transfer and on cavity volume. Fire safety is an integral component of a successful rocket flight; failure here may lead to overall failure of the rocket and an enormous loss of resources, viz. money, time, and labor. When the propellant is ignited, thrust is generated and the casing heats up. This heat adds to the propellant heat, and the casing, if not properly oriented, may start burning as well, destroying the whole rocket. This has necessitated active research emphasizing a comprehensive study of the inter-energy relations involved, for effective utilization of solid rocket motors in better space missions. The present work focuses on one of the major influences on this detrimental burning: the presence of an external heat source in addition to an already ignited potential heat source. The study is motivated by the need to ensure better combustion and fire safety, presented experimentally as a simplified small-scale model of a rocket carrying a solid propellant inside a cavity. The experimental setup comprises a paraffin wax candle as the pilot fuel and an incense stick as the external heat source. The candle is fixed, and the position and location of the incense stick are varied to investigate the influence of the external heat source. Different configurations of the external heat source at varying separation distances are tested. Regression rates of the pilot thin solid fuel are noted to fundamentally understand the non-linear heat and mass transfer, which is the governing phenomenon. An attempt is made to understand this phenomenon fundamentally and the mechanism governing it.
Results so far indicate non-linear heat transfer, with flaming transition occurring at selected critical distances. With an increase in separation distance, the effect drops in a non-monotonic trend. The results of the parametric study are likely to provide useful physical insight into the governing physics, with applications in testing, validation, material selection, and the design of solid rocket motors with enhanced safety.

Keywords: combustion, propellant, regression, safety

Procedia PDF Downloads 151
712 Prediction of Live Birth in a Matched Cohort of Elective Single Embryo Transfers

Authors: Mohsen Bahrami, Banafsheh Nikmehr, Yueqiang Song, Anuradha Koduru, Ayse K. Vuruskan, Hongkun Lu, Tamer M. Yalcinkaya

Abstract:

In recent years, we have witnessed an explosion of studies aimed at using a combination of artificial intelligence (AI) and time-lapse imaging data on embryos to improve IVF outcomes. However, despite promising results, no study has used a matched cohort of transferred embryos that differ only in pregnancy outcome, i.e., embryos from a single clinic that are similar in parameters such as morphokinetic condition, patient age, and overall clinic and lab performance. Here, we used time-lapse data on embryos with known pregnancy outcomes to see whether the rich spatiotemporal information embedded in these data would allow prediction of the pregnancy outcome regardless of such critical parameters. Methodology: We performed a retrospective analysis of time-lapse data from our IVF clinic, which uses the EmbryoScope for all embryo culture to the blastocyst stage, with known clinical outcomes of live birth vs. nonpregnant (embryos with spontaneous abortion outcomes were excluded). We used time-lapse data from 200 elective single-transfer embryos randomly selected from January 2019 to June 2021. Our sample included 100 embryos in each group, with no significant difference in patient age (P=0.9550) or morphokinetic scores (P=0.4032). Data from all patients were combined into a 4th-order tensor, and feature extraction was subsequently carried out by a tensor decomposition methodology. The features were then used in a machine learning classifier to classify the two groups. Major findings: The performance of the model was evaluated using 100 rounds of random subsampling cross-validation (80% train / 20% test). The prediction accuracy, averaged across the 100 permutations, exceeded 80%. We also performed a random grouping analysis, in which the labels (live birth, nonpregnant) were randomly assigned to embryos, which yielded 50% accuracy.
Conclusion: The high accuracy in the main analysis and the low accuracy in the random grouping analysis suggest a consistent spatiotemporal pattern associated with pregnancy outcomes, regardless of patient age and embryo morphokinetic condition, and beyond already known parameters such as early cleavage or early blastulation. Despite the small sample size, this ongoing analysis is the first to show the potential of AI methods in capturing the complex morphokinetic changes embedded in embryo time-lapse data that contribute to successful pregnancy outcomes, regardless of already known parameters. Results on a larger sample, with complementary analysis on the prediction of other key outcomes such as embryo euploidy and aneuploidy, will be presented at the meeting.
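
The evaluation scheme described, 100 random 80/20 subsampling rounds with accuracy averaged across permutations, can be sketched as follows. The nearest-centroid classifier and synthetic two-class data here are stand-ins for illustration; the study's actual classifier and tensor-derived features are not specified in the abstract:

```python
import random

def repeated_subsampling_accuracy(X, y, n_splits=100, test_frac=0.2, seed=0):
    """Repeated random subsampling (Monte Carlo) cross-validation with a
    nearest-centroid classifier; returns the mean test accuracy over splits."""
    rng = random.Random(seed)
    idx = list(range(len(X)))
    accs = []
    for _ in range(n_splits):
        rng.shuffle(idx)
        cut = int(len(idx) * (1 - test_frac))
        train, test = idx[:cut], idx[cut:]
        # Compute one centroid per class from the training split.
        groups = {}
        for i in train:
            groups.setdefault(y[i], []).append(X[i])
        cents = {c: [sum(col) / len(pts) for col in zip(*pts)]
                 for c, pts in groups.items()}
        def predict(x):
            # Assign to the class whose centroid is nearest (squared distance).
            return min(cents, key=lambda c: sum((a - b) ** 2
                                                for a, b in zip(x, cents[c])))
        hits = sum(predict(X[i]) == y[i] for i in test)
        accs.append(hits / len(test))
    return sum(accs) / len(accs)
```

As in the abstract's sanity check, shuffling the labels before running the same procedure should drive the averaged accuracy toward chance level (50% for two balanced classes).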

Keywords: IVF, embryo, machine learning, time-lapse imaging data

Procedia PDF Downloads 85
711 Inertial Spreading of Drop on Porous Surfaces

Authors: Shilpa Sahoo, Michel Louge, Anthony Reeves, Olivier Desjardins, Susan Daniel, Sadik Omowunmi

Abstract:

The microgravity environment on the International Space Station (ISS) was exploited to study the imbibition of water into a network of hydrophilic cylindrical capillaries on time and length scales long enough to observe details hitherto inaccessible under Earth gravity. When a drop touches a porous medium, it spreads as if laid on a composite surface. The surface first behaves as a hydrophobic material, as the liquid must penetrate pores filled with air. When contact is established, some of the liquid is drawn into the pores by a capillarity that is resisted by viscous forces growing with the length of the imbibed region. This process always begins with an inertial regime that is complicated by possible contact pinning. To study imbibition on Earth, time and distance must be shrunk to mitigate gravity-induced distortion, and these small scales make it impossible to observe the inertial and pinning processes in detail. Instead, on the ISS, astronaut Luca Parmitano slowly extruded water spheres until they touched one of nine capillary plates. The 12 mm diameter droplets were large enough for high-speed GX1050C video cameras on top and side to visualize details near individual capillaries, and the recordings were long enough to observe the dynamics of the entire imbibition process. To investigate the role of contact pinning, a test matrix was produced consisting of nine kinds of porous capillary plates made of gold-coated brass treated with self-assembled monolayers (SAM) that fixed the advancing and receding contact angles to known values. On the ISS, long-term microgravity allowed unambiguous observations of the role of contact line pinning during the inertial phase of imbibition. The high-speed videos of spreading and imbibition on the porous plates were analyzed using computer vision software to calculate the radius of the droplet's contact patch with the plate and the height of the droplet versus time.
These observations are compared with numerical simulations and with data that we obtained at the ESA ZARM drop tower in Bremen using a unique mechanism producing relatively large water spheres; similar results were observed. The data obtained from the ISS can be used as a benchmark for further numerical simulations in the field.
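
After the inertial phase studied here, the viscous-dominated stage of capillary imbibition is classically described by the Lucas-Washburn relation, in which the imbibed length grows as the square root of time. A minimal sketch for context (this is the textbook relation, not the authors' model; the fluid and pore parameters below are generic values for water in a 50 µm radius capillary):

```python
import math

def washburn_length(t, gamma, r, theta_deg, mu):
    """Imbibed length l(t) in the viscous (Lucas-Washburn) regime:

        l(t) = sqrt(gamma * r * cos(theta) * t / (2 * mu))

    t: time (s), gamma: surface tension (N/m), r: capillary radius (m),
    theta_deg: contact angle (degrees), mu: dynamic viscosity (Pa·s).
    """
    return math.sqrt(gamma * r * math.cos(math.radians(theta_deg)) * t
                     / (2 * mu))

# Generic values for water: gamma = 0.072 N/m, mu = 1e-3 Pa·s,
# perfectly wetting (theta = 0°), r = 50 µm, after 1 second.
l_1s = washburn_length(t=1.0, gamma=0.072, r=50e-6, theta_deg=0.0, mu=1e-3)
```

The sqrt(t) growth is what makes the early inertial regime, where pinning matters, so hard to resolve on Earth: it is over within milliseconds at these scales.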

Keywords: droplet imbibition, hydrophilic surface, inertial phase, porous medium

Procedia PDF Downloads 122
710 Shameful Heroes of Queer Cinema: A Critique of Mumbai Police (2013) and My Life Partner (2014)

Authors: Payal Sudhan

Abstract:

Popular film industries in India, Bollywood as well as regional ones, make a range of commercial films that attract vast viewership. Love, heroism, action, adventure, revenge, etc., are among the dearest themes chosen by filmmakers in popular film industries across the world. Sexuality, however, has become an issue to address within cinema, and such films appear in small numbers compared to other themes. One can easily assume that homosexuality is unlikely to be a favorite theme in Indian popular cinema. This does not mean that no films have been made on the issues of homosexuality; there have been several attempts. Earlier, some movies depicted homosexual (gay) characters as comedians, a practice that continued until the beginning of the 21st century. The study aims to explore how modern homophobia and stereotypes are represented in film and how this affects the depiction of homosexuality in recent Malayalam cinema. The study primarily focuses on Mumbai Police (2013) and My Life Partner (2014) and tries to explain social space, the idea of a cure, and criminality. Mumbai Police (2013), the first film selected for analysis, is a crime thriller. The nonlinear narration of the movie reveals, towards the end, the murderer of ACP Aryan IPS, who was shot dead in a public meeting. In the end, the culprit is the investigating officer, ACP Antony Moses, himself a close friend and colleague of the victim, and the primary cause turns out to be Antony's sexual relationship. My Life Partner can generically be classified as a drama. The movie puts forth male bonding and visibly riddles the notions of love and sex between Kiran and his roommate Richard. Running along the same track, the film deals with a different 'event': the exclusive celebration of male bonding. The socio-cultural background of the cinema is heterosexual.
The elements of the heterosexual social setup meet the ends of diplomacy in Malayalam queer visual culture. The film reveals the life of two gay men who are humiliated by the larger heterosexual society; in the end, Kiran dies because of extreme humiliation. The paper is a comparative and cultural analysis of the two movies, My Life Partner and Mumbai Police. I bring the points of comparison together and explain the similarities and differences between the two films. Thus, my attempt here is to explain how stereotypes, homophobia, and other related issues are represented in these two movies.

Keywords: queer cinema, homophobia, malayalam cinema, queer films

Procedia PDF Downloads 222
709 Web and Smart Phone-based Platform Combining Artificial Intelligence and Satellite Remote Sensing Data to Geoenable Villages for Crop Health Monitoring

Authors: Siddhartha Khare, Nitish Kr Boro, Omm Animesh Mishra

Abstract:

Recent food price hikes may signal the end of an era of predictable global grain crop plenty due to climate change, population expansion, and dietary changes. Food consumption will treble in 20 years, requiring enormous production expenditures. Rainfall patterns and seasonal cycles have shifted over the past decade, and India's tropical agriculture relies on evapotranspiration and the monsoons. In places with limited resources, global environmental change affects agricultural productivity and farmers' capacity to adjust to changing moisture patterns. Motivated by these difficulties, satellite remote sensing might be combined with near-surface imaging data (smartphones, UAVs, and PhenoCams) to enable phenological monitoring and fast evaluation of the field-level consequences of extreme weather events on smallholder agricultural output. To accomplish this, we must digitally map the agricultural boundaries and crop types of every village. With the improvement of satellite remote sensing technologies, a geo-referenced database can be created for rural Indian agricultural fields, and with AI we can design digital agricultural solutions for individual farms. The main objective is to geo-enable each farm, along with its seasonal crop information, by combining artificial intelligence (AI) with satellite and near-surface data, and then to carry out long-term crop monitoring through in-depth field analysis and scanning of fields with satellite-derived vegetation indices. We developed an AI-based algorithm to understand the time-lapse-based growth of vegetation using PhenoCam or smartphone images, and an Android platform through which users can collect images of their fields. These images are sent to our local server, where further AI-based processing is done.
We are creating digital boundaries for individual farms and connecting these farms with our smartphone application to collect information about farmers and their crops in each season. We are extracting satellite-based information for each farm from the Google Earth Engine APIs and merging it with the data on tested crops from our app, according to each farm's location, to create a database that provides crop quality data by location.
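
A typical satellite-derived vegetation index of the kind mentioned above is NDVI, computed per pixel from the red and near-infrared reflectance bands. A minimal sketch (the band values below are illustrative; the abstract does not specify which index the platform uses):

```python
def ndvi(nir: float, red: float) -> float:
    """Normalized Difference Vegetation Index from near-infrared and red
    surface reflectances. Ranges from -1 to 1; healthy dense vegetation
    typically scores roughly 0.6-0.9, bare soil near 0.1-0.2."""
    return (nir - red) / (nir + red)

# Illustrative reflectances: vigorous vegetation reflects strongly in NIR
# and absorbs red light, so NDVI is high.
healthy = ndvi(nir=0.50, red=0.10)
sparse = ndvi(nir=0.25, red=0.18)
```

In an Earth Engine workflow this same arithmetic is usually expressed as a normalized-difference operation over the sensor's NIR and red bands for each image in a collection, yielding the per-field time series used for crop health monitoring.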

Keywords: artificial intelligence, satellite remote sensing, crop monitoring, android and web application

Procedia PDF Downloads 90
708 The Democracy of Love and Suffering in the Erotic Epigrams of Meleager

Authors: Carlos A. Martins de Jesus

Abstract:

The Greek Anthology, first put together in the tenth century AD, gathers in two separate books a large number of epigrams devoted to love and its consequences, both of heterosexual (book V) and homosexual (book XII) nature. While some poets wrote epigrams of only one genre, as is the case of Strato (II cent. AD), the organizer of a widespread garland of homosexual epigrams, several others composed within both categories, often using the same topics of love and suffering. Using Plato's theorization of two different kinds of Eros (Symp. 180d-182a), the popular (pandemos) and the celestial (ouranios), homoerotic epigrammatic love is more often associated with the first, while heterosexual poetry tends to be connected to the higher form of love. This paper focuses on the epigrammatic production of a single first-century BC poet, Meleager, aiming to look for the similarities and differences in how he sings both kinds of love. From Meleager, the Greek Anthology, a garland whose origins have been traced back to the poet's own garland, preserves more than 60 heterosexual and 48 homosexual epigrams, an important and unprecedented number of poems from which a complete profile of his way of singing love can be traced. Meleager's poetry deals with personal experience and emotions, frequently with love and the unhappiness that usually comes from it. Most times he describes himself not as an active and engaged lover, but as one struck by the beauty of a woman or boy, i.e., in a stage prior to erotic consummation. His epigrams represent the unreal and fantastic (literally speaking) world of the lover, in which imagery and wordplay are used to convey emotion in the epigrams of both genres. Elsewhere Meleager surprises the reader by offering a surrealist or dreamlike landscape where everyday adventures are transcribed into elaborate metaphors for erotic feeling.
For instance, in 12.81, the lovers are shipwrecked, and as soon as they have disembarked, they are promptly kidnapped by a figure who is both Eros and a beautiful boy. Notably, in the homosexual poems collected in Book XII, mythology also plays an important role, namely in the figure and scene of Ganymede's abduction by Zeus to his royal court (12.70, 94). While mostly refusing the Hellenistic model of the dramatic love epigram, in which a small everyday scene is portrayed (5.182 is a clear exception to this near-rule), Meleager focuses on the tumultuous inner life of his (poetic) lovers, in the realm of a subject who feels love and pain far beyond his or her erotic preferences. In relation to loving and suffering, mostly suffering, it has to be said, Meleager's love is therefore completely democratic. There is no real place in his epigrams for the traditional association mentioned above between homoeroticism and a carnal-erotic-pornographic love, with heterosexual love presented as more even and pure, so to speak.

Keywords: epigram, erotic epigram, Greek Anthology, Meleager

Procedia PDF Downloads 243
707 Assessment of Water Reuse Potential in a Metal Finishing Factory

Authors: Efe Gumuslu, Guclu Insel, Gülten Yuksek, Nilay Sayi Ucar, Emine Ubay Cokgor, Tuğba Olmez Hanci, Didem Okutman Tas, Fatoş Germirli Babuna, Derya Firat Ertem, Ökmen Yildirim, Özge Erturan, Betül Kirci

Abstract:

Although water reclamation and reuse are inseparable parts of the sustainable production concept all around the world, current levels of reuse constitute only a small fraction of the total volume of industrial effluents. Given the prospect of serious climate change, wastewater reclamation and reuse practices should nowadays be considered a requirement. The industrial sector is one of the largest users of water sources. The OECD Environmental Outlook to 2050 predicts that global water demand for manufacturing will increase by 400% from 2000 to 2050, much more than in any other sector. The metal finishing industry is one of the industries that requires a large amount of water during manufacturing. Therefore, actions to improve wastewater treatment and reuse should be undertaken on both economic and environmental sustainability grounds. Process wastewater can be reused for more purposes if appropriate treatment systems are installed to treat it to the required quality level. Recent studies have shown that membrane separation techniques may help attain a water quality suitable for recycling back into the process. The metal finishing factory where this study was conducted is one of the biggest white-goods manufacturers in Turkey. The sheet metal parts used in cooker production are exposed to surface pre-treatment processes composed of degreasing, rinsing, nanoceramic coating, and deionized rinsing, consecutively. The wastewater-generating processes in the factory are enamel coating, painting, and styrofoam processes. In the factory, the main source of water is well water. While part of the well water is used directly in the processes after passing through resin treatment, another portion is directed to reverse osmosis treatment to obtain the water quality required for the enamel coating and painting processes.
In addition to these processes, another important potential water source is rainwater (3660 tons/year). In this study, process profiles as well as pollution profiles were assessed by a detailed quantitative and qualitative characterization of the wastewater sources generated in the factory. Based on the preliminary results, the main wastewater sources that can be considered for reuse were determined to be the painting and styrofoam processes.

Keywords: enamel coating, painting, reuse, wastewater

Procedia PDF Downloads 368
706 A Longitudinal Study of Social Engagement in Classroom in Children with Autism Spectrum Disorder

Authors: Cecile Garry, Katia Rovira, Julie Brisson

Abstract:

Autism Spectrum Disorder (ASD) is defined by a qualitative and quantitative impairment of social interaction. Early intervention programs, such as the Early Start Denver Model (ESDM), therefore aim at encouraging the development of social skills. In the classroom, children need to be socially engaged in order to learn, so early intervention programs can be implemented in kindergarten schools. In these schools, children with ASD have more opportunities to interact with their peers or with adults than in elementary schools. However, preschool children with ASD are less socially engaged in the classroom than their typically developing peers: they initiate, respond to, and maintain social interactions less, and they produce more responses than initiations. When they interact, non-verbal communication is used more than verbal or symbolic forms, and they are more engaged with adults than with peers. Nevertheless, communicative patterns may vary according to the clinical profiles of children with ASD. Children with ASD who have better cognitive skills interact more with their peers and use more symbolic communication than those with a low cognitive level, and children with less severe symptoms use verbal communication more than those with more severe symptoms. Small groups and structured activities encourage coordinated joint engagement episodes in children with ASD. Our goal is to evaluate the development of social engagement in class, with peers or adults, during dyadic or group activities. Participants were 19 preschool children with ASD, aged 3 to 6 years old, who benefited from early intervention in special kindergarten schools. Severity of ASD symptoms was measured with the CARS at the beginning of the follow-up. Classroom interaction situations were recorded for 10 minutes (5 minutes of dyadic interaction and 5 minutes of a group activity) every 2 months over 10 months.
Social engagement behaviors of the children, including initiations, responses, and imitation, directed at a peer or an adult, were then coded. The Observer software (Noldus), which allows behaviors to be annotated, was used as the coding system. A double coding was conducted and revealed good inter-rater reliability. Results show that the children with ASD were socially engaged more often, and for longer, in dyadic than in group situations. They were also more engaged with adults than with peers. Children with less severe ASD symptoms were more socially engaged in group situations, and more engaged with their peers, than children with more severe symptoms. However, engagement frequency increased over the 10 months of follow-up only for the children with more severe symptoms at the beginning. To conclude, these results highlight the necessity of individualizing early intervention programs according to the clinical profile of the child.

Keywords: autism spectrum disorder, preschool children, developmental psychology, early interventions, social interactions

Procedia PDF Downloads 150
705 Rumen Metabolites and Microbial Load in Fattening Yankasa Rams Fed Urea and Lime Treated Groundnut (Arachis hypogaea) Shell in a Complete Diet

Authors: Bello Muhammad Dogon Kade

Abstract:

The study was conducted to determine the effect of treated groundnut (Arachis hypogaea) shell in a complete diet on rumen metabolites and microbial load in fattening Yankasa rams. The study was conducted at the Teaching and Research Farm (Small Ruminants Unit) of the Animal Science Department, Faculty of Agriculture, Ahmadu Bello University, Zaria. Each kilogram of groundnut shell was treated with 5% urea or 5% lime for treatments 2 (UTGNS) and 3 (LTGNS), respectively. For treatment 4 (ULTGNS), 1 kg of groundnut shell was treated with 2.5% urea and 2.5% lime, while the shell in treatment 1 was not treated (UNTGNS). Sixteen Yankasa rams were randomly assigned to the four treatment diets, four animals per treatment, in a completely randomized design (CRD). The diet was formulated to a 14% crude protein (CP) content. Rumen fluid was collected from each ram at the end of the experiment at 0 and 4 hours post-feeding. The samples were then put in a 30 ml bottle and acidified with 5 drops of concentrated sulphuric acid (0.1 N H₂SO₄) to trap ammonia. The results showed that the mean values of NH₃-N differed significantly (P<0.05) among the treatment groups, with rams on the ULTGNS diet having the highest value (31.96 mg/L). TVFAs were significantly (P<0.05) higher in rams fed the UNTGNS diet, which was also higher in total nitrogen. The effect of sampling period revealed that NH₃-N, TVFAs, and TP were significantly (P<0.05) higher in rumen fluid collected 4 hours post-feeding across the treatment groups, but rumen fluid pH was significantly (P<0.05) higher at 0 hours post-feeding in all treatment diets. In the treatment-by-sampling-period interaction, animals on the ULTGNS diet had the highest mean NH₃-N values at both 0 and 4 hours post-feeding, significantly (P<0.05) higher than rams on the other treatment diets. 
Rams on the UTGNS diet had the highest bacterial load of 4.96×10⁵/ml, significantly (P<0.05) higher than the microbial loads of animals fed the UNTGNS, LTGNS, and ULTGNS diets. Protozoa counts were also significantly (P<0.05) higher in rams fed the UTGNS diet, followed by the ULTGNS diet. There was no significant difference (P>0.05) in the bacterial counts of the animals between 0 and 4 hours post-feeding, but rumen fungi and protozoa loads at 0 hours were significantly (P<0.05) higher than at 4 hours post-feeding. The use of untreated ground groundnut shells in the diet of fattening Yankasa rams is therefore recommended.
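The P<0.05 comparisons across the four diets in a completely randomized design correspond to a one-way analysis of variance. A minimal sketch, using hypothetical NH₃-N values rather than the study's data:

```python
from scipy import stats

# Hypothetical NH3-N readings (mg/L), four rams per diet in a CRD;
# these numbers are illustrative, not the experiment's measurements.
untgns = [22.1, 23.4, 21.8, 22.9]
utgns = [25.0, 26.2, 24.7, 25.8]
ltgns = [24.1, 23.8, 24.9, 24.4]
ultgns = [31.5, 32.4, 31.9, 32.0]

# One-way ANOVA: do the four treatment means differ?
f_stat, p_value = stats.f_oneway(untgns, utgns, ltgns, ultgns)
significant = p_value < 0.05
```

In practice a significant F-test would be followed by a multiple-comparison procedure (e.g. Tukey's HSD) to say which diets differ, which is the form the abstract's pairwise statements take.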

Keywords: blood metabolites, microbial load, volatile fatty acid, ammonia, total protein

Procedia PDF Downloads 44
704 Improving Predictions of Coastal Benthic Invertebrate Occurrence and Density Using a Multi-Scalar Approach

Authors: Stephanie Watson, Fabrice Stephenson, Conrad Pilditch, Carolyn Lundquist

Abstract:

Spatial data detailing both the distribution and density of functionally important marine species are needed to inform management decisions. Species distribution models (SDMs) have proven helpful in this regard; however, models often focus only on species occurrences derived from spatially expansive datasets and lack the resolution and detail required to inform regional management decisions. Boosted regression trees (BRTs) were used to produce high-resolution (250 m) SDMs at two spatial scales predicting probability of occurrence, abundance (count per sample unit), density (count per km²), and uncertainty for seven coastal seafloor taxa that vary in habitat usage and distribution, to examine prediction differences and their implications for coastal management. We investigated whether small-scale, regionally focussed models (82,000 km²) can provide improved predictions compared to data-rich national-scale models (4.2 million km²). We explored the variability in predictions across model type (occurrence vs. abundance) and model scale to determine whether specific taxa models or model types are more robust to geographical variability. National-scale occurrence models correlated well with broad-scale environmental predictors, resulting in higher AUC (area under the receiver operating characteristic curve) and deviance-explained scores; however, they tended to overpredict in the coastal environment and lacked spatially differentiated detail for some taxa. Regional models had lower overall performance, but for some taxa, spatial predictions were more differentiated at a localised ecological scale. National density models were often spatially refined and highlighted areas of ecological relevance, producing more useful outputs than regional-scale models. As results contrasted between model type and scale for specific taxa, the two-scale approach aids selection of the optimal combination of models to create a spatially informative density model. 
However, it is vital that robust predictions of occurrence and abundance are generated as inputs for the combined density model as areas that do not spatially align between models can be discarded. This study demonstrates the variability in SDM outputs created over different geographical scales and highlights implications and opportunities for managers utilising these tools for regional conservation, particularly in data-limited environments.
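An occurrence-type BRT of the kind described is, in essence, a gradient-boosted tree ensemble fitted to presence/absence records against environmental predictors and scored by AUC. The sketch below uses synthetic data; the predictor names and the response-generating rule are hypothetical, not the study's layers.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
# Hypothetical environmental predictors, e.g. depth, temperature, grain size
X = rng.normal(size=(n, 3))
# Synthetic occurrence: the taxon favours low values of the first predictor
logits = -1.2 * X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=0.5, size=n)
y = (logits > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
# Boosted regression trees: shallow trees, small learning rate, many stages
brt = GradientBoostingClassifier(n_estimators=200, learning_rate=0.05,
                                 max_depth=3, random_state=0)
brt.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, brt.predict_proba(X_te)[:, 1])
```

The same pipeline run at two spatial extents (regional vs. national training sets) is what allows the AUC and deviance-explained comparisons the abstract reports.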

Keywords: benthic ecology, spatial modelling, multi-scalar modelling, marine conservation

Procedia PDF Downloads 70
703 Iranian English as a Foreign Language Teachers' Psychological Well-Being across Gender during the Pandemic

Authors: Fatemeh Asadi Farsad, Sima Modirkhameneh

Abstract:

The purpose of this study was to explore the pattern of Psychological Well-Being (PWB) of Iranian male and female EFL teachers during the pandemic. It was intended to see whether such a drastic change in the context and mode of teaching affects teachers' PWB. Furthermore, possible differences in the six elements of PWB between Iranian EFL male and female teachers during the pandemic were investigated. A further purpose was to find out EFL teachers' perceptions of any modifications in their PWB during the pandemic, and of the factors leading to such modifications. For the purpose of this investigation, a total of 81 EFL teachers (59 female, 22 male) with an age range of 25 to 35 were conveniently sampled from different cities in Iran. Ryff's PWB questionnaire was sent to the participating teachers through online platforms to elicit data on their PWB. As for their perceptions of possible modifications in PWB during the pandemic and the factors involved, a set of semi-structured interviews was conducted with both sample groups. The findings revealed that male EFL teachers had the highest mean on personal growth, followed by purpose in life and self-acceptance, and the lowest mean on environmental mastery. With a slightly similar pattern, female EFL teachers had the highest mean on personal growth, followed by purpose in life and positive relations with others, with the lowest mean on environmental mastery. However, no significant difference was observed between the male and female groups' overall means on the elements of PWB. Additionally, participants perceived that their anxiety level in online classes was altered by factors such as (1) computer literacy skills, (2) lack of social communication and interaction with colleagues and students, (3) online class management, (4) overwhelming workloads, and (5) time management. 
The study ends with further suggestions regarding effective online teaching preparation that takes teachers' PWB into account, especially in severe situations such as the COVID-19 pandemic. The findings can inform reforms of educational policies aimed at enhancing EFL teachers' PWB through computer literacy and stress management courses. It is also suggested that, to proactively support teachers' mental health, they be provided with free access to advisors and psychologists where possible. Limitations: One limitation is the small number of participants (81); future replications should include more participants for more reliable findings. Another limitation is the gender imbalance, which future studies should address. Furthermore, the limited range of data-gathering tools suggests using observations, diaries, and narratives for more insight in future studies. The study focused on one model of PWB, calling for further research on other models in the literature. Considering the wide effect of the COVID-19 pandemic, future studies should consider additional variables (e.g., teaching experience, age, income) to better understand Iranian EFL teachers' vulnerabilities and strengths.
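A finding of "no significant difference between the male and female groups' overall means" typically rests on an independent-samples t-test. A minimal sketch with hypothetical Ryff-scale scores (not the study's data), using Welch's variant, which does not assume equal group variances:

```python
from scipy import stats

# Hypothetical overall PWB means per teacher on a Ryff-type scale;
# group sizes are illustrative, not the study's 22/59 split.
male_pwb = [4.1, 4.5, 3.9, 4.3, 4.0, 4.6, 4.2, 4.4]
female_pwb = [4.2, 4.4, 4.0, 4.1, 4.5, 3.8, 4.3, 4.6, 4.2, 4.1]

# Welch's t-test for a difference in group means
t_stat, p_value = stats.ttest_ind(male_pwb, female_pwb, equal_var=False)
no_significant_difference = p_value >= 0.05
```

With unbalanced groups such as the study's 22 male vs. 59 female teachers, the unequal-variance form is usually the safer default.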

Keywords: online teaching, psychological well-being, female and male EFL teachers, pandemic

Procedia PDF Downloads 38
702 Investigation of the Function of Chemotaxonomy of White Tea on the Regulatory Function of Genes in the Colon Cancer Pathway

Authors: Fereydoon Bondarian, Samira Shaygan

Abstract:

Today, many nutritionists recommend the consumption of plants, fruits, and vegetables to provide the antioxidants needed by the body, because plant antioxidants usually cause fewer side effects and give better treatment outcomes. Natural antioxidants increase plasma antioxidant capacity and reduce the incidence of some diseases, such as cancer; poor lifestyles and environmental factors also play an important role in increasing cancer incidence. In this study, different extracts of white tea obtained from two types of tea available in Iran (clone 100 and a Chinese hybrid) were prepared by three methods (aqueous, methanolic, and aqueous-methanolic); the hydroxyl functional groups in their polyphenols underlie their free-radical inhibition and anticancer properties. The total polyphenolic content was determined using the Folin-Ciocalteu method, and the percentage of free-radical inhibition and trapping in each extract was calculated using the DPPH method. The amount of each individual catechin in the tea samples was determined by high-performance liquid chromatography. Clone 100 white tea was found to be the best tea sample in terms of all examined attributes (total polyphenol content, antioxidant properties, and the amount of each individual catechin). The results showed that the aqueous and aqueous-methanolic extracts of clone 100 white tea had the highest total polyphenol contents, at 27.59±0.08 and 36.67±0.54 (gallic acid equivalents per gram dry weight of leaves), respectively. Owing to their high levels of the different catechin groups, these extracts also showed the highest free-radical inhibition and trapping, at 66.61±0.27 and 71.74±0.27% (relative to ascorbic acid), respectively. 
Using the MTT assay, the inhibitory effect of clone 100 white tea extract on the growth of HCT-116 colon cancer cells was investigated, and the best time and concentration treatments were 500, 150, and 1000 µg/ml at 8, 16, and 24 hours, respectively. To investigate gene expression changes, selected genes, including tumorigenic genes, proto-oncogenes, tumor suppressors, and genes involved in apoptosis, were analyzed using the real-time PCR method in the presence of the concentrations obtained for white tea. White tea extract at a concentration of 1000 μg/ml at the three exposure times of 16, 8, and 24 hours showed the highest growth inhibition in cancer cells, at 53.27, 55.8, and 86.06%, respectively. At a concentration of 1000 μg/ml, the aqueous extract of white tea under 24-hour treatment increased the expression of tumor suppressor genes compared to the normal sample.
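DPPH scavenging is conventionally reported as the percent drop in absorbance (usually read at 517 nm) relative to a control without extract. A minimal sketch with hypothetical absorbance readings, not the study's measurements:

```python
def dpph_inhibition(abs_control, abs_sample):
    """Percent inhibition of the DPPH radical:
    the relative decrease in absorbance caused by the extract."""
    return (abs_control - abs_sample) / abs_control * 100.0

# Hypothetical 517 nm absorbance readings
control = 0.920          # DPPH solution without extract
aqueous_extract = 0.310  # DPPH solution with extract added
pct = dpph_inhibition(control, aqueous_extract)
```

The same ratio computed for a serial dilution series is what allows an IC50 (half-maximal scavenging concentration) to be read off against a standard such as ascorbic acid.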

Keywords: catechin, gene expression, suppressor genes, colon cell line

Procedia PDF Downloads 48
701 Miniaturization of Germanium Photo-Detectors by Using Micro-Disk Resonator

Authors: Haifeng Zhou, Tsungyang Liow, Xiaoguang Tu, Eujin Lim, Chao Li, Junfeng Song, Xianshu Luo, Ying Huang, Lianxi Jia, Lianwee Luo, Kim Dowon, Qing Fang, Mingbin Yu, Guoqiang Lo

Abstract:

Several germanium photodetectors (PDs) built on silicon micro-disks were fabricated on standard Si photonics multiple-project wafers (MPW) and demonstrated to exhibit very low dark current, satisfactory operation bandwidth, and moderate responsivity. Among them, a vertical p-i-n Ge PD based on a 2.0 µm-radius micro-disk has a dark current as low as 35 nA, compared to a dark current of 1 µA for a conventional PD with an area of 100 µm². The operation bandwidth is around 15 GHz at a reverse bias of 1 V, and the responsivity is about 0.6 A/W. The microdisk is a striking planar structure in integrated optics for enhancing light-matter interaction and constructing various photonic devices. Disk geometries strongly and circularly confine light into an ultra-small volume in the form of whispering-gallery modes. A laser may benefit from a microdisk in which a single mode overlaps the gain material both spatially and spectrally. Compared to microrings, the microdisk removes the inner boundary to enable even better compactness, which also makes it very suitable for scenarios where electrical connections are needed. For example, an ultra-low-power (≈ fJ) athermal Si modulator has been demonstrated at a bit rate of 25 Gbit/s by confining both photons and electrically driven carriers into a microscale volume. In this work, we study Si-based PDs with Ge selectively grown on a microdisk with a radius of a few microns. The unique feature of using a microdisk for a Ge photodetector is that mode selection is not important. In laser or other passive optical applications, the microdisk must be designed very carefully to excite the fundamental mode, since a microdisk generally supports many higher-order modes in the radial direction. For detector applications, however, this is not an issue, because local light absorption is mode-insensitive: light power carried by all modes is expected to be converted into photocurrent. 
Another benefit of using a microdisk is that the internal power circulation avoids the need for a reflector. A complete simulation model taking all involved materials into account is established to study the promise of microdisk structures for photodetectors using the finite-difference time-domain (FDTD) method. Based on the current preliminary data, directions to further improve the device performance are also discussed.
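The FDTD method referred to here marches Maxwell's equations forward in time on a staggered (Yee) grid. The sketch below is a minimal 1-D version in normalized units showing the leapfrog E/H update at the heart of the method; a device-level model would be 3-D, with real material parameters, a Ge absorption model, and absorbing boundaries.

```python
import numpy as np

nx, nt = 200, 300      # grid cells and time steps
ez = np.zeros(nx)      # electric field on integer grid points
hy = np.zeros(nx)      # magnetic field on half-shifted points
courant = 0.5          # normalized dt*c/dx, below the stability limit of 1

for t in range(nt):
    # Update H from the spatial derivative (curl) of E
    hy[:-1] += courant * (ez[1:] - ez[:-1])
    # Update E from the spatial derivative (curl) of H
    ez[1:] += courant * (hy[1:] - hy[:-1])
    # Soft Gaussian pulse source injected at the grid centre
    ez[nx // 2] += np.exp(-0.5 * ((t - 30.0) / 10.0) ** 2)
```

The pulse launched at the centre propagates outward and reflects off the untreated (perfect-conductor-like) grid edges; in a production solver those edges would carry perfectly matched layers instead.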

Keywords: integrated optical devices, silicon photonics, micro-resonator, photodetectors

Procedia PDF Downloads 399
700 Application of the Material Point Method as a New Fast Simulation Technique for Textile Composites Forming and Material Handling

Authors: Amir Nazemi, Milad Ramezankhani, Marian Kӧrber, Abbas S. Milani

Abstract:

The excellent strength-to-weight ratio of woven fabric composites, along with their high formability, is one of the primary design parameters driving their increased use in modern manufacturing processes, including those in aerospace and automotive. However, for emerging automated preform processes under the smart manufacturing paradigm, the complex geometries of finished components continue to pose several challenges for designers coping with manufacturing defects on site. Wrinkling, for example, is a common defect occurring during the forming process and handling of semi-finished textile composites. One of the main reasons for this defect is the weak bending stiffness of fibers in the unconsolidated state, causing excessive relative motion between them. Further challenges arise in the automated handling of large-area fiber blanks with specialized gripper systems. For fabric composite forming simulations, the finite element (FE) method is a longstanding tool used for the prediction and mitigation of manufacturing defects. Such simulations are meant not only to predict the onset, growth, and shape of wrinkles but also to determine the processing conditions that yield optimized positioning of the fibers upon forming (or robot handling, in the case of automated processes). However, the need for small time steps in explicit FE codes, numerical instabilities, and large computational times are notable drawbacks of current FE tools, hindering their extensive use as fast yet efficient digital twins in industry. This paper presents a novel woven fabric simulation technique based on the material point method (MPM), which enables the use of much larger time steps with fewer numerical instabilities, and hence the ability to run significantly faster and more efficient simulations for fabric material handling and forming processes. 
This method can therefore support the development of automated fiber handling and preform processes by calculating the physical interactions between the MPM fiber models and rigid tool components, enabling designers to virtually develop, test, and optimize their processes using either algorithmic or machine learning applications. As a preliminary case study, the forming of a hemispherical plain weave is shown, and the results are compared to FE simulations as well as experiments.
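To make the MPM idea concrete, the sketch below shows the skeleton of one MPM step in 1-D with linear grid shape functions: particles carry mass and velocity, a background grid performs the momentum update (gravity only here; a fabric model would add fibre stress terms on the grid), and the grid is discarded each step, which is what avoids mesh distortion. All parameters are hypothetical and the example is deliberately stripped down.

```python
import numpy as np

n_nodes, dx, dt, g = 11, 0.1, 1e-3, -9.81
xp = np.array([0.32, 0.35, 0.41, 0.47])   # particle positions (m)
vp = np.zeros_like(xp)                    # particle velocities (m/s)
mp = np.full_like(xp, 0.01)               # particle masses (kg)

def mpm_step(xp, vp):
    # Particle-to-grid transfer of mass and momentum (linear weights)
    m_grid = np.zeros(n_nodes)
    mv_grid = np.zeros(n_nodes)
    base = np.floor(xp / dx).astype(int)
    frac = xp / dx - base
    for shift, w in ((0, 1.0 - frac), (1, frac)):
        np.add.at(m_grid, base + shift, w * mp)
        np.add.at(mv_grid, base + shift, w * mp * vp)
    # Grid momentum update (gravity only; internal stresses would act here)
    v_grid = np.divide(mv_grid, m_grid, out=np.zeros(n_nodes), where=m_grid > 0)
    v_grid += dt * g
    # Grid-to-particle gather (PIC transfer) and position update
    vp_new = np.zeros_like(vp)
    for shift, w in ((0, 1.0 - frac), (1, frac)):
        vp_new += w * v_grid[base + shift]
    return xp + dt * vp_new, vp_new

for _ in range(100):
    xp, vp = mpm_step(xp, vp)
```

Because the state lives on the particles and the grid is regenerated each step, large deformations of the material never tangle a mesh, which is the property the paper exploits for fabric handling.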

Keywords: material point method, woven fabric composites, forming, material handling

Procedia PDF Downloads 173
699 Finite Element Analysis of Glass Facades Supported by Pre-Tensioned Cable Trusses

Authors: Khair Al-Deen Bsisu, Osama Mahmoud Abuzeid

Abstract:

Significant technological advances have been achieved in the design and construction of steel and glass buildings in the last two decades. The metal frame supporting the glass has been replaced by more sophisticated technological solutions, for example, point-fixed glazing systems. Minimization of the visual mass has been pushed far through the evolution of glass production technology, a better understanding of the structural potential of glass itself, the technological development of bolted fixings, the introduction of glazing support attachments for glass suspension systems, and the use of cables for structural stabilization, which reduces the amount of metal used to a minimum. The variability of tension structure solutions, together with the difficulties related to geometric and material nonlinear behavior, usually rules out analytical solutions, leaving numerical analysis as the only general approach to the design and analysis of tension structures. With their low stiffness, light weight, and small damping, tension structures are strongly geometrically nonlinear. Indeed, analysis of a cable truss is not only one of the most difficult nonlinear analyses, because the equilibrium path may include rigid-body modes, but also a time-consuming procedure. Nonlinear theory allowing for large deflections is used. The flexibility of the supporting members was observed to influence the stresses in the pane considerably in some cases. No other class of architectural structural system is as dependent on the use of digital computers as tensile structures. Besides complexity, the design and analysis of tension structures present a series of specificities, which usually lead to the use of special-purpose programs instead of general-purpose programs (GPPs), such as ANSYS. In a special-purpose program, part of the design know-how is embedded in the program routines. 
It is very probable that this type of program will be the choice of the end user in design offices. GPPs, on the other hand, offer a range of analysis types and modeling options; moreover, traditional GPPs are constantly tested by a large number of users and are updated according to their actual demands. This work discusses the use of ANSYS for the analysis and design of tension structures, such as cable truss structures under wind and gravity loadings. A model describing the glass panels working in coordination with the cable truss was proposed, and on its basis an FEM model of the combined system was established.
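The geometrically nonlinear, large-deflection iteration that such an analysis performs can be illustrated on the smallest possible example: a shallow two-bar (von Mises) truss solved by Newton-Raphson with a numerically evaluated tangent stiffness. The geometry, stiffness, and load below are hypothetical, and a real cable truss adds pre-tension and many degrees of freedom.

```python
import math

# Hypothetical shallow truss: supports at (+-b, 0), loaded apex at height h;
# u is the downward apex displacement under vertical load P.
b, h, EA = 1.0, 0.1, 1000.0
L0 = math.hypot(b, h)                 # initial bar length

def resisting_force(u):
    """Vertical internal force at the apex, exact large-deflection kinematics."""
    L = math.hypot(b, h - u)          # current bar length
    N = EA * (L - L0) / L0            # axial force (negative in compression)
    return -2.0 * N * (h - u) / L     # upward resultant of the two bars

def newton_solve(P, u=0.0, tol=1e-8, max_iter=100):
    """Newton-Raphson with a central-difference tangent stiffness."""
    for _ in range(max_iter):
        r = resisting_force(u) - P    # out-of-balance (residual) force
        if abs(r) < tol:
            return u
        du = 1e-7
        k_t = (resisting_force(u + du) - resisting_force(u - du)) / (2 * du)
        u -= r / k_t                  # Newton correction
    raise RuntimeError("Newton iteration did not converge")

u = newton_solve(P=0.2)               # load well below the snap-through limit
```

The equilibrium path of even this one-degree-of-freedom system has a limit point (snap-through), which is why tension structure analyses need load stepping or arc-length control rather than a single linear solve.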

Keywords: glass construction material, facades, finite element, pre-tensioned cable truss

Procedia PDF Downloads 266
698 Application of Micro-Tunneling Technique to Rectify Tilted Structures Constructed on Cohesive Soil

Authors: Yasser R. Tawfic, Mohamed A. Eid

Abstract:

Foundation differential settlement and tilting of the supported structure is an occasionally encountered engineering problem. It may be caused by overloading, changes in ground soil properties, or unsupported nearby excavations. Engineering thinking points directly toward the logical solution of uplifting the settled side. This can be achieved with deep foundation elements such as micro-piles and macro-piles™, jacked piers and helical piers, jet-grouted soil-crete columns, compaction grout columns, cement or chemical grouting, or traditional pit underpinning with concrete and mortar. Although some of these techniques offer economic, fast, and low-noise solutions, many of them are quite the contrary. For tilted structures with limited inclination, it may be much easier to cause a balancing settlement on the less-settled side, applied carefully at a proper rate. This principle was applied in the stabilization of the Leaning Tower of Pisa by soil extraction from the ground surface. In this research, the authors introduce a new solution with a different point of view: micro-tunneling is presented as an intentional cause of ground deformation. In general, micro-tunneling is expected to induce only limited ground deformations; the researchers therefore propose applying the technique to form small unsupported holes in the ground to produce the target deformations. This shall be done in four phases: • application of one or more micro-tunnels, sized according to the existing differential settlement value, under the raised side of the tilted structure; • for each individual tunnel, pulling the lining out slowly from both sides (from the jacking and receiving shafts); • if required, according to calculations and site records, application of an additional surface load on the raised foundation side; • finally, strengthening soil grouting for stabilization after adjustment. 
A finite-element-based numerical model is presented to simulate the proposed construction phases for different tunneling positions and tunnel groups. For each case, the surface settlements are calculated and the induced plasticity points are checked. These results show the impact of the suggested procedure on the tilted structure and its feasibility. Comparison of the results also shows the importance of position selection and of the gradual effect of the tunnel group. Thus, a new engineering solution is presented to one of the challenges of structural and geotechnical engineering.
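The classical first estimate of tunnelling-induced surface settlement, against which FE results of this kind are often sanity-checked, is a Gaussian ("Peck-type") trough. A minimal sketch with hypothetical parameters, not values from the paper's model:

```python
import math

def surface_settlement(x, s_max, i):
    """Settlement at horizontal offset x from the tunnel axis.
    s_max: maximum settlement above the axis (same units as the result);
    i: trough-width parameter, the offset of the inflection point (m)."""
    return s_max * math.exp(-x ** 2 / (2.0 * i ** 2))

# Hypothetical trough: 12 mm maximum settlement, 4 m trough width
s_max, i = 0.012, 4.0
profile = [surface_settlement(x, s_max, i) for x in range(-10, 11, 2)]
```

For the proposed balancing technique, such a profile (mirrored under the raised side) gives a quick estimate of how far from the foundation edge a micro-tunnel must run to produce the target differential settlement.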

Keywords: differential settlement, micro-tunneling, soil-structure interaction, tilted structures

Procedia PDF Downloads 197
697 Heat Vulnerability Index (HVI) Mapping in Extreme Heat Days Coupled with Air Pollution Using Principal Component Analysis (PCA) Technique: A Case Study of Amiens, France

Authors: Aiman Mazhar Qureshi, Ahmed Rachid

Abstract:

Extreme heat events are an emerging human environmental health concern in dense urban areas due to anthropogenic activities. High-spatial- and temporal-resolution heat maps are important for urban heat adaptation and mitigation, helping to indicate the hotspots that require the attention of city planners. The Heat Vulnerability Index (HVI) is an important approach used by decision-makers and urban planners to identify heat-vulnerable communities and areas that require heat stress mitigation strategies. Amiens is a medium-sized French city whose average temperature has been increasing since the year 2000 by +1°C. Extreme heat events were recorded in the month of July for three consecutive years, 2018, 2019, and 2020, and poor air quality, especially ground-level ozone, was observed mainly during the same hot periods. In this study, we evaluated the HVI in Amiens during the extreme heat days recorded in those three years. The Principal Component Analysis (PCA) technique was used for fine-scale vulnerability mapping. The main data considered to develop the HVI model were (a) socio-economic and demographic data; (b) air pollution; (c) land use and cover; (d) elderly heat illness; (e) social vulnerability; and (f) remote sensing data (land surface temperature (LST), mean elevation, NDVI, and NDWI). The output maps identified the hot zones through comprehensive GIS analysis. The resulting map shows that high HVI exists in three typical areas: (1) where population density is quite high and vegetation cover is small; (2) artificial surfaces (built-up areas); and (3) industrial zones that release thermal energy and ground-level ozone; areas with low HVI are located in natural landscapes such as rivers and grasslands. The study also illustrates the system theory with a causal diagram after data analysis, where anthropogenic activities and air pollution appear in correspondence with extreme heat events in the city. 
Our suggested index can be a useful tool to guide urban planners, municipalities, decision-makers, and public health professionals in targeting areas at high risk of extreme heat and air pollution for future intervention, adaptation, and mitigation measures.
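A common way to turn PCA into a composite vulnerability index is to standardize the indicators, retain the leading components, and weight each component by its explained variance before rescaling to a mappable 0-1 range. The sketch below uses synthetic data; the indicator names are hypothetical stand-ins for the study's socio-economic, pollution, land-cover, and remote-sensing layers.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# 500 grid cells x 5 indicators (e.g. population density, % elderly,
# land surface temperature, NDVI, ground-level ozone)
X = rng.normal(size=(500, 5))
X_std = StandardScaler().fit_transform(X)   # z-score each indicator

pca = PCA(n_components=3)
scores = pca.fit_transform(X_std)
# Composite index: components weighted by their explained variance ratios
hvi = scores @ pca.explained_variance_ratio_
# Rescale to [0, 1] for choropleth mapping
hvi = (hvi - hvi.min()) / (hvi.max() - hvi.min())
```

In a GIS workflow, each cell's `hvi` value would then be joined back to the cell geometry and classified (e.g. by quantiles) to draw the hotspot map.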

Keywords: heat vulnerability index, heat mapping, heat health-illness, remote sensing, urban heat mitigation

Procedia PDF Downloads 135
696 High Performance Computing Enhancement of Agent-Based Economic Models

Authors: Amit Gill, Lalith Wijerathne, Sebastian Poledna

Abstract:

This research presents the details of a high-performance computing (HPC) extension of agent-based economic models (ABEMs) to simulate hundreds of millions of heterogeneous agents. ABEMs study the economy as a dynamic system of interacting heterogeneous agents and are gaining popularity as an alternative to standard economic models. Over the last decade, ABEMs have been increasingly applied to various problems related to monetary policy, bank regulation, and so on. When it comes to predicting the effects of local economic disruptions, such as major disasters, changes in policies, or exogenous shocks, on the economy of a country or region, it is pertinent to study how the disruptions cascade through every single economic entity, affecting its decisions and interactions, and eventually affect the macroeconomic parameters. However, such simulations with hundreds of millions of agents are hindered by the lack of HPC-enhanced ABEMs. To address this, a scalable distributed-memory parallel (DMP) implementation of ABEMs has been developed using the Message Passing Interface (MPI). A balanced distribution of computational load among MPI processes (i.e., CPU cores) of computer clusters, while taking all interactions among agents into account, is a major challenge for scalable DMP implementations. Economic agents interact on several random graphs, some of which are centralized (e.g., credit networks) whereas others are dense with random links (e.g., consumption markets). The agents are partitioned into mutually exclusive subsets based on a representative employer-employee interaction graph, while the remaining graphs are made available at a minimum communication cost. To minimize the number of communications among MPI processes, real-life solutions, such as the introduction of recruitment agencies, sales outlets, local banks, and local branches of government in each MPI process, are adopted. 
Efficient communication among MPI processes is achieved by combining MPI derived data types with the new features of the latest MPI functions. Most of the communications are overlapped with computations, thereby significantly reducing the communication overhead. The current implementation is capable of simulating a small open economy: as an example, a single time step of a 1:1-scale model of Austria (about 9 million inhabitants and 600,000 businesses) can be simulated in 15 seconds. The implementation is being further enhanced to simulate a 1:1 model of the Eurozone (322 million agents).
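The employer-employee partitioning idea can be illustrated without any actual MPI: co-locating workers with their employer keeps every employment interaction inside one partition, while dense random market graphs still produce cross-partition edges, which are exactly the interactions a DMP implementation must communicate or localize (e.g. via per-process sales outlets). All sizes and the round-robin assignment below are hypothetical.

```python
import random

random.seed(1)
n_procs, n_firms, n_workers = 4, 20, 200

# Each worker is employed by exactly one firm
employer_of = {w: random.randrange(n_firms) for w in range(n_workers)}
# Firms assigned to processes round-robin; workers follow their employer
part_of_firm = {f: f % n_procs for f in range(n_firms)}
part_of_worker = {w: part_of_firm[employer_of[w]] for w in range(n_workers)}

# Employer-employee edges never cross a partition boundary by construction
local_employment = sum(part_of_worker[w] == part_of_firm[employer_of[w]]
                       for w in range(n_workers))

# A dense random consumption-market graph does cross partitions; those edges
# are the ones that would require inter-process communication
market_edges = [(random.randrange(n_workers), random.randrange(n_firms))
                for _ in range(1000)]
cross_edges = sum(part_of_worker[w] != part_of_firm[f] for w, f in market_edges)
```

Counting `cross_edges` for candidate partitionings is the essence of the load-balancing trade-off the abstract describes: the employment graph fixes the partition, and the remaining graphs determine the residual communication volume.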

Keywords: agent-based economic model, high performance computing, MPI-communication, MPI-process

Procedia PDF Downloads 117
695 Labor Welfare and Social Security

Authors: Shoaib Alvi

Abstract:

Mahatma Gandhi said, "Man becomes great exactly in the degree in which he works for the welfare of his fellow-men." Labor welfare is an important facet of industrial relations. With the growth of industrialization, mechanization, and computerization, labor welfare measures have received a fillip. The author believes that labor welfare includes the provision of various facilities and amenities in and around the workplace for the better life of the workers; labor welfare is thus one of the major determinants of industrial relations, comprising all human efforts at the workplace directed at a better life for the worker. The social and economic aspects of workers' lives have a direct influence on the social and economic development of the nation. The author holds that there can be multiple objectives in having a labor welfare programme: concern for improving the lot of the workers, a philosophy of humanitarianism or internal social responsibility, and a feeling of concern and caring expressed by providing some of life's basic amenities besides the basic pay packet. Such caring is supposed to build a sense of loyalty on the part of the employee towards the organization. The author regards social security as the security that the State furnishes against risks which an individual of small means cannot, today, stand up to by himself, even in private combination with his fellows. Social security is one of the pillars on which the structure of a welfare state rests, and it constitutes the hard core of social policy in most countries. It is through social security measures that the state attempts to maintain every citizen at a certain prescribed level below which no one is allowed to fall. According to the author, social assistance is a method by which benefits are given by the government, out of its own resources, to needy persons who fulfil the prescribed conditions. 
The author has analyzed and studied the relationship between labor welfare and social security, and has also studied various international conventions on the provision of social security by international authorities such as the United Nations, the International Labour Organization, and the European Union. The author has also studied and analyzed the concepts of labor welfare and the social security schemes of many countries around the globe, e.g., social security in Australia, social security in Switzerland, Social Security (United States), the Mexican Social Security Institute, welfare in Germany, and the social security schemes of India for labor welfare in both the organized and unorganized sectors. This research paper studies the conceptual framework of labor welfare. According to the author, labor is highly perishable and needs constant welfare measures for its upgradation and performance in this field. Finally, the author has studied the role of trade unions, labor welfare unions, and other institutions working for labor welfare; identified the problems these unions and labor welfare bodies face; tried to find solutions to these problems; and analyzed the various steps taken by the governments of various countries around the globe.

Keywords: labor welfare, internal social responsibility, social security, international conventions

Procedia PDF Downloads 557
694 Payload Bay Berthing of an Underwater Vehicle With Vertically Actuated Thrusters

Authors: Zachary Cooper-Baldock, Paulo E. Santos, Russell S. A. Brinkworth, Karl Sammut

Abstract:

In recent years, large unmanned underwater vehicles such as the Boeing Voyager and Anduril Ghost Shark have been developed. These vessels can be structured to contain onboard internal payload bays, which can serve a variety of purposes, including the launch and recovery (LAR) of smaller underwater vehicles. The LAR of smaller vessels is extremely important, as it enables transportation over greater distances, increased time on station, data transmission, and operational safety. The larger vessel and its payload bay structure complicate the LAR of UUVs in contrast to static docks affixed to the seafloor, as they actively impact the local flow field. These flow field impacts require analysis to determine whether UUVs can be safely launched and recovered inside the motherships. This research seeks to determine the hydrodynamic forces exerted on a vertically over-actuated, small, unmanned underwater vehicle (OUUV) during an internal LAR manoeuvre and compare this to an under-actuated vessel (UUUV). In this manoeuvre, the OUUV is navigated through the stern wake region of the larger vessel to a set point within the internal payload bay. The manoeuvre is simulated using ANSYS Fluent computational fluid dynamics models, covering the entire recovery of the OUUV and UUUV. The analysis of the OUUV is compared against the UUUV to determine the differences in the exerted forces. Of particular interest are the drag, pressure, turbulence, and flow field effects exerted as the OUUV is driven inside the payload bay of the larger vessel. The hydrodynamic forces and flow field disturbances are used to determine the feasibility of making such an approach. From the simulations, it was determined that there were no significant detrimental physical forces, particularly with regard to turbulence. The flow field effects exerted by the OUUV, however, are significant.
The vertical thrusters generate significant wake structures, but their orientation ensures the wake effects are exerted below the UUV, minimising their impact. It was also seen that the OUUV experiences higher drag forces than the UUUV, which corresponds to an increased energy expenditure. This investigation found no key indicators that recovery via a mothership payload bay is infeasible. The turbulence, drag, and pressure phenomena were of a similar magnitude to those of existing static and towed dock structures.

Keywords: underwater vehicles, submarine, autonomous underwater vehicles, AUV, computational fluid dynamics, flow fields, pressure, turbulence, drag

Procedia PDF Downloads 69
693 Recycling of Sintered NdFeB Magnet Waste Via Oxidative Roasting and Selective Leaching

Authors: W. Kritsarikan, T. Patcharawit, T. Yingnakorn, S. Khumkoa

Abstract:

Neodymium-iron-boron (NdFeB) magnets, classified as high-power magnets, are widely used in various applications such as electrical and medical devices and account for 13.5% of the permanent magnet market. Their typical composition of 29–32% Nd, 64.2–68.5% Fe, and 1–1.2% B contains a significant amount of rare earth metals, which will be subject to shortages in the future. Domestic NdFeB magnet waste recycling should therefore be developed in order to reduce social and environmental impacts and move toward a circular economy. Most research works focus on recycling magnet wastes, both from the manufacturing process and from end of life. Each type of waste has different characteristics and compositions, which directly affect recycling efficiency as well as the types and purity of the recyclable products. This research therefore focused on the recycling of manufacturing NdFeB magnet waste obtained from the sintering stage of magnet production; the waste contained 23.6% Nd, 60.3% Fe, and 0.261% B. The aim was to recover high-purity neodymium oxide (Nd₂O₃) using a hybrid metallurgical process via oxidative roasting and selective leaching techniques. The sintered NdFeB waste was first ground to under 70 mesh prior to oxidative roasting at 550–800 °C to enable selective leaching of neodymium in the subsequent leaching step using 2.5 M H₂SO₄ over 24 h. The leachate was then subjected to drying and roasting at 700–800 °C prior to precipitation by oxalic acid and calcination to obtain neodymium oxide as the recycling product. According to XRD analyses, it was found that increasing the oxidative roasting temperature led to an increasing amount of hematite (Fe₂O₃) as the main composition, with a smaller amount of magnetite (Fe₃O₄) found. Peaks of neodymium oxide (Nd₂O₃) were also observed in a lesser amount. Furthermore, neodymium iron oxide (NdFeO₃) was present, and its XRD peaks were more pronounced at higher oxidative roasting temperatures.
After acid leaching and drying, iron sulfate and neodymium sulfate were mainly obtained. After the roasting step prior to water leaching, iron sulfate was converted to hematite as the main compound, while neodymium sulfate remained in the mixture. However, a small amount of magnetite was still detected by XRD. The higher roasting temperature of 800 °C resulted in a greater Fe₂O₃ to Nd₂(SO₄)₃ ratio, indicating a more effective roasting temperature. Iron oxides were subsequently water leached and filtered out, while the solution contained mainly neodymium sulfate. Therefore, a low oxidative roasting temperature not exceeding 600 °C, followed by acid leaching and roasting at 800 °C, gave the optimum condition for the further steps of precipitation and calcination to finally achieve neodymium oxide.

Keywords: NdFeB magnet waste, oxidative roasting, recycling, selective leaching

Procedia PDF Downloads 174
692 A Proposal for an Excessivist Social Welfare Ordering

Authors: V. De Sandi

Abstract:

In this paper, we characterize a class of rank-weighted social welfare orderings that we call "Excessivist." The Excessivist Social Welfare Ordering (eSWO) judges incomes above a fixed threshold θ as detrimental to society. This requires the identification of a richness or affluence line; we employ a fixed, exogenous line of excess. We define an eSWF in the form of a weighted sum of individuals' incomes. This requires introducing n+1 vectors of weights, one for each possible number of individuals below the threshold. To do this, the paper introduces a slight modification of the class of rank-weighted social welfare functions: in our excessivist social welfare ordering, we allow the weights to be both positive (for individuals below the line) and negative (for individuals above it). We then introduce ethical concerns through an axiomatic approach. The following axioms are required: continuity above and below the threshold (Ca, Cb), anonymity (A), absolute aversion to excessive richness (AER), Pigou-Dalton positive-weights-preserving transfer (PDwpT), sign-rank-preserving full comparability (SwpFC), and strong Pareto below the threshold (SPb). Ca and Cb require that small changes in two income distributions above and below θ do not lead to changes in their ordering. AER states that if two distributions are identical in every respect except for one individual above the threshold, who is richer in the first, then the second should be preferred by society. This means that we do not care about the waste of resources above the threshold; the priority is the reduction of excessive income. According to PDwpT, a transfer from a better-off individual to a worse-off individual, regardless of their positions relative to the threshold and without reversing their ranks, leads to an improved distribution if the number of individuals below the threshold is the same after the transfer or has increased.
SPb holds only for individuals below the threshold. The weakening of strong Pareto and our ethics need to be justified; we support them through the notion of comparative egalitarianism and the view of income as a source of power. SwpFC is necessary to ensure that, following a positive affine transformation, an individual does not become excessively rich in only one distribution, thereby reversing the ordering of the distributions. Given the axioms above, we can characterize the class of eSWOs, obtaining the following result through a proof by contradiction and exhaustion: Theorem 1. A social welfare ordering satisfies the axioms of continuity above and below the threshold, anonymity, sign-rank-preserving full comparability, absolute aversion to excessive richness, Pigou-Dalton positive-weights-preserving transfer, and strong Pareto below the threshold if and only if it is an Excessivist social welfare ordering. A discussion of the implementation of different threshold lines, reviewing the primary contributions in this field, follows. What the commonly implemented social welfare functions have overlooked is the concern for extreme richness at the top. The characterization of the Excessivist Social Welfare Ordering, given the axioms above, aims to fill this gap.
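As a minimal illustration of the idea (not the authors' exact construction), an eSWF with a fixed exogenous threshold θ can be sketched as a rank-weighted sum in which incomes at or below the threshold receive a positive weight and incomes above it a negative one, so that excessive richness lowers social welfare. The weight values here are hypothetical:

```python
def excessivist_swf(incomes, theta, w_below=1.0, w_above=-1.0):
    """Hypothetical Excessivist social welfare value.

    Incomes at or below the threshold theta contribute with a positive
    weight; incomes above it contribute with a negative weight, so
    reducing excessive income raises social welfare (AER axiom).
    """
    ranked = sorted(incomes)  # rank-weighted: order incomes first
    return sum((w_below if x <= theta else w_above) * x for x in ranked)

# Removing income from an individual above the threshold raises welfare,
# reflecting absolute aversion to excessive richness:
low_excess = excessivist_swf([10, 20, 150], theta=100)   # 10 + 20 - 150 = -120
high_excess = excessivist_swf([10, 20, 200], theta=100)  # 10 + 20 - 200 = -170
assert low_excess > high_excess
```

In the paper's full construction the weight vectors additionally depend on the number of individuals below the threshold; this sketch fixes a single pair of weights for brevity.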

Keywords: comparative egalitarianism, excess income, inequality aversion, social welfare ordering

Procedia PDF Downloads 47
691 Degradation Kinetics of Cardiovascular Implants Employing Full Blood and Extra-Corporeal Circulation Principles: Mimicking the Human Circulation In vitro

Authors: Sara R. Knigge, Sugat R. Tuladhar, Hans-Klaus Höffler, Tobias Schilling, Tim Kaufeld, Axel Haverich

Abstract:

Tissue-engineered (TE) heart valves based on degradable electrospun fiber scaffolds represent a promising approach to overcome the known limitations of mechanical or biological prostheses. However, the mechanical stress in the high-pressure system of the human circulation is a severe challenge for the delicate materials. Hence, the prediction of the scaffolds' in vivo degradation kinetics must be as accurate as possible to prevent fatal events in future animal or even clinical trials. This study therefore investigates whether long-term testing in full blood provides more meaningful results regarding the degradation behavior than conventional tests in simulated body fluid (SBF) or phosphate-buffered saline (PBS). Fiber mats were produced from a polycaprolactone (PCL)/tetrafluoroethylene solution by electrospinning. The morphology of the fiber mats was characterized via scanning electron microscopy (SEM). A maximally physiological degradation environment was established utilizing a test set-up with porcine full blood. The set-up consists of a reaction vessel, an oxygenator unit, and a roller pump. The blood parameters (pO2, pCO2, temperature, and pH) were monitored with an online test system. All tests were also carried out in the test circuit with SBF and PBS to compare conventional degradation media with the novel full blood setting. The polymer's degradation is quantified by SEM picture analysis, differential scanning calorimetry (DSC), and Raman spectroscopy. Tensile and cyclic loading tests were performed to evaluate the mechanical integrity of the scaffold. Preliminary results indicate that PCL degraded more slowly in full blood than in SBF and PBS. The uptake of water is more pronounced in the full blood group. PCL also preserved its mechanical integrity longer when degraded in full blood. Protein absorption increased during the degradation process. Red blood cells, platelets, and their aggregates adhered to the PCL.
Presumably, the degradation led to a more hydrophilic polymeric surface which promoted the protein adsorption and the blood cell adhesion. Testing degradable implants in full blood allows for developing more reliable scaffold materials in the future. Material tests in small and large animal trials thereby can be focused on testing candidates that have proven to function well in an in-vivo-like setting.

Keywords: electrospun scaffold, full blood degradation test, long-term polymer degradation, tissue-engineered aortic heart valve

Procedia PDF Downloads 135
690 Investigation of the Carbon Dots Optical Properties Using Laser Scanning Confocal Microscopy and Time-Resolved Fluorescence Microscopy

Authors: M. S. Stepanova, V. V. Zakharov, P. D. Khavlyuk, I. D. Skurlov, A. Y. Dubovik, A. L. Rogach

Abstract:

Carbon dots are small carbon-based spherical nanoparticles, typically less than 10 nm in size, that can be modified by surface passivation and heteroatom doping. The light-absorbing ability of carbon dots has attracted significant attention for bioimaging and fluorescence sensing applications owing to advantages such as tunable fluorescence emission, photo- and thermostability, and low toxicity. In this study, carbon dots were synthesized by the solvothermal method from citric acid and ethylenediamine dissolved in water. The solution was heated for 5 hours at 200 °C and then cooled down to room temperature. Carbon dot films were obtained by evaporation from a high-concentration aqueous solution. Exposing part of a carbon dot film to a 405 nm laser increased both the luminescence intensity and the light transmission, as detected using a confocal laser scanning microscope (LSM 710, Zeiss). A blueshift of up to 35 nm of the luminescence spectrum is observed, while the luminescence intensity increases more than twofold; the exact value of the shift depends on the duration of the laser exposure. This shift can be caused by the modification of surface groups on the carbon dots, which are responsible for the long-wavelength luminescence. In addition, a shift of the absorption peak by 10 nm and a decrease in the optical density at a wavelength of 350 nm are detected; this band is attributed to the absorption of surface groups. The sample was also studied with a time-resolved confocal fluorescence microscope (MicroTime 100, PicoQuant), which made it possible to record a time-resolved photoluminescence image and construct emission decays of the laser-exposed and non-exposed areas. A pulsed laser with a 5 MHz repetition rate was used as the photoluminescence excitation source. The photoluminescence decay was approximated by two exponential components.
In the laser-exposed area, the amplitude of the first lifetime component (A1) is twice as large as before, with an increased τ1, while the amplitude of the second lifetime component (A2) decreases. These changes evidence a modification of the surface groups of the carbon dots. The detected effect can be used to create thermostable fluorescent marks whose physical size is bounded by the diffraction limit of the optics used for exposure (~200–300 nm), to improve the optical properties of carbon dots, or in the field of optical encryption. Acknowledgements: This work was supported by the Ministry of Science and Higher Education of the Russian Federation, goszadanie no. 2019-1080, and financially supported by the Government of the Russian Federation, Grant 08-08.
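The two-component decay described above has the standard biexponential form I(t) = A1·exp(−t/τ1) + A2·exp(−t/τ2). A fit of this kind can be sketched with common scientific tools; the data, parameters, and noise level below are synthetic and purely illustrative, not the measured values:

```python
import numpy as np
from scipy.optimize import curve_fit

def biexp(t, a1, tau1, a2, tau2):
    """Biexponential photoluminescence decay model."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# Synthetic decay curve with illustrative parameters (a1=0.7, tau1=3 ns,
# a2=0.3, tau2=12 ns) plus a little Gaussian noise.
t = np.linspace(0.0, 50.0, 500)  # time axis, ns
rng = np.random.default_rng(0)
signal = biexp(t, 0.7, 3.0, 0.3, 12.0) + rng.normal(0.0, 0.005, t.size)

# Least-squares fit; p0 is a rough initial guess for (a1, tau1, a2, tau2).
popt, _ = curve_fit(biexp, t, signal, p0=(0.5, 2.0, 0.5, 10.0))
a1, tau1, a2, tau2 = popt
```

Comparing the fitted (A1, τ1, A2, τ2) between exposed and non-exposed regions is then a direct way to quantify the change reported in the abstract.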

Keywords: carbon dots, photoactivation, optical properties, photoluminescence and absorption spectra

Procedia PDF Downloads 156
689 Studies on the Histomorphometry of the Digestive Tract and Associated Digestive Glands in Ostrich (Struthio camelus) with Gender and Progressing Age in Pakistan

Authors: Zaima Umar, Anas S. Qureshi, Adeel Sarfraz, Saqib Umar, Talha Umar, Muhammad Usman

Abstract:

The ostrich has been a good source of food and income for people across the world. To gain a better understanding of its health and health-related problems, knowledge of its digestive system is of utmost importance. The present study was conducted to determine the morphological and histometrical variations in the digestive system and associated glands of the ostrich (Struthio camelus) with regard to gender and progressive age. A total of 40 apparently healthy ostriches of both genders and two progressive age groups, young (less than two years; group A) and adult (2–15 years; group B), in equal numbers, were used in this study. Digestive organs including the tongue, esophagus, proventriculus, gizzard, and small and large intestines, and associated glands such as the liver and pancreas, were collected immediately after slaughtering the birds. The organs of the digestive system and associated glands of each group were studied grossly and histologically. Grossly, the colour, shape, consistency, weight, and various dimensions (length, width, and circumference) of the organs of the digestive tract and associated glands were recorded. The means (± SEM) of all gross anatomical parameters in group A were significantly (p ≤ 0.01) different from those of group B. For microscopic studies, 1–2 cm tissue samples of the organs of the digestive system and associated glands were taken. The tissue was marked and fixed in neutral buffered formaldehyde solution for histological studies. After fixation, sections of 5–7 µm were cut and stained with haematoxylin and eosin. All the layers (epithelium, lamina propria, lamina muscularis, submucosa, and tunica muscularis) were measured (µm) with the help of the automated computer software Image J®. The results of this study provide valuable information on the gender- and age-related histological and histometrical variations in the digestive organs of the ostrich (Struthio camelus).
The microscopic studies of different parts of the digestive system revealed highly significant differences (p ≤ 0.01) between the two groups. The esophagus was lined by non-keratinized stratified squamous epithelium. The duodenum, jejunum, and ileum showed similar histological structures. Statistical analysis revealed a significant (p ≤ 0.05) increase in the thickness of the different tunics of the gastrointestinal tract in adult birds (up to 15 years) compared with young ones (less than two years). It can therefore be concluded that there is gradual but consistent growth in the observed digestive organs, mimicking that of other poultry species, which may be helpful in determining the growth pattern in this bird. However, there is a need to record the changes at closer time intervals.

Keywords: ostrich, digestive system, histomorphometry, grossly

Procedia PDF Downloads 136
688 Monoallelic and Biallelic Deletions of 13q14 in a Group of 36 CLL Patients Investigated by CGH Haematological Cancer and SNP Array (8x60K)

Authors: B. Grygalewicz, R. Woroniecka, J. Rygier, K. Borkowska, A. Labak, B. Nowakowska, B. Pienkowska-Grela

Abstract:

Introduction: Chronic lymphocytic leukemia (CLL) is the most common form of adult leukemia in the Western world. Hemizygous and/or homozygous loss at 13q14 occurs in more than half of cases and constitutes the most frequent chromosomal abnormality in CLL. It is believed that 13q14 deletions play a role in CLL pathogenesis. Two microRNA genes, miR-15a and miR-16-1, are targets of 13q14 deletions and play a tumor suppressor role by targeting the antiapoptotic BCL2 gene. Deletion size, as a single change detected in FISH analysis, has prognostic significance. Patients with small deletions, without RB1 gene involvement, have the best prognosis and the longest overall survival time (OS 133 months). In patients with a bigger deletion region containing the RB1 gene, the prognosis drops to intermediate, as in patients with a normal karyotype and no changes in FISH, with an overall survival of 111 months. Aim: Precise delineation of 13q14 deletion regions in two groups of CLL patients, with mono- and biallelic deletions, and qualification of their prognostic significance. Methods: Detection of 13q14 deletions was performed by FISH analysis with a CLL probe panel (D13S319, LAMP1, TP53, ATM, CEP-12). Accurate deletion size detection was performed by CGH Haematological Cancer and SNP array (8x60K). Results: Our investigated group of CLL patients with the 13q14 deletion detected by FISH analysis comprised two groups: 18 patients with monoallelic deletions and 18 patients with biallelic deletions. In FISH analysis, the range of cells with the deletion was 43% to 97% in the monoallelic group, while in the biallelic group the deletion was detected in 11% to 94% of cells. Microarray analysis revealed the precise deletion regions. In the monoallelic group, the deletion size ranged from 348.12 kb to 34.82 Mb, with a median deletion size of 7.93 Mb. In the biallelic group, the total deletion size ranged from 135.27 kb to 33.33 Mb, with a median deletion size of 2.52 Mb.
The median size of the smaller deletion region on one copy of chromosome 13 was 1.08 Mb, while the average size of the bigger deletion region on the second chromosome 13 was 4.04 Mb. In the monoallelic group, the deletion region covered the RB1 gene in 8/18 cases. In the biallelic group, 4/18 cases showed deletion of the RB1 gene on one copy of the biallelic deletion, and 2/18 showed deletion of the RB1 gene on both deleted 13q14 regions. All minimal deleted regions included the miR-15a and miR-16-1 genes. The genetic results will be correlated with clinical data. Conclusions: Application of the CGH microarray technique in CLL allows accurate delineation of the size of 13q14 deletion regions, which has prognostic value. All deleted regions included miR-15a and miR-16-1, which confirms the essential role of these genes in CLL pathogenesis. In our investigated groups of CLL patients with mono- and biallelic 13q14 deletions, patients with biallelic deletions presented smaller deletion sizes (2.52 Mb vs 7.93 Mb), which is connected with a better prognosis.

Keywords: CLL, deletion 13q14, CGH microarrays, SNP array

Procedia PDF Downloads 249
687 Topological Language for Classifying Linear Chord Diagrams via Intersection Graphs

Authors: Michela Quadrini

Abstract:

Chord diagrams occur in mathematics, from the study of RNA to knot theory. They are widely used in the theory of knots and links for studying finite type invariants, whereas in molecular biology one important motivation to study chord diagrams is the problem of RNA structure prediction. An RNA molecule is a linear polymer, referred to as the backbone, that consists of four types of nucleotides. Each nucleotide is represented by a point, whereas each chord of the diagram stands for one Watson-Crick base pair interaction between two nonconsecutive nucleotides. A chord diagram is an oriented circle with a set of n pairs of distinct points, considered up to orientation-preserving diffeomorphisms of the circle. A linear chord diagram (LCD) is a special kind of graph obtained by cutting the oriented circle of a chord diagram. It consists of a line segment, called its backbone, to which are attached a number of chords with distinct endpoints. There is a natural fattening on any linear chord diagram: the backbone lies on the real axis, while all the chords are in the upper half-plane. Each linear chord diagram has a natural genus, that of its associated surface. To each chord diagram and linear chord diagram, it is possible to associate an intersection graph: a graph whose vertices correspond to the chords of the diagram and whose edges represent chord intersections. Such an intersection graph carries a lot of information about the diagram. Our goal is to define an LCD equivalence class in terms of identity of intersection graphs, on which many chord diagram invariants depend. For studying these invariants, we introduce a new representation of linear chord diagrams based on a set of appropriate topological operators that permits modeling LCDs in terms of the relations among chords. This set is composed of crossing, nesting, and concatenation.
The crossing operator is able to generate the whole space of linear chord diagrams, and a multiple context-free grammar is defined that uniquely generates each LCD, starting from a linear chord diagram and adding a chord for each production of the grammar. In other words, it allows a unique algebraic term to be associated with each linear chord diagram, while the remaining operators allow the term to be rewritten through a set of appropriate rewriting rules. Such rules define an LCD equivalence class in terms of the identity of intersection graphs. Starting from a modelled RNA molecule and its linear chord diagram, some authors have proposed a topological classification and folding. Our LCD equivalence class could contribute to the RNA folding problem, leading to the definition of an algorithm that calculates the free energy of the molecule more accurately than the existing ones. Such an LCD equivalence class could also be useful to obtain a more accurate estimate of the link between the crossing number and the topological genus and to study the relations among other invariants.
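As a concrete sketch of the intersection-graph construction described above (with hypothetical endpoint coordinates, not taken from the paper), two chords (a, b) and (c, d) on the backbone, with a < b and c < d, cross exactly when their endpoints interleave, i.e. a < c < b < d or c < a < d < b:

```python
def intersection_graph(chords):
    """Build the edge set of the intersection graph of a linear chord diagram.

    Each chord is a pair (i, j) of distinct endpoint positions on the
    backbone. Vertices are chord indices; two vertices are adjacent
    exactly when the corresponding chords cross (endpoints interleave).
    """
    chords = [tuple(sorted(c)) for c in chords]
    edges = set()
    for u, (a, b) in enumerate(chords):
        for v, (c, d) in enumerate(chords):
            if u < v and (a < c < b < d or c < a < d < b):
                edges.add((u, v))
    return edges

# Chord 1 crosses both chord 0 and chord 2; chords 0 and 2 are disjoint.
crossing = intersection_graph([(0, 2), (1, 4), (3, 5)])   # {(0, 1), (1, 2)}
# Nested chords do not interleave, so they produce no edge.
nested = intersection_graph([(0, 3), (1, 2)])             # empty edge set
```

Two LCDs then fall into the same equivalence class of the kind proposed here when this construction yields identical intersection graphs.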

Keywords: chord diagrams, linear chord diagram, equivalence class, topological language

Procedia PDF Downloads 195