Search results for: reactive power optimization
473 Supply Chain Analysis with Product Returns: Pricing and Quality Decisions
Authors: Mingming Leng
Abstract:
Wal-Mart has allocated considerable human resources for its quality assurance program, in which the largest retailer serves its supply chains as a quality gatekeeper. Asda Stores Ltd., the second largest supermarket chain in Britain, is now investing £27m in significantly increasing the frequency of quality control checks in its supply chains and thus enhancing quality across its fresh food business. Moreover, Tesco, the largest British supermarket chain, has already constructed a quality assessment center to carry out its gatekeeping responsibility. Motivated by the above practices, we consider a supply chain in which a retailer plays the gatekeeping role in quality assurance by identifying defects among a manufacturer's products prior to selling them to consumers. The impact of a retailer's gatekeeping activity on pricing and quality assurance in a supply chain has not been investigated in the operations management area. We draw a number of managerial insights that are expected to help practitioners judiciously consider the quality gatekeeping effort at the retail level. As in practice, when the retailer identifies a defective product, she immediately returns it to the manufacturer, who then replaces the defect with a good quality product and pays a penalty to the retailer. If the retailer does not recognize a defect but sells it to a consumer, then the consumer will identify the defect and return it to the retailer, who then passes the returned 'unidentified' defect to the manufacturer. The manufacturer also incurs a penalty cost. Accordingly, we analyze a two-stage pricing and quality decision problem, in which the manufacturer and the retailer bargain over the manufacturer's average defective rate and wholesale price at the first stage, and the retailer decides on her optimal retail price and gatekeeping intensity at the second stage. We also compare the results when the retailer performs quality gatekeeping with those when the retailer does not. Our supply chain analysis yields some important managerial insights. For example, the retailer's quality gatekeeping can effectively reduce the channel-wide defective rate if her penalty charge for each identified defect is larger than or equal to the market penalty for each unidentified defect. When the retailer implements quality gatekeeping, the change in the negotiated wholesale price only depends on the manufacturer's 'individual' benefit, and the change in the retailer's optimal retail price is only related to the channel-wide benefit. The retailer is willing to take on the quality gatekeeping responsibility when the impact of quality relative to retail price on demand is high and/or the retailer has strong bargaining power. We conclude that the retailer's quality gatekeeping can help reduce the defective rate for consumers, which becomes more significant when the retailer's bargaining position in her supply chain is stronger. Retailers with stronger bargaining powers can benefit more from their quality gatekeeping in supply chains.
Keywords: bargaining, game theory, pricing, quality, supply chain
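Since the abstract does not spell out functional forms, the two-stage structure can be illustrated with a minimal numerical sketch: assume a demand that is linear in the retail price and in the defective rate reaching consumers, quadratic gatekeeping and quality costs, and a generalized Nash bargaining split at the first stage. All parameter values, the demand and cost specifications, and the penalty structure below are illustrative assumptions, not the authors' model; the sketch only shows how the second stage (retail price and gatekeeping intensity) is solved first and then anticipated in the first-stage bargaining over the wholesale price and defective rate.

```python
import numpy as np

# ---- model primitives (all parameter values are illustrative assumptions) ----
a, b, k = 10.0, 1.0, 4.0      # demand: d = a - b*p - k*(defect rate seen by consumers)
c0, cq  = 1.0, 2.0            # manufacturer unit cost c(x) = c0 + cq*(1-x)^2 (lower x is costlier)
rep     = 0.5                 # unit cost of replacing any returned defect
t_id    = 1.0                 # penalty per defect identified by the retailer (paid to retailer)
t_un    = 1.5                 # market penalty per defect that reaches a consumer
h       = 0.5                 # retailer hassle cost per unidentified defect
cg      = 1.0                 # retailer gatekeeping effort cost coefficient (cg*g^2 per unit)
beta    = 0.5                 # manufacturer bargaining power in the Nash product

p_grid = np.linspace(0.0, a / b, 201)          # candidate retail prices
g_grid = np.linspace(0.0, 1.0, 41)             # candidate gatekeeping intensities
P, G   = np.meshgrid(p_grid, g_grid, indexing="ij")

def stage2_best_response(w, x):
    """Retailer's optimal (p, g) for a given wholesale price w and defect rate x."""
    demand   = np.clip(a - b * P - k * x * (1.0 - G), 0.0, None)
    margin   = P - w + t_id * x * G - h * x * (1.0 - G) - cg * G**2
    profit_r = demand * margin
    i, j = np.unravel_index(np.argmax(profit_r), profit_r.shape)
    return p_grid[i], g_grid[j], profit_r[i, j]

def manufacturer_profit(w, x, p, g):
    demand = max(a - b * p - k * x * (1.0 - g), 0.0)
    unit   = w - (c0 + cq * (1.0 - x) ** 2) \
             - (rep + t_id) * x * g - (rep + t_un) * x * (1.0 - g)
    return demand * unit

# ---- stage 1: generalized Nash bargaining over (w, x), anticipating stage 2 ----
best = None
for w in np.linspace(1.0, 8.0, 141):
    for x in np.linspace(0.01, 0.30, 30):
        p, g, pr_r = stage2_best_response(w, x)
        pr_m = manufacturer_profit(w, x, p, g)
        if pr_r <= 0 or pr_m <= 0:
            continue                      # both parties must gain for a deal
        nash = pr_m ** beta * pr_r ** (1.0 - beta)
        if best is None or nash > best[0]:
            best = (nash, w, x, p, g, pr_m, pr_r)

_, w, x, p, g, pr_m, pr_r = best
print(f"wholesale w={w:.2f}, defect rate x={x:.3f}, retail p={p:.2f}, "
      f"gatekeeping g={g:.2f}, profits M={pr_m:.2f} R={pr_r:.2f}")
```

Sweeping the bargaining-power parameter beta in such a sketch is one way to probe the abstract's claim that stronger retailer bargaining power makes gatekeeping more attractive.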
Procedia PDF Downloads 277
472 Investigation of the Function of Chemotaxonomy of White Tea on the Regulatory Function of Genes in Pathway of Colon Cancer
Authors: Fereydoon Bondarian, Samira Shaygan
Abstract:
Today, many nutritionists recommend the consumption of plants, fruits, and vegetables to provide the antioxidants needed by the body, because the use of plant antioxidants usually causes fewer side effects and better treatment. Natural antioxidants increase the power of plasma antioxidants and reduce the incidence of some diseases, such as cancer. Bad lifestyles and environmental factors play an important role in increasing the incidence of cancer. In this study, extracts of white tea prepared from two types of tea available in Iran (clone 100 and a Chinese hybrid), whose polyphenols carry hydroxyl functional groups able to inhibit free radicals and confer anticancer properties, were obtained by three methods: aqueous, methanolic, and aqueous-methanolic extraction. The total polyphenolic content was calculated using the Folin-Ciocalteu method, and the percentage of inhibition and trapping of free radicals in each of the extracts was calculated using the DPPH method. With the help of high-performance liquid chromatography, the amount of each individual catechin in the tea samples was determined. Clone 100 white tea was found to be the best sample of tea in terms of all the examined attributes (total polyphenol content, antioxidant properties, and the amount of each individual catechin). The results showed that the aqueous and aqueous-methanolic extracts of clone 100 white tea have the highest total polyphenol content, with 27.59±0.08 and 36.67±0.54 (gallic acid equivalents per gram dry weight of leaves), respectively. Due to having the highest levels of the different groups of catechin compounds, these extracts also have the highest free radical inhibition and trapping capacity, with 66.61±0.27 and 71.74±0.27% (mg/l of extract, relative to ascorbic acid). Using the MTT test, the inhibitory effect of clone 100 white tea extract on the growth of HCT-116 colon cancer cells was investigated, and the most effective concentration and time treatments were 500, 150, and 1000 micrograms at 8, 16, and 24 hours, respectively. To investigate gene expression changes, selected genes, including tumorigenic genes, proto-oncogenes, tumor suppressors, and genes involved in apoptosis, were selected and analyzed using the real-time PCR method in the presence of the concentrations obtained for white tea. White tea extract at a concentration of 1000 μg/ml showed the highest growth inhibition in cancer cells at the three exposure times of 16, 8, and 24 hours, with 53.27, 55.8, and 86.06%, respectively. The concentration of 1000 μg/ml aqueous extract of white tea under 24-hour treatment increased the expression of tumor suppressor genes compared to the normal sample.
Keywords: catechin, gene expression, suppressor genes, colon cell line
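The inhibition percentages reported above (DPPH radical scavenging and MTT growth inhibition) are conventionally computed relative to an untreated control. The snippet below is a minimal sketch of that arithmetic with hypothetical absorbance readings; the numbers are not taken from the study.

```python
def percent_inhibition(a_control: float, a_sample: float) -> float:
    """Radical-scavenging (DPPH) or growth inhibition as a percentage of control."""
    return (a_control - a_sample) / a_control * 100.0

# hypothetical absorbance readings (not the study's data)
a_dpph_control, a_dpph_extract = 0.92, 0.26
print(f"DPPH inhibition: {percent_inhibition(a_dpph_control, a_dpph_extract):.1f} %")

# MTT assay: viability relative to untreated cells; growth inhibition = 100 - viability
od_untreated, od_treated = 1.10, 0.15
viability = od_treated / od_untreated * 100.0
print(f"viability {viability:.1f} %, growth inhibition {100.0 - viability:.1f} %")
```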
Procedia PDF Downloads 58
471 Stability and Rheology of Sodium Diclofenac-Loaded and Unloaded Palm Kernel Oil Esters Nanoemulsion Systems
Authors: Malahat Rezaee, Mahiran Basri, Raja Noor Zaliha Raja Abdul Rahman, Abu Bakar Salleh
Abstract:
Sodium diclofenac is one of the most commonly used drugs among the nonsteroidal anti-inflammatory drugs (NSAIDs). It is especially effective in controlling severe conditions of inflammation and pain, musculoskeletal disorders, arthritis, and dysmenorrhea. Formulation as nanoemulsions is one of the nanoscience approaches that have been progressively considered in pharmaceutical science for the transdermal delivery of drugs. Nanoemulsions are a type of emulsion with particle sizes ranging from 20 nm to 200 nm. An emulsion is formed by the dispersion of one liquid, usually the oil phase, in another immiscible liquid, the water phase, stabilized using a surfactant. Palm kernel oil esters (PKOEs), in comparison to other oils, contain higher amounts of shorter-chain esters, which are suitable for application in micro- and nanoemulsion systems as carriers for actives, with excellent wetting behavior and without an oily feeling. This research aimed to study the effect of the O/S ratio on the stability and rheological behavior of sodium diclofenac-loaded and unloaded palm kernel oil esters nanoemulsion systems. The effect of different O/S ratios of 0.25, 0.50, 0.75, 1.00 and 1.25 on the stability of the drug-loaded and unloaded nanoemulsion formulations was evaluated by centrifugation, freeze-thaw cycle and storage stability tests. Lecithin and Cremophor EL were used as surfactants. The stability of the prepared nanoemulsion formulations was assessed based on the change in zeta potential and droplet size as a function of time. Instability mechanisms, including coalescence and Ostwald ripening, for the nanoemulsion system are discussed. In the comparison between drug-loaded and unloaded nanoemulsion formulations, the drug-loaded formulations showed smaller particle sizes and higher stability. In addition, the O/S ratio of 0.5 was found to be the best ratio of oil and surfactant for production of a nanoemulsion with the highest stability. The effect of the O/S ratio on the rheological properties of drug-loaded and unloaded nanoemulsion systems was studied by plotting the flow curves of shear stress (τ) and viscosity (η) as a function of shear rate (γ). The data were fitted to the Power Law model. The results showed that all nanoemulsion formulations exhibited non-Newtonian flow behaviour, displaying shear-thinning behaviour. Viscosity and yield stress were also evaluated. The nanoemulsion formulation with the O/S ratio of 0.5 showed higher viscosity and K values. In addition, the sodium diclofenac-loaded formulations had higher viscosity and higher yield stress than the drug-unloaded formulations.
Keywords: nanoemulsions, palm kernel oil esters, sodium diclofenac, rheology, stability
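The Power Law (Ostwald-de Waele) fit mentioned above, τ = K·γ̇ⁿ, is usually obtained by linear regression of log shear stress against log shear rate; a flow behaviour index n below 1 indicates the shear-thinning behaviour reported for the formulations. A minimal sketch with hypothetical flow-curve data (not the study's measurements):

```python
import numpy as np

# hypothetical flow-curve data for one formulation (shear rate in 1/s, shear stress in Pa)
shear_rate   = np.array([1, 2, 5, 10, 20, 50, 100, 200], dtype=float)
shear_stress = np.array([2.1, 3.3, 5.9, 9.0, 13.8, 24.5, 37.0, 56.0])

# Power Law (Ostwald-de Waele) model: tau = K * gamma_dot**n
# Taking logs gives a straight line: log(tau) = log(K) + n*log(gamma_dot)
n, logK = np.polyfit(np.log(shear_rate), np.log(shear_stress), 1)
K = np.exp(logK)

apparent_viscosity = K * shear_rate ** (n - 1.0)   # eta = tau / gamma_dot

print(f"flow behaviour index n = {n:.3f}  (n < 1 indicates shear thinning)")
print(f"consistency index   K = {K:.3f} Pa.s^n")
```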
Procedia PDF Downloads 423
470 Machine Learning in Patent Law: How Genetic Breeding Algorithms Challenge Modern Patent Law Regimes
Authors: Stefan Papastefanou
Abstract:
Artificial intelligence (AI) is an interdisciplinary field of computer science with the aim of creating intelligent machine behavior. Early approaches to AI were configured to operate in very constrained environments where the behavior of the AI system was previously determined by formal rules. Knowledge was presented as a set of rules that allowed the AI system to determine the results for specific problems, as a structure of if-else rules that could be traversed to find a solution to a particular problem or question. However, such rule-based systems typically have not been able to generalize beyond the knowledge provided. All over the world, and especially in IT-heavy economies such as the United States, the European Union, Singapore, and China, machine learning has developed into an immense asset, and its applications are becoming more and more significant. It has to be examined how such products of machine learning models can and should be protected by IP law and, for the purposes of this paper, by patent law specifically, since it is the IP law regime closest to technical inventions and computing methods in technical applications. Genetic breeding models are currently less popular than recursive neural network methods and deep learning, but this approach can be described more easily by referring to the evolution of natural organisms, and, with increasing computational power, the genetic breeding method, as a subset of evolutionary algorithm models, is expected to regain popularity. The research method focuses on the patentability (according to the world’s most significant patent law regimes such as China, Singapore, the European Union, and the United States) of AI inventions and machine learning. Questions of the technical nature of the problem to be solved, the inventive step as such, and the question of the state of the art and the associated obviousness of the solution arise in current patenting processes. Most importantly, and the key focus of this paper, is the problem of patenting inventions that are themselves developed through machine learning. The inventor of a patent application must be a natural person or a group of persons according to the current legal situation in most patent law regimes. In order to be considered an 'inventor', a person must actually have developed part of the inventive concept. The mere application of machine learning or an AI algorithm to a particular problem should not be construed as the algorithm contributing to a part of the inventive concept. However, when machine learning or the AI algorithm has contributed to a part of the inventive concept, there is currently a lack of clarity regarding the ownership of artificially created inventions. Since not only all European patent law regimes but also the Chinese and Singaporean patent law approaches include identical terms, this paper ultimately offers a comparative analysis of the most relevant patent law regimes.
Keywords: algorithms, inventor, genetic breeding models, machine learning, patentability
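To make the technical object of the inventorship question concrete, the sketch below is a toy genetic (evolutionary) algorithm: a population of candidate solutions is repeatedly selected by fitness, recombined, and mutated, so the final result is 'bred' over generations rather than directly designed by a person. The fitness function and parameters are illustrative only and are not drawn from the paper.

```python
import random

# toy fitness: maximize the number of 1-bits in a fixed-length genome
GENOME_LEN, POP_SIZE, GENERATIONS = 30, 40, 60
MUTATION_RATE = 1.0 / GENOME_LEN

def fitness(genome):
    return sum(genome)

def tournament(pop, k=3):
    # tournament selection: best of k randomly chosen individuals
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    # single-point crossover
    cut = random.randint(1, GENOME_LEN - 1)
    return a[:cut] + b[cut:]

def mutate(genome):
    # flip each bit independently with a small probability
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for gen in range(GENERATIONS):
    population = [mutate(crossover(tournament(population), tournament(population)))
                  for _ in range(POP_SIZE)]

best = max(population, key=fitness)
print(f"best fitness after {GENERATIONS} generations: {fitness(best)}/{GENOME_LEN}")
```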
Procedia PDF Downloads 108
469 Genetic Diversity Analysis in Ecological Populations of Persian Walnut
Authors: Masoud Sheidai, Fahimeh Koohdar, Hashem Sharifi
Abstract:
Juglans regia L., commonly known as Persian walnut, of the genus Juglans L. (Juglandaceae), is one of the most important cultivated plant species due to its high-quality wood and edible nuts. Genetic diversity analysis is essential for the conservation and management of tree species. Persian walnut is native from South-Eastern Europe to North-Western China through Tibet, Nepal, Northern India, Pakistan, and Iran. A species like Persian walnut, which has a wide range of geographical distribution, should harbor extensive genetic variability to adapt to the environmental fluctuations it faces. We aimed to study the population genetic structure of seven Persian walnut populations, including three wild and four cultivated populations, by using ISSR (inter simple sequence repeat) and SRAP (sequence-related amplified polymorphism) molecular markers. We also aimed to compare the genetic variability revealed by neutral multilocus ISSR markers and rDNA ITS sequences. The studied populations differed in morphological features, as the samples in each population were clustered together and were separate from the other populations. The three wild populations studied were placed close to each other. The Mantel test, performed with 5000 permutations between geographical distance and morphological distance in the Persian walnut populations, produced a significant correlation (r = 0.48, P = 0.002). Therefore, as the populations become farther apart, they become more divergent in morphological features. ISSR analysis produced 47 bands/loci, while we obtained 15 SRAP bands. Gst and other differentiation statistics determined for these loci revealed that most of the ISSR and SRAP loci have very good discrimination power and can differentiate the studied populations. AMOVA performed for these loci produced a significant difference (P < 0.05), supporting the above result. AMOVA produced a significant genetic difference based on ISSR data among the studied populations (PhiPT = 0.52, P = 0.001). AMOVA revealed that 53% of the total variability is due to among-population genetic differences, while 47% is due to within-population genetic variability. The results showed that both multilocus molecular markers and ITS sequences can differentiate Persian walnut populations. The studied populations differed genetically and showed isolation by distance (IBD). ITS sequence-based MP and Bayesian phylogenetic trees revealed that Iranian walnut cultivars form a distinct clade separated from the cultivars studied from elsewhere. Almost all clades obtained have high bootstrap values. The results indicated that a combination of multilocus and sequence-based molecular markers can be used in the genetic differentiation of Persian walnut.
Keywords: genetic diversity, population, molecular markers, genetic difference
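The Mantel test cited above (r = 0.48, P = 0.002 after 5000 permutations) correlates two distance matrices, here geographic and morphological distances, and assesses significance by permuting the rows and columns of one matrix together. A minimal sketch with small hypothetical matrices (not the study's data):

```python
import numpy as np

rng = np.random.default_rng(0)

def mantel(dist_a, dist_b, permutations=5000):
    """Permutation-based Mantel test between two square distance matrices."""
    iu = np.triu_indices_from(dist_a, k=1)            # upper triangle, no diagonal
    a, b = dist_a[iu], dist_b[iu]
    r_obs = np.corrcoef(a, b)[0, 1]
    n = dist_a.shape[0]
    count = 0
    for _ in range(permutations):
        perm = rng.permutation(n)
        b_perm = dist_b[np.ix_(perm, perm)][iu]       # permute rows and columns together
        if np.corrcoef(a, b_perm)[0, 1] >= r_obs:
            count += 1
    p_value = (count + 1) / (permutations + 1)        # one-sided p for positive correlation
    return r_obs, p_value

# hypothetical 7x7 geographic and morphological distance matrices (symmetric, zero diagonal)
n = 7
geo = rng.random((n, n)); geo = (geo + geo.T) / 2; np.fill_diagonal(geo, 0)
morph = geo + 0.2 * rng.random((n, n)); morph = (morph + morph.T) / 2; np.fill_diagonal(morph, 0)

r, p = mantel(geo, morph)
print(f"Mantel r = {r:.2f}, one-sided p = {p:.4f}")
```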
Procedia PDF Downloads 162
468 Environmental Aspects of Alternative Fuel Use for Transport with Special Focus on Compressed Natural Gas (CNG)
Authors: Szymon Kuczynski, Krystian Liszka, Mariusz Laciak, Andrii Oliinyk, Adam Szurlej
Abstract:
The history of gaseous fuel use in the motive power of vehicles dates back to the second half of the nineteenth century, and thus to the beginnings of the automotive industry. The engines were powered by coal gas and became the prototype for the internal combustion engines built since. It can thus be considered that this construction gave rise to the automotive industry. As socio-economic development advances, so does the number of motor vehicles. Although, due to technological progress in recent decades, the emissions generated by the internal combustion engines of cars have been reduced, a sharp increase in the number of cars and the rapidly growing traffic are an important source of air pollution and a major cause of acoustic threat, in particular in large urban agglomerations. One of the solutions, in terms of reducing exhaust emissions and improving air quality, is a more extensive use of alternative fuels: CNG, LNG, electricity and hydrogen. In the case of electricity use for transport, it should be noted that the environmental outcome depends on the structure of electricity generation. The paper shows selected regulations affecting the use of alternative fuels for transport (including Directive 2014/94/EU) and the dynamics of their use between 2000 and 2015 in Poland and selected EU countries. The paper also focuses on the impact of alternative fuels on the environment by comparing the volume of individual emissions (relative to the emissions from conventional fuels: petrol and diesel oil). Bearing in mind that the extent of use of various alternative fuels is determined in the first place by economic conditions, the article describes the price relationships between alternative and conventional fuels in Poland and selected EU countries. It is pointed out that although Poland has a wealth of experience in using methane-based alternative fuels for transport, one of the main barriers to their development in Poland is the extensive use of LPG. In addition, a poorly developed network of CNG stations in Poland, which does not allow easy transport, especially in the northern part of the country, is a serious obstacle to the further development of CNG use as a fuel for transport. An interesting solution to this problem seems to be the use of home CNG filling stations: the Home Refuelling Appliance (HRA, refuelling time 8-10 hours) and the Home Refuelling Station (HRS, refuelling time 8-10 minutes). The team is working on HRA and HRS technologies. The article also highlights the impact of alternative fuel use on energy security by reducing reliance on imports of crude oil and petroleum products.
Keywords: alternative fuels, CNG (Compressed Natural Gas), CNG stations, LNG (Liquefied Natural Gas), NGVs (Natural Gas Vehicles), pollutant emissions
Procedia PDF Downloads 227
467 Counter-Terrorism and De-Radicalization as Soft Strategies in Combating Terrorism in Indonesia: A Critical Review
Authors: Tjipta Lesmana
Abstract:
Terrorist attacks quickly spread across Indonesia following the downfall of the Soeharto regime in May 1998. The reform era was officially proclaimed, and Indonesia turned from an 'authoritarian state' into a 'heaven state'. For the first time since 1966, the country experienced full-scale freedom of expression, including freedom of the press, and a strong acknowledgement of human rights practices. Some religious extremists who had previously fled to neighboring countries to escape the security apparatus secretly returned home. They quickly consolidated power to continue their long-held aspiration and dream of establishing a 'Shariah Indonesia', an Indonesia based on Khilafah ideology. The first Bali bombings, which shocked the world community, occurred on 12 October 2002 in the famous tourist district of Kuta on the Indonesian island of Bali, killing 202 people (including 88 Australians, 38 Indonesians, and people from more than 20 other nationalities). In the capital, Jakarta, successive bombings struck the Marriott hotel, the Australian Embassy, the residence of the Philippine Ambassador, and the stock exchange office. A 'drunken' Indonesia was far from ready to combat sudden and massive nationwide terrorist attacks. Police Detachment 88 (Densus 88), the Indonesian counter-terrorism squad, was quickly formed following the 2002 Bali bombings. A provisional anti-terrorism act was also immediately enacted due to the urgent need to fight terrorism. Some of the Bali bombing perpetrators were executed after being sentenced by the court. But a series of terrorist suicide attacks and a second Bali bombing again shocked the world community. The terrorism network is undoubtedly spreading nationwide, and suspicion is high that it has close connections with Al Qaeda groups. Even 'Afghanistan alumni' and 'Syria alumni' returned to Indonesia to back up the local mujahidins in their fight to topple Indonesia's constitutional government and set up an Islamic state (Khilafah). Supported by massive aid from friendly nations, especially Australia and the United States, Indonesia launched large-scale operations to crush terrorism carried out by various radical groups such as JAD, JAS, and JAADI. Huge amounts of energy, money, and lives were dedicated; terrorism, however, remains persistently entrenched. High-ranking officials from the Detachment 88 squad and military intelligence believe that terrorism is still one of the deadliest enemies of Indonesia.
Keywords: counter-radicalization, de-radicalization, Khalifah, Union State, Al Qaedah, ISIS
Procedia PDF Downloads 178
466 The Language of Science in Higher Education: Related Topics and Discussions
Authors: Gurjeet Singh, Harinder Singh
Abstract:
In this paper, we present 'The Language of Science in Higher Education: Related Questions and Discussions'. Linguists have written about and researched in depth the role of language in science. On this basis, it is clear that language is not just a medium or vehicle for communicating knowledge and ideas, nor merely a set of signs for encoding knowledge and converting ideas into code. In the process of reading and writing, everyone thinks deeply and struggles to understand concepts and make sense of them; language plays an important role in grasping concepts. In the context of such linguistic diversity, there is no straightforward and simple answer to the question of which language should be the language of advanced science and technology. Many important topics are related to this issue: involvement in practical or deep theoretical issues; languages for the study of science and other subjects; whether the language issues of science should be considered separately from the development of science, capitalism, colonial history, and the worldview of the common man; the democratization of science and technology education in India, which is possible only by providing maximum reading/resource material in regional languages; whether scientific research increases the chances of understanding the subject; and multilingual versus monolingual education. As far as deepening the understanding of the subject is concerned, we can shed light on it based on two or three experiences. An attempt was made almost three decades ago to publish the famous sociological journal Economic and Political Weekly in Hindi. There were many obstacles in this work: original articles written in Hindi were not found, so the papers and articles of the English journal were translated into Hindi, and a journal called Sancha was brought out. Equally important is the democratization of knowledge and the deepening of understanding of the subject. However, the question is that if higher education in science is in Hindi or other languages, then it becomes a problem to get a job. In fact, since independence, English has been dominant in almost every field except literature. There are historical reasons for this, which cannot be reversed. As mentioned above, due to colonial rule, even before independence, English was established as a language of communication, the language of power/status, the language of higher education, the language of administration, and the language of scholarly discourse. After independence, attempts to make Hindi or Hindustani the national language of India were unsuccessful. Given this history and current reality, higher education should be multilingual or at least bilingual. The scope of translation should also be increased by those who choose the material for translation. Writing on science in regional languages and making knowledge from various international languages available in Indian languages are equally important, as are opportunities for all to learn English.
Keywords: language, linguistics, literature, culture, ethnography, punjabi, gurmukhi, higher education
Procedia PDF Downloads 91
465 Sacred Echoes: The Shamanic Journey of Hushahu and the Empowerment of Indigenous Women
Authors: Nadia K. Thalji
Abstract:
The shamanic odyssey of Hushahu, a courageous indigenous woman from the Amazon, reverberates with profound significance, resonating far beyond the confines of her tribal boundaries. This abstract explores Hushahu's transformative journey, which serves as a beacon of empowerment for indigenous women across the Amazon region. Hushahu's narrative unfolds against the backdrop of entrenched gender norms and colonial legacies that have historically marginalized women from spiritual leadership and ritual practices. Despite societal expectations and entrenched traditions, Hushahu boldly embraces her calling as a shaman, defying cultural constraints and challenging prevailing gender norms. Her journey represents a symbolic uprising against centuries of patriarchal dominance, offering a glimpse into the resilience and strength of indigenous women. Drawing upon Jungian psychology, Hushahu's quest can be understood as a profound exploration of the symbolic dimensions of the psyche. Through initiation rituals and visionary experiences, Hushahu embarks on a transformative journey of self-discovery, encountering archetypal symbols and tapping into the collective unconscious. Symbolism permeates the path, guiding Hushahu through the depths of the rainforest and illuminating the hidden realms of consciousness. Central to Hushahu's narrative is the theme of empowerment, a theme that transcends individual experience to catalyze broader social change. As Hushahu finds a voice amidst the echoes of ancestral wisdom, the journey inspires a ripple effect of empowerment throughout indigenous communities. Other women within Hushahu's tribe and neighboring societies are emboldened to challenge traditional gender roles, stepping into leadership positions and reclaiming their rightful place in spiritual practices. The resonance of Hushahu's journey extends beyond the Amazon, reverberating across cultural boundaries and igniting conversations about gender equality and indigenous rights. Through her courageous defiance of cultural norms, Hushahu emerges as a symbol of resilience and empowerment, offering hope and inspiration to marginalized women around the world. In conclusion, Hushahu's shamanic journey embodies the sacred echoes of empowerment, echoing across generations and landscapes. Her story serves as a testament to the enduring power of the human spirit and the transformative potential of reclaiming one's voice in the face of adversity. As indigenous women continue to rise, Hushahu's legacy stands as a beacon of hope, illuminating the path towards a more equitable and inclusive world.
Keywords: shamanic leadership, indigenous empowerment, gender norms, cultural transformation
Procedia PDF Downloads 48
464 Nuclear Materials and Nuclear Security in India: A Brief Overview
Authors: Debalina Ghoshal
Abstract:
Nuclear security is the 'prevention and detection of, and response to, unauthorised removal, sabotage, unauthorised access, illegal transfer or other malicious acts involving nuclear or radiological material or their associated facilities.' Ever since the end of the Cold War, nuclear materials security has remained a concern for global security. However, with the increase in terrorist attacks, not just in India, the security of nuclear materials remains a priority. Therefore, India has made continued efforts to tighten its security of nuclear materials to prevent nuclear theft and radiological terrorism. Nuclear security is different from nuclear safety. Physical security is also a serious concern, and India has been careful about the physical security of its nuclear materials. This is all the more important since India is expanding its nuclear power capability to generate electricity for economic development. As India targets 60,000 MW of electricity production by 2030, it has a range of reactors to help it achieve its goal. These include indigenous Pressurised Heavy Water Reactors, now standardized at 700 MW per reactor, Light Water Reactors, and the indigenous Fast Breeder Reactors that can generate more fuel for the future and enable the country to utilise its abundant thorium resource. Nuclear materials security can be enhanced in two important ways. One is through proliferation-resistant technologies and diplomatic efforts toward non-proliferation initiatives. The other is by developing technical means to prevent any leakage of nuclear materials into the hands of asymmetric organisations. New Delhi has already implemented IAEA Safeguards on its civilian nuclear installations. Moreover, the IAEA Additional Protocol has also been ratified by India in order to enhance the transparency of its nuclear material and strengthen nuclear security. India is a party to the IAEA conventions on nuclear safety and security, in particular the 1980 Convention on the Physical Protection of Nuclear Material and its 2005 amendment, and the 2006 Code of Conduct on the Safety and Security of Radioactive Sources, which enable the country to provide for the highest international standards of nuclear and radiological safety and security. India's nuclear security approach is driven by five key components: Governance, Nuclear Security Practice and Culture, Institutions, Technology and International Cooperation. However, there is still scope for further improvements to strengthen nuclear materials and nuclear security. According to the NTI Report, 'India's improvement reflects its first contribution to the IAEA Nuclear Security Fund', among other steps; in the future, 'India's nuclear materials security conditions could be further improved by strengthening its laws and regulations for security and control of materials, particularly for control and accounting of materials, mitigating the insider threat, and for the physical security of materials during transport. India's nuclear materials security conditions also remain adversely affected due to its continued increase in its quantities of nuclear material, and high levels of corruption among public officials.' This paper briefly studies the progress made by India in nuclear and nuclear materials security and the steps ahead for India to further strengthen it.
Keywords: India, nuclear security, nuclear materials, non proliferation
Procedia PDF Downloads 352
463 A Comparison of Tsunami Impact to Sydney Harbour, Australia at Different Tidal Stages
Authors: Olivia A. Wilson, Hannah E. Power, Murray Kendall
Abstract:
Sydney Harbour is an iconic location with a dense population and low-lying development. On the east coast of Australia, facing the Pacific Ocean, it is exposed to several tsunamigenic trenches. This paper presents a component of the most detailed assessment to date of the potential for earthquake-generated tsunami impact on Sydney Harbour. Models in this study use dynamic tides to account for tide-tsunami interaction. Sydney Harbour’s tidal range is 1.5 m, and the spring tides from January 2015 that are used in the modelling for this study are close to the full tidal range. The tsunami wave trains modelled include hypothetical tsunami generated by earthquakes of moment magnitude (Mw) 7.5, 8.0, 8.5, and 9.0 from the Puysegur and New Hebrides trenches, as well as representations of the historical 1960 Chilean and 2011 Tohoku events. All wave trains are modelled for the peak wave to coincide with both a low tide and a high tide. A single wave train, representing a Mw 9.0 earthquake at the Puysegur trench, is modelled for peak waves to coincide with every hour across a 12-hour tidal phase. Using the hydrodynamic model ANUGA, results are compared according to the impact parameters of inundation area, depth variation and current speeds. Results show that both maximum inundation area and depth variation are tide dependent. Maximum inundation area increases when coincident with a higher tide; however, hazardous inundation is only observed for the larger waves modelled: NH90high and P90high. The maximum and minimum depths are deeper on higher tides and shallower on lower tides. The difference between maximum and minimum depths varies across different tidal phases, although the differences are slight. Maximum current speeds are shown to be a significant hazard for Sydney Harbour; however, they do not show consistent patterns according to tide-tsunami phasing. The maximum current speed hazard is shown to be greater in specific locations such as Spit Bridge, a narrow channel with extensive marine infrastructure. The results presented for Sydney Harbour are novel, and the conclusions are consistent with previous modelling efforts in the greater area. It is shown that tide must be a consideration for both tsunami modelling and emergency management planning. Modelling with peak tsunami waves coinciding with a high tide would be a conservative approach; however, it must be considered that maximum current speeds may be higher on other tides.
Keywords: emergency management, Sydney, tide-tsunami interaction, tsunami impact
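The tide-dependent comparisons above reduce, after the hydrodynamic runs, to summarising gridded maxima of depth and current speed per scenario. The sketch below is a generic post-processing illustration on synthetic grids; it is not the ANUGA workflow used in the study, and the depth and speed thresholds, grid resolution, and synthetic fields are all assumptions.

```python
import numpy as np

def inundation_metrics(max_depth, max_speed, cell_area_m2,
                       depth_threshold=0.1, speed_threshold=2.0):
    """Summarise one scenario from gridded model output.

    max_depth, max_speed : 2D arrays of maximum onshore water depth (m) and
                           current speed (m/s) over the simulation.
    """
    wet = max_depth > depth_threshold
    return {
        "inundation_area_km2": wet.sum() * cell_area_m2 / 1e6,
        "max_depth_m": float(max_depth.max()),
        "hazardous_current_area_km2": (max_speed > speed_threshold).sum() * cell_area_m2 / 1e6,
    }

# hypothetical outputs for the same source modelled on a low and a high tide
rng = np.random.default_rng(1)
low_tide  = {"max_depth": rng.gamma(1.0, 0.2, (500, 500)),
             "max_speed": rng.gamma(1.0, 0.8, (500, 500))}
high_tide = {"max_depth": rng.gamma(1.0, 0.3, (500, 500)),
             "max_speed": rng.gamma(1.0, 0.9, (500, 500))}

for name, run in [("low tide", low_tide), ("high tide", high_tide)]:
    print(name, inundation_metrics(run["max_depth"], run["max_speed"], cell_area_m2=100.0))
```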
Procedia PDF Downloads 242
462 Recycling of Sintered NdFeB Magnet Waste Via Oxidative Roasting and Selective Leaching
Authors: W. Kritsarikan, T. Patcharawit, T. Yingnakorn, S. Khumkoa
Abstract:
Neodymium-iron-boron (NdFeB) magnets, classified as high-power magnets, are widely used in various applications such as electrical and medical devices and account for 13.5 % of the permanent magnet market. Their typical composition of 29 - 32 % Nd, 64.2 – 68.5 % Fe and 1 – 1.2 % B contains a significant amount of rare earth metals, which will be subject to shortages in the future. Domestic NdFeB magnet waste recycling should therefore be developed in order to reduce social and environmental impacts and move toward a circular economy. Most research works focus on recycling the magnet wastes, both from the manufacturing process and at end of life. Each type of waste has different characteristics and compositions. As a result, these directly affect recycling efficiency as well as the types and purity of the recyclable products. This research, therefore, focused on the recycling of manufacturing NdFeB magnet waste obtained from the sintering stage of magnet production; the waste contained 23.6% Nd, 60.3% Fe and 0.261% B. The aim was to recover high-purity neodymium oxide (Nd₂O₃) using a hybrid metallurgical process via oxidative roasting and selective leaching techniques. The sintered NdFeB waste was first ground to under 70 mesh prior to oxidative roasting at 550 - 800 °C to enable selective leaching of neodymium in the subsequent leaching step using H₂SO₄ at 2.5 M over 24 h. The leachate was then subjected to drying and roasting at 700 – 800 °C prior to precipitation by oxalic acid and calcination to obtain neodymium oxide as the recycling product. According to XRD analyses, it was found that increasing the oxidative roasting temperature led to an increasing amount of hematite (Fe₂O₃) as the main phase, with a smaller amount of magnetite (Fe₃O₄) found. Peaks of neodymium oxide (Nd₂O₃) were also observed in a lesser amount. Furthermore, neodymium iron oxide (NdFeO₃) was present, and its XRD peaks were pronounced at higher oxidative roasting temperatures. After acid leaching and drying, iron sulfate and neodymium sulfate were mainly obtained. After the roasting step prior to water leaching, iron sulfate was converted to hematite as the main compound, while neodymium sulfate remained in the mixture. However, a small amount of magnetite was still detected by XRD. The higher roasting temperature of 800 °C resulted in a greater Fe₂O₃ to Nd₂(SO₄)₃ ratio, indicating a more effective roasting temperature. Iron oxides were subsequently water leached and filtered out, while the solution contained mainly neodymium sulfate. Therefore, low-temperature oxidative roasting not exceeding 600 °C, followed by acid leaching and roasting at 800 °C, gave the optimum condition for the further steps of precipitation and calcination to finally achieve neodymium oxide.
Keywords: NdFeB magnet waste, oxidative roasting, recycling, selective leaching
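A quick stoichiometric check makes the recovery target concrete: with the stated 23.6 % Nd in the waste, the theoretical upper bound on recoverable Nd₂O₃ follows from the Nd to Nd₂O₃ mass ratio. The sketch below assumes complete conversion and no process losses, so actual yields after leaching, precipitation, and calcination will be lower.

```python
# molar masses (g/mol)
M_ND, M_O = 144.242, 15.999
M_ND2O3 = 2 * M_ND + 3 * M_O          # ~336.5 g/mol

def max_nd2o3_yield(waste_kg: float, nd_mass_fraction: float) -> float:
    """Theoretical maximum Nd2O3 (kg) recoverable, assuming all Nd is converted."""
    nd_kg = waste_kg * nd_mass_fraction
    return nd_kg * M_ND2O3 / (2 * M_ND)   # mass conversion factor ~1.166

# waste composition from the abstract: 23.6 % Nd
print(f"{max_nd2o3_yield(1.0, 0.236):.3f} kg Nd2O3 per kg of sintered waste (upper bound)")
```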
Procedia PDF Downloads 182
461 A Proposal for an Excessivist Social Welfare Ordering
Authors: V. De Sandi
Abstract:
In this paper, we characterize a class of rank-weighted social welfare orderings that we call 'Excessivist'. The Excessivist Social Welfare Ordering (eSWO) judges incomes above a fixed threshold θ as detrimental to society. To accomplish this, the identification of a richness or affluence line is necessary; we employ a fixed, exogenous line of excess. We define an eSWF in the form of a weighted sum of individual incomes. This requires introducing n+1 vectors of weights, one for each possible number of individuals below the threshold. To do this, the paper introduces a slight modification of the rank-weighted class of social welfare functions. Indeed, in our excessivist social welfare ordering, we allow the weights to be both positive (for individuals below the line) and negative (for individuals above). Then, we introduce ethical concerns through an axiomatic approach. The following axioms are required: continuity above and below the threshold (Ca, Cb), anonymity (A), absolute aversion to excessive richness (AER), Pigou-Dalton positive-weights-preserving transfer (PDwpT), sign-rank-preserving full comparability (SwpFC) and strong Pareto below the threshold (SPb). Ca and Cb require that small changes in two income distributions above and below θ do not lead to changes in their ordering. AER suggests that if two distributions are identical in every respect but for one individual above the threshold, who is richer in the first, then the second should be preferred by society. This means that we do not care about the waste of resources above the threshold; the priority is the reduction of excessive income. According to PDwpT, a transfer from a better-off individual to a worse-off individual, regardless of their positions relative to the threshold and without reversing their ranks, leads to an improved distribution if the number of individuals below the threshold is the same after the transfer or has increased. SPb holds only for individuals below the threshold. The weakening of strong Pareto and our ethics need to be justified; we support them through the notion of comparative egalitarianism and income as a source of power. SwpFC is necessary to ensure that, following a positive affine transformation, an individual does not become excessively rich in only one distribution, thereby reversing the ordering of the distributions. Given the axioms above, we can characterize the class of eSWOs, obtaining the following result through a proof by contradiction and exhaustion: Theorem 1. A social welfare ordering satisfies the axioms of continuity above and below the threshold, anonymity, sign-rank-preserving full comparability, aversion to excessive richness, Pigou-Dalton positive-weights-preserving transfer, and strong Pareto below the threshold, if and only if it is an Excessivist social welfare ordering. A discussion of the implementation of different threshold lines, reviewing the primary contributions in this field, follows. What the commonly implemented social welfare functions have been overlooking is the concern for extreme richness at the top. The characterization of the Excessivist Social Welfare Ordering, given the axioms above, aims to fill this gap.
Keywords: comparative egalitarianism, excess income, inequality aversion, social welfare ordering
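One plausible way to write down the weighted-sum form described above, with incomes ranked increasingly and a separate weight vector for each possible number of individuals at or below the threshold θ, is sketched in the LaTeX below; the precise restrictions on the weights follow from the axioms in the paper and are only indicated here as assumptions.

```latex
% Incomes ranked increasingly, x_(1) <= ... <= x_(n); k(x) counts individuals
% at or below the excess threshold theta; weights are positive up to rank k(x)
% and enter negatively above it, with one weight vector for each possible k.
\[
W_{e}(x) \;=\; \sum_{i=1}^{k(x)} w_i^{k(x)}\, x_{(i)}
\;-\; \sum_{i=k(x)+1}^{n} v_i^{k(x)}\, x_{(i)},
\qquad w_i^{k(x)} > 0,\; v_i^{k(x)} > 0,
\]
\[
k(x) \;=\; \#\{\, i : x_{(i)} \le \theta \,\}.
\]
```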
Procedia PDF Downloads 63
460 Effect of Packaging Material and Water-Based Solutions on Performance of Radio Frequency Identification for Food Packaging Applications
Authors: Amelia Frickey, Timothy (TJ) Sheridan, Angelica Rossi, Bahar Aliakbarian
Abstract:
The growth of large food supply chains has demanded improved end-to-end traceability of food products, which has led to companies being increasingly interested in using smart technologies such as Radio Frequency Identification (RFID)-enabled packaging to track items. As the technology becomes more widely used, there are several technological and economic issues that should be overcome to facilitate the adoption of this track-and-trace technology. One of the technological challenges of RFID technology is its sensitivity to different environmental form factors, including packaging materials and the content of the packaging. Although researchers have assessed the performance loss due to the proximity of water and aqueous solutions, there is still a need to further investigate the impacts of food products on the reading range of RFID tags. However, to the best of our knowledge, there are not enough studies to determine the correlation between RFID tag performance and the properties of food and beverages. The goal of this project was to investigate the effect of solution properties (pH and conductivity) and different packaging materials filled with food-like water-based solutions on the performance of an RFID tag. Three commercially available ultra-high-frequency RFID tags were placed on three different bottles, which were filled with different concentrations of water-based solutions, including sodium chloride, citric acid, sucrose, and ethanol. Transparent glass, polyethylene terephthalate (PET), and Tetra Pak® were used as the packaging materials commonly used in the beverage industries. Tag readability (Theoretical Read Range, TRR) and sensitivity (Power on Tag Forward, PoF) were determined using an anechoic chamber. First, the best place to attach the tag for each packaging material was investigated using empty and water-filled bottles. Then, the bottles were filled with the food-like solutions and tested with the three different tags, and the PoF and TRR were measured at the fixed frequency of 915 MHz. In parallel, the pH and conductivity of the solutions were measured. The best-performing tag was then selected to test the bottles filled with wine, orange, and apple juice. Although the various solutions altered the performance of each tag, the change in tag performance had no correlation with the pH or conductivity of the solution. Additionally, packaging material played a significant role in tag performance. Each tag tested performed optimally under different conditions. This study is the first part of comprehensive research to determine a regression model for the prediction of tag performance behavior based on the packaging material and the content. More investigations, including more tags and food products, are needed to be able to develop a robust regression model. The results of this study can be used by RFID tag manufacturers to design suitable tags for specific products with similar properties.
Keywords: smart food packaging, supply chain management, food waste, radio frequency identification
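The regression model called for in the closing sentences could, for instance, regress the measured power-on-tag value on dummy-coded packaging material plus solution pH and conductivity. The sketch below uses entirely hypothetical measurements (not the study's data) and ordinary least squares simply to show the structure of such a model.

```python
import numpy as np

# hypothetical measurements: power-on-tag (dBm) at 915 MHz for bottles of three
# materials filled with solutions of varying pH and conductivity (mS/cm)
materials    = np.array([0, 0, 1, 1, 2, 2, 0, 1, 2, 0, 1, 2])   # 0=glass, 1=PET, 2=Tetra Pak
ph           = np.array([3.1, 7.0, 3.1, 7.0, 3.1, 7.0, 4.5, 4.5, 4.5, 2.5, 2.5, 2.5])
conductivity = np.array([1.2, 0.1, 1.2, 0.1, 1.2, 0.1, 5.0, 5.0, 5.0, 8.0, 8.0, 8.0])
pof_dbm      = np.array([14.1, 12.9, 13.2, 12.0, 15.0, 13.8, 14.5, 13.6, 15.4, 15.1, 14.0, 15.9])

# design matrix: intercept + dummy-coded material (glass as reference) + solution properties
dummies = np.eye(3)[materials][:, 1:]
X = np.column_stack([np.ones(len(pof_dbm)), dummies, ph, conductivity])
coef, *_ = np.linalg.lstsq(X, pof_dbm, rcond=None)

labels = ["intercept", "PET vs glass", "Tetra Pak vs glass", "pH", "conductivity"]
for name, c in zip(labels, coef):
    print(f"{name:>20s}: {c:+.3f}")
```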
Procedia PDF Downloads 114
459 Gender-Transformative Education: A Pathway to Nourishing and Evolving Gender Equality in the Higher Education of Iran
Authors: Sepideh Mirzaee
Abstract:
Gender-transformative education (G-TE) is a challenging concept in the field of education, and it is a matter of hot debate in the contemporary world. Paulo Freire, as the prominent advocate of transformative education, considers it an alternative to the conventional banking model of education. Besides, a more inclusive concept has been introduced, namely G-TE, as an unbiased education fostering an environment of gender justice. As its main tenet, G-TE eliminates obstacles to education and drives social shifts. A plethora of contemporary research indicates that G-TE could completely revolutionize education systems by displacing inequalities and changing gender stereotypes. Despite significant progress in female education and its effects on gender equality in Iran, challenges persist. There are some deficiencies regarding gender disparities in society and, specifically, in education. As an example, the number of women with university degrees is on the rise; thus, there will be an increasing demand for employment in society by them. Instead, many job opportunities remain occupied by men, and it is seen as intolerable by society to assign such occupations to women. In fact, Iran is regarded as a patriarchal society where educational contexts can play a critical role in assigning gender ideology to learners. Thus, such gender ideologies in education can become the prevailing ideologies in the entire society. Therefore, improving education in this regard can lead to a significant change in society, subsequently influencing the status of women not only within their own country but also on a global scale. Notably, higher education plays a vital role in this empowerment and social change. In particular, higher education can have a crucial part in imparting gender-neutral ideologies to its learners and bringing about substantial change. It has the potential to alleviate the detrimental effects of gender inequalities. Therefore, this study aims to conceptualize the pivotal role of G-TE and its potential power in developing gender equality within the higher education system of Iran, presented within a theoretical framework. The study emphasizes the necessity of establishing a theoretical grounding for citizenship and transformative education while distinguishing gender-related issues, including gender equality, equity, and parity. This theoretical foundation will shed light on the decisions made by policy-makers, syllabus designers, material developers, and specifically professors and students. By doing so, they will be able to promote and implement gender equality, recognizing the determinants, obstacles, and consequences of sustaining gender-transformative approaches in their classes within the Iranian higher education system. The expected outcomes include the eradication of gender inequality, the transformation of gender stereotypes, and the provision of equal opportunities for both males and females in education.
Keywords: citizenship education, gender inequality, higher education, patriarchal society, transformative education
Procedia PDF Downloads 65
458 In Vivo Evaluation of Exposure to Electromagnetic Fields at 27 GHz (5G) of Danio Rerio: A Preliminary Study
Authors: Elena Maria Scalisi, Roberta Pecoraro, Martina Contino, Sara Ignoto, Carmelo Iaria, Santi Concetto Pavone, Gino Sorbello, Loreto Di Donato, Maria Violetta Brundo
Abstract:
5G technology is evolving to satisfy a variety of service requirements that may allow high data-rate connections (1 Gbps) and lower latency times than current networks (<1 ms). In order to support a high data transmission speed and high-traffic service for eMBB (enhanced mobile broadband) use cases, 5G systems have the characteristic of using different frequency bands of the radio wave spectrum (700 MHz, 3.6-3.8 GHz and 26.5-27.5 GHz), thus taking advantage of higher frequencies than previous mobile radio generations (1G-4G). However, waves at higher frequencies have a lower capacity to propagate in free space; therefore, in order to guarantee capillary coverage of the territory for high-reliability applications, it will be necessary to install a large number of repeaters. Following the introduction of this new technology, there has been growing concern over the past few months about possible harmful effects on human health. The aim of this preliminary study is to evaluate possible short-term effects induced by 5G millimeter waves on the embryonic development and early life stages of Danio rerio using the Z-FET. We exposed developing zebrafish to a frequency of 27 GHz, with a standard pyramidal horn antenna placed 15 cm from the sample holder, ensuring an incident power density of 10 mW/cm2. During the exposure cycle, from 6 h post fertilization (hpf) to 96 hpf, we measured different morphological endpoints every 24 hours. The zebrafish embryo toxicity test (Z-FET) is a short-term test carried out on fertilized eggs of zebrafish, and it represents an effective alternative to acute tests with adult fish (OECD, 2013). We observed that 5G exposure did not have significant impacts on mortality or morphology, because exposed larvae showed normal detachment of the tail, presence of a heartbeat, and well-organized somites, although the hatching rate was lower than that of untreated larvae even at 48 h of exposure. Moreover, the immunohistochemical analysis performed on the larvae was negative for the expression of HSP-70, which was used as a biomarker. This is a preliminary study on the evaluation of potential toxicity induced by 5G, and it seems appropriate to underline the importance of further studies aimed at clarifying the probable real risk of exposure to electromagnetic fields.
Keywords: biomarker of exposure, embryonic development, 5G waves, zebrafish embryo toxicity test
Procedia PDF Downloads 129
457 Social Inequality and Inclusion Policies in India: Lessons Learned and the Way Forward
Authors: Usharani Rathinam
Abstract:
Although policies directing the inclusion of marginalized groups were in effect, the majority of the chronically impoverished in India belonged to the Scheduled Castes and Scheduled Tribes. Also, taking into account that poverty is gendered, destitute women belonged to the lower social order, whose needs are not largely highlighted at the policy level. This paper discusses the social relations view of poverty, which highlights how the social order that exists structurally in society can perpetuate chronic poverty, followed by a critical review of the social inclusion policies of India and their merits and demerits in addressing chronic poverty. A multiple case study design was utilized to address this concern in four districts of India: Jhansi, Tikamgarh, Cuddalore and Anantapur. These four districts were selected by purposive sampling based on the criteria that the district should either be categorized as a backward district or should have a history of high poverty rates. Qualitative methods, including eighty in-depth interviews, six focus group discussions, six social mapping procedures and three key informant interviews, were conducted in 2011 at these locations. Analysis of the data revealed that, irrespective of gender, Scheduled Caste and Scheduled Tribe participants were found to be chronically poor in all districts. Caste-based discrimination is exhibited at both micro and macro levels, that is, the village and institutional levels. At the village level, lower-caste respondents had less access to public resources. Also, within institutional settings, unequal access to resources is noticed due to confiscation, especially in fund distribution. This study found that half of the budget intended for Scheduled Castes and Scheduled Tribes was confiscated by upper-caste administrative staff. This implies that power based on social hierarchy excludes lower-caste participants from accessing better economic, social, and political benefits, which has led them to suffer long-term poverty. This study also explored the traditional ties between caste, social structure and bonded labour as a cause of long-term poverty. Though equal access is emphasized in constitutional rights, issues at the micro level have not been reflected in the formulation of these rights. Therefore, it is important for a policy to consider the structural complexity and then focus on issues such as the equal distribution of assets and infrastructural facilities, which will reduce exclusion and foster long-term security in areas such as employment, markets and public distribution.
Keywords: caste, inclusion policies, India, social order
Procedia PDF Downloads 206
456 Raman Tweezers Spectroscopy Study of Size Dependent Silver Nanoparticles Toxicity on Erythrocytes
Authors: Surekha Barkur, Aseefhali Bankapur, Santhosh Chidangil
Abstract:
The Raman Tweezers technique has become prevalent in single-cell studies. This technique combines Raman spectroscopy, which gives information about molecular vibrations, with optical tweezers, which use a tightly focused laser beam for trapping single cells. Raman Tweezers have thus enabled researchers to analyze single cells and explore different applications. The applications of Raman Tweezers include studying blood cells, monitoring blood-related disorders, silver nanoparticle-induced stress, etc. There is increased interest in the toxic effects of nanoparticles with the increase in their various applications. The interaction of these nanoparticles with cells may vary with their size. We have studied the effect of silver nanoparticles of sizes 10 nm, 40 nm, and 100 nm on erythrocytes using the Raman Tweezers technique. Our aim was to investigate the size dependence of the nanoparticle effect on RBCs. We used a 785 nm laser (Starbright Diode Laser, Torsana Laser Tech, Denmark) for both trapping and Raman spectroscopic studies. A 100x oil immersion objective with a high numerical aperture (NA 1.3) was used to focus the laser beam into the sample cell. The back-scattered light was collected using the same microscope objective and focused into the spectrometer (Horiba Jobin Yvon iHR320 with a 1200 grooves/mm grating blazed at 750 nm). A liquid-nitrogen-cooled CCD (Symphony CCD-1024x256-OPEN-1LS) was used for signal detection. Blood was drawn from healthy volunteers in vacutainer tubes and centrifuged to separate the blood components. 1.5 ml of silver nanoparticle suspension was washed twice with distilled water, leaving 0.1 ml of silver nanoparticles at the bottom of the vial. The concentration of the silver nanoparticles was 0.02 mg/ml, so 0.03 mg of nanoparticles was present in the 0.1 ml obtained. 25 µl of RBCs was diluted in 2 ml of PBS solution, treated with 50 µl (0.015 mg) of nanoparticles, and incubated in a CO2 incubator. Raman spectroscopic measurements were done after 24 hours and 48 hours of incubation. All the spectra were recorded with 10 mW laser power (785 nm diode laser), 60 s of accumulation time and 2 accumulations. Major changes were observed in the peaks at 565 cm-1, 1211 cm-1, 1224 cm-1, 1371 cm-1, and 1638 cm-1. A decrease in intensity at 565 cm-1, an increase at 1211 cm-1 with a reduction at 1224 cm-1, an increase in intensity at 1371 cm-1, and the disappearance of the peak at 1635 cm-1 indicate deoxygenation of hemoglobin. Nanoparticles of larger size showed the greatest spectral changes. Lesser changes were observed in the spectra of erythrocytes treated with 10 nm nanoparticles.
Keywords: erythrocytes, nanoparticle-induced toxicity, Raman tweezers, silver nanoparticles
Procedia PDF Downloads 293
455 Recycling of Sintered Neodymium-Iron-Boron (NdFeB) Magnet Waste via Oxidative Roasting and Selective Leaching
Authors: Woranittha Kritsarikan
Abstract:
Neodymium-iron-boron (NdFeB) magnets, classified as high-power magnets, are widely used in various applications such as electrical and medical devices and account for 13.5 % of the permanent magnet market. Their typical composition of 29 - 32 % Nd, 64.2 – 68.5 % Fe and 1 – 1.2 % B contains a significant amount of rare earth metals, which will be subject to shortages in the future. Domestic NdFeB magnet waste recycling should therefore be developed in order to reduce social and environmental impacts and move toward the circular economy. Most research works focus on recycling the magnet wastes, both from the manufacturing process and at end of life. Each type of waste has different characteristics and compositions. As a result, these directly affect recycling efficiency as well as the types and purity of the recyclable products. This research, therefore, focused on the recycling of manufacturing NdFeB magnet waste obtained from the sintering stage of magnet production; the waste contained 23.6% Nd, 60.3% Fe and 0.261% B. The aim was to recover high-purity neodymium oxide (Nd₂O₃) using a hybrid metallurgical process via oxidative roasting and selective leaching techniques. The sintered NdFeB waste was first ground to under 70 mesh prior to oxidative roasting at 550 - 800 °C to enable selective leaching of neodymium in the subsequent leaching step using H₂SO₄ at 2.5 M over 24 hours. The leachate was then subjected to drying and roasting at 700 – 800 °C prior to precipitation by oxalic acid and calcination to obtain neodymium oxide as the recycling product. According to XRD analyses, it was found that increasing the oxidative roasting temperature led to an increasing amount of hematite (Fe₂O₃) as the main phase, with a smaller amount of magnetite (Fe₃O₄) found. Peaks of neodymium oxide (Nd₂O₃) were also observed in a lesser amount. Furthermore, neodymium iron oxide (NdFeO₃) was present, and its XRD peaks were pronounced at higher oxidative roasting temperatures. After acid leaching and drying, iron sulfate and neodymium sulfate were mainly obtained. After the roasting step prior to water leaching, iron sulfate was converted to hematite as the main compound, while neodymium sulfate remained in the mixture. However, a small amount of magnetite was still detected by XRD. The higher roasting temperature of 800 °C resulted in a greater Fe₂O₃ to Nd₂(SO₄)₃ ratio, indicating a more effective roasting temperature. Iron oxides were subsequently water leached and filtered out, while the solution contained mainly neodymium sulfate. Therefore, low-temperature oxidative roasting not exceeding 600 °C, followed by acid leaching and roasting at 800 °C, gave the optimum condition for the further steps of precipitation and calcination to finally achieve neodymium oxide.
Keywords: NdFeB magnet waste, oxidative roasting, recycling, selective leaching
Procedia PDF Downloads 177454 Tests for Zero Inflation in Count Data with Measurement Error in Covariates
Authors: Man-Yu Wong, Siyu Zhou, Zhiqiang Cao
Abstract:
In quality-of-life research, health service utilization is an important determinant of medical resource expenditures on colorectal cancer (CRC) care. A better understanding of increased health service utilization is essential for optimizing the allocation of healthcare resources and thus for enhancing service quality, especially in regions with high expenditure on CRC care such as Hong Kong. In assessing the association between health-related quality of life (HRQOL) and health service utilization in patients with colorectal neoplasm, count data models that account for overdispersion or extra zero counts can be used. In our data, the HRQOL evaluation is a self-reported measure obtained from a questionnaire completed by the patients, so misreports and variations in the data are inevitable. Besides, there are more zero counts in the observed number of clinical consultations (observed frequency of zero counts = 206) than expected from a Poisson distribution with mean 1.33 (expected frequency of zero counts = 156). This suggests that excess zero counts may exist. Therefore, we study tests for detecting zero-inflation in models with measurement error in covariates. Method: Under a classical measurement error model, the approximate likelihood function for the zero-inflated Poisson (ZIP) regression model can be obtained, and the approximate maximum likelihood estimator (AMLE) can then be derived accordingly; it is consistent and asymptotically normally distributed. By calculating the score function and Fisher information based on the AMLE, a score test is proposed to detect the zero-inflation effect in the ZIP model with measurement error. The proposed test asymptotically follows a standard normal distribution under H0, and it is consistent with the test proposed for the zero-inflation effect when there is no measurement error. Results: Simulation results show that the empirical power of our proposed test is the highest among existing tests for zero-inflation in the ZIP model with measurement error. In the real data analysis, with or without considering measurement error in covariates, existing tests and our proposed test all imply that H0 should be rejected with a P-value less than 0.001, i.e., the zero-inflation effect is highly significant and the ZIP model is superior to the Poisson model for analyzing these data. However, if measurement error in covariates is not considered, only one covariate is significant; if measurement error is considered, only another covariate is significant. Moreover, the direction of the coefficient estimates for these two covariates differs between the ZIP regression models with and without measurement error. Conclusion: In our study, the ZIP model should be chosen over the Poisson model when assessing the association between condition-specific HRQOL and health service utilization in patients with colorectal neoplasm, and models taking measurement error into account provide statistically more reliable and precise information.Keywords: count data, measurement error, score test, zero inflation
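For readers who want to reproduce the baseline case, the sketch below implements the classical score test for zero-inflation in a Poisson regression without measurement error (van den Broek, 1995), whose signed square root is asymptotically standard normal under H0. It is a hedged illustration only; the AMLE-based test described in the abstract, which additionally corrects for covariate measurement error, is not reproduced here, and the simulated data are placeholders.

```python
# Hedged sketch: van den Broek (1995) score test for zero-inflation in a Poisson
# regression, ignoring measurement error. Not the AMLE-based test of the abstract.
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

def zero_inflation_score_test(y, X):
    """Return the signed score statistic and one-sided p-value for H0: no zero-inflation."""
    X = sm.add_constant(X, has_constant="add")
    fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    p0 = np.exp(-fit.fittedvalues)                 # model-implied P(Y = 0)
    num = np.sum(((y == 0) - p0) / p0)             # observed vs expected zeros
    den = np.sum((1 - p0) / p0) - len(y) * y.mean()
    stat = num / np.sqrt(den)                      # asymptotically N(0, 1) under H0
    return stat, 1 - norm.cdf(stat)

# Hypothetical usage with simulated zero-inflated data
rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = rng.poisson(np.exp(0.3 + 0.2 * x))
y[rng.random(500) < 0.2] = 0                       # inject excess zeros
print(zero_inflation_score_test(y, x))
```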
Procedia PDF Downloads 288453 Inhibitory Effects of Crocin from Crocus sativus L. on Cell Proliferation of a Medulloblastoma Human Cell Line
Authors: Kyriaki Hatziagapiou, Eleni Kakouri, Konstantinos Bethanis, Alexandra Nikola, Eleni Koniari, Charalabos Kanakis, Elias Christoforides, George Lambrou, Petros Tarantilis
Abstract:
Medulloblastoma is a highly invasive tumour, as it tends to disseminate throughout the central nervous system early in its course. Despite the high 5-year survival rate, a significant number of patients demonstrate serious long- or short-term sequelae (e.g., myelosuppression, endocrine dysfunction, cardiotoxicity, neurological deficits, and cognitive impairment) and higher mortality rates, unrelated to the initial malignancy itself but rather to the aggressive treatment. A strong rationale exists for the use of Crocus sativus L. (saffron) and its bioactive constituents (crocin, crocetin, safranal) as pharmaceutical agents, as they exert significant health-promoting properties. Crocins are water-soluble carotenoids; unlike most other carotenoids, they have relatively low toxicity, as they are not stored in adipose and liver tissues. Crocins have attracted wide attention as promising anti-cancer agents due to their antioxidant, anti-inflammatory, and immunomodulatory effects, their interference with transduction pathways implicated in tumorigenesis, angiogenesis, and metastasis (disruption of mitotic spindle assembly, inhibition of DNA topoisomerases, cell-cycle arrest, apoptosis, or cell differentiation), and their sensitization of cancer cells to radiotherapy and chemotherapy. The current research aimed to study the potential cytotoxic effect of crocins on the TE671 medulloblastoma cell line, which may be useful in the optimization of existing and the development of new therapeutic strategies. Crocins were extracted from saffron stigmas in an ultrasonic bath, using petroleum ether, diethyl ether, and 70 % v/v methanol as solvents, and the final extract was lyophilized. Crocins were identified by high-performance liquid chromatography (HPLC) by comparing the UV-vis spectra and retention times (tR) of the peaks with literature data. For the biological assays, crocin was diluted in nuclease- and protease-free water. TE671 cells were incubated with a range of crocin concentrations (16, 8, 4, 2, 1, 0.5, and 0.25 mg/ml) for 24, 48, 72, and 96 hours. Cell viability after incubation with crocins was analyzed with the Alamar Blue viability assay. The active ingredient of Alamar Blue, resazurin, is a blue, nontoxic, cell-permeable, and virtually nonfluorescent compound. Upon entering cells, resazurin is reduced to resorufin, a pink and fluorescent molecule. Viable cells continuously convert resazurin to resorufin, generating a quantitative measure of viability. Resorufin was quantified by measuring the absorbance of the solution at 600 nm with a spectrophotometer. HPLC analysis indicated that the most abundant crocins in our extract were trans-crocin-4 and trans-crocin-3. Crocins exerted significant cytotoxicity in a dose- and time-dependent manner (p < 0.005 for cells exposed to any concentration at 48, 72, and 96 hours versus unexposed cells); as their concentration and time of exposure increased, the reduction of resazurin to resorufin decreased, indicating a reduction in cell viability. IC50 values were calculated as ~3.738, 1.725, 0.878, and 0.7566 mg/ml at 24, 48, 72, and 96 hours, respectively. The results of our study could form the basis for further research on the use of natural carotenoids as anticancer agents and the shift to targeted therapies with higher efficacy and limited toxicity.
Acknowledgements: The research was funded by Fellowships of Excellence for Postgraduate Studies IKY-Siemens Programme.Keywords: crocetin, crocin, medulloblastoma, saffron
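As an illustration of how IC50 values like those reported above can be obtained from Alamar Blue viability data, the sketch below fits a four-parameter logistic dose-response curve with scipy. The viability numbers in the example are synthetic placeholders and do not reproduce the TE671 measurements.

```python
# Hedged sketch: estimating an IC50 from dose-response viability data with a
# four-parameter logistic fit. Example data are synthetic, not the study's values.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Four-parameter logistic: % viability as a function of concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = np.array([0.25, 0.5, 1, 2, 4, 8, 16])         # mg/ml, as in the abstract
viability = np.array([95, 90, 80, 62, 45, 28, 15])   # % viable (synthetic placeholder)

popt, _ = curve_fit(four_pl, conc, viability, p0=[10, 100, 2.0, 1.0], maxfev=10000)
bottom, top, ic50, hill = popt
print(f"Estimated IC50 = {ic50:.2f} mg/ml (Hill slope {hill:.2f})")
```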
Procedia PDF Downloads 216452 Impact of Alkaline Activator Composition and Precursor Types on Properties and Durability of Alkali-Activated Cements Mortars
Authors: Sebastiano Candamano, Antonio Iorfida, Patrizia Frontera, Anastasia Macario, Fortunato Crea
Abstract:
Alkali-activated materials are promising binders obtained by the alkaline attack of fly ashes, metakaolin, and blast furnace slag, among others. In order to guarantee the highest ecological and cost efficiency, a proper selection of precursors and alkaline activators has to be carried out. These choices deeply affect the microstructure, chemistry, and performance of this class of materials. Although several studies in recent years have focused on mix designs and curing conditions, the lack of exhaustive activation models, standardized mix designs, and curing conditions, together with insufficient investigation of shrinkage behavior, efflorescence, additives, and durability, prevents these materials from being perceived as an effective and reliable alternative to Portland cement. The aim of this study is to develop alkali-activated cement mortars containing high amounts of industrial by-products and waste, such as ground granulated blast furnace slag (GGBFS) and ashes obtained from the combustion of forest biomass in thermal power plants. The experimental campaign was performed in two steps. In the first step, research focused on elucidating how the workability, mechanical properties, and shrinkage behavior of the produced mortars are affected by the type and fraction of each precursor as well as by the composition of the activator solutions. In order to investigate the microstructures and reaction products, SEM and diffractometric analyses were carried out. In the second step, the durability of the mortars in harsh environments was evaluated. Mortars obtained using only GGBFS as the binder showed mechanical property development and shrinkage behavior strictly dependent on the SiO₂/Na₂O molar ratio of the activator solutions. Compressive strengths were in the range of 40-60 MPa after 28 days of curing at ambient temperature. Mortars obtained by partial replacement of GGBFS with metakaolin and forest biomass ash showed lower compressive strengths (≈35 MPa) and shrinkage values when a higher amount of ash was used. By varying the activator solutions and binder composition, compressive strengths up to 70 MPa associated with shrinkage values of about 4200 microstrains were measured. Durability tests were conducted to assess the acid and thermal resistance of the different mortars. They all showed good resistance in a 5 wt% H₂SO₄ solution, even after 60 days of immersion, while they showed a decrease in mechanical properties in the range of 60-90% when exposed to thermal cycles up to 700°C.Keywords: alkali activated cement, biomass ash, durability, shrinkage, slag
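Because the abstract ties mortar performance to the SiO₂/Na₂O molar ratio of the activator solutions, the sketch below shows one common way to compute that ratio for a sodium silicate/NaOH activator blend. The molar masses are standard values; the solution composition and masses in the example are hypothetical, not the formulations used in the study.

```python
# Hedged sketch: SiO2/Na2O molar ratio (activator modulus) of a sodium silicate
# solution with an optional NaOH addition (2 NaOH -> Na2O + H2O).
M_SIO2, M_NA2O, M_NAOH = 60.08, 61.98, 40.00   # g/mol

def activator_modulus(m_silicate_g, wt_sio2, wt_na2o, m_naoh_g=0.0):
    """Molar SiO2/Na2O ratio of the blended activator solution."""
    n_sio2 = m_silicate_g * wt_sio2 / M_SIO2
    n_na2o = m_silicate_g * wt_na2o / M_NA2O + m_naoh_g / (2 * M_NAOH)
    return n_sio2 / n_na2o

# Hypothetical example: 100 g of silicate solution (27 wt% SiO2, 8 wt% Na2O) plus 5 g NaOH
print(f"SiO2/Na2O = {activator_modulus(100, 0.27, 0.08, m_naoh_g=5):.2f}")
```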
Procedia PDF Downloads 325451 Material Use and Life Cycle GHG Emissions of Different Electrification Options for Long-Haul Trucks
Authors: Nafisa Mahbub, Hajo Ribberink
Abstract:
Electrification of long-haul trucks has been discussed as a potential decarbonization strategy. These trucks will require large batteries because of their weight and long daily driving distances. Around 245 million battery electric vehicles are predicted to be on the road by the year 2035. This huge increase in the number of electric vehicles (EVs) will require intensive mining operations for metals and other materials to manufacture millions of batteries. These operations will add significant environmental burdens, and there is a significant risk that the mining sector will not be able to meet the demand for battery materials, leading to higher prices. Since the battery is the most expensive component in an EV, technologies that enable electrification with smaller battery sizes have substantial potential to reduce material usage and the associated environmental and cost burdens. One of these technologies is the 'electrified road' (eroad), where vehicles receive power while they are driving, for instance through an overhead catenary (OC) wire (as used by trolleybuses and electric trains), through wireless (inductive) chargers embedded in the road, or by connecting to an electrified rail in or on the road surface. This study assessed the total material use and associated life cycle GHG emissions of two types of eroads (overhead catenary and in-road wireless charging) for long-haul trucks in Canada and compared them to electrification using stationary plug-in fast charging. As different electrification technologies require different amounts of materials for charging infrastructure and for the truck batteries, the study included the contributions of both to the total material use. The study developed a bottom-up model comparing the three different charging scenarios: plug-in fast chargers, overhead catenary, and in-road wireless charging. The investigated materials for charging technology and batteries were copper (Cu), steel (Fe), aluminium (Al), and lithium (Li). For the plug-in fast charging technology, different charging scenarios ranging from overnight charging (350 kW) to megawatt (MW) charging (2 MW) were investigated. A 500 km stretch of highway (one lane of in-road charging per direction) was considered to estimate the material use for the overhead catenary and inductive charging technologies. The study considered trucks needing an 800 kWh battery under the plug-in charger scenario but only a 200 kWh battery for the OC and inductive charging scenarios. Results showed that, overall, the inductive charging scenario has the lowest material use, followed by the OC and plug-in charger scenarios, respectively. The material use for the OC and plug-in charger scenarios was 50-70% higher than for the inductive charging scenario for the overall system, including the charging infrastructure and battery. The life cycle GHG emissions from the construction and installation of the charging technology materials were also investigated.Keywords: charging technology, eroad, GHG emissions, material use, overhead catenary, plug in charger
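To make the bottom-up accounting concrete, one possible structure for such a comparison is sketched below: total material demand = fleet size × battery materials per truck + electrified-road length × infrastructure materials per km (or number of chargers × materials per charger). The battery sizes and the 500 km corridor follow the abstract, but every material-intensity value in the example is a made-up placeholder, not a figure from the study.

```python
# Hedged sketch of a bottom-up material-use comparison across charging scenarios.
# All kg-per-unit intensities below are illustrative placeholders and must be
# replaced with real inventory data before drawing any conclusions.
BATTERY_KG_PER_KWH = {"Cu": 0.9, "Fe": 1.2, "Al": 1.1, "Li": 0.1}        # placeholder
INFRA_KG_PER_KM = {
    "catenary":  {"Cu": 6000, "Fe": 20000, "Al": 1500, "Li": 0},         # placeholder
    "inductive": {"Cu": 4000, "Fe": 8000,  "Al": 1000, "Li": 0},         # placeholder
}
CHARGER_KG_EACH = {"Cu": 250, "Fe": 900, "Al": 120, "Li": 0}             # placeholder

def scenario_materials(n_trucks, battery_kwh, km_eroad=0, eroad=None, n_chargers=0):
    """Total kg of each material for one electrification scenario."""
    totals = {}
    for m, kg_per_kwh in BATTERY_KG_PER_KWH.items():
        total = n_trucks * battery_kwh * kg_per_kwh + n_chargers * CHARGER_KG_EACH[m]
        if eroad:
            total += km_eroad * INFRA_KG_PER_KM[eroad][m]
        totals[m] = total
    return totals

# Hypothetical fleet of 10,000 trucks on a 500 km corridor (2 directions = 1,000 lane-km)
print("plug-in :", scenario_materials(10_000, 800, n_chargers=500))
print("catenary:", scenario_materials(10_000, 200, km_eroad=1_000, eroad="catenary"))
print("wireless:", scenario_materials(10_000, 200, km_eroad=1_000, eroad="inductive"))
```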
Procedia PDF Downloads 51450 A Systematic Review Investigating the Use of EEG Measures in Neuromarketing
Authors: A. M. Byrne, E. Bonfiglio, C. Rigby, N. Edelstyn
Abstract:
Introduction: Neuromarketing employs numerous methodologies when investigating products and advertisement effectiveness. Electroencephalography (EEG), a non-invasive measure of electrical activity from the brain, is commonly used in neuromarketing. EEG data can be considered using time-frequency (TF) analysis, where changes in the frequency of brainwaves are calculated to infer participants' mental states, or event-related potential (ERP) analysis, where changes in amplitude are observed in direct response to a stimulus. This presentation discusses the findings of a systematic review of EEG measures in neuromarketing. A systematic review summarises evidence on a research question, using explicit measures to identify, select, and critically appraise relevant research papers. This systematic review identifies which EEG measures are the most robust predictors of customer preference and purchase intention. Methods: Search terms identified 174 papers that used EEG in combination with marketing-related stimuli. Publications were excluded if they were written in a language other than English or were not published as journal articles (e.g., book chapters). The review investigated which TF effect (e.g., theta-band power) and ERP component (e.g., N400) most consistently reflected preference and purchase intention. Machine-learning prediction was also investigated, along with the use of EEG combined with physiological measures such as eye-tracking. Results: Frontal alpha asymmetry was the most reliable TF signal, where an increase in activity over the left side of the frontal lobe indexed a positive response to marketing stimuli, while an increase in activity over the right side indexed a negative response. The late positive potential, a positive amplitude increase around 600 ms after stimulus presentation, was the most reliable ERP component, reflecting the conscious emotional evaluation of marketing stimuli. However, each measure showed mixed results when related to preference and purchase behaviour. Predictive accuracy was greatly improved through machine-learning algorithms such as deep neural networks, especially when combined with eye-tracking or facial expression analyses. Discussion: This systematic review provides a novel catalogue of the most effective uses of each EEG measure commonly employed in neuromarketing. Exciting findings to emerge are the identification of frontal alpha asymmetry and the late positive potential as markers of preferential responses to marketing stimuli. Machine-learning algorithms achieved predictive accuracies as high as 97%, and future research should therefore focus on machine-learning prediction when using EEG measures in neuromarketing.Keywords: EEG, ERP, neuromarketing, machine-learning, systematic review, time-frequency
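As an illustration of the frontal alpha asymmetry measure highlighted above, the sketch below computes the conventional index ln(right frontal alpha power) - ln(left frontal alpha power) from two frontal channels (e.g., F3/F4) using Welch's method. The sampling rate, the 8-13 Hz alpha band, and the synthetic signals are generic assumptions, not parameters taken from any reviewed study.

```python
# Hedged sketch: frontal alpha asymmetry (FAA) from two frontal EEG channels.
# Because alpha power is inversely related to cortical activity, a positive FAA
# (less alpha on the left) indexes relatively greater left-frontal activity.
import numpy as np
from scipy.signal import welch

def alpha_power(signal, fs, band=(8.0, 13.0)):
    """Mean Welch power spectral density in the alpha band."""
    freqs, psd = welch(signal, fs=fs, nperseg=int(2 * fs))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].mean()

def frontal_alpha_asymmetry(left, right, fs):
    return np.log(alpha_power(right, fs)) - np.log(alpha_power(left, fs))

# Hypothetical usage with synthetic 10 s recordings at 256 Hz
fs = 256
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)
f3 = 1.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(scale=0.5, size=t.size)   # left channel
f4 = 0.8 * np.sin(2 * np.pi * 10 * t) + rng.normal(scale=0.5, size=t.size)   # right channel
print(f"FAA = {frontal_alpha_asymmetry(f3, f4, fs):.3f}")
```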
Procedia PDF Downloads 111449 Men of Congress in Today’s Brazil: Ethnographic Notes on Neoliberal Masculinities in Support of Bolsonaro
Authors: Joao Vicente Pereira Fernandez
Abstract:
In the context of a democratic crisis, a new wave of authoritarianism propels domineering male figures into leadership posts worldwide. Although the gendered aspect of this phenomenon has been reasonably documented, recent studies have focused on high-level commanding posts, such as those of president and prime minister, leaving other positions of political power with limited attention. This natural focus of investigation, however powerful, seems to have restricted our understanding of the phenomenon by precluding a more thorough inquiry into its gendered aspects and its consequences for political representation as a whole. Trying to fill this gap, in recent research we examined the election results of Jair Bolsonaro's party for the Legislative Branch in 2018. We found that the party's proportion of non-male representatives was about average, showing that it provided reasonable access of women to the legislature in a comparative perspective. However, and perhaps more intuitively, we also found that the elected members of Bolsonaro's party performed very gendered roles, which allowed us to draw the first lines of the representative profiles gathered around the new-right in Brazil. These results unveiled new horizons for further research, addressing topics that range from the role of women for the new-right in Brazilian institutional politics to the relations between these profiles of representatives, their agendas, and their political and electoral strategies. This article aims to deepen the understanding of some of these profiles in order to lay the groundwork for the development of the second research agenda mentioned above. More specifically, it focuses on two of the three profiles that were drawn predominantly, if not entirely, from male subjects in our previous research, with the objective of portraying the masculinity standards mobilized and promoted by them. These profiles – the entrepreneur and the army man – were chosen to be developed due to their proximity to both liberal and authoritarian views and, moreover, because they can represent two facets of the new-right that were integrated in a certain way around Bolsonaro in 2018 but that can be reworked in the future. After a brief introduction to the literature on masculinity and politics in times of democratic crisis, we succinctly present the relevant results of our previous research and then describe these two profiles and their masculinities in detail. We adopt a combination of ethnography and discourse analysis, methods that allow us to make sense of the data we collected in our previous research as well as of the data gathered for this article: social media posts and interactions between the elected members that inspired these profiles and their supporters. Finally, we discuss our results, presenting our main argument on how these descriptions provide a further understanding of the gendered aspect of liberal authoritarianism, from which to better apprehend its political implications in Brazil.Keywords: Brazilian politics, gendered politics, masculinities, new-right
Procedia PDF Downloads 121448 Portuguese Teachers in Bilingual Schools in Brazil: Professional Identities and Intercultural Conflicts
Authors: Antonieta Heyden Megale
Abstract:
With the advent of globalization, the social, cultural, and linguistic situation of the whole world has changed. In this scenario, the teaching of English in Brazil has become a booming business, and the belief that this language is essential to a successful life is promoted by the media, which sees it as a commodity and spares no effort to sell it. In this context, the growth of bilingual and international schools that have English and Portuguese as languages of instruction has become evident. According to federal legislation, all schools in the country must follow the curriculum guidelines proposed by the Ministry of Education of Brazil. It is then mandatory that, in addition to the specific foreign curriculum an international school subscribes to, it must also teach all subjects of the official minimum curriculum, and these subjects have to be taught in Portuguese. It is important to emphasize that, in these schools, English is the most prestigious language. Therefore, firstly, Brazilian teachers who teach Portuguese in such contexts find themselves in a situation in which they teach in a low-status language. Secondly, because such teachers' actions are guided by a different cultural matrix, which differs considerably from Anglo-Saxon values and beliefs, they often experience intercultural conflict in their workplace. Taking this into consideration, this research, focusing on the trajectories of a specific group of Brazilian teachers of Portuguese in international and bilingual schools located in the city of São Paulo, intends to analyze how they discursively represent their own professional identities and practices. More specifically, the objectives of this research are to understand, from the perspective of the investigated teachers, how they (i) narratively rebuild their professional careers and explain the factors that led them to an international or an immersion bilingual school; (ii) position themselves with respect to their linguistic repertoire; (iii) interpret the intercultural practices they are involved with in school; and (iv) position themselves by foregrounding categories to determine their membership in the group of Portuguese teachers. We have worked with these teachers' autobiographical narratives. The autobiographical approach assumes that the stories told by teachers are systems of meaning involved in the production of identities and subjectivities in the context of power relations. The teachers' narratives were elicited by the following trigger: "I would like you to tell me how you became a teacher in a bilingual/international school and what your impressions are about your work and about the context in which it is inserted". These narratives were produced orally, recorded, and transcribed for analysis. The teachers were also invited to draw their "linguistic portraits". The theoretical concepts of positioning and indexical cues were taken into consideration in the data analysis. The narratives produced by the teachers point to intercultural conflicts related to their expectations and representations of others, which are never neutral or objective truths but discursive constructions.Keywords: bilingual schools, identity, interculturality, narrative
Procedia PDF Downloads 337447 Simulation of the Flow in a Circular Vertical Spillway Using a Numerical Model
Authors: Mohammad Zamani, Ramin Mansouri
Abstract:
Spillways are one of the most important hydraulic structures of dams, providing the stability of the dam and downstream areas at the time of flood. A circular vertical spillway with various inlet forms is very effective when there is not enough space for other spillway types. Hydraulic flow in a vertical circular spillway is divided into three regimes: free, orifice, and under pressure (submerged). In this research, the hydraulic flow characteristics of a circular vertical spillway are investigated with a CFD model. Two-dimensional unsteady RANS equations were solved numerically using the Finite Volume Method. The PISO scheme was applied for the velocity-pressure coupling. The most commonly used two-equation turbulence models, k-ε and k-ω, were chosen to model the Reynolds shear stress term. The power-law scheme was used for the discretization of the momentum, k, ε, and ω equations. The VOF method (geometric reconstruction algorithm) was adopted for interface simulation. In this study, three computational grids (coarse, intermediate, and fine) were used to discretize the simulation domain. In order to simulate the flow, the k-ε (Standard, RNG, Realizable) and k-ω (Standard and SST) models were used. Also, in order to find the best wall function, two types, the standard wall function and the non-equilibrium wall function, were investigated. The laminar model did not produce satisfactory flow depth and velocity along the morning-glory spillway. The results of the most commonly used two-equation turbulence models (k-ε and k-ω) were identical. Furthermore, the standard wall function produced better results compared to the non-equilibrium wall function. Thus, for the other simulations, the standard k-ε model with the standard wall function was preferred. The comparison criterion in this study is the trajectory profile of the water jet. The results show that the fine computational grid, a velocity condition at the flow inlet boundary, and a pressure condition at the boundaries in contact with air provide the best possible results. Also, the standard wall function was chosen for the wall treatment, and the standard k-ε turbulence model gave results most consistent with the experimental data. As the jet gets closer to the end of the basin, the differences between the numerical and experimental results increase. The mesh with 10,602 nodes, the standard k-ε turbulence model, and the standard wall function provide the best results for modeling the flow in a circular vertical spillway. There was good agreement between numerical and experimental results for the upper and lower nappe profiles. For the water level over the crest and the discharge, the numerical results are in good agreement with the experimental data at low water levels, but the difference between the numerical and experimental discharge increases with increasing water level. For the flow coefficient, the difference between the numerical and experimental results increases as the P/R ratio decreases.Keywords: circular vertical, spillway, numerical model, boundary conditions
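For reference, the standard k-ε model ultimately retained in the study solves transport equations for the turbulent kinetic energy k and its dissipation rate ε. The textbook form below (not reproduced in the abstract) is given only to fix notation; the usual model constants are C_μ = 0.09, C_1ε = 1.44, C_2ε = 1.92, σ_k = 1.0, and σ_ε = 1.3, with P_k the production of turbulent kinetic energy by mean velocity gradients.

```latex
% Standard k-epsilon transport equations and eddy viscosity (textbook form)
\frac{\partial(\rho k)}{\partial t} + \frac{\partial(\rho k u_i)}{\partial x_i}
 = \frac{\partial}{\partial x_j}\left[\left(\mu + \frac{\mu_t}{\sigma_k}\right)
   \frac{\partial k}{\partial x_j}\right] + P_k - \rho\varepsilon

\frac{\partial(\rho\varepsilon)}{\partial t} + \frac{\partial(\rho\varepsilon u_i)}{\partial x_i}
 = \frac{\partial}{\partial x_j}\left[\left(\mu + \frac{\mu_t}{\sigma_\varepsilon}\right)
   \frac{\partial\varepsilon}{\partial x_j}\right]
   + C_{1\varepsilon}\frac{\varepsilon}{k}P_k - C_{2\varepsilon}\rho\frac{\varepsilon^2}{k}

\mu_t = \rho\, C_\mu \frac{k^2}{\varepsilon}
```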
Procedia PDF Downloads 86446 Strategies for Arctic Greenhouse Farming: An Energy and Technology Survey of Greenhouse Farming in the North of Sweden
Authors: William Sigvardsson, Christoffer Alenius, Jenny Lindblom, Andreas Johansson, Marcus Sandberg
Abstract:
This article covers a study focusing on a subarctic greenhouse located in Nikkala, Sweden. Through a site visit and the creation of a CFD model, the study investigates the differences in energy demand between high-pressure sodium (HPS) lights combined with an air-carried heating system and light-emitting diode (LED) lights combined with a water-carried heating system. Through an IDA ICE model, the impact of insulating the parts of the greenhouse without active cultivation was also investigated. The purpose was to compare the current system in the greenhouse to state-of-the-art alternatives and to evaluate whether an investment in a water-carried heating system in combination with LED lights, and in insulating the non-cultivating parts of the greenhouse, could be considered profitable. Operating a greenhouse in the harsh subarctic climate found in the northern parts of Sweden is not an easy task, especially if the operation is year-round. With an average temperature below -5 °C from November through January, efficient growing techniques are a must to ensure a profitable business. Today the most crucial parts of a greenhouse are the heating system, the lighting system, dehumidifying measures, and the thermal screen, and the impact of a poorly designed system in a subarctic climate could be devastating, as the margins are slim. The greenhouse studied uses a pellet burner to power its air-carried heating system. The simulations found that the savings from implementing the water-carried heating system in combination with the LED lamps amounted to just under 14,800 SEK monthly, or 18 % of the total cost of energy. Given this, a payback period of 3-9 years could be expected under different scenarios, including specific time periods, financial aid, and the resale price of the current system. Insulating the non-cultivating parts of the greenhouse was found to offer possible savings of 25,300 SEK annually, or 46 % of the current heat demand, resulting in a payback period of just over 1-2 years. Given the possible energy savings, a reduction in emitted CO2 equivalents of almost 1.9 tonnes could be achieved annually. It was concluded that relatively inexpensive investments in modern greenhouse equipment could make a significant contribution to reducing the energy consumption of the greenhouse, resulting in a more competitive business environment for subarctic greenhouse owners. New parts of the greenhouse should be built with the water-carried heating system in combination with state-of-the-art LED lights, and all parts not housing active cultivation should be insulated. If the greenhouse in Nikkala is eligible for financial aid or finds a resale value in the current system, an investment should be made in a new water-carried heating system in combination with LED lights.Keywords: energy efficiency, sub-arctic greenhouses, energy measures, greenhouse climate control, greenhouse technology, CFD
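A simple-payback check consistent with the figures quoted above can be written in a few lines. The investment costs in the sketch are hypothetical placeholders, since the abstract reports only the savings and the resulting payback ranges (3-9 years and just over 1-2 years).

```python
# Hedged sketch: simple payback period from the savings reported in the abstract.
# Investment costs are illustrative placeholders, not figures from the study.
monthly_saving_heat_led = 14_800       # SEK/month, water-carried heating + LED
annual_saving_insulation = 25_300      # SEK/year, insulating non-cultivating parts

def simple_payback(investment_sek, annual_saving_sek):
    return investment_sek / annual_saving_sek

# Hypothetical investment costs for illustration only
print(f"Heating + LED: {simple_payback(800_000, 12 * monthly_saving_heat_led):.1f} years")
print(f"Insulation   : {simple_payback(40_000, annual_saving_insulation):.1f} years")
```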
Procedia PDF Downloads 75445 Sizing Residential Solar Power Systems Based on Site-Specific Energy Statistics
Authors: Maria Arechavaleta, Mark Halpin
Abstract:
In the United States, costs of solar energy systems have declined to the point that they are viable options for most consumers. However, there are no consistent procedures for specifying sufficient systems. The factors that must be considered are energy consumption, potential solar energy production, and cost. The traditional method of specifying solar energy systems is based on assumed daily levels of available solar energy and average amounts of daily energy consumption. The mismatches between energy production and consumption are usually mitigated using battery energy storage systems, and energy use is curtailed when necessary. The main consumer decision that drives the total system cost is how much unserved (or curtailed) energy is acceptable. Of course, additional solar conversion equipment can be installed to provide greater peak energy production, and extra energy storage capability can be added to mitigate longer-lasting periods of low solar energy production. Each option increases the total cost and provides a benefit that is difficult to quantify accurately. An approach to quantifying the cost-benefit of adding additional resources, either production or storage or both, based on the statistical concepts of loss-of-energy probability and expected unserved energy, is presented in this paper. Relatively simple calculations, based on site-specific energy availability and consumption data, can be used to show the value of each additional increment of production or storage. With this incremental benefit-cost information, consumers can select the best overall performance combination for their application at a cost they are comfortable paying. The approach is based on a statistical analysis of energy consumption and production characteristics over time. The characteristics take the form of curves, with each point on the curve representing an energy consumption or production value over a period of time; a one-minute period is used in this paper. These curves are measured at the consumer location under the conditions that exist at the site, and the duration of the measurements is a minimum of one week. While greater accuracy could be obtained with longer recording periods, the examples in this paper are based on a single week for demonstration purposes. The weekly consumption and production curves are overlaid on each other, and the mismatches are used to size the battery energy storage system. Loss-of-energy probability and expected unserved energy indices are calculated in addition to the total system cost. These indices allow the consumer to recognize and quantify the benefit (typically a reduction in energy consumption curtailment) available for a given increase in cost. Consumers can then make informed decisions that are accurate for their location and conditions and consistent with their available funds.Keywords: battery energy storage systems, loss of load probability, residential renewable energy, solar energy systems
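A minimal sketch of the kind of calculation described above is given below: given one-minute consumption and production curves and a battery of a given capacity, simulate the energy balance and report the loss-of-energy probability (fraction of intervals with unserved energy) and the expected unserved energy. The lossless battery model, the full initial state of charge, and the synthetic profiles are assumptions for illustration, not the paper's data or exact procedure.

```python
# Hedged sketch: loss-of-energy probability (LOEP) and expected unserved energy (EUE)
# from one-minute consumption/production curves with a simple lossless battery model.
import numpy as np

def reliability_indices(consumption_kwh, production_kwh, battery_kwh):
    """Return (LOEP, EUE in kWh) over the analyzed period."""
    soc = battery_kwh                              # start with a full battery (assumption)
    unserved = []
    for load, pv in zip(consumption_kwh, production_kwh):
        net = pv - load                            # kWh surplus (+) or deficit (-) this minute
        if net >= 0:
            soc = min(battery_kwh, soc + net)      # charge, spill any excess
            unserved.append(0.0)
        else:
            discharge = min(soc, -net)             # cover the deficit from the battery
            soc -= discharge
            unserved.append(-net - discharge)      # remainder is curtailed consumption
    unserved = np.array(unserved)
    return float(np.mean(unserved > 0)), float(unserved.sum())

# Hypothetical one-week, one-minute example with synthetic sinusoidal profiles
minutes = 7 * 24 * 60
t = np.arange(minutes)
load = 0.02 + 0.01 * np.sin(2 * np.pi * t / 1440)                   # kWh per minute
pv = np.clip(0.06 * np.sin(2 * np.pi * (t - 360) / 1440), 0, None)  # kWh per minute
print(reliability_indices(load, pv, battery_kwh=10.0))
```

Re-running the function over a grid of battery and array sizes gives the incremental benefit of each additional unit of storage or production, which is the trade-off the paper quantifies.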
Procedia PDF Downloads 234444 Preschoolers’ Selective Trust in Moral Promises
Authors: Yuanxia Zheng, Min Zhong, Cong Xin, Guoxiong Liu, Liqi Zhu
Abstract:
Trust is a critical foundation of social interaction and development, playing a significant role in the physical and mental well-being of children as well as in their social participation. Previous research has demonstrated that young children do not blindly trust others but make selective trust judgments based on available information. The characteristics of speakers can influence children's trust judgments. According to Mayer et al.'s model of trust, these characteristics, including ability, benevolence, and integrity, can influence children's trust judgments. While previous research has focused primarily on the effects of ability and benevolence, relatively little attention has been paid to integrity, which refers to individuals' adherence to promises, fairness, and justice. This study focuses specifically on how keeping or breaking promises affects young children's trust judgments. The paradigm of selective trust was employed in two experiments. A sample size of 100 children was required for an effect size of w = 0.30, α = 0.05, and 1 - β = 0.85, using G*Power 3.1. This study employed a 2×2 within-subjects design to investigate the effects of the moral valence of promises (within-subjects factor: moral vs. immoral promises) and the fulfilment of promises (within-subjects factor: kept vs. broken promises) on children's trust judgments (divided into declarative and promising contexts). Experiment 1 adapted binary choice paradigms, presenting 118 preschoolers (62 girls, mean age = 4.99 years, SD = 0.78) with four conflict scenarios involving keeping or breaking moral/immoral promises, in order to investigate children's trust judgments. Experiment 2 used single choice paradigms, in which 112 preschoolers (57 girls, mean age = 4.94 years, SD = 0.80) were presented with four stories to examine their level of trust. The results of Experiment 1 showed that preschoolers selectively trusted both promisors who kept moral promises and those who broke immoral promises, as well as their assertions and new promises. Additionally, the 5.5-6.5-year-old children were more likely than the 3.5-4.5-year-old children to trust both promisors who kept moral promises and those who broke immoral promises. Moreover, preschoolers were more likely to make accurate trust judgments towards promisors who kept moral promises than towards those who broke immoral promises. The results of Experiment 2 showed significant differences in preschoolers' degree of trust: kept moral promise > broke immoral promise > broke moral promise ≈ kept immoral promise. This study is the first to investigate the development of trust judgments about moral promises among preschoolers aged 3.5-6.5 years. The results show that preschoolers consider both the valence and the fulfilment of promises when making trust judgments. Furthermore, as preschoolers mature, they become more inclined to trust promisors who keep moral promises and those who break immoral promises. Additionally, the study reveals that preschoolers have the highest level of trust in promisors who kept moral promises, followed by those who broke immoral promises; promisors who broke moral promises and those who kept immoral promises are trusted the least. These findings contribute valuable insights to our understanding of moral promises and trust judgment.Keywords: promise, trust, moral judgement, preschoolers
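The stated a priori sample size can be checked in code. The sketch below runs a chi-square goodness-of-fit power analysis for w = 0.30, α = 0.05, power = 0.85 with statsmodels; the assumption of one degree of freedom (two bins) is mine and may differ from the exact test family selected in G*Power 3.1.

```python
# Hedged sketch: a priori sample size for a chi-square test with effect size w = 0.30,
# alpha = 0.05, power = 0.85, assuming df = 1 (n_bins = 2).
from statsmodels.stats.power import GofChisquarePower

n = GofChisquarePower().solve_power(effect_size=0.30, nobs=None,
                                    alpha=0.05, power=0.85, n_bins=2)
print(f"Required sample size: {n:.0f}")   # ~100 participants under these assumptions
```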
Procedia PDF Downloads 54