Search results for: complete feed
Commenced in January 2007
Frequency: Monthly
Edition: International
Paper Count: 3420

390 Applying Quadrant Analysis in Identifying Business-to-Business Customer-Driven Improvement Opportunities in Third Party Logistics Industry

Authors: Luay Jum'a

Abstract:

Third-party logistics (3PL) providers face many challenges in domestic and global markets, creating a volatile decision-making environment. Challenges such as managing changes in consumer behaviour, demanding customer expectations, and time compression have become complex problems for 3PL providers. Since the movement towards increased outsourcing outpaces the movement towards insourcing, the need to achieve a competitive advantage in the 3PL market increases. As this trend continues to grow, identifying areas of strength and improvement through analysis of the logistics service quality (LSQ) factors that drive business-to-business (B2B) customer satisfaction has become a priority for 3PL companies. Consequently, 3PL companies increasingly focus on the issues most important from the perspective of their customers and rely on this information in making managerial decisions. This study therefore provides guidance for improving LSQ levels in the context of the 3PL industry in Jordan. It focuses on the most important LSQ factors and uses a managerial tool that guides 3PL companies in making LSQ improvements based on a quadrant analysis of two main dimensions: declared LSQ importance and inferred LSQ importance. Although considerable research has investigated the relationship between LSQ and customer satisfaction, there remains a lack of managerial tools to aid LSQ improvement decision-making. Moreover, the main reason companies use 3PL service providers is the realised reduction in total logistics cost and the incremental improvement in customer service.
In this regard, a managerial tool that helps 3PL service providers manage their portfolio of LSQ factors effectively and efficiently would be a valuable investment. One way of suggesting LSQ improvement actions for 3PL service providers is to adopt analysis tools that categorise attributes, such as the Importance-Performance matrix. In light of the above, quadrant analysis provides a valuable opportunity for 3PL service providers to identify improvement opportunities, as the importance of customer service attributes is identified through two complementary techniques. The data were collected through a survey, and 293 questionnaires were returned from business-to-business (B2B) customers of 3PL companies in Jordan. The results showed that LSQ factors vary in importance and that 3PL companies should focus on some factors more than others. In particular, the ordering procedures and timeliness/responsiveness factors were found to be crucial in the 3PL business and therefore need more focus and development by 3PL service providers in the Jordanian market.
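The quadrant logic described above can be sketched as follows; the factor names, scores, cut-off rule, and quadrant labels below are hypothetical illustrations, not data or terminology from the study.

```python
# Illustrative sketch of a two-dimension quadrant analysis
# (declared vs. inferred importance), importance-performance style.

def quadrant(declared, inferred, cut_declared, cut_inferred):
    """Classify one LSQ factor by its declared vs. inferred importance."""
    if declared >= cut_declared and inferred >= cut_inferred:
        return "critical: maintain and develop"      # high on both dimensions
    if declared >= cut_declared:
        return "declared-only: check expectations"   # stated but not inferred
    if inferred >= cut_inferred:
        return "hidden driver: latent improvement"   # inferred but not stated
    return "low priority"

# Hypothetical factor scores on a 1-7 scale: (declared, inferred).
factors = {
    "ordering procedures":       (6.4, 6.1),
    "timeliness/responsiveness": (6.2, 6.5),
    "personnel contact quality": (5.1, 4.2),
}

# Use the mean of each dimension as the quadrant cut-off.
cd = sum(d for d, _ in factors.values()) / len(factors)
ci = sum(i for _, i in factors.values()) / len(factors)

for name, (d, i) in factors.items():
    print(name, "->", quadrant(d, i, cd, ci))
```

Splitting each axis at its mean is only one possible cut-off rule; a fixed scale midpoint or a managerially chosen threshold works the same way.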

Keywords: logistics service quality, managerial decisions, quadrant analysis, third party logistics service provider

Procedia PDF Downloads 112
389 Cross-Sectional Analysis of the Health Product E-Commerce Market in Singapore

Authors: Andrew Green, Jiaming Liu, Kellathur Srinivasan, Raymond Chua

Abstract:

Introduction: The size of Singapore’s online health product (HP) market (e-commerce) is largely unknown. However, it is recognized that a large majority of products come from overseas and are thus unregulated. As buying HP from unauthorized sources significantly compromises public health safety, understanding e-commerce users’ demographics and their perceptions of online HP purchasing is a pivotal first step towards recommendations for Singapore’s pharmacovigilance efforts. Objective: To assess the prevalence of online HP purchasing behaviour among Singaporean e-commerce users. Methodology: This cross-sectional study targeted Singaporean e-commerce users recruited from various local websites and online forums. Participants were not randomized into study arms but were instead stratified by age using a random sampling method. A self-administered, anonymous questionnaire explored participants' demographics, online HP purchasing behaviour, knowledge, and attitudes. The association of different variables with online HP purchasing behaviour was analysed using logistic regression. Main outcome measures: Prevalence of HP e-commerce users in Singapore (%) and the variables that contribute to the prevalence (adjusted prevalence ratio). Results: The study received 372 complete and valid responses. The prevalence of online HP consumers among e-commerce users in Singapore is estimated at 55.9% (1.7 million consumers). Online purchasing of complementary HP (46.9%) was the most prevalent, followed by medical devices (21.6%) and Western medicine (20.5%). Multivariate analysis showed that age is an independent variable that correlates with the likelihood of buying HP online. The prevalence of HP e-commerce users is highest in the 35-44 age group (64.1%) and lowest in the 16-24 age group (36.4%).
The HP most bought through the internet are vitamins and minerals (21.5%), non-herbal (15.9%), herbal (13.9%), weight loss (8.7%), and sports (8.4%) supplements. While the top three products are distributed equally between the genders, weight loss supplements skew towards female respondents (12.4% in females vs. 4.9% in males) and sports supplements towards males (13.2% in males vs. 3.7% in females). Even though online consumers are in the younger age brackets, our study found that up to 72.0% of HP bought online are bought for others (the buyer’s family and/or friends). Multivariate analysis showed a statistically significant association between purchasing HP online and the perceptions that 'the internet is safe' (adjusted Prevalence Ratio=1.15, CI 1.03-1.28), 'buying HP online is time-saving' (PR=1.17, CI 1.01-1.36), and 'recognition of the HP brand' (PR=1.21, CI 1.06-1.40). Conclusions: This study has provided prevalence data for the online HP market in Singapore and has allowed the country’s regulatory body to formulate a targeted pharmacovigilance approach to this growing problem.

Keywords: e-commerce, pharmaceuticals, pharmacovigilance, Singapore

Procedia PDF Downloads 336
388 Estimating Age in Deceased Persons from the North Indian Population Using Ossification of the Sternoclavicular Joint

Authors: Balaji Devanathan, Gokul G, Raveena Divya, Abhishek Yadav, Sudhir K.Gupta

Abstract:

Background: Age estimation is a common problem in administrative settings, in medico-legal cases, and among athletes competing in different sports. Medico-legal questions of age arise in hospitals in cases of criminal abortion, consent to surgery or general physical examination, infanticide, impotence, and sterility. Progress in medical imaging has benefited forensic anthropology in various ways, most notably in determining bone age. Multi-slice computed tomography (MSCT) is an efficient method for studying the epiphyseal union and other changes in the body's bones and joints. No significant database is available for the Indian population, so the authors performed this original study to build an India-based database. Methodology: The appearance and fusion of the ossification centres of the sternoclavicular joint were evaluated, and grades were assigned accordingly. Using MSCT scans, we examined the relationship between the age of the deceased and changes in the sternoclavicular joint, from appearance to union, in 500 cases (327 males and 173 females) aged 0 to 25 years. Results: The ossification centre for the medial end of the clavicle first appeared at 18.5 years in males and 17.1 years in females. The age of partial union was 20.4 years in males and 20.2 years in females. The earliest age of complete fusion was 23 years for males and 22 years for females. Fusion of the sternebrae into one occurred between 11 and 24 years in females and between 17 and 24 years in males. Fusion of the third and fourth sternebrae was complete by 11 years, and fusion of the first and second and of the second and third sternebrae by 17 years. Furthermore, correlation and reliability analyses were carried out and yielded significant results.
Conclusion: With some exceptions, the estimated values are consistent with many previously developed age charts. The variations may reflect ethnic or regional heterogeneity in the ossification pattern of the population under study. The pattern of bone maturation did not differ significantly between the sexes. The study's age range was 0 to 25 years and, for obvious reasons, the majority of cases fell in the last five years of that range (20 to 25 years). This left a comparatively smaller study population in the 12-18 age group, where age estimation is crucial because of current legal requirements; dedicated PMCT research in this age range will be needed to produce population-standard charts for age estimation. The medial end of the clavicle is one of several ossification foci being thoroughly investigated because it is challenging to assess with a conventional X-ray examination. Combining the two methods has been shown to give valid results when determining whether age exceeds eighteen years.

Keywords: age estimation, sternoclavicular joint, medial clavicle, computed tomography

Procedia PDF Downloads 19
387 A Novel Chicken W Chromosome Specific Tandem Repeat

Authors: Alsu F. Saifitdinova, Alexey S. Komissarov, Svetlana A. Galkina, Elena I. Koshel, Maria M. Kulak, Stephen J. O'Brien, Elena R. Gaginskaya

Abstract:

The mystery of sex determination is one of the most ancient and still remains unsolved. In many species, sex determination is genetic and often accompanied by dimorphic sex chromosomes in the karyotype. Genomic sequencing has provided information about the gene content of sex chromosomes, revealing their origin from ordinary autosomes and allowing their evolutionary history to be traced. The female-specific W chromosome in birds, like the mammalian male-specific Y chromosome, is characterized by degeneration of gene content and accumulation of repetitive DNA. Tandem repeats complicate the analysis of genomic data: despite the best efforts, the chicken W chromosome assembly includes only 1.2 Mb of an expected 55 Mb. Supplementing information on sex chromosome composition not only helps to complete genome assemblies but also moves us towards understanding the evolution of sex-determination systems. A whole-genome survey was applied to the Gallus_gallus WASHUC 2.60 assembly to search for repeats in the assembled genome, and high-copy-number repeats were searched for and assembled from the unassembled reads of the SRR867748 short-read dataset. For cytogenetic analysis, conventional fluorescence in situ hybridization was used for previously cloned W-specific satellites, and a specifically designed, directly labeled synthetic oligonucleotide DNA probe was used for the bioinformatically identified repetitive sequence. Hybridization was performed with mitotic chicken chromosomes and with manually isolated giant meiotic lampbrush chromosomes from growing oocytes. A novel chicken W-specific satellite, (GGAAA)n, which does not co-localize with any previously described class of W-specific repeats, was identified and mapped with high resolution. In autosomes, units of this repeat were found in the upstream regions of gonad-specific protein-coding sequences.
These findings may contribute to understanding the role of tandem repeats in the regulation of sex-specific differentiation in birds and in sex chromosome evolution. This work was supported by postdoctoral fellowships from St. Petersburg State University (#1.50.1623.2013 and #1.50.1043.2014), the grant for Leading Scientific Schools (#3553.2014.4), and a grant from the Russian Foundation for Basic Research (#15-04-05684). The equipment and software of the Research Resource Center “Chromas” and the Theodosius Dobzhansky Center for Genome Bioinformatics of Saint Petersburg State University were used.
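Locating runs of a tandem repeat unit such as (GGAAA)n in a sequence can be illustrated with a minimal sketch; the `find_tandem_runs` helper and the toy sequence below are invented for illustration and are unrelated to the actual survey pipeline used in the study.

```python
import re

def find_tandem_runs(seq, unit, min_copies=3):
    """Return (start, copies) for each run of >= min_copies tandem copies of unit."""
    pattern = re.compile("(?:%s){%d,}" % (re.escape(unit), min_copies))
    runs = []
    for m in pattern.finditer(seq):
        # Copy number = matched span length divided by the unit length.
        runs.append((m.start(), (m.end() - m.start()) // len(unit)))
    return runs

# Toy sequence with a (GGAAA)4 run embedded at position 6.
toy = "ACGTAC" + "GGAAA" * 4 + "TTGCA"
print(find_tandem_runs(toy, "GGAAA"))  # -> [(6, 4)]
```

Real repeat discovery in unassembled short reads works on k-mer frequency statistics rather than a known motif, but the same run-measuring step applies once a candidate unit is in hand.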

Keywords: birds, lampbrush chromosomes, sex chromosomes, tandem repeats

Procedia PDF Downloads 364
386 Secondhand Clothing and the Future of Fashion

Authors: Marike Venter de Villiers, Jessica Ramoshaba

Abstract:

In recent years, the fashion industry has been associated with the exploitation of both people and resources. This is largely due to the emergence of fast fashion, which entails rapid and continual style changes in which clothes quickly lose their appeal, go out of fashion, and are disposed of. This cycle often involves appalling working conditions in sweatshops, with low wages and child labor, and a significant amount of textile waste that ends up in landfills. Although awareness of the negative implications of mindless fashion production and consumption is growing, fast fashion remains a popular choice among the youth. This is especially prevalent in South Africa, a poverty-stricken country where a vast number of young adults are unemployed and living in poverty. Despite this poverty, the celebrity-conscious culture and the fashion and luxury products frequently portrayed on increasingly intrusive social media platforms in South Africa pressure consumers to purchase such products, and young adults are therefore more vulnerable to the temptation of fast fashion. A possible counter to the detrimental environmental effects of the fast fashion industry is the revival of the secondhand clothing trend. Although secondhand clothing has gained momentum among selected consumer segments, its adoption rate remains slow. The main purpose of this study was to explore consumers’ perceptions of the secondhand clothing trend and to gain insight into factors that inhibit its adoption. The study also investigated whether consumers are aware of the negative implications of the fast fashion industry and how likely they are to shift their purchases to secondhand clothing. In a quantitative study, fifty young females were asked to complete a semi-structured questionnaire.
The researcher approached females between the ages of 18 and 35 in a face-to-face setting. The results indicated that although respondents were aware of the negative consequences of fast fashion, they lacked detailed insight into its effects on the environment. Further, a number of factors inhibit their decision to buy from secondhand stores: firstly, the latest trends are not always available in secondhand stores; secondly, the convenience of shopping at a chain store outweighs the effort of searching for and finding a secondhand store; and lastly, respondents perceived secondhand clothing to pose a hygiene risk. The findings provide fashion marketers and secondhand clothing stores with insight into how they can incorporate the secondhand clothing trend into their strategies and marketing campaigns in an attempt to make the fashion industry more sustainable.

Keywords: eco-friendly fashion, fast fashion, secondhand clothing

Procedia PDF Downloads 109
385 A Conceptual Study for Investigating the Creation of Energy and Understanding the Properties of Nothing

Authors: Mahmoud Reza Hosseini

Abstract:

The universe is in a continuous process of expansion, resulting in a reduction of its density and temperature. By extrapolating back from its current state, the universe at its early times can be studied; this is the basis of the big bang theory. According to this theory, moments after creation the universe was an extremely hot and dense environment, and its rapid expansion led to a reduction in its temperature and density. This is evidenced by the cosmic microwave background and the large-scale structure of the universe. However, extrapolating further back from this early state reaches a singularity that cannot be explained by modern physics, where the big bang theory is no longer valid. In addition, one would expect a nonuniform energy distribution across the universe from a sudden expansion, yet highly accurate measurements reveal an equal temperature mapping across the universe, which contradicts the big bang principles. To resolve this issue, it is believed that cosmic inflation occurred at the very early stages of the birth of the universe. According to the theory of cosmic inflation, the elements which formed the universe underwent a phase of exponential growth due to the existence of a large cosmological constant. The inflation phase allows the uniform distribution of energy, so that an equal maximum temperature can be achieved across the early universe. The evidence of quantum fluctuations at this stage also provides a means of studying the kinds of imperfections the universe would have begun with. Although well-established theories such as cosmic inflation and the big bang together provide a comprehensive picture of the early universe and how it evolved into its current state, they are unable to address the singularity paradox at the time of the universe's creation. Therefore, a practical model capable of describing how the universe was initiated is needed.
This research series aims to address the singularity issue by introducing a state of energy called a "neutral state," possessing an energy level referred to as the "base energy." The governing principles of the base energy are discussed in detail in the second paper of the series, "A Conceptual Study for Addressing the Singularity of the Emerging Universe." To establish a complete picture, the origin of the base energy should be identified and studied. In this paper, the mechanism which led to the emergence of this neutral state and its corresponding base energy is proposed. In addition, the effect of the base energy on the space-time fabric is discussed. Finally, the possible role of the base energy in quantization and energy exchange is investigated. The concept proposed in this research series thus provides a road map for enhancing our understanding of the universe's creation from nothing and its evolution, and discusses the possibility of the base energy being one of the main building blocks of this universe.

Keywords: big bang, cosmic inflation, birth of universe, energy creation, universe evolution

Procedia PDF Downloads 68
384 Transformations of River Zones in Hanoi, Vietnam: Problems of Urban Drainage and Environmental Pollution

Authors: Phong Le Ha

Abstract:

In many cities around the world, the relationship between the city and its river is a foundation of urban history research because of their profound interactions. This relationship makes river zones extremely sensitive in many respects, one of the most important being their role in urban drainage. In this paper we examine the extraordinary case of Hanoi, the capital of Vietnam, and the zones of the Red river. This river has contradictory impacts on the city: it is a source of life for the inhabitants along its two banks, yet the risk of inundation caused by its complicated hydrology is a constant threat to the cities it flows through. Morphologically, the Red river was connected to the inner river system that made Hanoi a complete river city. This structure, combined with Hanoi's topography, assured the city a stable drainage system in which the river zones in the north of Hanoi play some extremely important roles. Nevertheless, over the last 20 years, Hanoi's rapid urbanization and the instability of the Red river's complicated hydrology have remarkably transformed both the river-city relationship and the river zones: the connection between the river and the city has declined; the system of inner lakes is progressively being replaced by housing land; and in the river zones, the infrastructure cannot adapt to the transformation of the new quarters that originated as agricultural villages. These changes bring many opportunities for urban development, but also many risks and problems, particularly environmental and technical ones. Among these, the evacuation of rainwater and wastewater is one of the most severe. The disappearance of inner-city lakes, the high dike, and the topographical changes of Hanoi increase the city's risk of inundation.
In consequence, the riverine zones, particularly in the north of Hanoi where the city's two most important drainage rivers meet, bear the brunt of the drainage pressure. The only water treatment plant in this zone appears to be overloaded, receiving about 40,000 m³ of wastewater per day (not including rainwater). This problem also leads to risks of environmental pollution (water and air pollution). To better understand the situation and to propose solutions, an interdisciplinary research project has been carried out, covering fields such as urban planning, architecture, geography, and especially drainage and environment. This paper analyzes an important part of that research: the process of urban transformation in Hanoi (changes in urban morphology, the infrastructure system, and the evolution of the dike system) and the hydrological changes of the Red river, which cause the drainage and environmental problems. The conclusions of these analyses will form the solid base of subsequent research focusing on solutions for sustainable development.

Keywords: drainage, environment, Hanoi, infrastructure, red rivers, urbanization

Procedia PDF Downloads 365
383 Research Cooperation with Ukraine in Terms of Food Chain Safety Control in the Frame of the MICRORISK Project

Authors: Kinga Wieczorek, Elzbieta Kukier, Remigiusz Pomykala, Beata Lachtara, Renata Szewczyk, Krzysztof Kwiatek, Jacek Osek

Abstract:

The MICRORISK project (Research cooperation in assessment of microbiological hazard and risk in the food chain) was funded by the European Commission under the FP7 PEOPLE 2012 IRSES call, within the International Research Staff Exchange Scheme of the Marie Curie Actions, and carried out from 2014 to 2015. The main aim of the project was to establish cooperation between the European Union (EU) and a third State in an area important from the public health point of view. The following organizations were engaged in the activity: the National Veterinary Research Institute (NVRI) in Pulawy, Poland (coordinator); the French Agency for Food, Environmental and Occupational Health & Safety (ANSES) in Maisons-Alfort, France; the National Scientific Center Institute of Experimental and Clinical Veterinary Medicine (NSC IECVM) in Kharkov, Ukraine; and the State Scientific and Research Institute of Laboratory Diagnostics and Veterinary and Sanitary Expertise (SSRILDVSE) in Kiev, Ukraine. The results of the project showed that Ukraine applies microbiological criteria in accordance with Commission Regulation (EC) No 2073/2005 of 15 November 2005 on microbiological criteria for foodstuffs. Compliance concerns both the food safety criteria applicable at retail and the process hygiene criteria in food production. Ukrainian legislation also provides for criteria that have no counterparts in EU food law and are based on the provisions of Ukrainian law. Partial coherence of the Ukrainian and EU legal requirements on microbiological criteria for food and feed concerns parameters such as total plate count, coliforms, and coagulase-positive Staphylococcus spp., including S. aureus.
Analysis of the laboratory methods used for microbiological hazard control in the food production chain showed that most methods used in the EU are well known to the Ukrainian partners, and many of them are routinely applied either as the only standards in laboratory practice or alongside Ukrainian methods. The area without any legislation, where EU regulations and analytical methods should be implemented, is the detection of Shiga toxin-producing E. coli, including E. coli O157, and of staphylococcal enterotoxins. During the project, existing Ukrainian and EU data on the prevalence of the most important food-borne pathogens at different stages of the food production chain were analysed, particularly for Salmonella spp., Campylobacter spp., L. monocytogenes, and clostridia. The analysis showed that poultry meat still appears to be the most important food-borne source of Campylobacter and Salmonella in the EU. On the other hand, L. monocytogenes was seldom detected above the legal safety limit (100 cfu/g) among the EU countries. Moreover, the analysis revealed a lack of comprehensive data on the prevalence of the most important food-borne pathogens in Ukraine. The networking activities among the research organizations participating in the project tasks will help each partner better recognise areas that are very important from the public health point of view, such as microbiological hazards in the food production chain, and will ultimately help to improve food quality and safety for consumers.

Keywords: cooperation, European Union, food chain safety, food law, microbiological risk, Microrisk, Poland, Ukraine

Procedia PDF Downloads 350
382 IoT Continuous Monitoring of Biochemical Oxygen Demand in Wastewater Effluent: Machine Learning Algorithms

Authors: Sergio Celaschi, Henrique Canavarro de Alencar, Claaudecir Biazoli

Abstract:

Effluent quality is of the highest priority for compliance with the permit limits of environmental protection agencies and ensures the protection of the local water system. Of the pollutants monitored, biochemical oxygen demand (BOD) poses one of the greatest challenges: laboratory BOD5 results take 7 to 8 days of analysis, hindering a wastewater treatment plant's (WWTP's) ability to react to changing conditions and meet treatment goals. Reducing BOD turnaround time from days to hours is our quest. This work presents a solution based on a system of two BOD bioreactors associated with Digital Twin (DT) and Machine Learning (ML) methodologies via an Internet of Things (IoT) platform, used to monitor and control a WWTP and support decision making. A DT is a virtual, dynamic replica of a production process. It requires the ability to collect and store real-time sensor data related to the operating environment; it integrates and organizes the data on a digital platform and applies analytical models, allowing a deeper understanding of the real process and catching anomalies sooner. In our system for continuous monitoring of the BOD removed by the effluent treatment process, the DT algorithm analyzes the data using ML on a parameterized chemical kinetic model. The continuous BOD monitoring system, capable of providing results in a fraction of the time required by BOD5 analysis, is composed of two thermally isolated batch bioreactors.
Each bioreactor contains input/output access for the wastewater sample (influent and effluent); hydraulic conduction tubes, pumps, and valves for the batch sample and dilution water; an air supply for dissolved oxygen (DO) saturation; a cooler/heater for sample thermal stability; an optical DO sensor based on fluorescence quenching; pH, ORP, temperature, and atmospheric pressure sensors; and a local PLC/CPU with a TCP/IP data transmission interface. The dynamic BOD monitoring range covers 2 mg/L < BOD < 2,000 mg/L. In addition to the BOD monitoring system, there are many other operational WWTP sensors. The CPU data is transmitted to and received from the digital platform, which performs analyses at periodic intervals to feed the learning process. BOD bulletins and their credibility intervals are made available to web users at 12-hour intervals. The chemical kinetics ML algorithm is based on a coupled system of four first-order ordinary differential equations for the molar masses of DO, the organic material present in the sample, biomass, and the reaction products (CO₂ and H₂O). This system is solved numerically from its initial conditions: saturated DO and zero initial products of the kinetic oxidation process (CO₂ = H₂O = 0). The initial values for organic matter and biomass are estimated by least-squares minimization. A real case of continuous monitoring of BOD wastewater effluent quality is being conducted by deploying an IoT application on a large wastewater purification system located in S. Paulo, Brazil.
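As a rough illustration of the kinetic approach described above, the sketch below integrates a simplified four-quantity oxidation model (substrate, biomass, products, cumulative oxygen consumed) with a plain Euler scheme. The Monod rate law, the yield split, and every parameter value are assumptions made for illustration; they are not the authors' calibrated model or units.

```python
def simulate_bod(s0=150.0, x0=10.0, hours=48.0, dt=0.01,
                 mu_max=0.06, ks=40.0, y_x=0.5):
    """Euler integration of an illustrative 4-quantity BOD oxidation model.

    Tracks organic substrate S, biomass X, oxidation products P (CO2 + H2O),
    and cumulative oxygen consumed, all in mg/L equivalents. DO itself is
    treated as held at saturation, mirroring the continuously aerated reactor.
    """
    s, x, p, o2_used = s0, x0, 0.0, 0.0
    for _ in range(int(hours / dt)):
        mu = mu_max * s / (ks + s)          # Monod-type specific growth rate (1/h)
        consumed = mu * x / y_x * dt        # substrate removed this step
        s -= consumed
        x += y_x * consumed                 # fraction assimilated into biomass
        p += (1.0 - y_x) * consumed         # fraction fully oxidized to CO2/H2O
        o2_used += (1.0 - y_x) * consumed   # O2 demand of the oxidized fraction
    return {"S": s, "X": x, "P": p, "O2_used": o2_used}

result = simulate_bod()
print(result)  # result["O2_used"] plays the role of the exerted BOD over 48 h
```

In the described system, the remaining step would be fitting the unknown initial substrate and biomass so the simulated oxygen-use curve matches the sensor trace, e.g. by least-squares minimization over `s0` and `x0`.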

Keywords: effluent treatment, biochemical oxygen demand, continuous monitoring, IoT, machine learning

Procedia PDF Downloads 48
381 The Jury System in the Courts in Nineteenth Century Assam: Power Negotiations and Politics in an Institutional Rubric of a Colonial Regime

Authors: Jahnu Bharadwaj

Abstract:

In the third decade of the 19th century, the political landscape of the Brahmaputra valley changed at many levels. The establishment of the East India Company's authority in 'Assam' was completed with the Treaty of Yandaboo. The annexation of Assam into the British Indian Empire led to several administrative reorganizations and reforms under the new regime, which was distinguished by new systems and institutions of governance. This paper broadly looks at the historical proceedings of the introduction of the rule of law and a new legal structure in the region of 'Assam'. Drawing on extensive archival data, it chiefly examines the trajectory of an important element of the new legal apparatus: the jury in the British criminal courts introduced in the newly annexed region. From the beginning of colonial legal innovations, with the establishment of the panchayats and the parallel courts in Assam, the jury was an important element of the judicial system. In both civil and criminal courts, the jury was to be formed from the learned members of 'native' society, and in the working of the criminal court the jury became significantly powerful and influential. Under this structure, the judge or the British authority was ultimately under no compulsion to obey the jury's verdict; at the same time, the jury had a considerable say in court proceedings, and its verdict carried significant weight. This study examines certain important criminal cases of the nineteenth century and the functioning of the jury in those cases. The power play on display between British officials, judges, and members of the jury helps to highlight the deliberations and politics at work in the functioning of the British criminal legal apparatus in colonial Assam.
The working and politics of jury members exerted considerable influence on court proceedings in many cases, and the negotiations of British officials and judges offer vital insights. By reflecting on the difficulty that British officials and judges had with the considerable space for opinion and dissent granted to important members of local society, this paper locates, with evidence, the racial politics at play within the official formulation of the legal apparatus of colonial rule in Assam. It argues that despite rhetorical claims of legal equality within the Empire, racial considerations and racial politics were a reality even in the making of the structure itself. This helps to enrich our understanding of the racial elements at work in the numerous layers sustaining the colonial regime.

Keywords: criminal courts, colonial regime, jury, race

380 Family Income and Parental Behavior: Maternal Personality as a Moderator

Authors: Robert H. Bradley, Robert F. Corwyn

Abstract:

There is abundant research showing that socio-economic status is implicated in parenting. However, additional factors such as family context, parent personality, parenting history and child behavior also help determine how parents enact the role of caregiver. Each of these factors not only helps determine how a parent will act in a given situation, but each can serve to moderate the influence of the other factors. Personality has long been studied as a factor that influences parental behavior, but it has almost never been considered as a moderator of family contextual factors. For this study, relations between three maternal personality characteristics (agreeableness, extraversion, neuroticism) and four aspects of parenting (harshness, sensitivity, stimulation, learning materials) were examined when children were 6 months, 36 months, and 54 months old and again at 5th grade. Relations between these three aspects of personality and the overall home environment were also examined. A key concern was whether maternal personality characteristics moderated relations between household income and the four aspects of parenting and between household income and the overall home environment. The data for this study were taken from the NICHD Study of Early Child Care and Youth Development (NICHD SECCYD). The total sample consisted of 1364 families living in ten different sites in the United States. However, the samples analyzed included only those with complete data on all four parenting outcomes (i.e., sensitivity, harshness, stimulation, and provision of learning materials), income, maternal education and all three measures of personality (i.e., agreeableness, neuroticism, extraversion) at each age examined. Results from hierarchical regression analysis showed that mothers high in agreeableness were more likely to demonstrate sensitivity and stimulation as well as provide more learning materials to their children but were less likely to manifest harshness. 
Maternal agreeableness also consistently moderated the effects of low income on parental behavior. Mothers high in extraversion were more likely to provide stimulation and learning materials, with extraversion moderating the effect of low income on both. By contrast, mothers high in neuroticism were less likely to demonstrate positive aspects of parenting and more likely to manifest negative ones (e.g., harshness). Neuroticism also moderated the influence of low income on parenting, especially for stimulation and learning materials. The most consistent effects of parent personality were on the overall home environment, with significant main and interaction effects observed in 11 of the 12 models tested. These findings suggest that it may behoove professionals who work with parents living in adverse circumstances to consider parental personality when targeting prevention or intervention efforts aimed at supporting parental behavior that benefits children.
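The moderation analysis described above can be sketched as a hierarchical regression in which an income × personality product term is added to the main-effects model; a significant product coefficient indicates moderation. The data and coefficients below are simulated for illustration and are not the NICHD SECCYD values.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Simulated standardized scores under an assumed moderation structure:
# the effect of income on a parenting score weakens as agreeableness rises.
income = rng.normal(0.0, 1.0, n)
agreeableness = rng.normal(0.0, 1.0, n)
parenting = (0.4 * income + 0.3 * agreeableness
             - 0.2 * income * agreeableness      # interaction = moderation
             + rng.normal(0.0, 0.5, n))

# Hierarchical step: add the product term to the main-effects model
X = np.column_stack([np.ones(n), income, agreeableness,
                     income * agreeableness])
beta, *_ = np.linalg.lstsq(X, parenting, rcond=None)
# beta[3] estimates the interaction (moderation) coefficient
```

With a moderation structure in the data, the fitted interaction coefficient recovers the simulated value, which is the pattern the hierarchical models above test for.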

Keywords: home environment, household income, learning materials, personality, sensitivity, stimulation

379 No-Par Shares Working in European LLCs

Authors: Agnieszka P. Regiec

Abstract:

Capital companies are based on monetary capital. In the traditional model, the capital is the sum of the nominal values of all shares issued. In recent years, limited liability company (LLC) regulations in European countries have been moving towards liberalization of the capital structure in order to provide a higher degree of autonomy in intra-corporate governance. The reforms were based primarily on the legal system of the USA, where the tradition of no-par shares is well established; the American legal system is therefore chosen as the point of reference. The regulations of Germany, Great Britain, France, the Netherlands, Finland, Poland and the USA are taken into consideration. The analysis of share capital is important for the development of scholarship not only because the capital structure of the corporation has a significant impact on shareholders’ rights, but also because it shapes the relationship between the creditors of the company and the company itself. A multi-level comparative approach to the problem allows a wide range of possible outcomes of the reforms to be presented. The dogmatic method was applied, and the analysis was based on statutes, secondary sources and court decisions. Both the substantive and the procedural aspects of the capital structure were considered. In Germany, as a result of the regulatory competition typical for the EU, the structure of LLCs was reshaped: a new LLC, the Unternehmergesellschaft, which does not require a minimum share capital, was introduced, and the minimum share capital for the Gesellschaft mit beschränkter Haftung was lowered from 25,000 to 10,000 euro. In France, the capital structure of corporations was also altered. In 2003, the minimum share capital of the société à responsabilité limitée (S.A.R.L.) was repealed; in 2009, the minimum share capital of the société par actions simplifiée (S.A.S.) was likewise abolished, so that no minimum share capital is required by statute. The company must, however, indicate a share capital, without the legislator imposing a minimum value for it. In the Netherlands, the reform of the Besloten Vennootschap met beperkte aansprakelijkheid (B.V.) repealed the minimum share capital in answer to the need for a higher degree of shareholder autonomy; it nevertheless preserved shares with nominal value. In Finland, the 2006 reform of the yksityinen osakeyhtiö introduced no-par shares; although the statute allows shares without face value, it still requires a minimum share capital of 2,500 euro. In Poland, a proposal for restructuring the capital structure of the LLC has been introduced, providing, among other things, for reduction of the minimum capital to 1 PLN or its complete abolition, and for allowing no-par shares to be issued. In conclusion: the American solutions, in particular the balance sheet test and the solvency test, provide better protection for creditors; European no-par shares are not the same as the American ones; and the existence of share capital in Poland remains crucial.
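The two American creditor-protection tests named in the conclusion can be illustrated with a minimal sketch: a distribution to shareholders is permissible only if assets still cover liabilities afterwards (balance sheet test) and the company can pay its debts as they fall due (solvency test). The function names, thresholds and figures below are illustrative assumptions, not statutory rules.

```python
def balance_sheet_test(assets: float, liabilities: float,
                       proposed_distribution: float) -> bool:
    """After the distribution, total assets must still cover liabilities."""
    return assets - proposed_distribution >= liabilities

def solvency_test(cash_inflows: list[float], debts_due: list[float]) -> bool:
    """The company must be able to pay its debts as they fall due:
    cumulative projected inflows must cover cumulative maturing debts."""
    running_in, running_out = 0.0, 0.0
    for inflow, debt in zip(cash_inflows, debts_due):
        running_in += inflow
        running_out += debt
        if running_in < running_out:
            return False
    return True

# Under this model a distribution is lawful only if both tests pass
ok = (balance_sheet_test(1_000_000, 600_000, 150_000)
      and solvency_test([120_000, 90_000, 110_000],
                        [100_000, 80_000, 95_000]))
```

The design point is that the two tests are independent gates: a company can be balance-sheet solvent yet cash-flow insolvent, which is why American law applies both before permitting distributions.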

Keywords: balance sheet test, limited liability company, nominal value of shares, no-par shares, share capital, solvency test

378 Working From Home: On the Relationship Between Place Attachment to Work Place, Extraversion and Segmentation Preference to Burnout

Authors: Diamant Irene, Shklarnik Batya

Abstract:

In addition to its widespread effects on health and the economy, Covid-19 shook the world of work and employment. Among the prominent changes during the pandemic is the complete or partial work-from-home trend adopted as part of social distancing. In fact, these changes accelerated an existing tendency towards work flexibility already underway before the pandemic. Technology and advanced means of communication have led to a re-assessment of the “place of work” as a physical space in which work takes place. Today, workers can remotely carry out meetings, manage projects and work in groups, and different research studies indicate that this type of work has no adverse effect on productivity. From the worker’s perspective, however, despite numerous advantages associated with work from home, such as convenience, flexibility and autonomy, various drawbacks have been identified, such as loneliness, reduced commitment and the erosion of home-work boundaries, all risk factors for reduced quality of life and burnout. Thus, a real need has arisen to explore differences in work-from-home experiences and to understand the relationship between psychological characteristics and the prevalence of burnout. This understanding may be of significant value to organizations considering a future hybrid work model combining in-office and remote working. Based on Hobfoll’s conservation of resources theory, we hypothesized that burnout would mainly be found among workers whose physical remoteness from the workplace threatens or hinders their ability to retain significant individual resources. In the present study, we compared fully remote and partially remote (hybrid) workers, and we examined psychological characteristics and their connection to the formation of burnout.
Based on the conceptualization of place attachment as the cognitive-emotional bond of an individual to a meaningful place and the need to maintain closeness to it, we assumed that individuals characterized by place attachment to the workplace would suffer more from burnout when working from home. We also assumed that extroverted individuals, characterized by the need for social interaction at the workplace, and individuals with a segmentation preference, i.e. a need for separation between different life domains, would suffer more from burnout, especially among fully remote workers relative to partially remote workers. 194 workers aged 19-53 from different sectors, of whom 111 worked fully from home and 83 worked partially from home, were tested using an online questionnaire distributed through social media. The results of the study supported our assumptions. The repercussions of these findings for future occupational experience are discussed, with an emphasis on suitable occupational adjustment according to the psychological characteristics and needs of workers.

Keywords: working from home, burnout, place attachment, extraversion, segmentation preference, Covid-19

377 Validating Quantitative Stormwater Simulations in Edmonton Using MIKE URBAN

Authors: Mohamed Gaafar, Evan Davies

Abstract:

Many municipalities in Canada and abroad use chloramination to disinfect drinking water so as to avert the production of the disinfection by-products (DBPs) that result from conventional chlorination processes and their consequent public health risks. However, the long-lasting monochloramine disinfectant (NH2Cl) can pose a significant risk to the environment, as it can be introduced into stormwater sewers from different water uses and thus reach freshwater sources. Little research has been undertaken to monitor and characterize the decay of NH2Cl and to study the parameters affecting its decomposition in stormwater networks. The current study was therefore intended to investigate this decay, starting by building a stormwater model and validating its hydraulic and hydrologic computations, and then modelling water quality in the storm sewers and examining the effects of different parameters on chloramine decay. The work presented here is only the first stage of this study. The 30th Avenue basin in southern Edmonton was chosen as a case study because this well-developed basin has various land-use types, including commercial, industrial, residential, parks and recreational. The City of Edmonton had already built a MIKE URBAN stormwater model for flood modelling. Nevertheless, this model was built only to the trunk level, meaning that only the main drainage features were represented. Additionally, the model was not calibrated and was known to consistently compute pipe flows higher than the observed values, making it unsuitable for studying water quality. The first goal was therefore to complete the modelling and update all stormwater network components. Available GIS data were then used to calculate different catchment properties such as slope, length and imperviousness.
In order to calibrate and validate this model, data from two temporary pipe-flow monitoring stations, collected during the previous summer, were used along with records from two other permanent stations available for eight consecutive summer seasons. The effect of various hydrological parameters on model results was investigated. It was found that model results were affected by the ratio of impervious areas. The calculated catchment length was also tested, as it is only an approximate representation of the catchment shape, and surface roughness coefficients were calibrated. Consequently, computed flows at the two temporary locations had correlation coefficients of 0.846 and 0.815, with the lower value pertaining to the larger attached catchment area. Other statistical measures, such as a peak error of 0.65%, a volume error of 5.6%, and maximum positive and negative differences of 2.17 and -1.63 respectively, were all within acceptable ranges.
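The goodness-of-fit measures reported above (correlation coefficient, peak error, volume error) can be computed directly from paired observed and simulated hydrographs. A minimal sketch, using made-up flow data rather than the Edmonton monitoring records, might look like this:

```python
import numpy as np

def validation_stats(observed, simulated):
    """Correlation coefficient, peak error (%) and volume error (%)
    between an observed and a simulated hydrograph."""
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    r = np.corrcoef(obs, sim)[0, 1]
    peak_error = 100.0 * (sim.max() - obs.max()) / obs.max()
    volume_error = 100.0 * (sim.sum() - obs.sum()) / obs.sum()
    return r, peak_error, volume_error

# Made-up storm event: the simulation slightly over-predicts a Gaussian peak
t = np.linspace(0.0, 6.0, 61)                    # time, h
obs = 10.0 * np.exp(-((t - 2.0) / 1.0) ** 2)     # observed flow
sim = 10.5 * np.exp(-((t - 2.1) / 1.0) ** 2)     # simulated flow
r, peak_err, vol_err = validation_stats(obs, sim)
```

A 5% over-prediction of the peak with a small timing shift yields a high correlation but nonzero peak and volume errors, which is why all three measures are reported together.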

Keywords: stormwater, urban drainage, simulation, validation, MIKE URBAN

376 Brazilian Transmission System Efficient Contracting: Regulatory Impact Analysis of Economic Incentives

Authors: Thelma Maria Melo Pinheiro, Guilherme Raposo Diniz Vieira, Sidney Matos da Silva, Leonardo Mendonça de Oliveira Queiroz, Mateus Sousa Pinheiro, Danyllo Wenceslau de Oliveira Lopes

Abstract:

This article describes a regulatory impact analysis (RIA) of the contracting efficiency of Brazilian transmission system usage. This contracting is made by users connected to the main transmission network and is used to guide the investments necessary to supply the electrical energy demand. An inefficient contracting of this energy amount therefore distorts the real need for grid capacity, affecting the accuracy of sector planning and the optimization of resources. In order to provide this efficiency, the Brazilian Electricity Regulatory Agency (ANEEL) homologated Normative Resolution (NR) No. 666 of July 23, 2015, which consolidated the procedures for contracting transmission system usage and for verifying contracting efficiency. Aiming for more efficient and rational transmission system contracting, the resolution established economic incentives denominated the inefficiency installment for excess (IIE) and the inefficiency installment for over-contracting (IIOC). The first, IIE, is applied when the contracted demand exceeds the established regulatory limit; it applies to consumer units, generators and distribution companies. The second, IIOC, is applied when distributors over-contract their demand. Thus, the inefficiency installments IIE and IIOC are intended to prevent agents from contracting less energy than necessary or more than is needed. Since an RIA evaluates a regulatory intervention to verify whether its goals were achieved, the results of applying the above-mentioned normative resolution to the Brazilian transmission sector were analyzed through indicators created for this RIA to evaluate the contracting efficiency of transmission system usage, using real data from before and after the homologation of the normative resolution in 2015.
For this, indicators such as the efficiency contracting indicator (ECI), the excess of demand indicator (EDI), and the over-contracting of demand indicator (ODI) were used. The ECI analysis demonstrated a decrease in contracting efficiency, a behaviour already underway before the 2015 normative resolution. On the other hand, the EDI showed a considerable decrease in the amount of excess for the distributors and a small reduction for the generators; moreover, the ODI notably decreased, which optimizes the usage of the transmission installations. Hence, with the complete evaluation of the data and indicators, it was possible to conclude that the IIE is a relevant incentive for more efficient contracting, indicating to agents that their contracting values are not adequate to sustain the service provision for their users. The IIOC is also relevant, insofar as it shows distributors that their contracting values are overestimated.

Keywords: contracting, electricity regulation, evaluation, regulatory impact analysis, transmission power system

375 Developing of Ecological Internal Insulation Composite Boards for Innovative Retrofitting of Heritage Buildings

Authors: J. N. Nackler, K. Saleh Pascha, W. Winter

Abstract:

WHISCERS™ (Whole House In-Situ Carbon and Energy Reduction Solution) is an innovative internal wall insulation (IWI) process for the energy-efficient retrofitting of heritage buildings, which uses laser measuring to determine the dimensions of a room, off-site insulation board cutting and rapid installation to complete the process. As part of a multinational investigation consortium, the Austrian partner adapted the WHISCERS system to the local conditions of Vienna, where most historical buildings have valuable stucco facades, precluding the application of external insulation. The Austrian project contribution addresses the replacement of the commonly used extruded polystyrene foam (XPS) with renewable materials such as wood and wood products to develop a more sustainable IWI system. As the timber industry is a major industry in Austria, a new, more sustainable IWI solution could also open up new markets. The first step of the investigation was a life cycle assessment (LCA) to define the performance of wood-fibre board as an insulation material in comparison to the normally used XPS boards. One result was that the global warming potential (GWP), in carbon dioxide equivalent, of wood-fibre board is 15 times lower, while that of XPS is 72 times higher. The hygrothermal simulation program WUFI was used to evaluate and simulate heat and moisture transport in the multi-layer building components of the developed IWI solution. Under the examined boundary conditions of selected representative brickwork constructions, the simulations showed the proposed IWI to be functional and usable without risk regarding vapour diffusion and liquid transport. In a further stage, three different solutions were developed and tested (1 - glued/mortared; 2 - with soft board, connected to the wall, with gypsum board as top layer; 3 - with soft board and clay board as top layer).
All three solutions feature a flexible wood-fibre insulation layer towards the existing wall, thus compensating for irregularities of the wall surface. From the first considerations at the beginning of the development phase, the three systems were developed, optimized with respect to assembly technology, and tested as small specimens under real building conditions. The built prototypes are monitored to detect performance and building physics problems and to validate the results of the computer simulation model. This paper illustrates the development and application of the internal wall insulation system.

Keywords: internal insulation, wood fibre, hygrothermal simulations, monitoring, clay, condensate

374 Photoswitchable and Polar-Dependent Fluorescence of Diarylethenes

Authors: Sofia Lazareva, Artem Smolentsev

Abstract:

Fluorescent photochromic materials attract strong interest due to their possible applications in organic photonics, such as optical logic systems, optical memory and visualizing sensors, as well as in the characterization of polymers and biological systems. In photochromic fluorescence switching systems, the emission of the fluorophore is modulated between ‘on’ and ‘off’ via the photoisomerization of photochromic moieties, resulting in effective resonance energy transfer (FRET). In the current work, we have studied both the photochromic and fluorescent properties of several diarylethenes. It was found that the coloured forms of these compounds are not fluorescent because of efficient intramolecular energy transfer. Spectral and photochromic parameters of the investigated substances were measured in five solvents of different polarity. Quantum yields of the photochromic transformation A↔B, ΦA→B and ΦB→A, as well as the extinction coefficients of the B isomer, were determined by the kinetic method. It was found that the photocyclization quantum yield of all compounds decreases with increasing solvent polarity. In addition, the solvent polarity was revealed to affect the fluorescence significantly. Increasing the solvent dielectric constant was found to result in a strong shift of the emission band position from 450 nm (n-hexane) to 550 nm (DMSO and ethanol) for all three compounds. Moreover, the emission, intense in polar solvents, becomes weak and hardly detectable in n-hexane. The only exception to this dependence is the abnormally low fluorescence quantum yield in ethanol, presumably caused by the loss of the electron-donating properties of the nitrogen atom due to protonation. The effect of protonation was also confirmed by the addition of concentrated HCl to the solution, resulting in the complete disappearance of the fluorescence band. Excited-state dynamics were investigated by ultrafast optical spectroscopy methods.
Kinetic curves of excited-state absorption and fluorescence decay were measured, and the lifetimes of the transient states were calculated from the measured data. The mechanism of the ring-opening reaction was found to be polarity dependent. Comparative analysis of the kinetics measured in acetonitrile and hexane reveals differences in the relaxation dynamics after the laser pulse. Most importantly, two decay processes are present in acetonitrile, whereas only one is present in hexane. This supports the assumption, made on the basis of preliminary steady-state experiments, that a TICT state is stabilized in polar solvents. The results thus support the hypothesis of a two-channel mechanism of energy relaxation in the compounds studied.
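Lifetime extraction from such kinetic curves is typically done by least-squares fitting of exponential decay models; the two-decay behaviour seen in acetonitrile corresponds to a biexponential fit. The sketch below uses synthetic, noiseless data with illustrative lifetimes, not the measured ones.

```python
import numpy as np
from scipy.optimize import curve_fit

def bi_exp(t, a1, tau1, a2, tau2):
    """Two-component decay, as observed in the polar solvent."""
    return a1 * np.exp(-t / tau1) + a2 * np.exp(-t / tau2)

# Synthetic transient-absorption decay with illustrative lifetimes
# of 2 ps and 15 ps (not the measured values)
t = np.linspace(0.0, 60.0, 600)            # delay time, ps
signal = bi_exp(t, 0.7, 2.0, 0.3, 15.0)

# Least-squares fit recovers the two transient-state lifetimes
params, _ = curve_fit(bi_exp, t, signal, p0=(0.5, 1.0, 0.5, 10.0))
lifetimes = sorted([params[1], params[3]])
```

In practice, the presence of a second component is judged by comparing residuals of mono- and biexponential fits; a single-component solvent such as hexane would be adequately described by the mono-exponential model.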

Keywords: diarylethenes, fluorescence switching, FRET, photochromism, TICT state

373 Bacteriophage Is a Novel Solution of Therapy Against S. aureus Having Multiple Drug Resistance

Authors: Sanjay Shukla, A. Nayak, R. K. Sharma, A. P. Singh, S. P. Tiwari

Abstract:

Excessive use of antibiotics is a major problem in the treatment of wounds and other chronic infections, and antibiotic treatment is frequently non-curative; alternative treatments are therefore necessary. Phage therapy is considered one of the most promising approaches to treating multi-drug resistant bacterial pathogens. Infections caused by Staphylococcus aureus can be efficiently controlled with phage cocktails containing several individual phage lysates that together infect a majority of known pathogenic S. aureus strains. The aim of the present study was to evaluate the efficacy of a purified phage cocktail for prophylactic as well as therapeutic application in a mouse model and in large animals with chronic septic wound infections. A total of 150 sewage samples were collected from various livestock farms and subjected to bacteriophage isolation by the double agar layer method. Of these, 27 samples showed plaque formation with lytic activity against S. aureus in the double agar overlay method. In TEM, the recovered bacteriophage isolates showed a hexagonal structure with tail fibers; bacteriophage ØVS had icosahedral symmetry, with a head 52.20 nm in diameter and a long tail of 109 nm. The head and tail were held together by a connector, and the phage can be classified as a member of the family Myoviridae in the order Caudovirales. The recovered bacteriophage showed antibacterial activity against S. aureus in vitro. A cocktail of phage lysates (ØVS1, ØVS5, ØVS9 and ØVS27) was tested for in vivo antibacterial activity as well as for its safety profile. The mouse experiments indicated that the bacteriophage lysates were very safe, showing no abscess formation, which indicates their safety in a living system. The mice were also prophylactically protected against S. aureus when administered the bacteriophage cocktail just before the administration of S. aureus, indicating that it is a good prophylactic agent.
The S. aureus-inoculated mice recovered completely upon bacteriophage administration, a 100% recovery that compares very favourably with conventional therapy. In the present study, ten chronic wound cases were treated with phage lysate and followed up regularly over ten days (at 0, 5 and 10 d). Six of the ten cases showed complete recovery of the wounds within 10 d, an efficacy of 60%, which is very good compared to conventional antibiotic therapy in chronic septic wound infections. Thus, the application of lytic phage in a single dose proved to be an innovative and effective therapy for the treatment of chronic septic wounds.

Keywords: phage therapy, S. aureus, antimicrobial resistance, lytic phage, bacteriophage

372 Understanding Evidence Dispersal Caused by the Effects of Using Unmanned Aerial Vehicles in Active Indoor Crime Scenes

Authors: Elizabeth Parrott, Harry Pointon, Frederic Bezombes, Heather Panter

Abstract:

Unmanned aerial vehicles (UAVs) are having a profound effect on policing, forensic and fire service procedures worldwide. These devices have already proven useful in photographing and recording large-scale outdoor and indoor sites using orthomosaic and three-dimensional (3D) modelling techniques, for the purpose of capturing and recording sites during and after an incident. UAVs are becoming an established tool, as they extend the reach of the photographer and offer new perspectives without the expense and restrictions of deploying full-scale aircraft. 3D reconstruction quality is directly linked to the resolution of the captured images; close-proximity flights are therefore required for more detailed models. As the technology advances, deployment of UAVs in confined spaces is becoming more common. With this in mind, this study investigates the effects of UAV operation within active crime scenes with regard to the dispersal of particulate evidence. To date, little consideration has been given to the potential effects of using UAVs within active crime scenes aside from the legislative point of view. Although the technology can potentially reduce the likelihood of contamination by replacing some of the roles of investigating practitioners, there is a risk of evidence dispersal caused by the strong airflow beneath the UAV from the downwash of the propellers. The initial results of this study are therefore presented to determine the flight height of least effect and the commercial propeller type that generates the smallest disturbance within the dataset tested. In this study, a range of commercially available 4-inch propellers was chosen as a starting point due to their common availability; their small size also makes them well suited to operation within confined spaces.
To perform the testing, a rig was configured to support a single motor and propeller, powered by a standalone mains power supply and controlled via a microcontroller. This setup mimicked a complete throttle cycle and controlled the device to ensure repeatability, while removing the variance of battery packs and complex UAV structures to allow for a more robust setup. Thus, the only changing factors were the propeller and the operating height. The results were calculated via computer vision analysis of the recorded dispersal of the sample particles placed below the arm-mounted propeller. The aim of this initial study is to give practitioners an insight into the technology to use when operating within confined spaces, as well as to recognize some of the issues caused by UAVs within active crime scenes.
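A simple dispersal metric of the kind such computer vision analysis could produce is the change in mean radial distance of the detected particles from the pattern centre before and after a throttle cycle. The centroid coordinates below are hypothetical, not the study's measurements.

```python
import numpy as np

# Hypothetical particle centroids (mm) detected in video frames
# before and after one throttle cycle of the arm-mounted propeller
before = np.array([[0.0, 1.0], [1.0, 0.0], [-1.0, 0.0], [0.0, -1.0]])
after = np.array([[0.0, 3.0], [2.5, 0.0], [-2.0, 0.5], [0.5, -2.5]])

# Mean radial distance from the original pattern centre
centre = before.mean(axis=0)
r_before = np.linalg.norm(before - centre, axis=1).mean()
r_after = np.linalg.norm(after - centre, axis=1).mean()
dispersal = r_after - r_before   # positive = particles pushed outward
```

Repeating such a measurement across propeller types and operating heights would give the comparison the study describes: the height and propeller combination minimizing `dispersal` disturbs the scene least.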

Keywords: dispersal, evidence, propeller, UAV

371 A Conceptual Study for Investigating the Preliminary State of Energy at the Birth of Universe and Understanding Its Emergence From the State of Nothing

Authors: Mahmoud Reza Hosseini

Abstract:

The universe is in a continuous process of expansion, resulting in the reduction of its density and temperature. By extrapolating back from its current state, the universe at its earliest times can be studied; this is the basis of the big bang theory. According to this theory, moments after creation the universe was an extremely hot and dense environment, and its rapid expansion led to a reduction in its temperature and density. This is evidenced by the cosmic microwave background and the large-scale structure of the universe. However, extrapolating back further from this early state reaches a singularity that cannot be explained by modern physics, where the big bang theory is no longer valid. In addition, one would expect a sudden expansion to produce a nonuniform energy distribution across the universe, yet highly accurate measurements reveal an essentially equal temperature mapping across the universe, which contradicts big bang principles. To resolve this issue, it is believed that cosmic inflation occurred at the very early stages of the birth of the universe. According to the cosmic inflation theory, the elements which formed the universe underwent a phase of exponential growth due to the existence of a large cosmological constant. The inflation phase allowed a uniform distribution of energy, so that an equal maximum temperature could be achieved across the early universe. Also, the evidence of quantum fluctuations at this stage provides a means of studying the types of imperfections the universe would begin with. Although well-established theories such as cosmic inflation and the big bang together provide a comprehensive picture of the early universe and how it evolved into its current state, they are unable to address the singularity paradox at the time of the universe's creation. Therefore, a practical model capable of describing how the universe was initiated is needed.
This research series aims at addressing the singularity issue by introducing a state of energy called a “neutral state”, possessing an energy level referred to as the “base energy”. The governing principles of the base energy are discussed in detail in the second paper of the series, “A Conceptual Study for Addressing the Singularity of the Emerging Universe”. To establish a complete picture, the origin of the base energy should be identified and studied. In this research paper, the mechanism which led to the emergence of this neutral state and its corresponding base energy is proposed. In addition, the effect of the base energy on the space-time fabric is discussed. Finally, the possible role of the base energy in quantization and energy exchange is investigated. The concept proposed in this research series thus provides a road map for enhancing our understanding of the universe's creation from nothing and its evolution, and discusses the possibility of the base energy being one of the main building blocks of this universe.

Keywords: big bang, cosmic inflation, birth of universe, energy creation, universe evolution

370 Facies Sedimentology and Astronomical Calibration of the Reineche Member (Lutetian)

Authors: Jihede Haj Messaoud, Hamdi Omar, Hela Fakhfakh Ben Jemia, Chokri Yaich

Abstract:

The Upper Lutetian alternating marl–limestone succession of the Reineche Member was deposited over a warm shallow carbonate platform that permitted Nummulites proliferation. High-resolution studies of the 30 m thick Nummulites-bearing Reineche Member, cropping out in central Tunisia (Jebel Siouf), have been undertaken, focusing on its pronounced cyclical sedimentary sequences, in order to investigate the periodicity of the cycles and the related orbital-scale oceanic and climatic changes. The palaeoenvironmental and palaeoclimatic record is preserved in several proxies obtainable through high-resolution sampling and laboratory measurement, such as magnetic susceptibility (MS) and carbonate content, in conjunction with wireline logging tools. Time-series analysis of these proxies establishes the orders of cyclicity present in the studied intervals, which can be linked to orbital cycles. The MS record provides a high-resolution proxy for relative sea-level change in the Late Lutetian strata. Spectral analysis of the MS fluctuations confirmed the orbital forcing through the presence of the complete suite of orbital frequencies: the precession at 23 ka, the obliquity at 41 ka, and notably the two modes of eccentricity at 100 and 405 ka. Based on the two periodic sedimentary cycles detected by wavelet analysis of the proxy fluctuations, which coincide with the long-term 405 ka eccentricity cycle, the Reineche Member spanned 0.8 Myr. Wireline logging tools such as gamma ray and sonic were used as proxies to decipher cyclicity and trends in sedimentation and to help identify and correlate units. They were used to constrain the highest-frequency cyclicity, which is modulated by a long-wavelength cycle apparently controlled by clay content. Interpreted as the result of variations in carbonate productivity, the marl–limestone couplets are suggested to represent the sedimentary response to orbital forcing.
The calculation of cycle durations through Reineche Member, is used as a geochronometer and permit the astronomical calibration of the geologic time scale. Furthermore, MS coupled with carbonate contents, and fossil occurrences provide strong evidence for combined detrital inputs and marine surface carbonate productivity cycles. These two synchronous processes were driven by the precession index and ‘fingerprinted’ in the basic marl–limestone couplets, modulated by orbital eccentricity.
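The 0.8 Myr duration follows from simple cycle-counting arithmetic. A minimal sketch under the abstract's stated values (two 405 ka eccentricity cycles over the 30 m thick member); the derived sedimentation rate is added for illustration only:

```python
# Sketch of the cycle-counting arithmetic behind the 0.8 Myr estimate.
# Inputs come from the abstract: a 30 m section spanning two 405-ka cycles.
ECCENTRICITY_LONG_KA = 405.0   # long-eccentricity period, ka
THICKNESS_M = 30.0             # thickness of the Reineche Member, m

def duration_myr(n_cycles, period_ka=ECCENTRICITY_LONG_KA):
    """Duration (Myr) implied by n complete cycles of the given period."""
    return n_cycles * period_ka / 1000.0

def sedimentation_rate(thickness_m, duration):
    """Mean sedimentation rate, m/Myr."""
    return thickness_m / duration

span = duration_myr(2)  # two 405-ka cycles detected by wavelet analysis
print(round(span, 2), round(sedimentation_rate(THICKNESS_M, span), 1))
# -> 0.81 37.0 : ~0.8 Myr, at roughly 37 m/Myr
```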

Keywords: magnetic susceptibility, cyclostratigraphy, orbital forcing, spectral analysis, Lutetian

Procedia PDF Downloads 275
369 Preoperative Anxiety Evaluation: Comparing the Visual Facial Anxiety Scale/Yumul Faces Anxiety Scale, Numerical Verbal Rating Scale, Categorization Scale, and the State-Trait Anxiety Inventory

Authors: Roya Yumul, Chse, Ofelia Loani Elvir Lazo, David Chernobylsky, Omar Durra

Abstract:

Background: Preoperative anxiety has been shown to be caused by the fear associated with surgical and anesthetic complications; however, the current gold standard for assessing patient anxiety, the STAI, is problematic to use in the preoperative setting given the duration and concentration required to complete the 40-item questionnaire. The primary aim of this study is to investigate the correlation of the Visual Facial Anxiety Scale (VFAS) and Numerical Verbal Rating Scale (NVRS) with the State-Trait Anxiety Inventory (STAI) to determine the optimal anxiety scale for the perioperative setting. Methods: A clinical study of patients undergoing various surgeries was conducted utilizing each of the preoperative anxiety scales. Inclusion criteria included patients undergoing elective surgeries, while exclusion criteria included anesthesia contraindications, inability to comprehend instructions, impaired judgement, substance abuse history, and pregnancy or lactation. 293 patients were analyzed in terms of demographics, anxiety scale survey results, and anesthesia data, with Spearman coefficients, chi-squared analysis, and Fisher's exact test utilized for comparison. Results: Statistical analysis showed that VFAS had a higher correlation to STAI than NVRS (rs=0.66, p<0.0001 vs. rs=0.64, p<0.0001). The combined VFAS-Categorization Scores showed the highest correlation with the gold standard (rs=0.72, p<0.0001). Subgroup analysis showed similar results. STAI evaluation time (247.7 ± 54.81 sec) far exceeds VFAS (7.29 ± 1.61 sec), NVRS (7.23 ± 1.60 sec), and Categorization scales (7.29 ± 1.99 sec). Patients preferred VFAS (54.4%), Categorization (11.6%), and NVRS (8.8%). Anesthesiologists preferred VFAS (63.9%), NVRS (22.1%), and Categorization Scales (14.0%).
Of note, the top five causes of preoperative anxiety were determined to be waiting (56.5%), pain (42.5%), family concerns (40.5%), lack of information about the surgery (40.1%), or anesthesia (31.6%). Conclusions: The combined VFAS-Categorization Score (VCS) demonstrates the highest correlation to the gold standard, STAI. Both the VFAS and Categorization tests also take significantly less time than STAI, which is critical in the preoperative setting. Among both patients and anesthesiologists, VFAS was the most preferred scale. This forms the basis of the Yumul FACES Anxiety Scale, designed for quick quantification and assessment in the preoperative setting while maintaining a high correlation to the gold standard. Additional studies using the formulated Yumul FACES Anxiety Scale are merited.
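The scale comparison above rests on Spearman rank correlations against the STAI gold standard. A minimal illustrative sketch of that computation; the `spearman` helper and all scores below are invented for demonstration, not the study's data:

```python
# Illustrative Spearman rank correlation of two quick anxiety scales against
# a gold-standard score. All data below are invented, not the study's.

def _ranks(xs):
    """Average 1-based ranks, handling ties."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical scores: STAI (gold standard) vs. two quick scales.
stai = [30, 45, 52, 38, 60, 41, 55, 35]
vfas = [2, 5, 7, 3, 9, 4, 8, 2]
nvrs = [3, 4, 8, 2, 9, 5, 6, 3]
print(round(spearman(stai, vfas), 2), round(spearman(stai, nvrs), 2))
```

The scale whose rho is closest to 1 tracks the gold standard best, which is the basis of the comparison reported in the abstract.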

Keywords: numerical verbal anxiety scale, preoperative anxiety, state-trait anxiety inventory, visual facial anxiety scale

Procedia PDF Downloads 110
368 Need for Eye Care Services, Clinical Characteristics, Surgical Outcome and Prognostic Predictors of Cataract in Adult Participants with Intellectual Disability

Authors: Yun-Shan Tsai, Si-Ping Lin, En-Chieh Lin, Xin-Hong Chen, Shin-Yun Ho, Shin-Hong Huang, Ching-ju Hsieh

Abstract:

Background and significance: Uncorrected refractive errors and cataracts are the main visually debilitating ophthalmological abnormalities in adults with intellectual disability (ID). However, not all adults with ID receive regular and timely ophthalmological assessment. Consequently, some ocular diseases may not be diagnosed until late, causing unnecessary ocular morbidity. In addition, recent clinical practice and research have suggested that eye-care services for this group are neglected. Purpose: To investigate the unmet need for eye-care services, the clinical characteristics of cataract, visual function, surgical outcomes, and prognostic predictors in adult participants with ID at Taipei City Hospital in Taiwan. Methods: This is a one-year prospective clinical study. We recruited about 120 eyes of 60 adult participants with ID who received cataract surgery. Caregivers of all participants completed a questionnaire on current eye-care services. Clinical demographic data, such as age, gender, and associated systemic diseases or syndromes, were collected. Complete ophthalmologic examinations were performed 1 month preoperatively and 3 months postoperatively, including ocular biometry, visual function, refractive status, morphology of cataract, associated ocular features, anesthesia methods, surgical types, and complications. The morphology of cataract and the visual and surgical outcomes were analyzed. Results: A total of 60 participants with mean age 43.66 ± 13.94 years (59.02% male, 40.98% female) took part in comprehensive eye-care services. The prevalence of unmet need for eye-care services was high (about 70%). About 50% of adult participants with ID had bilateral cataracts at the time of diagnosis, and white cataracts were noted in about 30% at presentation.
Associated ocular disorders included myopic maculopathy (4.54%), corneal disorders (11.36%), nystagmus (20.45%), strabismus (38.64%), and glaucoma (2.27%). About 26.7% of adult participants with ID underwent extracapsular cataract extraction, whereas phacoemulsification was performed in 100% of eyes. Intraocular lens implantation was performed in all eyes. The most common postoperative complication was posterior capsular opacification (30%). The mean best-corrected visual acuity improved significantly from preoperatively (mean logMAR 0.48 ± 0.22) to 3 months postoperatively (mean logMAR 0.045 ± 0.22) (p < .05). Conclusions: Regular follow-up will help address the need for eye-care services in participants with ID. A high incidence of bilateral cataracts, as well as white cataracts, was observed in adult participants with ID. With early diagnosis and early intervention, the surgical outcomes of cataract are good, but visual outcomes may remain suboptimal due to associated ocular comorbidities.

Keywords: adult participants with intellectual disability, cataract, cataract surgery

Procedia PDF Downloads 282
367 Assessment of N₂ Fixation and Water-Use Efficiency in a Soybean-Sorghum Rotation System

Authors: Mmatladi D. Mnguni, Mustapha Mohammed, George Y. Mahama, Alhassan L. Abdulai, Felix D. Dakora

Abstract:

Industrial nitrogen (N) fertilizers are justifiably credited with the current state of food production across the globe, but their continued use is not sustainable and has adverse effects on the environment. The search for greener, sustainable technologies has led to increased exploitation of biological systems such as legumes and organic amendments for plant growth promotion in cropping systems. Although the benefits of legume rotation with cereal crops have been documented, the full benefits of soybean-sorghum rotation systems have not been properly evaluated in Africa. This study explored the benefits of soybean-sorghum rotation by assessing N₂ fixation and water-use efficiency of soybean grown in rotation with sorghum, with and without organic and inorganic amendments. Field trials were conducted from 2017 to 2020. Sorghum was grown on plots previously cultivated to soybean, and vice versa. The succeeding sorghum crop received fertilizer amendments [organic fertilizer (5 tons/ha as poultry litter, OF); inorganic fertilizer (80N-60P-60K), IF; organic + inorganic fertilizer (OF+IF); half organic + inorganic fertilizer (HOF+IF); organic + half inorganic fertilizer (OF+HIF); half organic + half inorganic (HOF+HIF); and a control] arranged in a randomized complete block design. The soybean crop succeeding the fertilized sorghum received a blanket application of triple superphosphate at 26 kg P ha⁻¹. N₂ fixation and water-use efficiency were assessed at the flowering stage using the ¹⁵N and ¹³C natural abundance techniques, respectively. The results showed that the shoot dry matter of soybean plants supplied with HOF+HIF was much higher (43.20 g plant⁻¹), followed by OF+HIF (36.45 g plant⁻¹) and HOF+IF (33.50 g plant⁻¹). Shoot N concentration ranged from 1.60 to 1.66%, and total N content from 339 to 691 mg N plant⁻¹.
The δ¹⁵N values of soybean shoots ranged from -1.17‰ to -0.64‰, with plants growing on plots previously treated with HOF+HIF exhibiting much higher δ¹⁵N values and hence a lower percentage of N derived from N₂ fixation (%Ndfa). Shoot %Ndfa values varied from 70 to 82%. The high %Ndfa values obtained in this study suggest that the previous year's organic and inorganic fertilizer amendments to sorghum did not inhibit N₂ fixation in the following soybean crop. The amount of N fixed by soybean ranged from 106 to 197 kg N ha⁻¹. The treatments showed marked variations in carbon (C) content, with the HOF+HIF treatment recording the highest C content. Although shoot δ¹³C varied from -29.32‰ to -27.85‰, shoot water-use efficiency, C concentration, and C:N ratio were not altered by the previous fertilizer application to sorghum. This study provides strong evidence that residues from a previous HOF+HIF-amended sorghum crop can enhance N nutrition and water-use efficiency in nodulated soybean.
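The %Ndfa values above come from the ¹⁵N natural abundance method. A sketch of the standard calculation; the reference-plant δ¹⁵N and the B value below are illustrative assumptions, since the abstract does not report them:

```python
# Sketch of the 15N natural-abundance calculation behind the %Ndfa values
# reported above. The reference delta15N (+2.5 permil) and B value (-1.8
# permil) are illustrative assumptions, not values given in the abstract.

def percent_ndfa(d15n_ref, d15n_legume, b_value):
    """Percent of legume N derived from atmospheric N2 fixation."""
    return 100.0 * (d15n_ref - d15n_legume) / (d15n_ref - b_value)

def n_fixed_kg_ha(ndfa_pct, shoot_n_kg_ha):
    """Amount of shoot N attributable to fixation, kg N/ha."""
    return ndfa_pct / 100.0 * shoot_n_kg_ha

# Hypothetical soybean shoot at -0.9 permil, inside the abstract's
# reported -1.17 to -0.64 permil range.
ndfa = percent_ndfa(2.5, -0.9, -1.8)
print(round(ndfa, 1))  # -> 79.1, inside the reported 70-82% range
print(round(n_fixed_kg_ha(ndfa, 200.0), 1))
```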

Keywords: ¹³C and ¹⁵N natural abundance, N-fixed, organic and inorganic fertilizer amendments, shoot %Ndfa

Procedia PDF Downloads 143
366 Semiconductor Properties of Natural Phosphate Application to Photodegradation of Basic Dyes in Single and Binary Systems

Authors: Y. Roumila, D. Meziani, R. Bagtache, K. Abdmeziem, M. Trari

Abstract:

Heterogeneous photocatalysis over semiconductors has proved effective in the treatment of wastewaters, since it works under mild conditions. It has emerged as a promising technique, giving rise to less toxic effluents and offering the opportunity to use sunlight as a sustainable, renewable source of energy. Many compounds have been used as photocatalysts. Though synthesized materials are used intensively, they remain expensive and their synthesis requires special conditions. We therefore implemented a natural material, a phosphate ore, owing to its low cost and wide availability. Our work is devoted to the removal of hazardous organic pollutants, which cause several environmental problems and health risks; among them, dye pollutants occupy a large place. This work concerns the photodegradation of methyl violet (MV) and rhodamine B (RhB), in single and binary systems, under UV light and sunlight irradiation. Methyl violet is a triarylmethane dye, while RhB is a heteropolyaromatic dye belonging to the xanthene family. In the first part of this work, the natural compound was characterized using several physicochemical and photo-electrochemical (PEC) techniques: X-ray diffraction, chemical and thermal analyses, scanning electron microscopy, UV-Vis diffuse reflectance measurements, and FTIR spectroscopy. The electrochemical and photoelectrochemical studies were performed with a Voltalab PGZ 301 potentiostat/galvanostat at room temperature. The structure of the phosphate material was well characterized. The PEC properties are crucial for drawing the energy band diagram, in order to suggest the radicals formed and the reactions involved in the dye photo-oxidation mechanism. The PEC characterization of the natural phosphate was carried out in neutral solution (Na₂SO₄, 0.5 M). The study revealed the semiconducting behavior of the phosphate rock.
Indeed, the thermal evolution of the electrical conductivity was well fitted by an exponential-type law, the electrical conductivity increasing with rising temperature. The Mott–Schottky plot and the current-potential J(V) curves recorded in the dark and under illumination clearly indicate n-type behavior. The photocatalysis results in single solutions show, from the change in MV and RhB absorbance as a function of time, that practically all of the MV was removed after 240 min of irradiation. For RhB, complete degradation was achieved only after 330 min, owing to its complex and resistant structure. In binary systems, it is only after 120 min that RhB begins to be slowly removed, while about 60% of the MV is already degraded. Once nearly all of the MV in the solution has disappeared (after about 250 min), the remaining RhB degrades rapidly. This behaviour differs from that observed in single solutions, where both dyes are degraded from the first minutes of irradiation.
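The exponential-type law mentioned above is the usual Arrhenius behavior of a semiconductor's conductivity, where ln(sigma) is linear in 1/T and the slope gives the activation energy. A sketch with illustrative numbers; the activation energy and prefactor are assumptions, not the phosphate's measured values:

```python
import math

# Arrhenius-type conductivity: sigma(T) = sigma0 * exp(-Ea / (kB*T)).
# All numbers below are illustrative, not the phosphate's measured data.
KB_EV = 8.617e-5  # Boltzmann constant, eV/K

def conductivity(sigma0, ea_ev, t_kelvin):
    """Conductivity at temperature T for activation energy Ea (eV)."""
    return sigma0 * math.exp(-ea_ev / (KB_EV * t_kelvin))

def activation_energy(t1, s1, t2, s2):
    """Ea (eV) from the slope of ln(sigma) vs 1/T through two points."""
    return -KB_EV * (math.log(s2) - math.log(s1)) / (1.0 / t2 - 1.0 / t1)

# Round-trip: generate two points with Ea = 0.35 eV and recover it.
ea = 0.35
s300 = conductivity(1e-2, ea, 300.0)
s400 = conductivity(1e-2, ea, 400.0)
print(round(activation_energy(300.0, s300, 400.0, s400), 3))  # -> 0.35
```

Note that s400 > s300: conductivity rises with temperature, the semiconducting signature the abstract reports.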

Keywords: environment, organic pollutant, phosphate ore, photodegradation

Procedia PDF Downloads 107
365 Assessing the Material Determinants of Cavity Polariton Relaxation using Angle-Resolved Photoluminescence Excitation Spectroscopy

Authors: Elizabeth O. Odewale, Sachithra T. Wanasinghe, Aaron S. Rury

Abstract:

Cavity polaritons form when molecular excitons couple strongly to photons in carefully constructed optical cavities. These polaritons, hybrid light-matter states possessing a unique combination of photonic and excitonic properties, present the opportunity to manipulate the properties of various semiconductor materials. The systematic manipulation of materials through polariton formation could improve the functionality of many optoelectronic devices, such as lasers, light-emitting diodes, photon-based quantum computers, and solar cells. However, the prospect of leveraging polariton formation for novel devices and device operation depends on more complete connections between the properties of molecular chromophores and the hybrid light-matter states they form, which remains an outstanding scientific goal. Specifically, for most optoelectronic applications, it is paramount to understand how polariton formation affects the spectra of light absorbed by molecules coupled strongly to cavity photons. An essential feature of a polariton state is its dispersive energy, which arises from the enhanced spatial delocalization of polaritons relative to bare molecules. To leverage this spatial delocalization, angle-resolved photoluminescence excitation spectroscopy was employed to characterize light emission from the polaritonic states. Using lasers of appropriate energies, the polariton branches were resonantly excited to understand how molecular light absorption changes under different strong light-matter coupling conditions. Since an excited state has a finite lifetime, the resonantly excited polariton decays non-radiatively into lower-lying molecular states, from which radiative relaxation to the ground state occurs. The resulting fluorescence is collected across several angles of excitation incidence.
By modeling the behavior of the light emission observed from the lower-lying molecular state and combining this result with the output of angle-resolved transmission measurements, inferences are drawn about how the behavior of molecules changes when they form polaritons. The results show how intrinsic molecular properties, such as the excitonic lifetime, affect the rate at which the polaritonic states relax. While the lifetime of the cavity photon mediates the rate of relaxation in a cavity, the results of this study provide evidence that the lifetime of the molecular exciton also limits the rate of polariton relaxation.
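The dispersive polariton energies probed in such angle-resolved measurements are commonly fitted with a two-coupled-oscillator model. A sketch under assumed parameter values; the exciton energy, Rabi splitting, and effective index below are illustrative, not taken from the study:

```python
import math

# Two-coupled-oscillator model for angle-resolved polariton dispersions.
# Parameter values are illustrative assumptions, not the study's fit results.

def cavity_energy(e_c0, theta_rad, n_eff):
    """Cavity-photon energy (eV) vs. incidence angle for effective index n_eff."""
    return e_c0 / math.sqrt(1.0 - (math.sin(theta_rad) / n_eff) ** 2)

def polariton_branches(e_cav, e_exc, hbar_omega):
    """Upper/lower polariton energies from the 2x2 coupled-oscillator model."""
    mean = 0.5 * (e_cav + e_exc)
    root = math.sqrt((0.5 * (e_cav - e_exc)) ** 2 + (0.5 * hbar_omega) ** 2)
    return mean + root, mean - root

# At zero detuning the branches are split by exactly the Rabi energy.
e_exc = 2.10                 # assumed exciton energy, eV
up, lp = polariton_branches(e_exc, e_exc, 0.20)
print(round(up - lp, 3))     # -> 0.2, the assumed Rabi splitting
```

Scanning `cavity_energy` over angle and feeding it to `polariton_branches` reproduces the anticrossing dispersion that angle-resolved transmission measures.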

Keywords: fluorescence, molecules in cavities, optical cavity, photoluminescence excitation, spectroscopy, strong coupling

Procedia PDF Downloads 42
364 Challenges of Strategies for Improving Sustainability in Urban Historical Context in Developing Countries: The Case of Shiraz Bein Al-Haramein

Authors: Amir Hossein Ashari, Sedighe Erfan Manesh

Abstract:

One of the problems facing developing countries is renovating historical urban contexts and inducing behaviors appropriate to modern life within them. This study was conducted using field and library methods in 2012. Similar projects carried out in Iran and other developing countries were compared to unveil their strengths and weaknesses. At present, in the historical context of Shiraz, the area between the two religious shrines of Shahcheragh (Ahmad ibn Musa) and Astaneh (Sayed Alaa al-Din Hossein), which are significant places in religious, cultural, social, and economic terms, is full of historic places and is called Bein Al-Haramein. Unfortunately, some of these places have deteriorated and are no longer appropriate for common uses. The basic strategy for Bein Al-Haramein was to improve the social development of Shiraz, to enhance the vitality and dynamism of its historical context, and to create tourist attractions in order to boost the city's economic and social stability. To this end, the project includes the huge Bein Al-Haramein Commercial Complex, which is now under construction. To build the complex, officials have decided to demolish places of historical value, which can lead to irreparable consequences. Iranian urban design has always been based on three elements: bazaars, mosques, and government facilities, with bazaars being the organic connector of the other elements. Therefore, the best strategy in this case is to provide a commercial connection between the two poles. Although this strategy is included in the project, lack of attention to renovation principles in the area and complete destruction of the context will lead to irreversible damage and will destroy its cultural and historical identity.
In the urban planning of this project, several important issues have been neglected, including: preserving valuable buildings and the special old features of the city; rebuilding worn buildings and context so as to win the trust and confidence of the people; developing new models in response to change; improving the structural condition of the old context with minimal degradation; attracting the participation of residents and protecting their rights; and using the potential facilities of the old context. The best strategy for achieving sustainability in Bein Al-Haramein may be the one used in the historical context between the Santa Maria Novella and Santa Maria del Fiore churches, where, while protecting the historic context and constructions, old buildings were renovated and given commercial and service uses, making them sustainable and dynamic places. Similarly, in Bein Al-Haramein, renovating old constructions and monuments and giving them commercial and other uses can help improve the economic and social sustainability of the area.

Keywords: Bein Al-Haramein, sustainability, historical context

Procedia PDF Downloads 419
363 Development of a Quick On-Site Pass/Fail Test for the Evaluation of Fresh Concrete Destined for Application as Exposed Concrete

Authors: Laura Kupers, Julie Piérard, Niki Cauberg

Abstract:

The use of exposed concrete (sometimes referred to as architectural concrete) keeps gaining popularity. Exposed concrete has the advantage of combining the structural properties of concrete with an aesthetic finish. However, a successful aesthetic finish demands much attention to the execution (formwork, release agent, curing, weather conditions…), the concrete composition (choice of raw materials and mix proportions), and the fresh properties of the concrete. For the latter, a simple on-site pass/fail test could halt the casting of concrete unsuitable for architectural use and thus avoid expensive repairs later. When architects opt for exposed concrete, they usually want a smooth, uniform, and nearly blemish-free surface. For this, a standard ‘construction’ concrete does not suffice. An aesthetic surface finish requires the concrete to contain a minimum content of fines to minimize the risk of segregation and to allow complete filling of more complex formworks. Nor may the concrete be too viscous, as this makes it more difficult to compact and increases the risk of blow holes blemishing the surface. On the other hand, too much bleeding may cause colour differences on the concrete surface. An easy pass/fail test, performed on site just before casting, could avoid these problems: if the fresh concrete fails the test, it is rejected; only if it passes is the concrete cast. The pass/fail tests are intended for concrete with consistency class S4. Five tests were selected as possible on-site pass/fail tests. Two of these already exist: the K-slump test (ASTM C1362) and the Bauer Filter Press Test.
The remaining three tests were developed by the BBRI to test the segregation resistance of fresh concrete on site: the ‘dynamic sieve stability test’, the ‘inverted cone test’, and an adapted ‘visual stability index’ (VSI) for the slump and flow test. These tests were inspired by existing tests for self-compacting concrete, for which segregation resistance is of great importance. The suitability of the fresh concrete mixtures was also assessed by means of a laboratory reference test (resistance to segregation) and by visual inspection (blow holes, structure…) of small test walls. More than fifteen concrete mixtures of different quality were tested. The results of the pass/fail tests were compared with the results of the laboratory reference test and the test walls. The preliminary laboratory results indicate that concrete mixtures suitable for placement as exposed concrete (containing sufficient fines, a balanced grading curve, etc.) can be distinguished from inferior mixtures. Additional laboratory tests, as well as tests on site, will be conducted to confirm these preliminary results and to set appropriate pass/fail values.

Keywords: exposed concrete, testing fresh concrete, segregation resistance, bleeding, consistency

Procedia PDF Downloads 403
362 Economic Impact of Rana Plaza Collapse

Authors: Md. Omar Bin Harun Khan

Abstract:

The collapse of the infamous Rana Plaza, a multi-storeyed commercial building in Savar, near Dhaka, Bangladesh, has brought with it a plethora of positive and negative consequences. Bangladesh, a key player in the export of clothing, found itself amidst a wave of economic upheaval following this tragic incident, which resulted in the deaths of numerous Bangladeshis, most of whom were factory workers. This paper examines the consequences that the country's Ready-Made Garments (RMG) sector is facing now, two years after the incident. The paper presents a comparison of statistical data from study reports and brings forward perspectives from all dimensions of labour, employment, and industrial relations in Bangladesh following the event. It conveys the viewpoints of donor organizations and donor countries, and the impacts of several initiatives taken by foreign organizations such as the International Labour Organization and local entities such as the Bangladesh Garment Manufacturers and Exporters Association (BGMEA) to reinforce compliance and stabilize the shaky foundation on which the RMG sector found itself after the collapse. The focus of the paper remains on the stance taken by suppliers in Bangladesh, with inputs from buying houses and factories, and on the reaction of foreign brands. The paper also examines the horrific physical, mental, and financial toll sustained by the victims and their families, and the consequent uproar from workers in general regarding work safety and workers' welfare conditions.
The purpose is to present both sides of the scenario: the economic impact that suppliers, factories, sellers, buying houses, and exporters in Bangladesh have faced as a result of the complete loss of confidence in their working standards; and the aftershock felt at the other end of the spectrum by importers and buyers, particularly foreign entities, suddenly accountable for being affiliated with non-compliant factories. The collapse of Rana Plaza received vast international attention and strong criticism. Nevertheless, the almost immediate strengthening of labour rights and the wholesale reform undertaken on all sides of the supply chain evidence a move by all local and foreign stakeholders towards greater compliance and precautionary steps for the prevention of further disasters. The tragedy that Rana Plaza embodies served as a much-needed epiphany for the soaring RMG sector of Bangladesh. Prompt co-operation on the part of all stakeholders and regulatory bodies now shows a move towards sustainable development, which further ensures safeguarding against future irregularities and paves the way for steady economic growth.

Keywords: economy, employment standards, Rana Plaza, RMG

Procedia PDF Downloads 305
361 Detailed Quantum Circuit Design and Evaluation of Grover's Algorithm for the Bounded Degree Traveling Salesman Problem Using the Q# Language

Authors: Wenjun Hou, Marek Perkowski

Abstract:

The Traveling Salesman Problem is famous in computing and graph theory. In short, it asks for the Hamiltonian cycle of least total weight in a given graph with N nodes. All variations on this problem, such as those with K-bounded-degree nodes, are classified as NP-complete in classical computing. Although several papers propose theoretical high-level designs of quantum algorithms for the Traveling Salesman Problem, to the best of our knowledge no quantum circuit implementation of these algorithms has been created. In contrast to previous papers, the goal of this paper is not to optimize some abstract complexity measure based on the number of oracle iterations, but to evaluate the real circuit and time costs on a quantum computer. Using the emerging quantum programming language Q#, developed by Microsoft, which runs quantum circuits in a quantum computer simulation, an implementation of the bounded-degree problem and its respective quantum circuit were created. To apply Grover's algorithm to this problem, a quantum oracle was designed that evaluates the cost of a particular set of edges in the graph as well as its validity as a Hamiltonian cycle. Repeating Grover's algorithm with an oracle that finds a successively lower cost each time transforms the decision problem into an optimization problem, finding the minimum cost of Hamiltonian cycles. N log₂ K qubits are put into an equal superposition by applying the Hadamard gate to each qubit. Within these N log₂ K qubits, the method uses an encoding in which every node is mapped to a set of its encoded edges. The oracle consists of several blocks of circuits: a custom-written edge-weight adder, node-index calculator, uniqueness checker, and comparator, all created using only quantum Toffoli gates, including their special forms, the Feynman (CNOT) and Pauli X gates.
The oracle begins by using the edge encodings specified by the qubits to calculate each node that the path visits, adding up the edge weights along the way. Next, the oracle uses the nodes calculated in the previous step and checks that all the nodes are unique. Finally, the oracle checks that the calculated cost is less than the previously calculated cost. By performing the oracle an optimal number of times, a correct answer can be generated with very high probability. The oracle of the Grover algorithm is then modified using the recalculated minimum cost value, and this procedure is repeated until the cost can no longer be reduced. This algorithm and circuit design have been verified, using several datasets, to generate correct outputs.
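The register sizing and oracle-repetition counts described above follow standard Grover arithmetic; a hedged sketch with an illustrative graph size (the 8-node, degree-4 example is not from the paper):

```python
import math

# Resource arithmetic for the encoding described above: the edge register
# uses N * log2(K) qubits, and Grover's search needs about (pi/4)*sqrt(S/M)
# oracle iterations over a search space of size S with M marked items.
# The graph size below is an illustrative assumption, not the paper's dataset.

def edge_register_qubits(n_nodes, k_degree):
    """Qubits needed to encode one of K candidate edges per node."""
    return n_nodes * math.ceil(math.log2(k_degree))

def grover_iterations(search_space, n_marked=1):
    """Near-optimal number of Grover oracle iterations."""
    return max(1, math.floor(math.pi / 4 * math.sqrt(search_space / n_marked)))

n, k = 8, 4                      # 8 nodes, degree bounded by 4
q = edge_register_qubits(n, k)   # 8 * log2(4) = 16 qubits
print(q, grover_iterations(2 ** q))  # -> 16 201
```

This quadratic scaling in the search-space size is what separates Grover's approach from classical exhaustive search, which would examine all 2^16 edge assignments.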

Keywords: quantum computing, quantum circuit optimization, quantum algorithms, hybrid quantum algorithms, quantum programming, Grover’s algorithm, traveling salesman problem, bounded-degree TSP, minimal cost, Q# language

Procedia PDF Downloads 156